A couple of months ago the CometD Project released its third major version, CometD 3.0.0 (announcement).
Since then I have wanted to write a blog entry about this major release, but work on HTTP/2 kept me busy.
Today CometD 3.0.1 was released, so it’s time for a new CometD blog entry.
CometD is an open source (Apache 2 licensed) project that started in 2008 under the umbrella of the Dojo Foundation, and already at that time it had designed a web application messaging protocol named Bayeux.
Similar efforts have started more recently, see for example WAMP.
The Bayeux protocol is transport independent (it works with HTTP and WebSocket), which allows CometD to easily fall back to HTTP when WebSocket, for any reason, does not work. Of course, it also works with SPDY and HTTP/2.
The Bayeux protocol supports two types of message delivery: the pubsub (broadcast) message delivery, and the peer-to-peer message delivery.
In the pubsub style, a publisher sends a message to the server, which broadcasts it to all the clients that have subscribed to the channel.
In the peer-to-peer style, a sender sends a message to the server, which then decides what to do: it may broadcast the message to certain clients only, to only one, or back to the original sender. This last case allows an implementation to offer RPC (remote procedure call) message delivery.
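To make the distinction concrete, here is a toy in-memory broker sketching the two delivery styles (illustrative only; this is not CometD's API, and TinyBroker is a hypothetical name):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative in-memory broker, not CometD's API.
class TinyBroker {
    private final Map<String, List<Consumer<String>>> subscriptions = new HashMap<>();

    // Pub/sub: a client registers interest in a channel.
    void subscribe(String channel, Consumer<String> subscriber) {
        subscriptions.computeIfAbsent(channel, k -> new ArrayList<>()).add(subscriber);
    }

    // Pub/sub: the broker broadcasts to every subscriber of the channel.
    void publish(String channel, String message) {
        for (Consumer<String> subscriber : subscriptions.getOrDefault(channel, List.of()))
            subscriber.accept(message);
    }

    // Peer-to-peer: the server itself picks the recipient, here a single one.
    void deliverTo(Consumer<String> recipient, String message) {
        recipient.accept(message);
    }
}
```

In CometD itself, pubsub goes through broadcast channels, while peer-to-peer delivery corresponds to the server delivering a message to one specific remote session.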
Bayeux is also an extensible protocol: it allows extensions to be implemented on top of the protocol itself, so that implementations can add features such as time synchronization between client and server, and various grades of reliable message delivery (what MQTT calls “quality of service”).
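As a rough sketch of what a message-acknowledgment extension has to do (illustrative only; this is not CometD's AckExtension, and AckQueue is a hypothetical name), the server remembers each message per client until it is acknowledged, and redelivers the rest after a reconnect:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative per-client queue for acknowledged delivery; not CometD's API.
class AckQueue {
    private final Map<Long, String> unacked = new LinkedHashMap<>();
    private long nextId = 1;

    // Assign an id to the message and keep it until it is acknowledged.
    long send(String message) {
        long id = nextId++;
        unacked.put(id, message);
        return id;
    }

    // The client acknowledges everything up to and including ackId.
    void acknowledge(long ackId) {
        unacked.keySet().removeIf(id -> id <= ackId);
    }

    // After a reconnect, everything still unacknowledged is redelivered, in order.
    List<String> redeliver() {
        return new ArrayList<>(unacked.values());
    }
}
```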
The CometD Project is an implementation of the Bayeux protocol, and over the years it has implemented not only the major features that the Bayeux protocol enables (pubsub messaging, peer-to-peer messaging, RPC messaging, message acknowledgment, etc.), but also additional features that make CometD the right choice if you have a web messaging use case.
One such feature is a clustering system, called Oort, that enables even more features, such as tracking which node a client is connected to (Seti), and distributed objects and services.
CometD 3.x leverages the standards for its implementation.
In particular, it ships an HTTP transport based on the Servlet 3.1 Async I/O API, which makes CometD very scalable (no threads are blocked by I/O operations).
Furthermore, it ships a WebSocket transport based on the standard Java WebSocket API, which also uses asynchronous features that make CometD even more scalable.
If you have a web messaging use case, be it RPC style, pubsub style or peer-to-peer style, CometD is the one-stop shop for you.
Webtide provides commercial support for CometD, so you are not alone when using it. You don’t have to spend countless hours googling for solutions when you can have the CometD committers helping you design, deploy and scale your project.
Contact us.
-
CometD 3: RPC, PubSub, Peer-to-Peer Web Messaging
-
Jetty, SPDY and HAProxy
The SPDY protocol will be the next web revolution.
The httpbis working group has been rechartered to use SPDY as the basis for HTTP 2.0, so network and server vendors are starting to update their offerings to include SPDY support.
Jetty has a long history of staying on the cutting edge when it comes to web features and network protocols.
- Jetty first implemented web continuations (2005) as a portable library, deployed them successfully for years to customers, until web continuations eventually became part of the Servlet 3.0 standard.
- Jetty first supported the WebSocket protocol within the Servlet model (2009), deployed it successfully for years to customers, and now the WebSocket APIs are on course to become a standard via JSR 356.
Jetty is the first and today practically the only Java server that offers complete SPDY support, with advanced features that we demonstrated at JavaOne (watch the demo if you’re not convinced).
If you have not switched to Jetty yet, you are missing the revolutions happening on the web, you will probably lose technical ground to your competitors, and you will lose money by upgrading too late, when it costs (or already costs) you a lot more.
Jetty is open source, released under friendly licenses, and comes with full commercial support in case you need our expertise for developer advice, training, tuning, or configuring and using Jetty.
While SPDY is now well supported by browsers and its support is increasing in servers, it is still lagging a bit behind in intermediaries such as load balancers, proxies and firewalls.
To exploit the full power of SPDY, you want not only SPDY in the communication between the browser and the load balancer, but also between the load balancer and the servers.
We are actively opening discussion channels with the providers of such products, and one of them is HAProxy. With the collaboration of Willy Tarreau, the mastermind behind HAProxy, we have recently been able to perform full SPDY communication between a SPDY client (we tested the latest Chrome, the latest Firefox and Jetty’s Java SPDY client) through HAProxy to a Jetty SPDY server.
This sets a new milestone in the adoption of the SPDY protocol, because large deployments can now leverage the goodness of HAProxy as a load balancer *and* the goodness of SPDY as provided by Jetty SPDY servers.
The HAProxy SPDY features are available in the latest development snapshots of HAProxy. A few details will probably be subject to changes (in particular the HAProxy configuration keywords), but SPDY support in HAProxy is there.
The Jetty SPDY features are already available in Jetty 7, 8 and 9.
If you are interested in knowing how you can use SPDY in your deployments, don’t hesitate to contact us. Most likely, you will be contacting us using the SPDY protocol from your browser to our server 🙂
-
WebSocket over SSL in Jetty
Jetty has always been on the front line of the implementation of the WebSocket protocol.
The CometD project leverages the Jetty WebSocket implementation to its maximum, to achieve great scalability and minimal latencies.
Until now, however, support for WebSocket over SSL was lacking in Jetty.
In Jetty 7.6.x, a redesign of the connection layer allows for more pluggability of SSL encryption/decryption and of connection upgrade (from HTTP to WebSocket), and these changes combined made it very easy to implement WebSocket over SSL.
These changes are now merged into Jetty’s master branch, and will ship with the next version of Jetty.
Developers will now be able to use the wss:// protocol in web pages in conjunction with Jetty on the server side, or just rely on the CometD framework to forget about transport details and always have the fastest, most reliable, and now also confidential, transport available, concentrating on writing application logic rather than transport logic.
WebSocket over SSL is of course also available in the Java WebSocket client provided by Jetty.
Enjoy!
-
CometD 2.4.0 WebSocket Benchmarks
Slightly more than one year has passed since the last CometD 2 benchmarks, and more than three years since the CometD 1 benchmark. During this year we have done a lot of work on CometD, both by adding features and by continuously improving performance and stability to make it faster and more scalable.
With the upcoming CometD 2.4.0 release, one of the biggest changes is the implementation of a WebSocket transport for both the Java client and the Java server.
The WebSocket protocol is being finalized at the IETF, and major browsers all support various draft versions of the protocol (and Jetty supports all draft versions), so while WebSocket adoption is slowly picking up, it is interesting to compare how WebSocket behaves with respect to HTTP for the typical scenarios that use CometD.
We conducted several benchmarks using the CometD load tools on Amazon EC2 instances.
HTTP Benchmark Results
Below you can find the benchmark result graph when using the CometD long-polling transport, based on plain HTTP.

Unlike the previous benchmark, where we reported the average latency, this time we report the median latency, which is a better indicator of the latencies seen by clients.
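A tiny example shows why the median is the better indicator: a handful of slow outliers drags the mean up, while the median, i.e. what a typical client experiences, barely moves (the numbers below are made up for illustration):

```java
import java.util.Arrays;

class LatencyStats {
    // Arithmetic mean of the samples.
    static double mean(long[] samples) {
        return Arrays.stream(samples).average().orElse(0);
    }

    // Middle sample after sorting (for an odd number of samples).
    static long median(long[] samples) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        return sorted[sorted.length / 2];
    }
}
```

With samples like {3, 3, 4, 4, 5, 800, 900} ms, the median is 4 ms while the mean is over 245 ms; the mean says little about what most clients actually saw.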
Comparison with the previous benchmark would be unfair, since the hosts were different (both in number and computing power), and the JVM also was different.
As you can see from the graph above, the median latency is pretty much the same no matter the number of clients, with the exception of 50k clients at 50k messages/s.
The median latency stays well under 200 ms even at more than 50k messages/s, and it is in the range of 2-4 ms until 10k messages/s, and around 50 ms for 20k messages/s, even for 50k clients.
The result for 50k clients and 50k messages/s is a bit strange, since the hosts (both server and clients) had plenty of CPU available and plenty of threads available (which rules out locking contention issues in the code that would have bumped up thread usage).
Could it be possible that at that message rate we hit some limit of the EC2 platform? It might be, and this blog post confirms that there are indeed limits in the virtualization of the network interfaces between host and guest. I have heard from other people who have performed benchmarks on EC2 that they also hit limits very close to what that blog post describes.
In any case, one server with 20k clients serving 50k messages/s with 150 ms median latency is a very good result.
For completeness, the 99th percentile latency is around 350 ms for 20k and 50k clients at 20k messages/s, around 1500 ms for 20k clients at 50k messages/s, and much less (quite close to the median latency) for the other results.
WebSocket Benchmark Results
The results for the same benchmarks using the WebSocket transport were quite impressive, and you can see them below.

Note that this graph uses a totally different scale for latencies and number of clients.
Whereas for HTTP we had an 800 ms maximum latency (on the Y axis), for WebSocket we have 6 ms (yes, you read that right); and whereas for HTTP we somehow topped out at 50k clients per server, here we could go up to 200k.
We did not merge the two graphs into a single one to avoid the WebSocket trend lines being collapsed onto the X axis.
With HTTP, having more than 50k clients on the server was troublesome at any message rate, but with WebSocket 200k clients were stable up to 20k messages/s. Beyond that, we probably hit EC2 limits again, and the results were unstable: some runs could complete successfully, others could not.
- The median latencies, for almost any number of clients and any message rate, are below 10 ms, which is quite impressive.
- The 99th percentile latency is around 300 ms for 200k clients at 20k messages/s, and around 200 ms for 50k clients at 50k messages/s.
We have also conducted some benchmarks varying the payload size from the default of 50 bytes to 500 bytes and to 2000 bytes, but the results we obtained with different payload sizes were very similar, so we can say that payload size has very little impact (if any) on latencies in this benchmark configuration.
We have also monitored memory consumption in an “idle” state (that is, with clients connected and sending meta connect requests every 30 seconds, but not sending messages):
- HTTP: 50k clients occupy around 2.1 GiB.
- WebSocket: 50k clients occupy around 1.2 GiB, and 200k clients occupy 3.2 GiB.
The benefits of WebSocket being a lighter weight protocol with respect to HTTP are clear in all cases.
Conclusions
The conclusions are:
- The work the CometD project has done to improve performance and scalability was worth the effort, and CometD offers a truly scalable solution for server-side event-driven web applications, for both HTTP and WebSocket.
- As the WebSocket protocol gains adoption, CometD can leverage the new protocol without any change required to applications; they will just perform faster.
- Server-to-server CometD communication can now be extremely fast by using WebSocket. We have already updated the CometD scalability cluster Oort to take advantage of these enhancements.
Appendix: Benchmark Details
The server was one EC2 instance of type “m2.4xlarge” (67 GiB RAM, 8 cores Intel(R) Xeon(R) X5550 @2.67GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
The clients were 10 EC2 instances of type “c1.xlarge” (7 GiB RAM, 8 cores Intel Xeon E5410 @2.33GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
The JVM used was Oracle’s Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode) version 1.7.0 for both clients and server.
The server was started with the following options:
-Xmx32g -Xms32g -Xmn16g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA
while the clients were started with the following options:
-Xmx6g -Xms6g -Xmn3g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA
The OS was tuned for allowing a larger number of file descriptors, as described here.
-
CometD 2.4.0.beta1 Released
CometD 2.4.0.beta1 has been released.
This is a major release that brings in a few new Java APIs (see this issue): client-side channels can now be released to save memory. It also comes with an API deprecation (see this issue): client-side publish() should no longer specify the message id.
On the WebSocket front, the WebSocket transports have been overhauled and brought up to date with the latest WebSocket drafts (currently Jetty implements up to draft 13, while browsers are still a bit behind, on draft 7/8 or so), and made scalable in both threading and memory usage.
Following these changes, BayeuxClient has been updated to negotiate transports with the server, and Oort has also been updated to use WebSocket by default for server-to-server communication, making server-to-server communication more efficient and with less latency.
WebSocket is now supported on Firefox 6 through the use of the Firefox-specific MozWebSocket object in the javascript library.
We have performed some preliminary benchmarks with WebSocket; they look really promising, although they were done before the latest changes to the CometD WebSocket transports.
We plan to do more accurate benchmarking in the coming days and weeks.
The other major change is the pluggability of the JSON library to handle JSON generation and parsing (see this issue).
CometD has long been based on Jetty’s JSON library, but now Jackson can also be used (the default will still be Jetty’s, however, to avoid breaking deployed applications that use the Jetty JSON classes).
Jackson proved to be faster than Jetty’s library in both parsing and generation, and will likely become the default in a few releases, to allow gradual migration of applications that use the Jetty JSON classes directly.
Applications should be written to be independent of the JSON library used.
Of course Jackson also brings in its powerful configurability and annotation processing so that your custom classes can be de/serialized from/to JSON.
Here you can find the release notes.
Download it, use it, and report back; any feedback is important before the final 2.4.0 release.
-
Websocket Example: Server, Client and LoadTest
The websocket protocol specification is approaching its final form, and the Jetty implementation and API have been tracking the drafts, so Jetty will be ready when the spec is finalised and browsers support it. Moreover, Jetty release 7.5.0 now includes a capable websocket java client that can be used for non-browser applications or for load testing. It is fully asynchronous and can create thousands of connections simultaneously.
This blog uses the classic chat example to introduce a websocket server, client and load test.
The project
The websocket example has been created as a maven project with groupId com.example. The entire project can be downloaded from here. The pom.xml defines a dependency on org.eclipse.jetty:jetty-websocket:7.5.0.RC1 (you should update to 7.5.0 when the final release is available), which provides the websocket API and, transitively, the jetty implementation. There is also a dependency on org.eclipse.jetty:jetty-servlet, which provides the ability to create an embedded servlet container to run the server example.
While the project implements a Servlet, it is not in a typical webapp layout, as I wanted to provide both client and server in the same project. Instead of a webapp, this project uses embedded jetty in a simple Main class to provide the server, and the static content is served from the classpath from src/resources/com/example/docroot.
Typically developers will want to build a war file containing a webapp, but I leave it as an exercise for the reader to put the servlet and static content described here into a webapp format.
The Servlet
The Websocket connection starts with an HTTP handshake. Thus the websocket API in jetty is also initiated by the handling of an HTTP request, typically by a Servlet. The advantage of this approach is that websocket connections are terminated in the same rich application space provided by HTTP servers, so a websocket-enabled web application can be developed in a single environment rather than by collaboration between an HTTP server and a separate websocket server.
We create the ChatServlet with an init() method that instantiates and configures a WebSocketFactory instance:
public class ChatServlet extends HttpServlet
{
    private WebSocketFactory _wsFactory;
    private final Set<ChatWebSocket> _members = new CopyOnWriteArraySet<ChatWebSocket>();

    @Override
    public void init() throws ServletException
    {
        // Create and configure WS factory
        _wsFactory = new WebSocketFactory(new WebSocketFactory.Acceptor()
        {
            public boolean checkOrigin(HttpServletRequest request, String origin)
            {
                // Allow all origins
                return true;
            }

            public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
            {
                if ("chat".equals(protocol))
                    return new ChatWebSocket();
                return null;
            }
        });
        _wsFactory.setBufferSize(4096);
        _wsFactory.setMaxIdleTime(60000);
    }
    ...
The WebSocketFactory is instantiated by passing it an Acceptor instance, in this case an anonymous one. The Acceptor must implement two methods: checkOrigin, which in this case accepts all origins; and doWebSocketConnect, which must accept a WebSocket connection by creating and returning an instance of the WebSocket interface to handle incoming messages. In this case, an instance of the nested ChatWebSocket class is created if the protocol is “chat”. The other WebSocketFactory fields have been initialised with a hard-coded buffer size and timeout, but typically these would be configurable from servlet init parameters.
The servlet handles GET requests by passing them to the WebSocketFactory to be accepted or not:
...
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        if (_wsFactory.acceptWebSocket(request, response))
            return;
        response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE, "Websocket only");
    }
    ...
All that is left for the Servlet is the ChatWebSocket itself. This is just a POJO that receives callbacks for events. For this example we have implemented the WebSocket.OnTextMessage interface to restrict the callbacks to only connection management and full messages:
private class ChatWebSocket implements WebSocket.OnTextMessage
{
    Connection _connection;

    public void onOpen(Connection connection)
    {
        _connection = connection;
        _members.add(this);
    }

    public void onClose(int closeCode, String message)
    {
        _members.remove(this);
    }

    public void onMessage(String data)
    {
        for (ChatWebSocket member : _members)
        {
            try
            {
                member._connection.sendMessage(data);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }
}
The onOpen callback adds the ChatWebSocket to the set of all members (remembering the Connection object for subsequent sends). The onClose handling simply removes the member from the set. The onMessage handling iterates through all the members and sends the received message to each of them (printing any resulting exceptions).
The Server
To run the servlet, there is a simple Main method that creates an embedded Jetty server with a ServletHandler for the chat servlet, a ResourceHandler for the static content needed by the browser client, and a DefaultHandler to generate errors for all other requests:
public class Main
{
    public static void main(String[] arg) throws Exception
    {
        int port = arg.length > 0 ? Integer.parseInt(arg[0]) : 8080;
        Server server = new Server(port);

        ServletHandler servletHandler = new ServletHandler();
        servletHandler.addServletWithMapping(ChatServlet.class, "/chat/*");

        ResourceHandler resourceHandler = new ResourceHandler();
        resourceHandler.setBaseResource(Resource.newClassPathResource("com/example/docroot/"));

        DefaultHandler defaultHandler = new DefaultHandler();

        HandlerList handlers = new HandlerList();
        handlers.setHandlers(new Handler[]{servletHandler, resourceHandler, defaultHandler});
        server.setHandler(handlers);

        server.start();
        server.join();
    }
}
The server can be run from an IDE or via maven using the following command line:
mvn -Pserver exec:exec
The Browser Client
The HTML for the chat room simply imports some CSS and the javascript before creating a few simple divs to contain the chat text, the join dialog and the joined dialog:
<html>
  <head>
    <title>WebSocket Chat Example</title>
    <script type='text/javascript' src="chat.js"></script>
    <link rel="stylesheet" type="text/css" href="chat.css" />
  </head>
  <body>
    <div id='chat'></div>
    <div id='input'>
      <div id='join'>
        Username: <input id='username' type='text'/>
        <input id='joinB' class='button' type='submit' name='join' value='Join'/>
      </div>
      <div id='joined' class='hidden'>
        Chat: <input id='phrase' type='text'/>
        <input id='sendB' class='button' type='submit' name='join' value='Send'/>
      </div>
    </div>
    <script type='text/javascript'>init();</script>
  </body>
</html>
The javascript creates a room object with methods to handle the various operations of a chat room. The first operation is to join the chat room, which is triggered by entering a user name. This creates a new WebSocket object pointing to the /chat URL path on the same server the HTML was loaded from:
var room = {
  join : function(name) {
    this._username = name;
    var location = document.location.toString()
        .replace('http://', 'ws://')
        .replace('https://', 'wss://') + "chat";
    this._ws = new WebSocket(location, "chat");
    this._ws.onopen = this.onopen;
    this._ws.onmessage = this.onmessage;
    this._ws.onclose = this.onclose;
  },

  onopen : function() {
    $('join').className = 'hidden';
    $('joined').className = '';
    $('phrase').focus();
    room.send(room._username, 'has joined!');
  },
  ...
The javascript websocket object is initialised with callbacks for onopen, onclose and onmessage. The onopen callback is handled above by switching the join div to the joined div and sending a “has joined” message.
Sending is implemented by creating a string of username:message and sending that via the WebSocket instance:
  ...
  send : function(user, message) {
    user = user.replace(':', '_');
    if (this._ws)
      this._ws.send(user + ':' + message);
  },
  ...
If the chat room receives a message, the onmessage callback is called, which sanitises the message, parses out the username and appends the text to the chat div:
  ...
  onmessage : function(m) {
    if (m.data) {
      var c = m.data.indexOf(':');
      var from = m.data.substring(0, c)
          .replace('<', '&lt;')
          .replace('>', '&gt;');
      var text = m.data.substring(c + 1)
          .replace('<', '&lt;')
          .replace('>', '&gt;');

      var chat = $('chat');
      var spanFrom = document.createElement('span');
      spanFrom.className = 'from';
      spanFrom.innerHTML = from + ': ';
      var spanText = document.createElement('span');
      spanText.className = 'text';
      spanText.innerHTML = text;
      var lineBreak = document.createElement('br');
      chat.appendChild(spanFrom);
      chat.appendChild(spanText);
      chat.appendChild(lineBreak);
      chat.scrollTop = chat.scrollHeight - chat.clientHeight;
    }
  },
  ...
Finally, the onclose handling empties the chat div and switches back to the join div so that a new username may be entered:
  ...
  onclose : function(m) {
    this._ws = null;
    $('join').className = '';
    $('joined').className = 'hidden';
    $('username').focus();
    $('chat').innerHTML = '';
  }
};
With this simple client being served from the server, you can now point your websocket-capable browser at http://localhost:8080 and interact with the chat room. Of course this example glosses over a lot of the detail and complications a real chat application would need, so I suggest you read my blog “is websocket chat simpler” to learn what else needs to be handled.
The Load Test Client
The jetty websocket java client is an excellent tool for both functional and load testing of a websocket-based service. It uses the same endpoint API as the server side, and for this example we create a simple implementation of the OnTextMessage interface that keeps track of all the open connections and counts the number of messages sent and received:
public class ChatLoadClient implements WebSocket.OnTextMessage
{
    private static final AtomicLong sent = new AtomicLong(0);
    private static final AtomicLong received = new AtomicLong(0);
    private static final Set<ChatLoadClient> members = new CopyOnWriteArraySet<ChatLoadClient>();

    private final String name;
    private final Connection connection;

    public ChatLoadClient(String username, WebSocketClient client, String host, int port) throws Exception
    {
        name = username;
        connection = client.open(new URI("ws://" + host + ":" + port + "/chat"), this).get();
    }

    public void send(String message) throws IOException
    {
        sent.incrementAndGet();
        connection.sendMessage(name + ":" + message);
    }

    public void onOpen(Connection connection)
    {
        members.add(this);
    }

    public void onClose(int closeCode, String message)
    {
        members.remove(this);
    }

    public void onMessage(String data)
    {
        received.incrementAndGet();
    }

    public void disconnect() throws IOException
    {
        connection.disconnect();
    }
The websocket is initialized by calling open on the WebSocketClient instance passed to the constructor. The WebSocketClient instance is shared by multiple connections, and contains the thread pool and other common resources for the client.
This load test example comes with a main method that creates a WebSocketClient from command line options and then creates a number of ChatLoadClient instances:
public static void main(String... arg) throws Exception
{
    String host = arg.length > 0 ? arg[0] : "localhost";
    int port = arg.length > 1 ? Integer.parseInt(arg[1]) : 8080;
    int clients = arg.length > 2 ? Integer.parseInt(arg[2]) : 1000;
    int mesgs = arg.length > 3 ? Integer.parseInt(arg[3]) : 1000;

    WebSocketClient client = new WebSocketClient();
    client.setBufferSize(4096);
    client.setMaxIdleTime(30000);
    client.setProtocol("chat");
    client.start();

    // Create clients serially
    ChatLoadClient[] chat = new ChatLoadClient[clients];
    for (int i = 0; i < chat.length; i++)
        chat[i] = new ChatLoadClient("user" + i, client, host, port);
    ...
Once the connections are opened, the main method loops around, picking a random client to speak in the chat room:
    ...
    // Send messages
    Random random = new Random();
    for (int i = 0; i < mesgs; i++)
    {
        ChatLoadClient c = chat[random.nextInt(chat.length)];
        String msg = "Hello random " + random.nextLong();
        c.send(msg);
    }
    ...
Once all the messages have been sent and all the replies have been received, the connections are closed:
    ...
    // Close all connections
    for (int i = 0; i < chat.length; i++)
        chat[i].disconnect();
The project is set up so that the load client can be run with the following maven command:
mvn -Pclient exec:exec
And the resulting output should look something like:
Opened 1000 of 1000 connections to localhost:8080 in 1109ms
Sent/Received 10000/10000000 messages in 15394ms: 649603msg/s
Closed 1000 connections to localhost:8080 in 45ms
Yes, that is 649,603 messages per second! This is a pretty simple test, but it is still scheduling 1000 local sockets plus generating and parsing all the websocket frames. Real applications on real networks are unlikely to achieve anything close to this level, but the indications are good for high-throughput capability, so stand by for more rigorous benchmarks shortly.
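The arithmetic behind that number: each of the 10,000 sent messages is broadcast to all 1000 connected clients, so 10,000,000 deliveries arrive in the 15,394 ms reported above (ThroughputCheck is just an illustrative name):

```java
public class ThroughputCheck {
    public static void main(String[] args) {
        long received = 10_000 * 1_000L; // 10k messages, each delivered to 1000 clients
        long elapsedMs = 15_394;
        long msgPerSec = received * 1000 / elapsedMs;
        System.out.println(msgPerSec + " msg/s"); // prints "649603 msg/s", matching the output above
    }
}
```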
-
Prelim Cometd WebSocket Benchmarks
I have done some very rough preliminary benchmarks on the latest cometd-2.4.0-SNAPSHOT with the latest Jetty-7.5.0-SNAPSHOT and the results are rather impressive. The features that these two releases have added are:
- Optimised Jetty NIO with latest JVMs and JITs considered.
- Latest websocket draft implemented and optimised.
- Websocket client implemented.
- Jackson JSON parser/generator used for cometd.
- Websocket cometd transport for the server improved.
- Websocket cometd transport for the bayeux client implemented.
The benchmarks that I’ve done have all been on my notebook using the localhost network, which is not the most realistic of environments, but it still does tell us a lot about the raw performance of the cometd/jetty. Specifically:
- Both the server and the client are running on the same machine, so they are effectively sharing the 8 CPUs available. The client typically takes 3x more CPU than the server (for the same load), so this is kind of like running the server on a dual core and the client on a 6 core machine.
- The local network has very high throughput which would only be matched by gigabit networks. It also has practically no latency, which is unlike any real network. The long polling transport is more dependent on good network latency than the websocket transport, so the true comparison between these transports will need testing on a real network.
The Test
The cometd load test is a simulated chat application. For this test I tried long-polling and websocket transports for 100, 1000 and 10,000 clients that were each logged into 10 randomly selected chat rooms from a total of 100 rooms. The messages sent were all 50 characters long and were published in batches of 10 messages at once, each to randomly selected rooms. There was a pause between batches that was adjusted to find a good throughput that didn’t have bad latency. However little effort was put into finding the optimal settings to maximise throughput.
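For illustration, the per-client room assignment described above can be sketched like this (an assumption about the harness, not the actual CometD load tool code; RoomAssignment is a hypothetical name):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class RoomAssignment {
    // Pick roomsPerClient distinct rooms out of totalRooms, uniformly at random.
    static List<Integer> pickRooms(Random random, int totalRooms, int roomsPerClient) {
        List<Integer> rooms = new ArrayList<>();
        for (int i = 0; i < totalRooms; i++)
            rooms.add(i);
        Collections.shuffle(rooms, random);
        return rooms.subList(0, roomsPerClient);
    }
}
```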
The runs were all done on JVMs that had been warmed up, but the runs were moderately short (approx 30s), so steady state was not guaranteed and the margin of error on these numbers will be pretty high. However, I also did a long-run test at one setting just to make sure that steady state can be achieved.
The Results
The bubble chart above plots messages per second against number of clients for both long-polling and websocket transports. The size of the bubble is the maximal latency of the test, with the smallest bubble being 109ms and the largest is 646ms. Observations from the results are:
- Regardless of transport, we achieved hundreds of thousands of messages per second! These are great numbers and show that we can cycle the cometd infrastructure at high rates.
- The long-polling throughput is probably over-reported, because many messages are queued into each HTTP response. The most HTTP responses I saw was 22,000 per second, so for many applications it will be the HTTP response rate that limits throughput rather than the cometd message rate. The websocket throughput, however, did not benefit from any such batching.
- The maximal latency for all websocket measurements was significantly better than for long-polling, with all websocket messages being delivered in < 200ms, and the average latency was < 1ms.
- The websocket throughput increased with connections, which probably indicates that at low numbers of connections we were not generating a maximal load.
A Long Run
The throughput tests above need to be redone on a real network and with longer runs. However, I did do one long run (3 hours) of 1,000,013,657 messages at 93,856/sec. The results suggest no immediate problems with long runs. Neither the client nor the server needed to do an old generation collection, and all young generation collections took on average only 12ms.
The output from the client is below:
Statistics Started at Fri Aug 19 15:44:48 EST 2011
Operative System: Linux 2.6.38-10-generic amd64
JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
Processors: 8
System Memory: 55.35461% used of 7.747429 GiB
Used Heap Size: 215.7406 MiB
Max Heap Size: 1984.0 MiB
Young Generation Heap Size: 448.0 MiB
- - - - - - - - - - - - - - - - - - - -
Testing 1000 clients in 100 rooms, 10 rooms/client
Sending 1000000 batches of 10x50 bytes messages every 10000 µs
- - - - - - - - - - - - - - - - - - - -
Statistics Ended at Fri Aug 19 18:42:23 EST 2011
Elapsed time: 10654717 ms
Time in JIT compilation: 57 ms
Time in Young Generation GC: 118473 ms (8354 collections)
Time in Old Generation GC: 0 ms (0 collections)
Garbage Generated in Young Generation: 2576746.8 MiB
Garbage Generated in Survivor Generation: 336.53125 MiB
Garbage Generated in Old Generation: 532.35156 MiB
Average CPU Load: 433.23907/800
----------------------------------------
Outgoing: Elapsed = 10654716 ms | Rate = 938 msg/s = 93 req/s = 0.4 Mbs
All messages arrived 1000013657/1000013657
Messages - Success/Expected = 1000013657/1000013657
Incoming - Elapsed = 10654716 ms | Rate = 93856 msg/s = 90101 resp/s(96.00%) = 35.8 Mbs
Thread Pool - Queue Max = 972 | Latency avg/max = 3/62 ms
Messages - Wall Latency Min/Ave/Max = 0/8/135 ms
Note that the client was using 433/800 of the available CPU, while you can see that the server (below) was using only 170/800. This suggests that the server has plenty of spare capacity if it were given the entire machine.
Statistics Started at Fri Aug 19 15:44:47 EST 2011
Operative System: Linux 2.6.38-10-generic amd64
JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
Processors: 8
System Memory: 55.27913% used of 7.747429 GiB
Used Heap Size: 82.58406 MiB
Max Heap Size: 2016.0 MiB
Young Generation Heap Size: 224.0 MiB
- - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - -
Statistics Ended at Fri Aug 19 18:42:23 EST 2011
Elapsed time: 10655706 ms
Time in JIT compilation: 187 ms
Time in Young Generation GC: 140973 ms (12073 collections)
Time in Old Generation GC: 0 ms (0 collections)
Garbage Generated in Young Generation: 1652646.0 MiB
Garbage Generated in Survivor Generation: 767.625 MiB
Garbage Generated in Old Generation: 1472.6484 MiB
Average CPU Load: 170.20532/800
Conclusion
These results are preliminary, but excellent nonetheless! The final releases of jetty 7.5.0 and cometd 2.4.0 will be out within a week or two, and we will be working to bring you some more rigorous benchmarks with those releases.
