SPDY is Google’s protocol intended to improve the user experience on the web by reducing the latency of web pages, sometimes by up to a factor of 3. Yes, three times faster.
How does SPDY accomplish that ?
SPDY reduces round-trips to the server, reduces HTTP verbosity by compressing HTTP headers, improves the utilization of the TCP connection, multiplexes requests onto a single TCP connection (instead of using a limited number of connections, each serving only one request at a time), and allows the server to push secondary resources (such as CSS, images, scripts, etc.) associated with a primary resource (typically a web page) without incurring additional round-trips.
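Header compression alone is a significant win, because HTTP headers are highly repetitive text. A minimal sketch of the idea using plain java.util.zip (the header block below is illustrative; SPDY's actual header compression additionally uses a shared zlib dictionary and a persistent compression context across requests):

```java
import java.util.zip.Deflater;

// Sketch: zlib-style compression applied to a block of HTTP headers.
public class HeaderCompression {
    // Returns the number of bytes produced by deflating the input.
    public static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buffer = new byte[input.length * 2 + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buffer);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Typical repetitive HTTP request headers compress very well.
        String headers = "Host: example.com\r\n" +
                "User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n" +
                "Accept: text/html,application/xhtml+xml\r\n" +
                "Accept-Encoding: gzip,deflate\r\n" +
                "Accept-Language: en-US,en;q=0.8\r\n" +
                "Cookie: session=0123456789abcdef\r\n";
        byte[] raw = headers.getBytes();
        System.out.println(raw.length + " -> " + compressedSize(raw) + " bytes");
    }
}
```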
Now, the really cool thing is that Jetty has an implementation of SPDY (see the documentation) in the newly released 7.6.2 and 8.1.2 releases.
Your web applications can immediately and transparently benefit from many of the SPDY improvements without changes, because Jetty does the heavy lifting for you under the covers.
With Chromium/Chrome already supporting SPDY, and Firefox 11 supporting it as well (although it needs to be enabled, see how here), more than 50% of web browsers will soon support it, so servers need to catch up, and that is where Jetty shines.
The Jetty project continues to foster innovation by supporting emerging web protocols: first WebSocket and now SPDY.
A corollary project that came out from the SPDY implementation is a pure Java implementation of the Next Protocol Negotiation (NPN) TLS Extension, also available in Jetty 7.6.2 and 8.1.2.
To prove that this is no fluke, we have updated Webtide’s website with Jetty’s SPDY implementation, and now the website can be served via SPDY, if the browser supports it.
We encourage early adopters to test Jetty’s SPDY and give us feedback on jetty-dev@eclipse.org.
Enjoy !
Author: Simone Bordet
-
SPDY support in Jetty
-
WebSocket over SSL in Jetty
Jetty has always been in the front line on the implementation of the WebSocket Protocol.
The CometD project leverages the Jetty WebSocket implementation to its maximum, to achieve great scalability and minimal latencies.
Until now, however, support for WebSocket over SSL was lacking in Jetty.
In Jetty 7.6.x a redesign of the connection layer allows for more pluggability of SSL encryption/decryption and of connection upgrade (from HTTP to WebSocket), and these changes combined made it very easy to implement WebSocket over SSL.
These changes are now merged into Jetty’s master branch, and will be shipped with the next version of Jetty.
Developers will now be able to use the wss:// protocol in web pages in conjunction with Jetty on the server side, or just rely on the CometD framework to forget about transport details: CometD will always pick the fastest, most reliable, and now also confidential transport available, so you can concentrate on writing application logic rather than transport logic.
WebSocket over SSL is of course also available in the Java WebSocket client provided by Jetty.
Enjoy ! -
CometD, Dojo and XDomainRequest
The CometD project uses various Comet techniques to implement a web messaging bus.
You can find an introduction to CometD here.
Web applications often need to access resources residing on different servers, making the request to access those resources a cross origin request and therefore subject to the same origin policy.
Fortunately, all modern browsers implement the Cross Origin Resource Sharing (CORS) specification, and with the support of Jetty’s Cross Origin Filter, it’s a breeze to write applications that allow cross origin resource sharing.
That is, all modern browsers apart from Internet Explorer 8 and 9.
Without CORS support, CometD falls back to another Comet technique known as JSONP.
While JSONP is much less efficient than a CORS request, it guarantees the CometD functionality, but it’s 2011 and JSONP should be a relic of the past.
Microsoft’s browsers have another JavaScript object that allows making cross origin requests: XDomainRequest.
Unfortunately this object is non-standard, and it is not, in general, supported by the JavaScript toolkits on which CometD relies for the actual communication with the server.
I cannot really blame toolkits authors for this lack of support.
However, I recently found a way to make XDomainRequest work with CometD 2.4.0 and the Dojo toolkit library.
The solution (see this blog post for reference) is the following:
Add this code to your JavaScript application:

```javascript
dojo.require("dojox.io.xhrPlugins");
...
dojox.io.xhrPlugins.addCrossSiteXhr("http://<crossOriginHost>:<crossOriginPort>");
```

What remains is to configure CometD with the crossOriginHost:

```javascript
dojox.cometd.configure({ url: "http://<crossOriginHost>:<crossOriginPort>" });
```

The last glitch is that XDomainRequest does not seem to allow sending the Content-Type HTTP header, so all of the above will only work in CometD 2.4.0.RC1 or greater, where this improvement has been made.
I do not particularly recommend this hack, but sometimes it’s the only way to support cross origin requests for the obsolete Internet Explorers. -
CometD and Opera
The Opera browser is working well with the CometD JavaScript library.
However, recently a problem was reported by the BlastChat guys: with Opera, long-polling requests were strangely disconnecting and immediately reconnecting. This problem was only happening if the long poll request was held by the CometD server for the whole duration of the long-polling timeout.
Reducing the long-polling timeout from the default 30 seconds to 20 seconds made the problem disappear.
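For reference, the long poll timeout is the timeout init-param of the CometD servlet, expressed in milliseconds; a minimal web.xml sketch (the servlet name is illustrative, and double-check the parameter against your CometD version):

```xml
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometdServlet</servlet-class>
    <init-param>
        <param-name>timeout</param-name>
        <param-value>20000</param-value>
    </init-param>
</servlet>
```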
This made me think that some other entity had a 30 second timeout, and was killing the request just before the CometD server had the chance to respond to it.
Such entities may be front-end web servers (such as when Apache Httpd is deployed in front of the CometD server), as well as firewalls or other network components.
But in this case, all other major browsers were working fine, only Opera was failing.
So I typed about:config in Opera’s address bar to access Opera’s configuration options, and filtered with the keyword timeout in the “Quick find” text field.
The second entry is “HTTP Loading Delayed Timeout”, and it is set to 30 seconds.
Increasing that value to 45 seconds made the problem disappear.
In my opinion, that value is a bit too aggressive, especially these days, when Comet techniques are commonly used and WebSocket is not yet widely deployed.
The simple workaround is to set the CometD long poll timeout to 20-25 seconds as explained here, but it would be great if Opera’s default was set to a bigger value. -
CometD 2.4.0 WebSocket Benchmarks
Slightly more than one year has passed since the last CometD 2 benchmarks, and more than three years since the CometD 1 benchmark. During this year we have done a lot of work on CometD, both by adding features and by continuously improving performance and stability to make it faster and more scalable.
With the upcoming CometD 2.4.0 release, one of the biggest changes is the implementation of a WebSocket transport for both the Java client and the Java server.
The WebSocket protocol is being finalized at the IETF, and major browsers all support various draft versions of the protocol (Jetty supports all draft versions), so while WebSocket adoption is slowly picking up, it is interesting to compare how WebSocket behaves with respect to HTTP in the typical scenarios that use CometD.
We conducted several benchmarks using the CometD load tools on Amazon EC2 instances.
HTTP Benchmark Results
Below you can find the benchmark result graph when using the CometD long-polling transport, based on plain HTTP.

Unlike the previous benchmark, where we reported the average latency, this time we report the median latency, which is a better indicator of the latencies seen by the clients.
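For clarity, the median is just the 50th percentile of the latency samples; a minimal sketch of the nearest-rank computation (the actual load tool may compute percentiles differently):

```java
import java.util.Arrays;

// Sketch: nearest-rank percentile over latency samples.
// median = percentile(samples, 50), tail = percentile(samples, 99).
public class Percentiles {
    public static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        // Nearest-rank: smallest value such that at least p% of samples are <= it.
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        long[] latencies = {2, 3, 3, 4, 5, 50, 150, 350, 1500};
        System.out.println("median = " + percentile(latencies, 50) + " ms");
        System.out.println("99th   = " + percentile(latencies, 99) + " ms");
    }
}
```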
Comparison with the previous benchmark would be unfair, since the hosts were different (both in number and in computing power), and the JVM was also different.
As you can see from the graph above, the median latency is pretty much the same no matter the number of clients, with the exception of 50k clients at 50k messages/s.
The median latency stays well under 200 ms even at more than 50k messages/s, and it is in the range of 2-4 ms until 10k messages/s, and around 50 ms for 20k messages/s, even for 50k clients.
The result for 50k clients and 50k messages/s is a bit strange, since the hosts (both server and clients) had plenty of CPU available and plenty of threads available (which rules out locking contention issues in the code, which would have bumped up thread usage).
Could it be possible that at that message rate we hit some limit of the EC2 platform ? It might be possible, and this blog post confirms that there are indeed limits in the virtualization of the network interfaces between host and guest. I have heard from other people who have performed benchmarks on EC2 that they also hit limits very close to what the blog post above describes.
In any case, one server with 20k clients serving 50k messages/s with 150 ms median latency is a very good result.
For completeness, the 99th percentile latency is around 350 ms for 20k and 50k clients at 20k messages/s, around 1500 ms for 20k clients at 50k messages/s, and much less (quite close to the median latency) for the other results.
WebSocket Benchmark Results
The results for the same benchmarks using the WebSocket transport were quite impressive, and you can see them below.

Note that this graph uses a totally different scale for latencies and number of clients.
Whereas for HTTP we had an 800 ms maximum latency (on the Y axis), for WebSocket we have 6 ms (yes, you read that right); and whereas for HTTP we topped out at 50k clients per server, here we could go up to 200k.
We did not merge the two graphs into a single one to avoid the WebSocket trend lines being collapsed onto the X axis.
With HTTP, having more than 50k clients on the server was troublesome at any message rate, but with WebSocket 200k clients were stable up to 20k messages/s. Beyond that, we probably hit EC2 limits again, and the results were unstable: some runs could complete successfully, others could not.
- The median latencies, for almost any number of clients and any message rate, are below 10 ms, which is quite impressive.
- The 99th percentile latency is around 300 ms for 200k clients at 20k messages/s, and around 200 ms for 50k clients at 50k messages/s.
We have also conducted some benchmarks varying the payload size from the default of 50 bytes to 500 bytes and to 2000 bytes, but the results we obtained with different payload sizes were very similar, so we can say that payload size has very little impact (if any) on latencies in this benchmark configuration.
We have also monitored memory consumption in “idle” state (that is, with clients connected and sending meta connect requests every 30 seconds, but not sending messages):
- HTTP: 50k clients occupy around 2.1 GiB
- WebSocket: 50k clients occupy around 1.2 GiB, and 200k clients occupy 3.2 GiB.
The benefits of WebSocket being a lighter-weight protocol than HTTP are clear in all cases.
Conclusions
The conclusions are:
- The work the CometD project has done to improve performance and scalability was worth the effort, and CometD offers a truly scalable solution for server-side event-driven web applications, for both HTTP and WebSocket.
- As the WebSocket protocol gains adoption, CometD can leverage the new protocol without any change required to applications; they will just perform faster.
- Server-to-server CometD communication can now be extremely fast by using WebSocket. We have already updated the CometD scalability cluster Oort to take advantage of these enhancements.
Appendix: Benchmark Details
The server was one EC2 instance of type “m2.4xlarge” (67 GiB RAM, 8 cores Intel(R) Xeon(R) X5550 @2.67GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
The clients were 10 EC2 instances of type “c1.xlarge” (7 GiB RAM, 8 cores Intel Xeon E5410 @2.33GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
The JVM used was Oracle’s Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode) version 1.7.0 for both clients and server.
The server was started with the following options:

```
-Xmx32g -Xms32g -Xmn16g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA
```

while the clients were started with the following options:

```
-Xmx6g -Xms6g -Xmn3g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA
```
The OS was tuned for allowing a larger number of file descriptors, as described here.
-
CometD 2.4.0.beta1 Released
CometD 2.4.0.beta1 has been released.
This is a major release that brings in a few new Java APIs (see this issue): client-side channels can now be released to save memory. It also brings an API deprecation (see this issue): client-side publish() should no longer specify the message id.
On the WebSocket front, the WebSocket transports have been overhauled and brought up to date with the latest WebSocket drafts (currently Jetty implements up to draft 13, while browsers are still a bit behind, on draft 7/8 or so), and have been made more scalable in both threading and memory usage.
Following these changes, BayeuxClient has been updated to negotiate transports with the server, and Oort has also been updated to use WebSocket by default for server-to-server communication, making server-to-server communication more efficient and with less latency.
WebSocket is now supported on Firefox 6 through the use of the Firefox-specific MozWebSocket object in the JavaScript library.
We have performed some preliminary benchmarks with WebSocket; they look really promising, although they were done before the latest changes to the CometD WebSocket transports.
We plan to do more accurate benchmarking in the coming days/weeks.
The other major change is the pluggability of the JSON library to handle JSON generation and parsing (see this issue).
CometD has long been based on Jetty’s JSON library, but now Jackson can also be used (the default will still be Jetty’s, however, to avoid breaking deployed applications that use the Jetty JSON classes).
Jackson proved to be faster than Jetty’s in both parsing and generation, and will likely become the default in a few releases, to allow gradual migration of applications that use the Jetty JSON classes directly.
Applications should be written independently of the JSON library used.
Of course Jackson also brings in its powerful configurability and annotation processing so that your custom classes can be de/serialized from/to JSON.
Here you can find the release notes.
Download it, use it, and report back, any feedback is important before the final 2.4.0 release. -
CometD JSON library pluggability
It all started when my colleague Joakim showed me the results of some JSON libraries benchmarks he was doing, which showed Jackson to be the clear winner among many libraries.
So I decided that for the upcoming CometD 2.4.0 release it would be good to make CometD independent of the JSON library used, so that Jackson or other libraries could be plugged in.
Historically, CometD made use of Jetty’s JSON library, and this is still the default if no other library is configured.
Running a CometD specific benchmark using Jetty’s JSON library and Jackson (see this test case) shows, on my laptop, this sample output:

```
Parsing:
...
jackson context iteration 1: 946 ms
jackson context iteration 2: 949 ms
jackson context iteration 3: 944 ms
jackson context iteration 4: 922 ms
jetty context iteration 1: 634 ms
jetty context iteration 2: 634 ms
jetty context iteration 3: 636 ms
jetty context iteration 4: 639 ms

Generating:
...
jackson context iteration 1: 548 ms
jackson context iteration 2: 549 ms
jackson context iteration 3: 552 ms
jackson context iteration 4: 561 ms
jetty context iteration 1: 788 ms
jetty context iteration 2: 796 ms
jetty context iteration 3: 798 ms
jetty context iteration 4: 805 ms
```
Jackson is roughly 45% slower in parsing and 45% faster in generating, so not bad for Jetty’s JSON compared to the best in class.
Apart from efficiency, Jackson certainly has more features than Jetty’s JSON library with respect to serializing/deserializing custom classes, so having a pluggable JSON library in CometD is only better for end users, who can now choose the solution that fits them best.
Unfortunately, I could not integrate the Gson library, which does not seem to have the capability of deserializing arbitrary JSON into java.util.Map object graphs, like Jetty’s JSON and Jackson are able to do in one line of code.
If you have insights on how to make Gson work, I’d be glad to hear them.
The documentation on how to configure CometD’s JSON library can be found here.
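As a sketch of what that configuration looks like, assuming the jsonContext init-param and the Jackson context class name (both are assumptions here; verify them against the documentation for your CometD version):

```xml
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometdServlet</servlet-class>
    <init-param>
        <param-name>jsonContext</param-name>
        <param-value>org.cometd.server.JacksonJSONContextServer</param-value>
    </init-param>
</servlet>
```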
UPDATE
After a suggestion from Tatu Saloranta of Jackson, the Jackson parsing is now faster than Jetty’s JSON library by roughly 20%:

```
...
jackson context iteration 1: 555 ms
jackson context iteration 2: 506 ms
jackson context iteration 3: 506 ms
jackson context iteration 4: 532 ms
jetty context iteration 1: 632 ms
jetty context iteration 2: 637 ms
jetty context iteration 3: 639 ms
jetty context iteration 4: 635 ms
```
-
CometD Message Flow Control with Listeners
In the last blog entry I talked about message flow control using CometD’s lazy channels.
Now I want to show how it is possible to achieve a similar flow control using specialized listeners that allow manipulating the ServerSession message queue.
The ServerSession message queue is a data structure that is accessed concurrently when messages are published and delivered to clients, so it needs appropriate synchronization.
In order to simplify these synchronization requirements, CometD allows you to add DeQueueListeners to ServerSessions, with the guarantee that these listeners will be called with the appropriate locks held, allowing user code to freely modify the queue’s content.
Below you can find an example of a DeQueueListener that keeps only the first message of a series of messages published to the same channel within a tolerance period of 1000 ms, and removes the others (it relies on the timestamp extension):

```java
String channelName = "/stock/GOOG";
long tolerance = 1000;
ServerSession session = ...;
session.addListener(new ServerSession.DeQueueListener()
{
    public void deQueue(ServerSession session, Queue<ServerMessage> queue)
    {
        long lastTimeStamp = 0;
        for (Iterator<ServerMessage> iterator = queue.iterator(); iterator.hasNext();)
        {
            ServerMessage message = iterator.next();
            if (channelName.equals(message.getChannel()))
            {
                long timeStamp = Long.parseLong(message.get(Message.TIMESTAMP_FIELD).toString());
                if (timeStamp <= lastTimeStamp + tolerance)
                {
                    System.err.println("removed " + message);
                    iterator.remove();
                }
                else
                {
                    System.err.println("kept " + message);
                    lastTimeStamp = timeStamp;
                }
            }
        }
    }
});
```

Other possibilities include keeping the last message (instead of the first), coalescing the message fields following a particular logic, or even clearing the queue completely.
DeQueueListeners are called when CometD is about to deliver messages to the client, so clearing the queue completely results in an empty response being sent to the client.
This is different from the behavior of lazy channels, which delay the message delivery until a configurable timeout expires.
However, lazy channels do not alter the number of messages being sent, while DeQueueListeners can manipulate the message queue.
Therefore, CometD message flow control is often best accomplished by using both mechanisms: lazy channels to delay message delivery, and DeQueueListeners to reduce/coalesce the number of messages sent. -
CometD Message Flow Control with Lazy Channels
In the CometD introduction post, I explained how the CometD project provides a solution for writing low-latency server-side event-driven web applications.
Examples of this kind of application are financial applications that provide stock quote price updates, online games, or position tracking systems for fast moving objects (think of a motorbike on a circuit).
These applications have in common the fact that they generate a high rate of server-side events, say on the order of 10 events per second.
With such an event rate, most of the time you start wondering whether it is really appropriate to send every event to clients (and therefore 10 events/s), or whether it is better to save bandwidth and computing resources and send events to clients at a lower rate.
For example, even if the stock quote price changes 10 times a second, it will probably be enough to deliver changes once a second to a web application designed to be used by humans: I would be surprised if a person could make any use (or even see and remember) of a stock price that was updated two tenths of a second ago (and that has in the meanwhile already changed 2 or 3 times). (Disclaimer: I am not involved in financial applications, I am just making a hypothesis here for the sake of explaining the concept.)
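To make the idea concrete, here is a minimal plain-Java sketch of coalescing fast updates so that only the latest value is delivered at the lower rate (class and method names are illustrative, and this is not CometD's own mechanism):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: fast updates overwrite a "latest value" slot; a slower
// consumer drains the slot at the delivery rate.
public class QuoteThrottle {
    private final AtomicReference<String> latest = new AtomicReference<>();

    // Called at the fast event rate (e.g. 10 times/s).
    public void onQuote(String quote) {
        latest.set(quote);
    }

    // Called at the delivery rate (e.g. once/s); returns the most
    // recent quote and clears the slot, or null if nothing changed.
    public String flush() {
        return latest.getAndSet(null);
    }

    public static void main(String[] args) {
        QuoteThrottle throttle = new QuoteThrottle();
        for (int i = 1; i <= 10; i++) {
            throttle.onQuote("GOOG=" + (600 + i));
        }
        // Only the latest of the 10 updates is delivered.
        System.out.println(throttle.flush()); // prints GOOG=610
    }
}
```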
The CometD project provides lazy channels to implement this kind of message flow control (it also provides other message flow control means, of which I’ll speak in a future entry).
A channel can be marked as lazy during its initialization on the server side:

```java
BayeuxServer bayeux = ...;
bayeux.createIfAbsent("/stock/GOOG", new ConfigurableServerChannel.Initializer()
{
    public void configureChannel(ConfigurableServerChannel channel)
    {
        channel.setLazy(true);
    }
});
```

Any message sent to that channel will be marked as a lazy message, and will be delivered lazily: either when a timeout (the max lazy timeout) expires, or when the long poll returns, whichever comes first.
It is possible to configure the duration of the max lazy timeout, for example to be 1 second, in web.xml:

```xml
...
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometdServlet</servlet-class>
    ...
    <init-param>
        <param-name>maxLazyTimeout</param-name>
        <param-value>1000</param-value>
    </init-param>
</servlet>
...
```

With this configuration, lazy channels will have a max lazy timeout of 1000 ms, and messages published to a lazy channel will be delivered in a batch once a second.
Assuming, for example, that you have a steady rate of 8 messages per second arriving to server-side that update the GOOG stock quote, you will be delivering a batch of 8 messages to clients every second, instead of delivering 1 message every 125 ms.
Lazy channels do not immediately reduce bandwidth consumption (since no messages are discarded), but combined with a GZip filter that compresses the output they allow bandwidth savings by compressing more messages per delivery (in general, it is better to compress one larger text than many small ones).
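The intuition can be verified with a small experiment using java.util.zip: deflating one batch of similar messages yields far fewer bytes than deflating each message separately (the message content below is illustrative, not an actual CometD frame):

```java
import java.util.zip.Deflater;

// Sketch: compressing one batch of 8 messages vs each message alone.
public class BatchCompression {
    // Returns the number of bytes produced by deflating the input.
    public static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buffer = new byte[input.length + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buffer);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        String message = "{\"channel\":\"/stock/GOOG\",\"data\":{\"price\":605.23}}";
        int individually = 0;
        StringBuilder batch = new StringBuilder();
        for (int i = 0; i < 8; i++) {
            individually += deflatedSize(message.getBytes());
            batch.append(message);
        }
        int batched = deflatedSize(batch.toString().getBytes());
        // The single batch compresses much better than 8 separate deflates.
        System.out.println(individually + " vs " + batched + " bytes");
    }
}
```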
You can browse the CometD documentation for more information, look at the online javadocs, post to the mailing list or pop up in the IRC channel #cometd on irc.freenode.org.