Blog

  • Jetty 9 – it's coming!

    Development on Jetty-9 has been chugging along for quite some time now, and it looks like we’ll start releasing milestones around the end of September.  This is exciting because we have a lot of cool improvements and features coming, which I’ll leave for others to blog about in detail over the next couple of months as things come closer to release.
    What I want to highlight in this blog post are our plans moving forward, version by version, with a bit of context where appropriate.

    • Jetty-9 will require java 1.7

    While Oracle has relented a couple of times now on the EOL date for Java 1.6, it looks like support will be over within the next few months. Since native support for SPDY (more below) is one of the really big deals about Jetty-9, and SPDY requires Java 1.7, that is going to be the requirement.

    • Jetty-9 will be servlet-api 3.0

    We had planned on Jetty-9 being servlet-api 3.1, but since that API release doesn’t appear to be coming anytime soon, the current plan is to have Jetty-9 support servlet 3.0, and once servlet-api 3.1 is released we’ll make a minor release update of Jetty-9 to support it.  Most of the work for supporting servlet-api 3.1 already exists in the current versions of Jetty anyway, so it shouldn’t be a huge deal.

    • Jetty-7 and Jetty-8 will still be supported as ‘mature’ production releases

    Jetty-9 has some extremely important changes in the IO layers that make supporting it moving forward far easier than Jetty 7 and 8.  For much of the life of Java 1.6 and Java 1.7 there have been annoying ‘issues’ in the JVM NIO implementation, onto which we (well, Greg, to be honest) have piled workaround after workaround, until some of those workarounds would start to act up once the underlying JVM issues were resolved.  Most of this has been addressed in the jetty-7.6.x and jetty-8.1.x releases, assuming the latest JVMs are being used (basically, make sure you avoid anything in the 1.6u20-29 range).  Anyway, jetty-9 contains a heavily refactored IO layer which should make it easier to respond to these situations in the future, should they arise, in a more…well…deterministic fashion. 🙂

    • Jetty-9 IO is a major overhaul

    This deserves its own blog entry, which it will get eventually I am sure, but it can’t be overstated how much the inner workings of Jetty have evolved with Jetty-9. Since its inception Jetty has always been a very modular, component-oriented HTTP server. The key word being ‘HTTP’ server, and with Jetty-9 that is changing. Jetty-9 has been rearchitected from the IO layer up to directly support the separation of wire protocol from semantics, so it is now possible to support HTTP over HTTP, HTTP over SPDY, WebSocket over SPDY, multiplexing, etc., with all protocols being first-class citizens and no need to mock out inappropriate interfaces. While these are mostly internal changes, they ripple out to give many benefits to users in the form of better performance, smaller software, and simpler and more appropriate configuration. For example, instead of having multiple different connector types, each with unique SSL and/or SPDY variants, there is now a single connector into which various connection factories are configured to support SSL, HTTP, SPDY, WebSocket, etc. This means that moving forward Jetty will be able to adapt easily and quickly to new protocols as they come onto the scene.
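    As a toy illustration of that pattern (the class names below are invented for this sketch and are not Jetty-9’s real API), a single connector can dispatch to pluggable connection factories keyed by the negotiated protocol:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the "one connector, many connection factories" idea.
    public class ConnectorSketch {
        interface ConnectionFactory {
            String newConnection(String peer);
        }

        static class Connector {
            private final Map<String, ConnectionFactory> factories = new HashMap<>();

            void addConnectionFactory(String protocol, ConnectionFactory factory) {
                factories.put(protocol, factory);
            }

            // Dispatch on the protocol negotiated for the connection (e.g. via NPN).
            String accept(String negotiatedProtocol, String peer) {
                ConnectionFactory factory = factories.get(negotiatedProtocol);
                if (factory == null)
                    throw new IllegalArgumentException("No factory for " + negotiatedProtocol);
                return factory.newConnection(peer);
            }
        }

        static String demo() {
            Connector connector = new Connector();
            connector.addConnectionFactory("http/1.1", peer -> "HTTP/1.1 connection to " + peer);
            connector.addConnectionFactory("spdy/3", peer -> "SPDY connection to " + peer);
            return connector.accept("spdy/3", "client-1");
        }

        public static void main(String[] args) {
            System.out.println(demo()); // SPDY connection to client-1
        }
    }
    ```

    The point is that SSL, HTTP, SPDY and WebSocket all plug in at the same level, rather than one protocol being layered behind a facade of another.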

    • Jetty-6…for the love of god, please update

    Jetty-5 used to hold the title of ‘venerable’, but that title is really shifting to Jetty-6 at this point.  I am constantly amazed to see folks on places like Stack Overflow starting a project using Jetty-6.  The Linux distributions really need to update, so if you work on those and need help, please ping us.  Many other projects that embed Jetty really need to update as well; looking at you, Google App Engine and GWT!  If you are a company and would like help updating your Jetty version, or are interested in taking advantage of the newer protocols, feel free to contact Webtide and we can help make it easier.  If you’re an open source project, reach out to us on the mailing lists and we can assist there as much as time allows.  But please…add migrating to 7, 8 or 9 to your TODO list!

    • No more split production versions

    One of our more confusing situations has been releasing both Jetty 7 and Jetty 8 as stable production versions.  The reasons for doing this were many and varied, but with servlet 3.0 having been ‘live’ for a while now, we are going to shift back to a single supported production version moving forward.  The Servlet API is backwards compatible anyway, so we’ll hopefully be reducing some of the confusion about which version of Jetty to use.

    • Documentation

    Finally, starting with Jetty-9 our goal will be to release versioned documentation (generated with DocBook) to a common URL under the eclipse.org domain, as well as bundling the HTML and PDF to fit the new plugin architecture we are working with.  So the days of floundering around for documentation on Jetty should be coming to an end soon.
    Lots of exciting things coming in Jetty-9 that you’ll hear about in the coming weeks! Feel free to follow @jmcconnell on twitter for release updates!

  • HTTP/2.0 Expressions of interest

    The IETF HTTPbis Working Group recently called for expressions of interest in the development of the HTTP/2.0 protocol, with SPDY being one of the candidates to use as a basis.

    As an HTTP server and an early implementer of the SPDY protocol, the Jetty project certainly has an interest in HTTP/2.0, and this blog contains the text of our response below.  However, it is also very interesting to read all of the expressions of interest received from industry heavy hitters.

    Reading through these and the thousands of replies, it is clear that there is significant interest and some momentum towards replacing HTTP/1.1, but that the solution is not quite as simple as s/SPDY/HTTP/2.0/.

    There is a lot of heat around the suggestion of mandatory encryption (even though no proposal actually has mandatory encryption), and it looks like there is a big divide in the community.

    I also think that many of the concerns of the intermediaries (F5, HAProxy, Squid) are not being well addressed.  This is a mistake often made in previous protocol iterations, and we would be well served by taking the time to listen and understand their concerns.  Even simple features such as easy access to host headers for quick routing may have significant benefits.

    The Jetty Expression of Interest in HTTP/2.0

    (see also the original post and responses)

    I’m the project leader of the Jetty project (http://eclipse.org/jetty) and am making this initial response on behalf of the project and not for Eclipse as a whole (although we will solicit further feedback from other projects within Eclipse). I work for Webtide|Intalio, who sell support services around Jetty.

    Jetty is an open source server and client written in Java that supports the Servlet 3.0 API, HTTP/1.1, WebSocket and SPDY v3. We have a reasonable market share of Java servers (>10%, <30%) and are deployed on everything from tiny embedded servers to very large deployments with over 100k connections per server.

    The Jetty project is very interested in the development and standardisation of HTTP/2.0 and intends to be a contributor to the WG and an early implementer. We are well acquainted with the limitations of HTTP/1.1 and have a desire to see the problems of pipelining and multiple connections (>2) resolved.

    The Jetty project’s SPDY effort is led by Simone Bordet, and it has implemented SPDY v3 with flow control and push. This is available in the main releases of our Jetty-7 and Jetty-8 servers (we also have a Java SPDY client). The project has also provided an extension to the JVM to implement the TLS NPN extension needed by SPDY, and we understand that several other Java SPDY implementations are using this.

    We chose to implement SPDY rather than any other HTTP/2.0 proposal mainly because of the support available in deployed browsers, so that we can get real-world feedback. However, we were also encouraged in our adoption of SPDY by the open, methodical and congenial approach of the SPDY project at Google (not always our experience with projects at Google or elsewhere).

    We definitely see the potential of SPDY and it is already being used by some sites. However we still lack the feedback from widespread deployment (it is early days) or from large deployments. We are actively seeking significant sites who are interested in working with us to deploy SPDY.

    There are several key features of SPDY that we see as promising:

    Header compression greatly improves data density. In our use of Ajax and Comet over HTTP/1.1 we have often hit scalability limits due to network saturation, with very poor data density of small messages in large HTTP framing. While WebSocket is doing a lot to resolve this, we are hoping that SPDY will provide improvement without the need to redevelop applications.
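    To make the data-density problem concrete (the byte counts below are illustrative assumptions, not measurements), consider a small Comet message wrapped in typical HTTP/1.1 framing:

    ```java
    public class FramingDensitySketch {
        public static void main(String[] args) {
            // Illustrative sizes: a 50-byte application message carried with
            // ~300 bytes of HTTP/1.1 headers (real browser traffic is often
            // considerably more verbose than this).
            int payload = 50;
            int headers = 300;
            double density = 100.0 * payload / (payload + headers);
            System.out.printf("payload density: %.1f%%%n", density);
        }
    }
    ```

    At those sizes only about a seventh of each message on the wire is application data, which is why compressing the repetitive headers pays off so quickly for chatty, small-message traffic.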

    Multiplexing of multiple streams over a single connection is also a good development. Reducing the number of connections that the server must handle is key to scalability, especially as modern HTTP browsers are now exceeding the 2-connection limit. The ability to send out-of-order responses is good, and we also suspect that receiving messages from a single client over a single connection may help reduce some of the non-deterministic behaviours that can develop as multiple connections from the same client set cookies or update session state. It will also avoid the issue of load balancers directing connections from the same client to different nodes in a cluster. We recognise the additional cost of multiplexing (extra copies and flow control), but currently believe that it is worth the effort.

    We see the potential of server push for content, but are struggling with the lack of metadata available to know what to push and when. We are currently working on strategies that use the Referer header to identify associated resources that can be pushed together. We also check for If-Modified-Since headers as an indication that associated content may already be cached and thus a push is not required. We see the challenge of push as being not the protocol to send the content, but working out the standards for metadata, cache control, etc., so that we know what to push and when.

    We have not yet implemented WebSocket over SPDY, but do intend to do so if it is supported by the browsers. We see a lot of similarities in the base framing of these two protocols and would hope that eventually only one would need to be well supported.

    We are a bit ambivalent about the use of NPN and TLS-only connections. There is a question to be asked about whether we should be sending any web content in the clear, and how intermediaries should (or should not) be able to filter/inspect/mutate content. However, I personally feel that this is essentially a non-technical issue and we should not use a protocol to push any particular agenda. The RTT argument for not supporting in-the-clear connections is weak, as there are several easy technical solutions available. Furthermore, the lack of support for NPN is a barrier to adoption (albeit one that we have broken down for some JVMs at least). Debugging over TLS is and will always be difficult. We would like HTTP/2.0 to support standardised non-encrypted connections (at least from TLS offload to server). If a higher-level debate determines that web deployments only accept TLS connections, then we are fine with that non-technical determination.

    I repeat that we selected SPDY to implement because of its availability in the browsers, and not as the result of a technical review against other alternatives. However, we are generally pleased with the direction and results obtained so far and look forward to gaining more experience and feedback as it is more widely deployed.

    However, we do recognise that much of the “goodness” of SPDY can be provided by the other proposals. I’m particularly interested in the HTTP Speed+Mobility proposal’s use of WebSocket as its framing layer (as that addresses the concern I raised above). But we currently do not have any plans to implement the alternatives, mainly because of resource limitations and lack of browser support. So currently we are advocates of the SPDY approach in the Starship Troopers sense: i.e. we support SPDY until it is dead or we find something better. Of course, Jetty is an open platform and we would really welcome and assist any contributors who would like to build on our WebSocket support to implement HTTP/SM.

    We believe that there is high demand for a significant improvement over HTTP/1.1, that the environment is ripe for a rapid rollout of an alternative/improved protocol, and that HTTP/1.1 can quickly be replaced. Because of this, we have begun development of Jetty-9, which replaces the HTTP-protocol-centric architecture of Jetty-7/8 with something much better suited to multiple protocols and multiplexed HTTP semantics. SPDY, WebSocket and HTTP/1.1 are true peers in Jetty-9, rather than the newer protocols being implemented as HTTP facades. We believe that Jetty-9 will be the ideal platform on which to develop and deploy HTTP/2.0, and we invite anybody with an interest to come contribute to the project.

  • Fully functional SPDY-Proxy

    We keep pushing our SPDY implementation and with the upcoming Jetty release we provide a fully functional SPDY proxy server out of the box.
    Simply by configuration, you can set up Jetty to provide a SPDY connector where clients can connect via SPDY and be transparently proxied to a target host that speaks SPDY or another web protocol.
    Here are some details about the internals. The implementation is modular and can easily be extended. There’s an HTTPSPDYProxyConnector that accepts incoming requests and forwards them to a ProxyEngineSelector. The ProxyEngineSelector forwards the request to the appropriate ProxyEngine for the given target host’s protocol.
    Which ProxyEngine to use is determined by the configured ProxyServerInfos, which hold the information about known target hosts and the protocols they speak.
    Up to now we only have a ProxyEngine implementation for SPDY, but implementing other protocols like HTTP should be pretty straightforward and will follow. Contributions are, as always, highly welcome!
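    A toy sketch of that selection logic (the class shapes below are invented for illustration and borrow only the names from this post, not Jetty’s actual implementation):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch: engines keyed by protocol; server infos map hosts to protocols.
    public class ProxySketch {
        interface ProxyEngine {
            String proxy(String host, String request);
        }

        static class ProxyEngineSelector {
            private final Map<String, ProxyEngine> engines = new HashMap<>();
            private final Map<String, String> serverInfos = new HashMap<>(); // host -> protocol

            void putProxyEngine(String protocol, ProxyEngine engine) {
                engines.put(protocol, engine);
            }

            void putServerInfo(String host, String protocol) {
                serverInfos.put(host, protocol);
            }

            // Look up the target host's protocol, then hand off to its engine.
            String forward(String host, String request) {
                ProxyEngine engine = engines.get(serverInfos.get(host));
                if (engine == null)
                    throw new IllegalStateException("No engine for host " + host);
                return engine.proxy(host, request);
            }
        }

        static String demo() {
            ProxyEngineSelector selector = new ProxyEngineSelector();
            selector.putProxyEngine("spdy/3", (host, req) -> "SPDY->" + host + ": " + req);
            selector.putServerInfo("localhost", "spdy/3");
            return selector.forward("localhost", "GET /");
        }

        public static void main(String[] args) {
            System.out.println(demo()); // SPDY->localhost: GET /
        }
    }
    ```

    Adding a ProxyEngine for another protocol is then just another entry in the protocol map.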
    https://www.webtide.com is already served through a proxy connector forwarding to a plain SPDY connector on localhost.
    For more details and an example configuration, check out the SPDY proxy documentation.

  • SPDY – non representative benchmark for plain http vs. spdy+push on webtide.com

    I’ve done a quick run with the Page Benchmarker extension on Chromium to measure the difference between plain HTTP and SPDY + push. Enabling benchmarks restricts Chromium to SPDY draft 2, so we run without flow control.
    Note that the website is not the fastest (in fact it’s pretty slow). But if these results prove valid in real benchmarks, then a latency reduction of ~473 ms is pretty awesome.
    Here’s the promising result:

    I’ve done several iterations of this benchmark test with ten runs each. The advantage of SPDY was always between 350 and 550 ms.
    Disclaimer: This is in no way a representative benchmark. This has neither been run in an isolated test environment, nor is webtide.com the right website to do such benchmarks! This is just a promising result, nothing more. We’ll do proper benchmarking soon, I promise.

  • SPDY – we push!

    SPDY, Google’s web protocol, is gaining momentum. Intended to improve the user’s web experience, it aims at severely reducing page load times.
    We’ve already blogged about the protocol and Jetty’s straightforward SPDY support: Jetty-SPDY is joining the revolution! and SPDY support in Jetty.
    Now we’re taking this a step further: we push!
    SPDY push is one of the coolest features in the SPDY protocol portfolio.
    In the traditional HTTP approach, the browser has to request an HTML resource (the main resource) and make subsequent requests for each sub-resource. Every request/response roundtrip adds latency.
    E.g.:
    GET /index.html – wait for the response before the browser can request sub-resources
    GET /img.jpg
    GET /style.css – wait for the response before we can request sub-resources of the CSS
    GET /style_image.css (referenced in style.css)
    This means a single request/response roundtrip for each resource (main and sub-resources). Worse, some of them have to be done sequentially. For a page with lots of sub-resources, the number of connections to the server (traditionally browsers tend to open 6) also limits the number of sub-resources that can be fetched in parallel.
    Now SPDY reduces the need to open multiple connections by multiplexing requests over a single connection, and makes further improvements to reduce latency, as described in previous blog posts and the SPDY spec.
    SPDY push enables the server to push resources to the browser/client without a request for those resources. For example, if the server knows that index.html contains references to img.jpg and style.css, and that style.css contains a reference to style_image.css, the server can push those resources to the client.
    To take the previous example:
    GET /index.html
    PUSH /img.jpg
    PUSH /style.css
    PUSH /style_image.css
    That means only a single request/response roundtrip, for the main resource, and the server immediately sends out the responses for all sub-resources. This heavily reduces overall latency, especially for pages with high roundtrip delays (bad/busy network connections, etc.).
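    A back-of-the-envelope sketch of the saving, assuming a 100 ms roundtrip time and the four resources above (the numbers are illustrative, not measurements):

    ```java
    public class PushLatencySketch {
        static final int RTT_MS = 100; // assumed roundtrip time

        // Without push, three dependent roundtrips: index.html first, then
        // img.jpg and style.css in parallel, then style_image.css (only
        // discoverable after style.css has arrived).
        static int withoutPush() {
            return 3 * RTT_MS;
        }

        // With push: one roundtrip for index.html; all sub-resources are
        // pushed alongside it with no further requests.
        static int withPush() {
            return 1 * RTT_MS;
        }

        public static void main(String[] args) {
            System.out.println("plain HTTP (dependent fetches): " + withoutPush() + " ms");
            System.out.println("SPDY + push: " + withPush() + " ms");
        }
    }
    ```

    The dependent chain (HTML to CSS to CSS image) is what makes the saving grow with roundtrip delay: every level of nesting costs another full RTT without push.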
    We’ve written a unit test to benchmark the differences between plain HTTP, SPDY, and SPDY + push. Note that this is not a real benchmark and the roundtrip delay is emulated! Proper benchmarks are already in our task queue, so stay tuned. However, here are the results:
    HTTP: roundtrip delay 100 ms, average = 414
    SPDY(None): roundtrip delay 100 ms, average = 213
    SPDY(ReferrerPushStrategy): roundtrip delay 100 ms, average = 160
    Sounds cool? Yes, I guess that sounds cool! 🙂
    Even better, in Jetty this means only exchanging one connector for another and providing our implementation of the push strategy – done. Yes, that’s it. Just by changing some lines of Jetty config you get SPDY and SPDY + push without touching your application.
    Have a look at the Jetty docs to enable SPDY (they will be updated soon on how to add a push strategy to a SPDY connector).
    Here’s the only thing you need to configure in jetty to get your application served with SPDY + push transparently:
    <New id="pushStrategy">
      <Arg type="List">
        <Array type="String">
          <Item>.*.css</Item>
          <Item>.*.js</Item>
          <Item>.*.png</Item>
          <Item>.*.jpg</Item>
          <Item>.*.gif</Item>
        </Array>
      </Arg>
      <Set name="referrerPushPeriod">15000</Set>
    </New>
    <Call name="addConnector">
      <Arg>
        <New>
          <Arg>
            <Ref id="sslContextFactory" />
          </Arg>
          <Arg>
            <Ref id="pushStrategy" />
          </Arg>
          <Set name="Port">11081</Set>
          <Set name="maxIdleTime">30000</Set>
          <Set name="Acceptors">2</Set>
          <Set name="AcceptQueueSize">100</Set>
          <Set name="initialWindowSize">131072</Set>
        </New>
      </Arg>
    </Call>
    So how do we push?
    We’ve implemented a pluggable mechanism to add a push strategy to a SPDY connector. Our default strategy, called ReferrerPushStrategy, uses the Referer header to identify push resources the first time a page is requested.
    The browser requests the main resource, and shortly afterwards it usually requests all the sub-resources needed for that page. ReferrerPushStrategy uses the Referer header sent in those sub-requests to associate the sub-resources with the main resource named in the header. It remembers those sub-resources, and on the next request for the main resource it pushes all the sub-resources it knows about to the client.
    Now if the user clicks a link on the main resource, that request will also carry a Referer header for the main resource. However, linked resources should not be pushed to the client in advance! To avoid that, ReferrerPushStrategy has a configurable push period: it only remembers sub-resources that were requested within that period after the very first request for the main resource since application start.
    So this is a kind of best-effort strategy. It does not know which resources to push at startup, but it learns on a best-effort basis.
    What does best effort mean? It means that if the browser doesn’t request the sub-resources fast enough (within the push period) after the initial request for the main resource, the strategy will never learn those sub-resources. Or, if the user is fast enough clicking links, it might push resources which should not be pushed.
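    The learning behaviour described above can be sketched like this (a toy model with invented names, not Jetty’s actual ReferrerPushStrategy implementation):

    ```java
    import java.util.LinkedHashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;

    // Sketch: learn sub-resources from Referer headers, but only within the
    // push period after the first request for the main resource.
    public class ReferrerLearningSketch {
        private final long pushPeriodMs;
        private final Map<String, Long> firstSeen = new LinkedHashMap<>();       // main resource -> first request time
        private final Map<String, Set<String>> pushables = new LinkedHashMap<>(); // main resource -> learned sub-resources

        ReferrerLearningSketch(long pushPeriodMs) {
            this.pushPeriodMs = pushPeriodMs;
        }

        // A request for a main resource: return what would be pushed for it.
        Set<String> onMainRequest(String uri, long nowMs) {
            firstSeen.putIfAbsent(uri, nowMs);
            return pushables.getOrDefault(uri, new LinkedHashSet<>());
        }

        // A sub-request carrying a Referer header: learn it only if it arrives
        // within the push period of the main resource's first request.
        void onSubRequest(String uri, String referer, long nowMs) {
            Long start = firstSeen.get(referer);
            if (start != null && nowMs - start <= pushPeriodMs)
                pushables.computeIfAbsent(referer, k -> new LinkedHashSet<>()).add(uri);
        }
    }
    ```

    With a 15 s push period, a stylesheet requested 50 ms after /index.html gets learned and pushed on the next visit, while a resource requested 20 s later (say, after the user clicked a link) is ignored.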
    Now you might be wondering what happens if the browser already has the resources cached. Aren’t we sending data over the wire which the browser actually already has? Well, usually we don’t. First, we use the If-Modified-Since header to decide whether to push sub-resources at all; and second, the browser can refuse push streams. If the browser gets a SYN for a sub-resource it already has, it can simply reset the push stream. Then the only thing that has been sent is the SYN frame for the push stream. Not a big drawback, considering the advantages.
    There have to be more drawbacks?!
    Yes, there are. The SPDY implementation in Jetty is still experimental. The whole protocol is bleeding edge, and implementations in browsers as well as in the server still have some rough edges. There is already broad support among browsers for the SPDY protocol: stable releases of Firefox and Chromium/Chrome support SPDY draft 2 out of the box, and it already works really well. SPDY draft 3, however, is only supported in more recent builds of the current browsers, and SPDY push seems to work properly only with SPDY draft 3 and the latest Chrome/Chromium browsers. We’re all working hard on smoothing out the rough edges, and I presume SPDY draft 3 and push will be working in all stable browsers soon.
    We also had to disable push for draft 2, as it seemed to have negative effects on Chromium, up to and including regular browser crashes.
    Try it!
    As we keep eating our own dog food, https://www.webtide.com is already updated with the latest code and has push enabled. If you want to test the push functionality, get a Chrome Canary or a Chromium nightly build and access our company’s website.
    This is how it looks in the developer tools and on the chrome://net-internals page.
    developer tools (note that the request was done with an empty cache, and the pushed resources are marked as read from cache):

    net-internals (note the pushed and claimed resource count):

    Pretty exciting! We keep “pushing” for more and better SPDY support, improving our push strategy and helping make SPDY a better protocol. Stay tuned for more stuff to come.
    Note that the SPDY stuff is not in any official Jetty release yet, but it most probably will be in the next release. Documentation for Jetty will be updated soon as well.

  • JMiniX JMX console in Jetty

    Jetty has long had a rich set of JMX MBeans that give very detailed status, configuration and control over the server and applications, and these can now easily be accessed with the JMiniX web console:

    The usability of JMX has been somewhat let down by a lack of quality JMX management consoles.  JConsole and Java VisualVM do give good access to MBeans, but they rely on an RMI connection which can be tricky to set up to a remote machine.  JMiniX avoids the RMI by allowing access to the MBeans via a servlet you can add to your web application.

    The instructions were straightforward to follow, and the steps were simply:

    1. Add dependency to your pom
    2. Add a repository to your pom (bummer – it needs restlet.org, which is not in Maven Central; if it were, I’d consider adding JMiniX to our released test webapp)
    3. Define the servlet in your web.xml
    4. Build and run!

    You can see from the screenshot above that the console gives a nice rendering of the available MBeans from the JVM and Jetty (and CometD, if running). Attributes can be viewed and updated, and operations can be called – all the normal stuff.   It only gives direct MBean access and does not provide any higher-level management functions, but this is not a big problem if the MBeans are well designed and self-documented.

    Also, if you wanted to develop more advanced management functions, the RESTful nature of JMiniX should make this fairly straightforward.  For example, attributes can be retrieved with simple requests like:

    http://localhost:8080/jmx/servers/0/domains/org.eclipse.jetty.server/mbeans/type=server,id=0/attributes/startupTime/

    That returns JSON like:

    {"value":"1339059648877","label":"startupTime"}
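    As a sketch of how such a response might be consumed (simple regex extraction, assuming the flat value/label shape shown above; a real client would use a JSON library):

    ```java
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class JmxJsonSketch {
        // Extract a named string field from a flat JSON object like the
        // JMiniX attribute response shown above.
        static String field(String json, String name) {
            Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
            return m.find() ? m.group(1) : null;
        }

        public static void main(String[] args) {
            String json = "{\"value\":\"1339059648877\",\"label\":\"startupTime\"}";
            System.out.println(field(json, "label") + " = " + field(json, "value"));
            // startupTime = 1339059648877
        }
    }
    ```

    Aggregating such per-attribute requests is all a higher-level management tool would need to do on top of JMiniX.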

    JMiniX looks like a great tool to improve the management of your servers and applications and to leverage the value already built into the Jetty JMX mbeans.

    We had been working on a similar effort for RESTful access to JMX, but JMiniX is more advanced.  It does lack some of the features we had been working on, like aggregate access to repeated attributes, but considering the state of JMiniX, we may consider contributing those features to that project instead.

  • Truth in Benchmarking!

    One of my pet peeves is misleading benchmarks, as discussed in my Lies, Damned Lies and Benchmarks blog.  Recently there has been a bit of interest in Vert.x, some of it resulting from apparently good benchmark results against node.js. The author gave a disclaimer that the tests were non-rigorous and just for fun, but they have already led some people to ask if Jetty can scale like Vert.x.

    I know absolutely nothing about Vert.x, but I do know that their benchmark is next to useless to demonstrate any kind of scalability of a server.  So I’d like to analyse their benchmarks and compare them to how we benchmark jetty/cometd to try to give some understanding about how benchmarks should be designed and interpreted.

    The benchmark

    The Vert.x benchmark uses 6 clients, each with 10 connections, each with up to 2000 pipelined HTTP requests for a trivial 200 OK or a tiny static file. The tests were run for a minute and the average request rate was taken. So let’s break this down:

    6 Clients of 10 connections!

    However you look at this (6 users each with a browser with 10 connections, or 60 individual users), 6 or 60 users does not represent any significant scalability.  We benchmark jetty/cometd with 10,000 to 200,000 connections and have production sites that run with similar numbers.

    Testing 60 connections does not tell you anything about scalability. So why do so many benchmarks get performed with low numbers of connections?  Because it is really, really hard to generate realistic load for hundreds of thousands of connections.  To do so, we use the Jetty asynchronous HTTP client, which has been designed specifically for this purpose, and we still need to use multiple load-generating machines to achieve high numbers of connections.

    2000 pipelined requests!

    Really? HTTP pipelining is not turned on by default in most web browsers, and even if it were, I cannot think of any realistic application that would generate 2000 requests in a pipeline. Why is this important?  Because with pipelined requests, a server that does:

    byte[] buffer = new byte[8192];
    int read = socket.getInputStream().read(buffer); // one read may return many pipelined requests

    will read many requests into that buffer in a single read.  A trivial HTTP request is a few tens of bytes (and I’m guessing they didn’t send any of the verbose, complex headers that real browsers do), so the Vert.x benchmark would be reading 30 or more requests on each read.  Thus this benchmark is not really testing any IO performance, but simply how fast they can iterate over a buffer and parse simple requests. At best it is telling you about the latency of their parsing and request handling.
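    To make that arithmetic concrete (the request below is an illustrative minimal case, far smaller than real browser traffic):

    ```java
    public class PipelineSketch {
        public static void main(String[] args) {
            // A minimal pipelined HTTP request: 27 bytes.
            String request = "GET / HTTP/1.1\r\nHost: x\r\n\r\n";
            int bufferSize = 8192;
            int perRead = bufferSize / request.getBytes().length;
            System.out.println(request.getBytes().length + " bytes/request -> up to "
                    + perRead + " requests per 8 KiB read");
        }
    }
    ```

    At that size a single 8 KiB read can contain hundreds of complete requests, so the benchmark’s throughput is dominated by buffer iteration and parsing speed, not by any IO handling.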

    Handling reads is not the hard part of scaling IO.  It is handling the idle pauses between reads that is difficult.  It is these idle periods, which almost all real load profiles have, that require the server to carefully allocate resources, so that idle connections do not consume resources that could be better used by non-idle connections.    2000 connections each with 6 pipelined requests would be more realistic, or better yet, 20,000 connections with 6 requests sent with 10 ms delays between them.

    Trivial 200 OK or Tiny static resource

    Creating a scalable server for non-trivial applications is all about trying to ensure that maximal resources are applied to performing real business logic in preparing dynamic responses.   If all the responses are trivial or static, then the server is free to be more wasteful.  Worse still for realistic benchmarks, trivial response generation can probably be inlined by the HotSpot compiler in a way that no real application ever could be.

    Run for a minute

    A minute is insufficient time for a JVM to achieve a steady state.  For the first few minutes of a run, the HotSpot JIT compiler will be using CPU to analyse and compile code. A trivial application might be fully compiled within a minute, but any reasonably complex server/application is going to take much longer.  Try watching your application with jvisualvm and watch the perm generation continue to grow for many minutes while more and more classes are compiled. Only after the JVM has warmed up your application, and CPU is no longer being used to compile, can any meaningful results be obtained.

    The other big killer of performance is full garbage collections, which can stop the entire VM for many seconds.  Running fast for 60 seconds does not do you much good if a second later you pause for 10 s while collecting the garbage from those fast 60 seconds.
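    A quick illustration of how a GC pause just outside the measurement window distorts the reported rate (the request rate below is an assumed, illustrative figure):

    ```java
    public class GcAmortizationSketch {
        public static void main(String[] args) {
            // Illustrative numbers: 50,000 req/s sustained for 60 s, followed
            // by a 10 s full-GC pause that a 60 s benchmark run never sees.
            int ratePerSecond = 50_000;
            int fastSeconds = 60;
            int pauseSeconds = 10;

            long measured = (long) ratePerSecond * fastSeconds;           // what the 60 s run reports
            long effectiveRate = measured / (fastSeconds + pauseSeconds); // throughput including the pause

            System.out.println("reported rate:  " + ratePerSecond + " req/s");
            System.out.println("effective rate: " + effectiveRate + " req/s");
        }
    }
    ```

    The pause alone knocks roughly 14% off the real sustained throughput, which is why steady-state runs that include GC activity are the only honest way to report results.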

    Benchmark results need to be reported for a steady state over longer periods of time, and you need to consider GC performance.  The jetty/cometd benchmark tools specifically measure and report both JIT and GC activity during the benchmark runs, and we can perform many benchmark runs in the same JVM.  Below is example output showing that for a 30 s run some JIT was still being performed, so the VM was not fully warmed up yet:

    Statistics Started at Mon Jun 21 15:50:58 UTC 2010
    Operative System: Linux 2.6.32-305-ec2 amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server
    VM runtime 16.3-b01 1.6.0_20-b02
    Processors: 2
    System Memory: 93.82409% used of 7.5002174 GiB
    Used Heap Size: 2453.7236 MiB
    Max Heap Size: 5895.0 MiB
    Young Generation Heap Size: 2823.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Testing 2500 clients in 100 rooms
    Sending 3000 batches of 1x50B messages every 8000µs
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Mon Jun 21 15:51:29 UTC 2010
    Elapsed time: 30164 ms
            Time in JIT compilation: 12 ms
            Time in Young Generation GC: 0 ms (0 collections)
            Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 1848.7974 MiB
    Garbage Generated in Survivor Generation: 0.0 MiB
    Garbage Generated in Old Generation: 0.0 MiB
    Average CPU Load: 109.96191/200

    Conclusion

    I’m sure the Vert.x guys had every good intent when doing their micro-benchmark, and it may well be that Vert.x scales really well.  However, I wish that when developers consider benchmarking servers, instead of thinking “let’s send a lot of requests at it”, their first thought was “let’s open a lot of connections at it”.  Better yet, a benchmark (micro or otherwise) should be modelled on some real application and the load it might generate.

    The jetty/cometd benchmark is of a real chat application that really works and has real features like member lists, private messages, etc.  Thus the results that we achieve in benchmarks can be reproduced by real applications in production.
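    To make the “open a lot of connections at it” point concrete, here is a toy sketch (my own illustration, with a loopback ServerSocket standing in for the server under test) that holds many mostly idle connections open at once.  That shape of load is much closer to comet/chat style traffic than hammering one hot connection with requests.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ManyConnections {
    public static void main(String[] args) throws IOException {
        int clients = 100; // a real benchmark would use thousands
        List<Socket> open = new ArrayList<Socket>();
        // A loopback server stands in for the server under test.
        ServerSocket server = new ServerSocket(0, clients, InetAddress.getByName("127.0.0.1"));
        try {
            for (int i = 0; i < clients; i++) {
                // Connect, and accept immediately so the backlog never fills.
                Socket client = new Socket("127.0.0.1", server.getLocalPort());
                open.add(client);
                open.add(server.accept());
            }
            System.out.println("idle connections held open: " + open.size() / 2);
        } finally {
            for (Socket s : open)
                s.close();
            server.close();
        }
    }
}
```

    A server that looks fast when 10 connections send 100,000 requests each can behave very differently when 100,000 connections each send 10 requests, which is why connection count belongs in the benchmark design.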


  • Jetty-SPDY blogged

    Jos Dirksen has written a nice blog about Jetty-SPDY, thanks Jos!
    In the upcoming Jetty 7.6.3 and 8.1.3 (due in the next days), the Jetty-SPDY module has been enhanced with support for prioritized streams and for SPDY push (although the latter is only available via the pure SPDY API), and we have fixed a few bugs that we spotted ourselves or that were reported by early adopters.
    Also, we are working on making it really easy for Jetty users to enable SPDY, so that the configuration changes needed to enable SPDY in Jetty will be minimal.
    After these releases we will be working on full support for SPDY/3 (currently Jetty-SPDY supports SPDY/2, with some features of SPDY/3).
    Browsers such as Chromium and Firefox are already updating their implementations to also support SPDY/3, so we will soon have support for the new version of the SPDY protocol in the browsers as well.
    Stay tuned!

  • Jetty-SPDY is joining the revolution!

    There is a revolution quietly happening on the web and if you blink you might miss it. The revolution is in the speed and latency with which some browsers can load some web pages: what used to take hundreds of milliseconds is now often reduced to tens.  The revolution is Google’s SPDY protocol, which I predict will soon replace HTTP as the primary protocol of the web, and Jetty-SPDY is joining this revolution.

    SPDY is a fundamental rethink of how HTTP is transported over the internet, based on careful analysis of the interaction between TCP/IP, browsers and web page design.  It does not entirely replace HTTP (it still uses HTTP GETs and POSTs), but makes HTTP semantics available over a much more efficient wire protocol. It also opens up the possibility of new semantics that can be used on the web (e.g. server push/hint).  Improved latency, throughput and efficiency will improve user experience and facilitate better and cheaper services in environments like the mobile web.

    When is the revolution?

    So when is SPDY going to be available?  It already is!!! The SPDY protocol is deployed in current Chrome browsers and on the Amazon Kindle, and it is optionally supported by Firefox 11.  Thus it is already on 25% of clients and will soon be on over 50%. On the server side, Google supports SPDY on all their primary services, and Twitter switched on SPDY support this month.  With the web’s most popular browsers and servers talking SPDY, this is a significant shift in the way data is moved on the web.   Since Jetty 7.6.2/8.1.2, SPDY is supported in Jetty and you can start using it without any changes to your web application!

    Is it a revolution or a coup?

    By deploying SPDY on its popular browser and web services, Google has used its market share to make a fundamental shift in the web (but not as we know it)!  There are some rumblings that this may be an abuse of Google’s market power.  I’ve not been shy in the past about pointing out Google’s failings to engage with the community in good faith, but in this case I think they have done an excellent job.  The SPDY protocol has been an open project for over two years and they have published specs and actively solicited feedback and participation.  Moreover, they are intending to take the protocol to the IETF for standardisation and have already submitted a draft to the httpbis working group.   Openly developing the protocol to the point of wide deployment is a good fit with the IETF’s approach of “rough consensus and working code”.

    Note also that Google are not tying any functionality to SPDY, so it is not as if they are saying that we must use their new protocol or else we can’t access their services.  We are free to disable or block SPDY on our own networks and the browsers will happily fall back to normal HTTP.  Currently SPDY is a totally transparent upgrade to the user.

    Is there a problem?

    So why would anybody be upset about Google making the web run faster?  One of the most significant changes in the SPDY protocol is that all traffic is encrypted with TLS. For most users this can be considered a significant security enhancement, as they will no longer need to consider whether a page/form is secure enough for the transaction they are conducting.

    However, if you are the administrator of a firewall that is enforcing some kind of content filtering policy, then having all traffic be opaque to your filters will make it impossible to check content (which may be great if you are a dissident in a rogue state, but not so great if you are responsible for a primary school network).  Similarly, caching proxies will no longer be able to cache shareable content as it will also be opaque to them, which may reduce some of the latency/throughput benefits of SPDY.

    Mike Belshe, who has led the development of SPDY, points out that SPDY does not prevent proxies, it just prevents implicit (aka transparent) proxies.  Since SPDY traffic is encrypted, the browser and any intermediaries must negotiate a session to pass TLS traffic, so the browser will need to give its consent before a proxy can see or modify any content.  This is probably workable for the primary school use-case, but not so much for the rogue state.

    Policy or Necessity?

    There is nothing intrinsic about the SPDY protocol that requires TLS, and there are versions of it that operate in the clear.  I believe it was a policy rather than a technical decision to require TLS only. There is some technical justification in the argument that it reduces the round trips needed to negotiate a SPDY and/or HTTP connection, but I don’t see that encryption is the only answer to those problems.  Thus I suspect that there is also a little bit of an agenda in the decision, and it will probably be the most contentious aspect of SPDY going forward.  It will be interesting to see if the TLS-only policy survives the IETF process, but then it might be hard to argue for a policy change that benefits rogue states and reduces personal privacy.

    Other than rogue states, another victim of the TLS-only policy is ease of debugging, as highlighted by Mike’s blog, where he is having trouble working out how the Kindle uses SPDY because all the traffic is encrypted.  As a developer/debugger of an HTTP server, I cannot overstress how important it is to be able to see a TCP dump of a problematic session.  This argument is one of the reasons why the IETF has historically favoured clear text protocols.  It remains to be seen if this argument will continue to prevail, or if we will have to rely on better tools and browsers/servers coughing up TLS session keys in order to debug.

    In Summary

    Google and the other contributors to the SPDY project have done great work to develop a protocol that promises to take the web a significant step forward and to open up the prospects for many new semantics and developments.  While they have done this somewhat unilaterally, it has been done openly and without any evidence of any intent other than to improve user experience/privacy and to reduce server costs.

    SPDY is a great development for the web and the Jetty team is pleased to be a part of it.

  • SPDY support in Jetty

    SPDY is Google’s protocol that is intended to improve user experience on the web by reducing the latency of web pages, sometimes by up to a factor of 3. Yes, three times faster.
    How does SPDY accomplish that?
    SPDY reduces roundtrips to the server, reduces HTTP verbosity by compressing HTTP headers, improves the utilization of the TCP connection, multiplexes requests onto a single TCP connection (instead of using a limited number of connections, each serving only one request at a time), and allows the server to push secondary resources (like CSS, images, scripts, etc.) associated with a primary resource (typically a web page) without incurring additional round-trips.
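    The header-compression gain is easy to see with plain zlib.  (SPDY/2 actually uses zlib with a shared compression context and a predefined dictionary, which compresses even better across requests; the snippet below is just an illustration with a made-up but typical header block.)

```java
import java.util.zip.Deflater;

public class HeaderCompression {
    public static void main(String[] args) {
        // A made-up but typical, highly repetitive HTTP request header block.
        String headers = "GET /index.html HTTP/1.1\r\n"
                + "Host: www.example.com\r\n"
                + "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19 Chrome/18.0.1025.162\r\n"
                + "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
                + "Accept-Encoding: gzip,deflate,sdch\r\n"
                + "Accept-Language: en-US,en;q=0.8\r\n"
                + "Cookie: session=0123456789abcdef; theme=dark\r\n\r\n";
        byte[] plain = headers.getBytes();

        // Compress the header block with zlib's deflate.
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(plain);
        deflater.finish();
        byte[] out = new byte[plain.length * 2];
        int compressed = deflater.deflate(out);
        deflater.end();

        System.out.println(plain.length + " bytes plain, " + compressed + " bytes compressed");
    }
}
```

    Since the browser resends nearly identical headers with every request, shaving a large fraction off each one adds up quickly on chatty pages.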
    Now, the really cool thing is that Jetty has an implementation of SPDY (see the documentation) in the new 7.6.2 and 8.1.2 releases.
    Your web applications can immediately and transparently benefit from many of the SPDY improvements without changes, because Jetty does the heavy lifting for you under the covers.
    With Chromium/Chrome already supporting SPDY, and Firefox 11 also supporting it (although it needs to be enabled, see how here), more than 50% of web browsers will soon support it, so servers need to catch up, and that is where Jetty shines.
    The Jetty project continues to foster innovation by supporting emerging web protocols: first WebSocket and now SPDY.
    A corollary project that came out from the SPDY implementation is a pure Java implementation of the Next Protocol Negotiation (NPN) TLS Extension, also available in Jetty 7.6.2 and 8.1.2.
    To prove that this is no fluke, we have updated Webtide’s website with Jetty’s SPDY implementation, and now the website can be served via SPDY, if the browser supports it.
    We encourage early adopters to try out Jetty’s SPDY and give us feedback on jetty-dev@eclipse.org.
    Enjoy!