Category: Webtide

  • Jetty @ JavaOne 2014

    I’ll be attending JavaOne Sept 29 to Oct 1 and will be presenting several talks on Jetty:

    • CON2236 Servlet Async IO: I’ll be looking at the Servlet 3.1 asynchronous I/O API and how to use it for scale and low latency. I’ll also cover a little about how we are using it with HTTP/2. There is an introduction video, but the talk will be a lot more detailed and hopefully interesting.
    • BOF2237 Jetty Features: This will be a free-form, two-way discussion about new features in Jetty and its future direction: HTTP/2, modules, admin consoles, Docker, etc. are all good topics for discussion.
    • CON5100 Java in the Cloud: This is primarily a Google session, but I’ve been invited to present the work we have done improving the integration of Jetty into their cloud offerings.

    I’ll be in the Bay Area from the 23rd, and I’d be really pleased to meet up with Jetty users in the area before or during the conference for anything from an informal chat/drink/coffee up to the full sales pitch of Intalio|Webtide services (or even both!) – <gregw@intalio.com>

  • HTTP/2 draft 14 is live!

    Greg Wilkins (@gregwilkins) and I (@simonebordet) have been working on implementing HTTP/2 draft 14 (h2-14), which is the draft that will probably undergo the “last call” at the IETF.

    We will blog very soon with our opinions about HTTP/2 (stay tuned, it’ll be interesting!), but for the time being Jetty proves once again to be a trailblazer when it comes to new web technologies and protocols.

    Jetty started to innovate with Jetty Continuations, which were standardized (with improvements) into Servlet 3.0.

    Jetty was one of the first Java servers to offer support for asynchronous I/O, back in 2006 with Jetty 6.

    In 2012 we were the first Java server to implement SPDY, and we have written libraries that provide support for NPN in Java (now used by many other Java servers that offer SPDY support). We were also the first to implement a completely automatic way of leveraging SPDY Push, which can boost your web site’s performance.

    Today, to my knowledge, we are again the first Java server to expose an implementation of the HTTP/2 protocol, draft 14, live on our own website.

    Along with HTTP/2 support, which will be coming in Jetty 9.3, we have also implemented a library that provides support for ALPN (the successor of NPN) in Java, allowing any Java application (client or server) to implement HTTP/2 over SSL. This library is already available in the Jetty 9.2.x series. We want other implementers (client and server) to test our HTTP/2 implementation in order to generate feedback about HTTP/2 that can be reported to the IETF.

    As of today, both Mozilla Firefox and Google Chrome only support HTTP/2 draft 13 (h2-13). They are keeping pace with new drafts, so expect both browsers to offer draft 14 support in a matter of days (in their nightly/unstable versions). When that happens, you will be able to use those browsers to connect to our HTTP/2-enabled website.

    The Jetty project offers not only a server, but an HTTP/2 client as well. You can take a look at how it’s used to connect to an HTTP/2 server here.

    Where is it? https://webtide.com.

    Lastly, contact us for any news or information about what Jetty can do for you in the realms of async I/O, PubSub over the web (via CometD), SPDY and HTTP/2.

  • Jetty 9 – Features

    Jetty 9 milestone 0 has landed! We are very excited about getting this release of jetty out and into the hands of everyone. A lot of work has gone into reworking fundamentals, and this is going to be the best version of jetty yet!

    Anyway, as promised a few weeks back, here is a list of some of the big features in jetty-9. By no means an authoritative list of things that have changed, these are many of the high points we think are worthy of a bit of initial focus in jetty-9. One of the features (pluggable modules) will land in a subsequent milestone release, as it is still being refined somewhat, but the rest are largely in place and working in our initial testing.
    We’ll blog in depth on some of these features over the course of the next couple of months. We are targeting a November official release of Jetty 9.0.0 so keep an eye out. The improved documentation is coming along well and we’ll introduce that shortly. In the meantime, give the initial milestones a whirl and give us feedback on the mailing lists, on twitter (#jettyserver hashtag pls) or directly at some of the conferences we’ll be attending over the next couple of months.
    Next Generation Protocols – SPDY, WebSockets, MUX and HTTP/2.0 are actively replacing the venerable HTTP/1.1 protocol. Jetty directly supports these protocols as equals and first class siblings to HTTP/1.1. This means a lighter faster container that is simpler and more flexible to deal with the rapidly changing mix of protocols currently being experienced as HTTP/1.1 is replaced.
    Content Push – SPDY v3 support, including content push within both the client and server. This is a potentially huge optimization for websites that know what a browser will need in terms of JavaScript files or images, instead of waiting for the browser to ask first.
    Improved WebSocket Server and Client

    • Fast websocket implementation
    • Supporting classic Listener approach and @WebSocket annotations
    • Fully compliant to RFC6455 spec (validated via autobahn test suite http://autobahn.ws/testsuite)
    • Support for latest versions of Draft WebSocket extensions (permessage-compression, and fragment)

    Java 7 – We have removed some areas of abstraction within jetty in order to take advantage of improved APIs in the JVM regarding concurrency and NIO, leading to a leaner implementation and improved performance.
    Servlet 3.1 ready – We actively track this developing spec and will be ready with support when it is finalized; in fact, much of the support is already in place.
    Asynchronous HTTP client – refactored to simplify the API while retaining the ability to run many thousands of simultaneous requests; used as the basis for much of our own testing and http client needs.
    Pluggable Modules – one distribution with integration with libraries, third party technologies, and web applications available for download through a simple command line interface
    Improved SSL Support – the proliferation of mobile devices that use SSL has manifested in many atypical client implementations; support for these edge cases in SSL has been thoroughly refactored so that it is now understandable and maintainable by humans.
    Lightweight – Jetty continues its history of having a very small memory footprint while still being able to scale to many tens of thousands of connections on commodity hardware.
    Eminently Embeddable – Years of embedding support pays off in your own application, webapp, or testing. Use embedded jetty to unit test your web projects. Add a web server to your existing application. Bundle your web app as a standalone application.

  • Jetty 9 – it's coming!

    Development on Jetty-9 has been chugging along for quite some time now, and it looks like we’ll start releasing milestones around the end of September.  This is exciting because we have a lot of cool improvements and features coming, which I’ll leave to others to blog about in detail over the next couple of months as things come closer to release.
    What I want to highlight in this blog post are our plans moving forward, version by version, with a bit of context where appropriate.

    • Jetty-9 will require java 1.7

    While Oracle has relented a couple of times now on when the EOL of Java 1.6 will be, it looks like it will happen within the next few months. Since native support for SPDY (more below) is one of the really big deals about jetty-9, and SPDY requires Java 1.7, that is going to be the requirement.

    • Jetty-9 will be servlet-api 3.0

    We had planned on jetty-9 being servlet-api 3.1, but since that API release doesn’t appear to be coming anytime soon, the current plan is to make jetty-9 support Servlet 3.0 and, once servlet-api 3.1 is released, make a minor release update of jetty-9 to support it.  Most of the work for supporting servlet-api 3.1 already exists in the current versions of jetty anyway, so it shouldn’t be a huge deal.

    • Jetty-7 and Jetty-8 will still be supported as ‘mature’ production releases

    Jetty-9 has some extremely important changes in the IO layers that make supporting it moving forward far easier than jetty 7 and 8.  For much of the life of Java 1.6 and Java 1.7 there have been annoying ‘issues’ in the JVM NIO implementation onto which we (well, Greg, to be honest) have piled workaround after workaround, until some of the workarounds would start to act up once the underlying JVM issues were resolved.  Most of this has been addressed in the jetty-7.6.x and jetty-8.1.x releases, assuming the latest JVMs are being used (basically, make sure you avoid anything in the 1.6u20-29 range).  Anyway, jetty-9 contains a heavily refactored IO layer which should make it easier to respond to these situations in the future, should they arise, in a more…well…deterministic fashion. 🙂

    • Jetty-9 IO is a major overhaul

    This deserves its own blog entry, which it will get eventually, I am sure; however, it can’t be overstated how much the inner workings of jetty have evolved with jetty-9. Since its inception jetty has always been a very modular, component-oriented http server. The key word being ‘http’ server, and with Jetty-9 that is changing. Jetty-9 has been rearchitected from the IO layer up to directly support the separation of wire protocol from semantics, so it is now possible to support HTTP, HTTP over SPDY, WebSocket over SPDY, multiplexing, etc., with all protocols being first class citizens and no need to mock out inappropriate interfaces. While these are mostly internal changes, they ripple out to give many benefits to users in the form of better performance, smaller software, and simpler, more appropriate configuration. For example, instead of having multiple different connector types, each with unique SSL and/or SPDY variants, there is now a single connector into which various connection factories are configured to support SSL, HTTP, SPDY, WebSocket, etc. This means that moving forward jetty will be able to adapt easily and quickly to new protocols as they come onto the scene.
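    The single-connector design described above can be sketched with a toy model. To be clear, this is illustrative only, not Jetty’s actual API: the class and method names below are made up for the example; the idea is simply that one connector owns a map of protocol name to factory and picks the right factory once the wire protocol is known (e.g. via NPN negotiation).

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy model of "one connector, many connection factories".
    // NOT Jetty's actual API -- an illustration of the pattern only.
    public class ConnectorSketch {

        // Stands in for creating a protocol-specific connection handler.
        interface ConnectionFactory {
            String newConnection();
        }

        static class Connector {
            private final Map<String, ConnectionFactory> factories = new LinkedHashMap<>();

            void addConnectionFactory(String protocol, ConnectionFactory factory) {
                factories.put(protocol, factory);
            }

            // Once the protocol is negotiated, the matching factory is selected.
            String accept(String negotiatedProtocol) {
                ConnectionFactory f = factories.get(negotiatedProtocol);
                if (f == null)
                    throw new IllegalArgumentException("No factory for " + negotiatedProtocol);
                return f.newConnection();
            }
        }

        public static void main(String[] args) {
            Connector connector = new Connector();
            connector.addConnectionFactory("http/1.1", () -> "HTTP/1.1 connection");
            connector.addConnectionFactory("spdy/3", () -> "SPDY v3 connection");
            connector.addConnectionFactory("websocket", () -> "WebSocket connection");

            // One connector, one port -- the negotiated protocol selects the factory.
            System.out.println(connector.accept("spdy/3"));
            System.out.println(connector.accept("http/1.1"));
        }
    }
    ```

    The point of the pattern is that supporting a new protocol becomes a matter of registering another factory; the connector and its port configuration don’t change.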

    • Jetty-6…for the love of god, please update

    Jetty-5 used to hold the title of ‘venerable’, but that title is really shifting to jetty-6 at this point.  I am constantly amazed by folks on places like Stack Overflow starting a project using jetty-6.  The Linux distributions really need to update, so if you work on those and need help, please ping us.  Many other projects that embed jetty really need to update as well – looking at you, Google App Engine and GWT!  If you are a company and would like help updating your jetty version, or are interested in taking advantage of the newer protocols, feel free to contact Webtide and we can help you make it easier.  If you’re an open source project, reach out to us on the mailing lists and we can assist there as much as time allows.  But please…add migrating to 7, 8 or 9 to your TODO list!

    • No more split production versions

    One of our more confusing situations has been releasing both jetty 7 and jetty 8 as stable production versions.  The reasons for doing this were many and varied, but with Servlet 3.0 having been ‘live’ for a while now, we are going to shift back to a single supported production version moving forward.  The Servlet API is backwards compatible anyway, so we’ll hopefully be reducing some of the confusion over which version of jetty to use.

    • Documentation

    Finally, our goal starting with jetty-9 is to release versioned documentation (generated with DocBook) to a common URL under the eclipse.org domain, as well as bundling the HTML and PDF to fit in the new plugin architecture we are working with.  So the days of floundering around for documentation on jetty should be coming to an end soon.
    Lots of exciting things coming in Jetty-9 that you’ll hear about in the coming weeks! Feel free to follow @jmcconnell on twitter for release updates!

  • SPDY – non-representative benchmark for plain HTTP vs. SPDY + push on webtide.com

    I’ve done a quick run with the Page Benchmarker extension on Chromium to measure the difference between HTTP and SPDY + push. Enabling benchmarks restricts Chromium to SPDY draft 2, so we’ll run without flow control.
    Note that the website is not the fastest (in fact it’s pretty slow). But if these results prove valid in real benchmarks, then a reduced latency of ~473 ms is pretty awesome.
    Here’s the promising result:

    I’ve done several iterations of this benchmark test with ten runs each. The advantage of spdy was always between 350-550ms.
    Disclaimer: This is in no way a representative benchmark. This has neither been run in an isolated test environment, nor is webtide.com the right website to do such benchmarks! This is just a promising result, nothing more. We’ll do proper benchmarking soon, I promise.

  • SPDY – we push!

    SPDY, Google’s web protocol, is gaining momentum. It intends to improve the user’s web experience by severely reducing page load times.
    We’ve already blogged about the protocol and jetty’s straightforward SPDY support: Jetty-SPDY is joining the revolution! and SPDY support in Jetty.
    Now we’re taking this a step further: we push!
    SPDY push is one of the coolest features in the SPDY protocol portfolio.
    In the traditional http approach, the browser has to request an html resource (the main resource) and make subsequent requests for each sub resource. Every request/response roundtrip adds latency.
    E.g.:
    GET /index.html – wait for the response before the browser can request sub resources
    GET /img.jpg
    GET /style.css – wait for the response before we can request sub resources of the CSS
    GET /style_image.css (referenced in style.css)
    This means a request/response roundtrip for each resource (main and sub resources). Worse, some of them have to be done sequentially. For a page with lots of sub resources, the number of connections to the server (traditionally browsers tend to open 6) also limits how many sub resources can be fetched in parallel.
    Now, SPDY reduces the need to open multiple connections by multiplexing requests over a single connection, and it makes further improvements to reduce latency, as described in previous blog posts and the SPDY spec.
    SPDY push enables the server to push resources to the browser/client without having received a request for them. For example, if the server knows that index.html contains references to img.jpg and style.css, and that style.css contains a reference to style_image.css, the server can push those resources to the client.
    To take the previous example:
    GET /index.html
    PUSH /img.jpg
    PUSH /style.css
    PUSH /style_image.css
    That means only a single request/response roundtrip for the main resource. And the server immediately sends out the responses for all sub resources. This heavily reduces overall latency, especially for pages with high roundtrip delays (bad/busy network connections, etc.).
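    As a rough back-of-the-envelope model of the example above (assumptions: one round trip per request, unlimited parallelism within one dependency level, and the resource names from the text), the latency difference comes down to the depth of the reference chain:

    ```java
    import java.util.List;
    import java.util.Map;

    // With plain HTTP the browser can only request a sub resource after the
    // resource that references it has arrived, so each level of the reference
    // chain costs at least one round trip. With SPDY push, the server sends
    // every known sub resource after the single request for the main resource.
    public class RoundTripModel {

        // resource -> sub resources it references (from the example in the text)
        static final Map<String, List<String>> REFS = Map.of(
            "/index.html", List.of("/img.jpg", "/style.css"),
            "/style.css", List.of("/style_image.css"));

        // Depth of the reference chain = minimum round trips for plain HTTP.
        static int httpRoundTrips(String resource) {
            int deepest = 0;
            for (String sub : REFS.getOrDefault(resource, List.of()))
                deepest = Math.max(deepest, httpRoundTrips(sub));
            return 1 + deepest;
        }

        // With push, only the main resource is ever requested.
        static int pushRoundTrips(String resource) {
            return 1;
        }

        public static void main(String[] args) {
            int rttMs = 100; // an assumed emulated round-trip delay
            System.out.println("HTTP minimum:        " + httpRoundTrips("/index.html") * rttMs + " ms");
            System.out.println("SPDY + push minimum: " + pushRoundTrips("/index.html") * rttMs + " ms");
        }
    }
    ```

    For this page the chain is index.html → style.css → style_image.css, so plain HTTP needs at least three round trips where push needs one, which is why deep reference chains and high round-trip delays benefit the most.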
    We’ve written a unit test to benchmark the differences between plain http, SPDY and SPDY + push. Note that this is not a real benchmark and the roundtrip delay is emulated! Proper benchmarks are already in our task queue, so stay tuned. However, here are the results:
    HTTP: roundtrip delay 100 ms, average = 414
    SPDY(None): roundtrip delay 100 ms, average = 213
    SPDY(ReferrerPushStrategy): roundtrip delay 100 ms, average = 160
    Sounds cool? Yes, I guess that sounds cool! 🙂
    Even better, in jetty this means only exchanging one Connector for another and providing our implementation of the push strategy – done. Yes, that’s it. Just by changing some lines of jetty config you’ll get SPDY and SPDY + push without touching your application.
    Have a look at the Jetty Docs on enabling SPDY. (They will be updated soon on how to add a push strategy to a SPDY connector.)
    Here’s the only thing you need to configure in jetty to get your application served with SPDY + push transparently:
    <New id="pushStrategy">
      <Arg type="List">
        <Array type="String">
          <Item>.*.css</Item>
          <Item>.*.js</Item>
          <Item>.*.png</Item>
          <Item>.*.jpg</Item>
          <Item>.*.gif</Item>
        </Array>
      </Arg>
      <Set name="referrerPushPeriod">15000</Set>
    </New>
    <Call name="addConnector">
      <Arg>
        <New>
          <Arg>
            <Ref id="sslContextFactory" />
          </Arg>
          <Arg>
            <Ref id="pushStrategy" />
          </Arg>
          <Set name="Port">11081</Set>
          <Set name="maxIdleTime">30000</Set>
          <Set name="Acceptors">2</Set>
          <Set name="AcceptQueueSize">100</Set>
          <Set name="initialWindowSize">131072</Set>
        </New>
      </Arg>
    </Call>
    So how do we push?
    We’ve implemented a pluggable mechanism to add a push strategy to a SPDY connector. Our default strategy, called ReferrerPushStrategy, uses the “Referer” header to identify push resources the first time a page is requested.
    The browser requests the main resource and usually, shortly afterwards, requests all the sub resources needed for that page. ReferrerPushStrategy uses the Referer header of those sub requests to associate the sub resources with the main resource named in the header. It remembers those sub resources and, on the next request for the main resource, pushes all the sub resources it knows about to the client.
    Now, if the user clicks a link on the main resource, that request will also carry a Referer header pointing at the main resource. However, linked pages should not be pushed to the client in advance! To avoid that, ReferrerPushStrategy has a configurable push period: it only remembers sub resources that have been requested within that period after the very first request for the main resource since application start.
    So this is some kind of best effort strategy. It does not know which resources to push at startup, but it’ll learn on a best effort basis.
    What does best effort mean? It means that if the browser doesn’t request the sub resources fast enough (within the push period timeframe) after the initial request for the main resource, the strategy will never learn those sub resources. Or, if the user is fast enough clicking links, it might push resources which should not be pushed.
    Now you might be wondering what happens if the browser has the resources cached already. Aren’t we sending data over the wire which the browser already has? Well, usually we don’t. First, we use the if-modified-since header to decide whether to push sub resources or not; second, the browser can refuse push streams. If the browser gets a SYN for a sub resource it already has, it can simply reset the push stream. Then the only thing that has been sent is the SYN frame for the push stream – not a big drawback considering the advantages.
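    The learning behavior described above can be sketched roughly as follows. This is a simplified model, not Jetty’s actual ReferrerPushStrategy implementation; the class and method names are made up for illustration:

    ```java
    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;

    // Simplified model of referrer-based push learning (illustrative only).
    // A sub request whose Referer names a main resource is remembered, but
    // only if it arrives within the push period after the first request for
    // that main resource; later requests of the main resource get the
    // remembered sub resources pushed.
    public class ReferrerPushSketch {

        private final long pushPeriodMs;
        private final Map<String, Long> firstSeen = new HashMap<>();
        private final Map<String, Set<String>> pushMap = new HashMap<>();

        public ReferrerPushSketch(long pushPeriodMs) {
            this.pushPeriodMs = pushPeriodMs;
        }

        /** Handle a request; returns the resources to push (empty for sub requests). */
        public Set<String> onRequest(String uri, String referer, long nowMs) {
            if (referer != null) {
                // Sub request: learn it only within the push period of the main resource.
                Long start = firstSeen.get(referer);
                if (start != null && nowMs - start <= pushPeriodMs)
                    pushMap.computeIfAbsent(referer, k -> new LinkedHashSet<>()).add(uri);
                return Set.of();
            }
            // Main request: remember when it was first seen, push what was learned.
            firstSeen.putIfAbsent(uri, nowMs);
            return pushMap.getOrDefault(uri, Set.of());
        }
    }
    ```

    On the first visit the strategy only learns; from the second request of the main resource onward it returns the learned sub resources to push, and anything requested outside the push period (e.g. a link the user clicked later) is never learned.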
    There have to be more drawbacks?!
    Yes, there are. The SPDY implementation in jetty is still experimental. The whole protocol is bleeding edge, and implementations in browsers as well as servers still have some rough edges. There is already broad support among browsers for the SPDY protocol: stable releases of Firefox and Chromium/Chrome support SPDY draft 2 out of the box, and it already works really well. SPDY draft 3, however, is only supported in more recent builds of the current browsers, and SPDY push seems to work properly only with SPDY draft 3 and the latest Chrome/Chromium browsers. We’re all working hard on smoothing out the rough edges, and I presume SPDY draft 3 and push will be working in all stable browsers soon.
    We also had to disable push for draft 2, as it seemed to have negative effects on Chromium, up to and including regular browser crashes.
    Try it!
    As we keep eating our own dog food, https://www.webtide.com has already been updated with the latest code and has push enabled. If you want to test the push functionality, get a Chrome Canary or a Chromium nightly build and access our company’s website.
    This is how it’ll look in the developer tools and on chrome://net-internals page.
    developer-tools (note that the request has been done with an empty cache and the pushed resources are being marked as read from cache):

    net-internals (note the pushed and claimed resource count):

    Pretty exciting! We keep “pushing” for more and better SPDY support: improving our push strategy and helping make SPDY a better protocol. Stay tuned for more stuff to come.
    Note that the SPDY work is not in any official jetty release yet, but it most probably will be in the next release. The jetty documentation will be updated soon as well.