Category: Jetty

  • Jetty 9 – Updated WebSocket API

    Creating WebSockets in Jetty is even easier with Jetty 9!
    While the networking gurus in Jetty have been working on the awesome improvements to the I/O layers in core Jetty 9, the WebSocket fanatics in the community have been working on making writing WebSockets even easier.
    The initial WebSocket implementation in Jetty was started back in November of 2009, well before the WebSocket protocol was finalized.
    It has grown through Jetty’s involvement in the WebSocket draft discussions, through the finalization of RFC 6455, and onwards into design changes driven by WebSocket extension drafts such as x-webkit-perframe-deflate, permessage-deflate, fragment, and the ongoing mux discussions.
    The WebSockets provided by the Jetty 7.x and Jetty 8.x codebases required developers to have a complex set of knowledge about how WebSockets work and how Jetty implemented them.  This complexity was a result of the rather organic growth of WebSocket knowledge around intermediaries and WebSocket Extensions impacting the original design.
    With Jetty 9.x we were given an opportunity to correct our mistakes.

    The new WebSockets API in Jetty 9.x

    Note: this information represents what is in the jetty-9 branch on git, which has changed in small but important ways since 9.0.0.M0 was released.

    With the growing interest in next generation protocols like SPDY and HTTP/2.0, along with evolving standards being tracked for Servlet API 3.1 and the Java API for WebSockets (JSR-356), the time for Jetty 9.x was at hand.  We dove head first into cleaning up the codebase, performing some needed refactoring, and upgrading the codebase to Java 7.
    Along the way, Jetty 9.x started to shed the old blocking I/O layers, and all of the nasty logic surrounding them, resulting in an Async I/O focused Jetty core.  We love this new layer, and we expect you will too, even if you don’t see it directly.  This change benefits Jetty with a smaller / cleaner / easier to maintain and test codebase, along with various performance improvements in speed, CPU use, and even memory use.
    In parallel, the Jetty WebSocket codebase changed to soak up the knowledge gained in our early adoption of WebSockets and also to utilize the benefits of the new Jetty Async I/O layers better.   It is important to note that Jetty 9.x WebSockets is NOT backward compatible with prior Jetty versions.
    The most significant changes:

    • Requires Java 7
    • Only supporting WebSocket version 13 (RFC-6455)
    • Artifact Split

    The monolithic jetty-websocket artifact has been split up into various websocket artifacts so that developers can pick and choose what’s important to them.

    The new artifacts are all under the org.eclipse.jetty.websocket groupId on maven central.

    • websocket-core.jar – where the basic API classes reside, plus internal implementation details that are common between server & client.
    • websocket-server.jar – the server specific classes
    • websocket-client.jar – the client specific classes
    • Only 1 Listener now (WebSocketListener)
    • Now Supports Annotated WebSocket classes
    • Focus is on Messages not Frames

    In our prior WebSocket API we assumed, incorrectly, that developers would want to work with the raw WebSocket framing.   This change brings us in line with how every other WebSocket API behaves, working with messages, not frames.

    • WebSocketServlet only configures for a WebSocketCreator

    This subtle change means that the Servlet no longer creates websockets of its own, and instead this work is done by the WebSocketCreator of your choice (don’t worry, there is a default creator).
    This is important to properly support the mux extensions and the future Java API for WebSockets (JSR-356).

    Jetty 9.x WebSockets Quick Start:

    Before we get started, some important WebSocket Basics & Gotchas

    1. A WebSocket Frame is the most fundamental part of the protocol, however it is not really the best way to read/write to websockets.
    2. A WebSocket Message can be 1 or more frames; this is the model of interaction with a WebSocket in Jetty 9.x
    3. A WebSocket TEXT Message can only ever be UTF-8 encoded. (if you need other forms of encoding, use a BINARY Message)
    4. A WebSocket BINARY Message can be anything that will fit in a byte array.
    5. Use the WebSocketPolicy (available in the WebSocketServerFactory) to configure some constraints on what the maximum text and binary message size should be for your socket (to prevent clients from sending massive messages or frames)
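    For example, the policy constraints from gotcha #5 can be set inside the servlet’s configure step. This is only a sketch against the jetty-9 branch API described in this post; the exact policy method names (setMaxTextMessageSize, setMaxBinaryMessageSize) are assumptions and may differ in your build:

    ```java
    package examples;

    import org.eclipse.jetty.websocket.server.WebSocketServerFactory;
    import org.eclipse.jetty.websocket.server.WebSocketServlet;

    public class MyPolicyServlet extends WebSocketServlet
    {
        @Override
        public void configure(WebSocketServerFactory factory)
        {
            // Constrain message sizes to guard against clients sending
            // massive messages or frames. (method names assumed)
            factory.getPolicy().setMaxTextMessageSize(64 * 1024);
            factory.getPolicy().setMaxBinaryMessageSize(128 * 1024);
            // register the socket to use on upgrade, as shown below
            factory.register(MyEchoSocket.class);
        }
    }
    ```

    Tightening the policy here, rather than per-message in your socket code, means oversized payloads are rejected before your handler ever sees them.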

    First, we need the servlet to provide the glue.
    We’ll be overriding the configure(WebSocketServerFactory) here to configure a basic MyEchoSocket to run when an incoming request to upgrade occurs.

    package examples;
    import org.eclipse.jetty.websocket.server.WebSocketServerFactory;
    import org.eclipse.jetty.websocket.server.WebSocketServlet;
    public class MyEchoServlet extends WebSocketServlet
    {
        @Override
        public void configure(WebSocketServerFactory factory)
        {
            // register a socket class as default
            factory.register(MyEchoSocket.class);
        }
    }

    The responsibility of your WebSocketServlet class is to configure the WebSocketServerFactory.  The most important aspect is describing how WebSocket implementations are to be created when requests for new sockets arrive.  This is accomplished by configuring an appropriate WebSocketCreator object.  In the above example, the default WebSocketCreator is being used to register a specific class to instantiate on each new incoming Upgrade request.
    If you wish to use your own WebSocketCreator implementation, you can provide it during this configure step.
    Check the examples/echo to see how this is done with factory.setCreator() and EchoCreator.
    Note that requests for new websockets can arrive from a number of different code paths, not all of which will result in your WebSocketServlet being executed.  Mux for example will result in a new WebSocket request arriving as a logical channel within the MuxExtension itself.
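    A custom creator registered via factory.setCreator() might look roughly like this. This is a sketch only: the package names and the createWebSocket signature follow the websocket-core layout described above but are assumptions, and the "echo" sub-protocol and MyEchoSocket class are placeholders:

    ```java
    package examples;

    import org.eclipse.jetty.websocket.core.api.UpgradeRequest;
    import org.eclipse.jetty.websocket.core.api.UpgradeResponse;
    import org.eclipse.jetty.websocket.server.WebSocketCreator;

    public class MyCreator implements WebSocketCreator
    {
        @Override
        public Object createWebSocket(UpgradeRequest req, UpgradeResponse resp)
        {
            // pick a socket implementation based on the offered sub-protocols
            for (String subprotocol : req.getSubProtocols())
            {
                if ("echo".equals(subprotocol))
                {
                    resp.setAcceptedSubProtocol("echo");
                    return new MyEchoSocket();
                }
            }
            // no acceptable sub-protocol: returning null fails the upgrade
            return null;
        }
    }
    ```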
    As for implementing the MyEchoSocket, you have 3 choices.

    1. Implementing Listener
    2. Using an Adapter
    3. Using Annotations

    Choice 1: implementing WebSocketListener interface.

    Implementing WebSocketListener is the oldest and most fundamental approach available to you for working with WebSocket in a traditional listener approach (be sure you read the other approaches below before you settle on this approach).
    It is your responsibility to handle the connection open/close events appropriately when using the WebSocketListener. Once you obtain a reference to the WebSocketConnection, you have a variety of NIO/Async based write() methods to write content back out the connection.

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.util.Callback;
    import org.eclipse.jetty.util.FutureCallback;
    import org.eclipse.jetty.websocket.core.api.WebSocketConnection;
    import org.eclipse.jetty.websocket.core.api.WebSocketException;
    import org.eclipse.jetty.websocket.core.api.WebSocketListener;
    public class MyEchoSocket implements WebSocketListener
    {
        private WebSocketConnection outbound;
        @Override
        public void onWebSocketBinary(byte[] payload, int offset,
                                      int len)
        {
            /* only interested in text messages */
        }
        @Override
        public void onWebSocketClose(int statusCode, String reason)
        {
            this.outbound = null;
        }
        @Override
        public void onWebSocketConnect(WebSocketConnection connection)
        {
            this.outbound = connection;
        }
        @Override
        public void onWebSocketException(WebSocketException error)
        {
            error.printStackTrace();
        }
        @Override
        public void onWebSocketText(String message)
        {
            if (outbound == null)
            {
                return;
            }
            try
            {
                String context = null;
                Callback callback = new FutureCallback<>();
                outbound.write(context,callback,message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    Choice 2: extending from WebSocketAdapter

    Using the provided WebSocketAdapter, the management of the Connection is handled for you, and access to a simplified WebSocketBlockingConnection is also available (as well as the NIO based write signature seen above).

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.websocket.core.api.WebSocketAdapter;
    public class MyEchoSocket extends WebSocketAdapter
    {
        @Override
        public void onWebSocketText(String message)
        {
            if (isNotConnected())
            {
                return;
            }
            try
            {
                // echo the data back
                getBlockingConnection().write(message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    Choice 3: decorating your POJO with @WebSocket annotations.

    This is the easiest WebSocket you can create, and you have some flexibility in the parameters of the methods as well.

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.util.FutureCallback;
    import org.eclipse.jetty.websocket.core.annotations.OnWebSocketMessage;
    import org.eclipse.jetty.websocket.core.annotations.WebSocket;
    import org.eclipse.jetty.websocket.core.api.WebSocketConnection;
    @WebSocket(maxTextSize = 64 * 1024)
    public class MyEchoSocket
    {
        @OnWebSocketMessage
        public void onText(WebSocketConnection conn, String message)
        {
        if (!conn.isOpen())
            {
                return;
            }
            try
            {
                conn.write(null,new FutureCallback(),message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    The annotations you have available:
    @OnWebSocketMessage: To receive websocket message events.
    Examples:

      @OnWebSocketMessage
      public void onTextMethod(String message) {
         // simple TEXT message received
      }
      @OnWebSocketMessage
      public void onTextMethod(WebSocketConnection connection,
                               String message) {
         // simple TEXT message received, with Connection
         // that it occurred on.
      }
      @OnWebSocketMessage
      public void onBinaryMethod(byte data[], int offset,
                                 int length) {
         // simple BINARY message received
      }
      @OnWebSocketMessage
      public void onBinaryMethod(WebSocketConnection connection,
                                 byte data[], int offset,
                                 int length) {
         // simple BINARY message received, with Connection
         // that it occurred on.
      }

    @OnWebSocketConnect: To receive websocket connection connected event (will only occur once).
    Example:

      @OnWebSocketConnect
      public void onConnect(WebSocketConnection connection) {
         // WebSocket is now connected
      }

    @OnWebSocketClose: To receive websocket connection closed events (will only occur once).
    Example:

      @OnWebSocketClose
      public void onClose(int statusCode, String reason) {
         // WebSocket is now disconnected
      }
      @OnWebSocketClose
      public void onClose(WebSocketConnection connection,
                          int statusCode, String reason) {
         // WebSocket is now disconnected
      }

    @OnWebSocketFrame: To receive websocket framing events (read only access to the raw Frame details).
    Example:

      @OnWebSocketFrame
      public void onFrame(Frame frame) {
         // WebSocket frame received
      }
      @OnWebSocketFrame
      public void onFrame(WebSocketConnection connection,
                          Frame frame) {
         // WebSocket frame received
      }

    One More Thing … The Future

    We aren’t done with our changes to Jetty 9.x and the WebSocket API, we are actively working on the following features as well…

    • Mux Extension

    The multiplex extension being drafted will allow for multiple virtual WebSocket connections over a single physical TCP/IP connection.  This extension will allow browsers to better utilize their connection limits/counts, and allow web proxy intermediaries to bundle multiple websocket connections to a server together over a single physical connection.

    • Streaming APIs

    There has been some expressed interest in providing read and write of text or binary messages using the standard Java IO Writer/Reader (for TEXT messages) and OutputStream/InputStream (for BINARY messages) APIs.

    Current plans for streamed reading include new @OnWebSocketMessage interface patterns.

      // In the near future, we will have the following some Streaming
      // forms also available.  This is a delicate thing to
      // implement and currently does not work properly, but is
      // scheduled.
      @OnWebSocketMessage
      public void onTextMethod(Reader stream) {
         // TEXT message received, and reported to your socket as a
         // Reader. (can handle 1 message, regardless of size or
         // number of frames)
      }
      @OnWebSocketMessage
      public void onTextMethod(WebSocketConnection connection,
                               Reader stream) {
         // TEXT message received, and reported to your socket as a
         // Reader. (can handle 1 message, regardless of size or
         // number of frames).  Connection that message occurs
         // on is reported as well.
      }
      @OnWebSocketMessage
      public void onBinaryMethod(InputStream stream) {
         // BINARY message received, and reported to your socket
         // as a InputStream. (can handle 1 message, regardless
         // of size or number of frames).
      }
      @OnWebSocketMessage
      public void onBinaryMethod(WebSocketConnection connection,
                                 InputStream stream) {
         // BINARY message received, and reported to your socket
         // as a InputStream. (can handle 1 message, regardless
         // of size or number of frames).  Connection that
         // message occurs on is reported as well.
      }

    And for streaming writes, we plan to provide Writer and OutputStream implementations that simply wrap the provided WebSocketConnection.
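    As a rough illustration of that planned design (the real class names are not yet public, so the MessageWriter class and its MessageSender callback here are entirely hypothetical), such a Writer could buffer characters and hand the completed TEXT message to the connection only when closed:

    ```java
    import java.io.IOException;
    import java.io.Writer;

    // Hypothetical sketch of a streaming write wrapper: buffers characters
    // and delivers the completed TEXT message when the Writer is closed.
    // MessageSender stands in for the WebSocketConnection write call.
    public class MessageWriter extends Writer
    {
        public interface MessageSender
        {
            void send(String message) throws IOException;
        }

        private final StringBuilder buffer = new StringBuilder();
        private final MessageSender sender;

        public MessageWriter(MessageSender sender)
        {
            this.sender = sender;
        }

        @Override
        public void write(char[] cbuf, int off, int len)
        {
            buffer.append(cbuf, off, len);
        }

        @Override
        public void flush()
        {
            // no-op: a TEXT message is only complete on close()
        }

        @Override
        public void close() throws IOException
        {
            // one message, regardless of how many writes produced it
            sender.send(buffer.toString());
        }
    }
    ```

    The key design point is that close(), not flush(), marks the message boundary, mirroring how a multi-frame message only ends with its FIN frame.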

    • Android Compatible Client Library

    While Android is currently not Java 7 compatible, a modified websocket-client library suitable for use with Android is on our TODO list.

    • Support Java API for WebSocket API (JSR356)

    We are actively tracking the work being done by this JSR group; it is coming, but is still some way off from being a complete and finished API (heck, the current EDR still doesn’t support extensions). Jetty 9.x will definitely support it, and we have tried to build our Jetty 9.x WebSocket API so that the Java API for WebSockets can live above it.

  • Jetty 9 – Features

    Jetty 9 milestone 0 has landed! We are very excited about getting this release of jetty out and into the hands of everyone. A lot of work has gone into reworking fundamentals and this is going to be the best version of jetty yet!

    Anyway, as promised a few weeks back, here is a list of some of the big features in jetty-9. By no means an authoritative list of things that have changed, these are many of the high points we think are worthy of a bit of initial focus in jetty-9. One of the features (pluggable modules) will land in a subsequent milestone release as it is still being refined somewhat, but the rest of them are largely in place and working in our initial testing.
    We’ll blog in depth on some of these features over the course of the next couple of months. We are targeting a November official release of Jetty 9.0.0 so keep an eye out. The improved documentation is coming along well and we’ll introduce that shortly. In the meantime, give the initial milestones a whirl and give us feedback on the mailing lists, on twitter (#jettyserver hashtag pls) or directly at some of the conferences we’ll be attending over the next couple of months.
    Next Generation Protocols – SPDY, WebSockets, MUX and HTTP/2.0 are actively replacing the venerable HTTP/1.1 protocol. Jetty directly supports these protocols as equals and first class siblings to HTTP/1.1. This means a lighter faster container that is simpler and more flexible to deal with the rapidly changing mix of protocols currently being experienced as HTTP/1.1 is replaced.
    Content Push – SPDY v3 support includes content push within both the client and server. This is a potentially huge optimization for websites that know what a browser will need in terms of javascript files or images, instead of waiting for the browser to ask first.
    Improved WebSocket Server and Client

    • Fast websocket implementation
    • Supporting classic Listener approach and @WebSocket annotations
    • Fully compliant to RFC6455 spec (validated via autobahn test suite http://autobahn.ws/testsuite)
    • Support for latest versions of Draft WebSocket extensions (permessage-compression, and fragment)

    Java 7 – We have removed some areas of abstraction within jetty in order to take advantage of improved APIs in the JVM regarding concurrency and nio, this leads to a leaner implementation and improved performance.
    Servlet 3.1 ready – We actively track this developing spec and will release with support; in fact much of the support is already in place.
    Asynchronous HTTP client – refactored to simplify the API, while retaining the ability to run many thousands of simultaneous requests; used as a basis for much of our own testing and http client needs.
    Pluggable Modules – one distribution with integrations for libraries, third party technologies, and web applications, available for download through a simple command line interface
    Improved SSL Support – the proliferation of mobile devices that use SSL has manifested in many atypical client implementations; support for these edge cases in SSL has been thoroughly refactored such that it is now understandable and maintainable by humans
    Lightweight – Jetty continues its history of having a very small memory footprint while still being able to scale to many tens of thousands of connections on commodity hardware.
    Eminently Embeddable – Years of embedding support pays off in your own application, webapp, or testing. Use embedded jetty to unit test your web projects. Add a web server to your existing application. Bundle your web app as a standalone application.
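    A minimal embedded server can be stood up in a few lines using the long-standing embedded API. This sketch wires the MyEchoServlet from the quick start above onto /echo; the port and path are arbitrary examples:

    ```java
    package examples;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;

    public class EmbeddedExample
    {
        public static void main(String[] args) throws Exception
        {
            // a server listening on port 8080
            Server server = new Server(8080);

            // a servlet context at the root, with our websocket servlet
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(MyEchoServlet.class, "/echo");
            server.setHandler(context);

            server.start();
            server.join(); // block until the server stops
        }
    }
    ```

    The same pattern works in unit tests: start the server on an ephemeral port in setup, exercise it with a client, and stop it in teardown.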

  • Spend money on free software?

    Here we are with summer coming to an end, the kids going back to school, and lots of software projects yet to complete before year end. What are the options we have for our development projects? Should we buy commercially packaged software, have our own people create what we need, outsource the development, etc? Don’t forget to include “use open source code” on your short list of viable options!
    Those of you who have used open source know that it can speed up project completion, help avoid vendor lock-in, lower your risk, and lower the life cycle cost of your business applications. Those of you who have not yet tried open source are missing out on these benefits. Open source software is produced and maintained by some of the brightest minds and most dedicated people you will ever find. They do it because it is their passion, and they take personal pride in delivering some of the highest quality software available. Is it really free? Yes, you can download the software and documentation and use it for no charge. Are there hidden costs? Well there are costs, but not hidden. Someone on your team will need to take ownership of the correct use of open source code. Does this mean they need to be a full time expert? No. If you do pay them to learn and become an expert, or if you hire a seasoned full time expert, then yes, this will cost something and it can be substantial.
    You have a third choice. What we hear from our customers is that buying commercial support directly from the open source committers to supplement limited in-house knowledge is far and away the most cost effective choice. You probably want your in-house software engineering staff to focus on work that gives you a unique and direct competitive advantage for your business. You also probably want to take full advantage of the open source code features and best practices for security, performance, and scalability. Our open source committers have helped thousands of customers meet their project goals on time and within budget. Contact us today and let us help you compare costs and explore how to get started with using open source code.

  • Jetty 9 – it's coming!

    Development on Jetty-9 has been chugging along for quite some time now and it looks like we’ll start releasing milestones around the end of September.  This is exciting because we have a lot of cool improvements and features coming that I’ll leave to others to blog about in specific over the next couple of months as things come closer to release.
    What I wanted to highlight in this blog post are the plans moving forward for Jetty, version-wise, with a bit of context where appropriate.

    • Jetty-9 will require java 1.7

    While Oracle has relented a couple of times now on when the EOL of java 1.6 will be, it looks like it will be over within the next few months.  Since native support for SPDY (more below) is one of the really big deals about jetty-9, and SPDY requires java 1.7, that is going to be the requirement.

    • Jetty-9 will be servlet-api 3.0

    We had planned on jetty-9 being servlet-api 3.1 but since that api release doesn’t appear to be coming anytime soon, the current plan is to just make jetty-9 support servlet 3.0 and once servlet-api 3.1 is released we’ll make a minor release update of jetty-9 to support it.  Most of the work for supporting servlet-api 3.1 already exists in the current versions of jetty anyway so it shouldn’t be a huge deal.

    • Jetty-7 and Jetty-8 will still be supported as ‘mature’ production releases

    Jetty-9 has some extremely important changes in the IO layers that make supporting it moving forward far easier than jetty 7 and 8.  For much of the life of Java 1.6 and Java 1.7 there have been annoying ‘issues’ in the jvm NIO implementation that we (well, greg to be honest) have piled work around upon work around onto, until some of the work arounds would start to act up once the underlying jvm issues were resolved.  Most of this has been addressed in jetty-7.6.x and jetty-8.1.x releases assuming the latest jvms are being used (basically make sure you avoid anything in the 1.6u20-29 range).  Anyway, jetty-9 contains a heavily refactored IO layer which should make it easier to respond to these situations in the future, should they arise, in a more…well…deterministic fashion. 🙂

    • Jetty-9 IO is a major overhaul

    This deserves its own blog entry, which it will get eventually I am sure; however it can’t be overstated how much the inner workings of jetty have evolved with jetty-9. Since its inception jetty has always been a very modular or component oriented http server. The key word being ‘http’ server, and with Jetty-9 that is changing. Jetty-9 has been rearchitected from the IO layer up to directly support the separation of wire protocol from semantics, so it is now possible to support HTTP over HTTP, HTTP over SPDY, WebSocket over SPDY, multiplexing, etc., with all protocols being first class citizens and no need to mock out inappropriate interfaces. While these are mostly internal changes, they ripple out to give many benefits to users in the form of better performance, smaller software, and simpler and more appropriate configuration. For example, instead of having multiples of different connector types, each with unique SSL and/or SPDY variants, there is now a single connector into which various connection factories are configured to support SSL, HTTP, SPDY, WebSocket, etc. This means moving forward jetty will be able to adapt easily and quickly to new protocols as they come onto the scene.
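    In code, the single-connector arrangement looks roughly like this. Treat it as a sketch against the still-evolving jetty-9 API: the ServerConnector and connection factory class names reflect the design described above but may shift before release, and "keystore.jks" is a placeholder path:

    ```java
    package examples;

    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.server.SslConnectionFactory;
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class ConnectorExample
    {
        public static void main(String[] args) throws Exception
        {
            Server server = new Server();

            SslContextFactory ssl = new SslContextFactory("keystore.jks");

            // One connector, multiple connection factories: SSL wraps HTTP
            // here, and a SPDY factory could be slotted in alongside in
            // exactly the same way.
            ServerConnector connector = new ServerConnector(server,
                new SslConnectionFactory(ssl, "http/1.1"),
                new HttpConnectionFactory());
            connector.setPort(8443);
            server.addConnector(connector);

            server.start();
        }
    }
    ```

    Swapping or stacking protocols then becomes a matter of changing the factory list, rather than picking a different connector class.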

    • Jetty-6…for the love of god, please update

    Jetty-5 used to hold the title for ‘venerable’ but that title is really shifting to jetty-6 at this point.  I am constantly amazed with folks on places like stack overflow starting a project using jetty-6.  The linux distributions really need to update, so if you work on those and need help, please ping us.  Many other projects that embed jetty really need to update as well, looking at you Google App Engine and GWT!  If you are a company and would like help updating your jetty version or are interested in taking advantage of the newer protocols, feel free to contact webtide and we can help you make it easier.  If you’re an open source project, reach out to us on the mailing lists and we can assist there as much as time allows.  But please…add migrating to 7, 8 or 9 to your TODO list!

    • No more split production versions

    One of our more confusing situations has been releasing both jetty 7 and jetty 8 as stable production versions.  The reasons for our doing this were many and varied, but with servlet 3.0 having been ‘live’ for a while now, we are going to shift back to a single supported production version moving forward.  The Servlet API is backwards compatible anyway, so we’ll hopefully be reducing some of the confusion over which version of jetty to use moving forward.

    • Documentation

    Finally, our goal starting with jetty-9 moving forward will be to release versioned documentation (generated with docbook)  to a common url under the eclipse.org domain as well as bundling the html and pdf to fit in the new plugin architecture we are working with.  So the days of floundering around for documentation on jetty should be coming to an end soon.
    Lots of exciting things coming in Jetty-9 that you’ll hear about in the coming weeks! Feel free to follow @jmcconnell on twitter for release updates!

  • HTTP/2.0 Expressions of interest

    The IETF HTTPbis Working Group recently called for expressions of interest in the development of the HTTP/2.0 protocol, with SPDY being one of the candidates to use as a basis.

    As an HTTP server and an early implementer of the SPDY protocol, the Jetty project certainly has an interest in HTTP/2.0, and this blog contains the text of our response below.  However, it is also very interesting to read all of the expressions of interest received from industry heavy hitters.

    Reading through these and the thousands of replies, it is clear that there is significant interest and some momentum towards replacing HTTP/1.1, but that the solution is not quite as simple as s/SPDY/HTTP/2.0/.

    There is a lot of heat around the suggestion of mandatory encryption (even though no proposal actually has mandatory encryption), and it looks like there is a big divide in the community.

    I also think that many of the concerns of the intermediaries (F5, haproxy, squid) are not being well addressed.  This is a mistake often made in previous protocol iterations and we would be well served by taking the time to listen and understand their concerns.  Even simple features such as easy access to host headers for quick routing may have significant benefits.

    The Jetty Expression of Interest in HTTP/2.0

    (see also the original post and responses)

    I’m the project leader of the Jetty project (http://eclipse.org/jetty) and am making this initial response on behalf of the project and not for eclipse as a whole (although we will solicit further feedback from other projects within eclipse). I work for Webtide|Intalio who sell support services around Jetty.

    Jetty is an open source server and client written in java that supports the 3.0 Servlet API, HTTP/1.1, Websocket 1.0 and SPDY v3. We have a reasonable market share of java servers (>10% < 30%) and are deployed on everything from tiny embedded servers to very large deployments with over 100k connections per server.

    The Jetty project is very interested in the development and standardisation of HTTP/2.0 and intend to be contributors to the WG and early implementers. We are well acquainted with the limitations of HTTP/1.1 and have a desire to see the problems of pipelining and multiple connections (>2) resolved.

    The Jetty project SPDY effort is led by Simone Bordet and it has implemented SPDY v3 with flow control and push. This is available in the main releases of our Jetty-7 and jetty-8 servers (we also have a java SPDY client). The project has also provided an extension to the JVM to implement the TLS NPN extension needed by SPDY, and we understand that several other java SPDY implementations are using this.

    We chose SPDY to implement rather than any other HTTP/2.0 proposal mainly because of the support available in deployed browsers, so that we can achieve real world feedback. However, we were also encouraged in our adoption of SPDY by the open, methodical and congenial approach of the SPDY project at Google (not always our experience with projects at Google or elsewhere).

    We definitely see the potential of SPDY and it is already being used by some sites. However we still lack the feedback from widespread deployment (it is early days) or from large deployments. We are actively seeking significant sites who are interested in working with us to deploy SPDY.

    There are several key features of SPDY that we see as promising:

    Header compression greatly improves data density. In our use of Ajax and Comet over HTTP/1.1 we have often hit scalability limits due to network saturation with very poor data density of small messages in large HTTP framing. While websocket is doing a lot to resolve this, we are hoping that SPDY will provide improvement without the need to redevelop applications.

    Multiplexing of multiple streams over a single connection is also a good development. Reducing the number of connections that the server must handle is key to scalability, especially as modern HTTP browsers are now exceeding the 2 connection limit. The ability to send out of order responses is good, and we also suspect that receiving messages from a single client over a single connection may help reduce some of the non deterministic behaviours that can develop as multiple connections from the same client set cookies or update session state. It will also avoid the issue of load balancers directing connections from the same client to different nodes in a cluster. We recognise the additional cost of multiplexing (extra copies and flow control), but currently believe that it is worth the effort.

    We see the potential of server push for content, but are struggling with the lack of meta data knowledge available to know what to push and when. We are currently working on strategies that use the referrer header to identify associated resources that can be pushed together. We also check for if-modified-since headers as an indication that associated content may already be cached and thus a push is not required. We see the challenge of push as not being the protocol to send the content, but in working out the standards for meta data, cache control etc so that we know what to push and when.

    We have not yet implemented websocket over SPDY, but do intend to do so if it is supported by the browsers. We see a lot of similarities in the base framing of these two protocols and would hope that eventually only one would need to be well supported.

    We are a bit ambivalent about the use of NPN and TLS only connections. There is a question to be asked regarding whether we should be sending any web content in the clear, and how intermediaries should be able to (or not) filter/inspect/mutate content. However, I personally feel that this is essentially a non technical issue and we should not use a protocol to push any particular agenda. The RTT argument for not supporting in the clear connections is weak, as there are several easy technical solutions available. Furthermore, the lack of support for NPN is a barrier to adoption (albeit one that we have broken down for some JVMs at least). Debugging over TLS is and will always be difficult. We would like HTTP/2.0 to support standardised non encrypted connections (at least from TLS offload to server). If a higher level debate determines that web deployments only accept TLS connections, then we are fine with that non technical determination.

    I repeat that we selected SPDY to implement because of its availability in the browsers and not as the result of a technical review against other alternatives. However, we are generally pleased with the direction and results obtained so far, and look forward to gaining more experience and feedback as it is more widely deployed.

    However, we do recognise that much of the "goodness" of SPDY can be provided by the other proposals. I'm particularly interested in the HTTP Speed+Mobility proposal's use of websockets as its framing layer (as that addresses the concern I raised above). But we currently do not have any plans to implement the alternatives, mainly because of resource limitations and lack of browser support. So currently we are advocates of the SPDY approach in the Starship Troopers sense: i.e. we support SPDY until it is dead or we find something better. Of course Jetty is an open platform and we would really welcome and assist any contributors who would like to build on our websocket support to implement HTTP/SM.

    We believe that there is high demand for a significant improvement over HTTP/1.1 and that the environment is ripe for a rapid rollout of an alternative/improved protocol and expect that HTTP/1.1 can quickly be replaced. Because of this, we have begun development of jetty-9 which replaces the HTTP protocol centric architecture of jetty-7/8 with something that is much better suited to multiple protocols and multiplexed HTTP semantics. SPDY, Websocket and HTTP/1.1 are true peers in Jetty-9 rather than the newer protocols being implemented as HTTP facades. We believe that jetty-9 will be the ideal platform on which to develop and deploy HTTP/2.0 and we invite anybody with an interest to come contribute to the project.

  • Fully functional SPDY-Proxy

    We keep pushing our SPDY implementation and with the upcoming Jetty release we provide a fully functional SPDY proxy server out of the box.
    Simply by configuration you can set up Jetty to provide a SPDY connector that clients connect to via SPDY, and they will be transparently proxied to a target host speaking either SPDY or another web protocol.
    Here are some details about the internals. The implementation is modular and can easily be extended. There's a HTTPSPDYProxyConnector that accepts incoming requests and forwards them to a ProxyEngineSelector. The ProxyEngineSelector forwards the request to an appropriate ProxyEngine for the given target host's protocol.
    Which ProxyEngine to use is determined by the configured ProxyServerInfos, which hold the information about known target hosts and the protocols they speak.
    So far we only have a ProxyEngine implementation for SPDY, but implementing other protocols like HTTP should be pretty straightforward and will follow. Contributions are, as always, highly welcome!
    https://www.webtide.com is already served through a proxy connector forwarding to a plain SPDY connector on localhost.
    For more details and an example configuration, check out the SPDY proxy documentation.
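To make the modular design above concrete, here is a rough sketch of the selector/engine split in plain Java. The class names mirror the description above (ProxyEngineSelector, ProxyEngine, ProxyServerInfo), but the shapes and method signatures are illustrative assumptions, not the actual Jetty API.

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the proxy internals described above -- illustrative only.
interface ProxyEngine {
    // Forward a request to the target host and return the proxied response.
    String proxy(String targetHost, String request);
}

class ProxyServerInfo {
    final String host;
    final String protocol; // e.g. "spdy/3" or "http/1.1"

    ProxyServerInfo(String host, String protocol) {
        this.host = host;
        this.protocol = protocol;
    }
}

class ProxyEngineSelector {
    private final Map<String, ProxyEngine> engines = new HashMap<String, ProxyEngine>();
    private final Map<String, ProxyServerInfo> servers = new HashMap<String, ProxyServerInfo>();

    void putProxyEngine(String protocol, ProxyEngine engine) {
        engines.put(protocol, engine);
    }

    void putProxyServerInfo(String externalHost, ProxyServerInfo info) {
        servers.put(externalHost, info);
    }

    // Look up the target host for the request and pick the engine that
    // speaks the protocol configured for that host.
    String forward(String externalHost, String request) {
        ProxyServerInfo info = servers.get(externalHost);
        return engines.get(info.protocol).proxy(info.host, request);
    }
}

public class ProxySketch {
    public static void main(String[] args) {
        ProxyEngineSelector selector = new ProxyEngineSelector();
        selector.putProxyEngine("spdy/3", new ProxyEngine() {
            public String proxy(String targetHost, String request) {
                return "proxied to " + targetHost + " over SPDY";
            }
        });
        // e.g. the public site fronted by the proxy, served from a local SPDY connector
        selector.putProxyServerInfo("www.webtide.com", new ProxyServerInfo("localhost", "spdy/3"));
        System.out.println(selector.forward("www.webtide.com", "GET /"));
    }
}
```

A ProxyEngine for another protocol would simply be registered under that protocol name, which is why adding an HTTP engine later is a small, isolated change.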

  • SPDY – non representative benchmark for plain http vs. spdy+push on webtide.com

    I've done a quick run with the Page Benchmarker extension in Chromium to measure the difference between plain HTTP and SPDY + push. Enabling benchmarks restricts Chromium to SPDY draft 2, so we run without flow control.
    Note that the website is not the fastest (in fact it's pretty slow). But if these results prove valid in real benchmarks, then a latency reduction of ~473ms is pretty awesome.
    Here’s the promising result:

    I've done several iterations of this benchmark test with ten runs each. The advantage of SPDY was always between 350-550ms.
    Disclaimer: This is in no way a representative benchmark. This has neither been run in an isolated test environment, nor is webtide.com the right website to do such benchmarks! This is just a promising result, nothing more. We’ll do proper benchmarking soon, I promise.

  • SPDY – we push!

    SPDY, Google's web protocol, is gaining momentum. Intended to improve the user's web experience, it aims at severely reducing page load times.
    We've blogged about the protocol and Jetty's straightforward SPDY support already: Jetty-SPDY is joining the revolution! and SPDY support in Jetty.
    Now we're taking this a step further: we push!
    SPDY push is one of the coolest features in the SPDY protocol portfolio.
    In the traditional HTTP approach the browser has to request an HTML resource (the main resource) and make subsequent requests for each sub resource. Every request/response roundtrip adds latency.
    E.g.:
    GET /index.html – wait for response before the browser can request sub resources
    GET /img.jpg
    GET /style.css – wait for response before we can request sub resources of the css
    GET /style_image.css (referenced in style.css)
    This means a single request/response roundtrip for each resource (main and sub resources). Worse, some of them have to be done sequentially. For a page with lots of sub resources, the number of connections to the server (traditionally browsers open about 6 connections) will also limit the number of sub resources that can be fetched in parallel.
    Now SPDY reduces the need to open multiple connections by multiplexing requests over a single connection, and it makes further improvements to reduce latency as described in previous blog posts and the SPDY spec.
    SPDY push enables the server to push resources to the browser/client without a request having been made for that resource. For example, if the server knows that index.html contains a reference to img.jpg and style.css, and that style.css contains a reference to style_image.css, the server can push those resources to the client.
    To take the previous example:
    GET /index.html
    PUSH /img.jpg
    PUSH /style.css
    PUSH /style_image.css
    That means only a single request/response roundtrip for the main resource. And the server immediately sends out the responses for all sub resources. This heavily reduces overall latency, especially for pages with high roundtrip delays (bad/busy network connections, etc.).
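As a back-of-the-envelope check on that claim, the example above can be put into a toy latency model: index.html, style.css and style_image.css form a chain of three sequential round trips (img.jpg overlaps with style.css), while with push everything arrives after the single round trip for index.html. A small sketch, assuming one fixed round-trip time per sequential step and ignoring transfer and processing time:

```java
public class PushLatencyModel {
    // Plain HTTP: index.html -> style.css -> style_image.css is a chain of
    // three sequential round trips (img.jpg is fetched in parallel with style.css).
    static int plainHttpLatency(int rttMillis) {
        return 3 * rttMillis;
    }

    // SPDY push: one round trip for index.html; all sub resources are
    // pushed by the server without further requests.
    static int pushLatency(int rttMillis) {
        return rttMillis;
    }

    public static void main(String[] args) {
        int rtt = 100; // illustrative round-trip time in ms
        System.out.println("plain HTTP: " + plainHttpLatency(rtt) + " ms");
        System.out.println("SPDY push : " + pushLatency(rtt) + " ms");
        System.out.println("saved     : " + (plainHttpLatency(rtt) - pushLatency(rtt)) + " ms");
    }
}
```

With a 100ms round trip the model saves 200ms of pure latency, which is the same ballpark as the measured HTTP vs SPDY + push difference, though real numbers also include transfer and server processing time.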
    We've written a unit test to benchmark the differences between plain HTTP, SPDY and SPDY + push. Note that this is not a real benchmark and the roundtrip delay is emulated! Proper benchmarks are already in our task queue, so stay tuned. However, here are the results:
    HTTP: roundtrip delay 100 ms, average = 414
    SPDY(None): roundtrip delay 100 ms, average = 213
    SPDY(ReferrerPushStrategy): roundtrip delay 100 ms, average = 160
    Sounds cool? Yes, I guess that sounds cool! 🙂
    Even better, in Jetty this means only exchanging one Connector for another and providing our implementation of the push strategy – done. Yes, that's it. Just by changing a few lines of Jetty config you'll get SPDY and SPDY + push without touching your application.
    Have a look at the Jetty Docs to enable SPDY. (They will be updated soon on how to add a push strategy to a SPDY connector.)
    Here’s the only thing you need to configure in jetty to get your application served with SPDY + push transparently:
    <New id="pushStrategy">
        <Arg type="List">
            <Array type="String">
                <Item>.*.css</Item>
                <Item>.*.js</Item>
                <Item>.*.png</Item>
                <Item>.*.jpg</Item>
                <Item>.*.gif</Item>
            </Array>
        </Arg>
        <Set name="referrerPushPeriod">15000</Set>
    </New>
    <Call name="addConnector">
        <Arg>
            <New>
                <Arg>
                    <Ref id="sslContextFactory" />
                </Arg>
                <Arg>
                    <Ref id="pushStrategy" />
                </Arg>
                <Set name="Port">11081</Set>
                <Set name="maxIdleTime">30000</Set>
                <Set name="Acceptors">2</Set>
                <Set name="AcceptQueueSize">100</Set>
                <Set name="initialWindowSize">131072</Set>
            </New>
        </Arg>
    </Call>
    So how do we push?
    We've implemented a pluggable mechanism to add a push strategy to a SPDY connector. Our default strategy, called ReferrerPushStrategy, uses the "referer" header to identify push resources the first time a page is requested.
    The browser requests the main resource and shortly afterwards usually requests all sub resources needed for that page. ReferrerPushStrategy uses the referer header in those sub requests to identify sub resources for the main resource named in the referer header. It remembers those sub resources, and on the next request for the main resource it pushes all the sub resources it knows about to the client.
    Now if the user clicks a link on the main resource, that request will also contain a referer header for the main resource. However, linked pages should not be pushed to the client in advance! To avoid that, ReferrerPushStrategy has a configurable push period: it only remembers sub resources that were requested within that period after the very first request of the main resource since application start.
    So this is a kind of best-effort strategy. It does not know which resources to push at startup, but it learns on a best-effort basis.
    What does best effort mean? It means that if the browser doesn't request the sub resources fast enough (within the push period timeframe) after the initial request of the main resource, the strategy will never learn those sub resources. And if the user is fast enough clicking links, it might push resources that should not be pushed.
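The learning behaviour just described can be sketched in a few lines. This is a simplified, single-threaded illustration of the idea, not Jetty's actual ReferrerPushStrategy code; the class and method names are assumptions.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch of a referrer-based push strategy: learn sub resources
// requested within the push period after a main resource's first request,
// then push them on later requests for that main resource.
// Illustrative only -- not Jetty's actual ReferrerPushStrategy.
public class ReferrerPushSketch {
    private final long pushPeriodMillis;
    // main resource -> sub resources learned for it
    private final Map<String, Set<String>> pushResources = new HashMap<String, Set<String>>();
    // resource -> time of its very first request (start of the learning window)
    private final Map<String, Long> firstRequestTime = new HashMap<String, Long>();

    public ReferrerPushSketch(long pushPeriodMillis) {
        this.pushPeriodMillis = pushPeriodMillis;
    }

    // Called for every request; returns the resources to push for it
    // (empty while the strategy is still learning).
    public Set<String> onRequest(String uri, String referrer, long nowMillis) {
        if (referrer != null) {
            Long start = firstRequestTime.get(referrer);
            // Only learn sub resources seen within the push period after
            // the main resource's first request; a link clicked later also
            // carries the referer header but must NOT be learned.
            if (start != null && nowMillis - start <= pushPeriodMillis) {
                Set<String> subs = pushResources.get(referrer);
                if (subs == null) {
                    subs = new HashSet<String>();
                    pushResources.put(referrer, subs);
                }
                subs.add(uri);
            }
        }
        if (!firstRequestTime.containsKey(uri))
            firstRequestTime.put(uri, nowMillis);
        Set<String> toPush = pushResources.get(uri);
        return toPush == null ? Collections.<String>emptySet() : toPush;
    }

    public static void main(String[] args) {
        ReferrerPushSketch strategy = new ReferrerPushSketch(15000);
        System.out.println(strategy.onRequest("/index.html", null, 0));     // nothing known yet
        strategy.onRequest("/style.css", "/index.html", 100);               // within period: learned
        strategy.onRequest("/page2.html", "/index.html", 60000);            // after period: a link click, not learned
        System.out.println(strategy.onRequest("/index.html", null, 90000)); // pushes /style.css only
    }
}
```

A real implementation would also need to be thread safe and consult cache-related headers, as discussed below for if-modified-since.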
    Now you might be wondering what happens if the browser has the resources already cached? Aren't we sending data over the wire which the browser actually already has? Well, usually we don't. First, we use the if-modified-since header to decide whether we should push sub resources, and second, the browser can refuse push streams. If the browser gets a syn for a sub resource it already has, it can simply reset the push stream. Then the only thing that has been sent is the syn frame for the push stream. Not a big drawback considering the advantages this has.
    There have to be more drawbacks?!
    Yes, there are. The SPDY implementation in Jetty is still experimental. The whole protocol is bleeding edge, and implementations in both browsers and servers still have some rough edges. There is already broad support among browsers for the SPDY protocol: stable releases of Firefox and Chromium/Chrome support SPDY draft 2 out of the box, and it already works really well. SPDY draft 3, however, is only supported in more recent builds of the current browsers. SPDY push seems to work properly only with SPDY draft 3 and the latest Chrome/Chromium browsers. However, we're all working hard on smoothing out the rough edges, and I presume SPDY draft 3 and push will be working in all stable browsers soon.
    We also had to disable push for draft 2, as it seemed to have negative effects on Chromium, up to and including regular browser crashes.
    Try it!
    As we keep eating our own dog food, https://www.webtide.com is already updated with the latest code and has push enabled. If you want to test the push functionality, get a Chrome Canary or a Chromium nightly build and access our company's website.
    This is how it looks in the developer tools and on the chrome://net-internals page.
    developer tools (note that the request has been done with an empty cache and the pushed resources are marked as read from cache):

    net-internals (note the pushed and claimed resource count):

    Pretty exciting! We'll keep "pushing" for more and better SPDY support, improving our push strategy, and helping make SPDY a better protocol. Stay tuned for more stuff to come.
    Note that the SPDY stuff is not in any official Jetty release yet, but it most probably will be in the next release. Documentation for Jetty will be updated soon as well.

  • JMiniX JMX console in Jetty

    Jetty has long had a rich set of JMX mbeans that give very detailed status, configuration and control over the server and applications, which can now simply be accessed with the JMiniX web console:

    The usability of JMX has been somewhat let down by a lack of quality JMX management consoles.  JConsole and JVisualVM do give good access to MBeans, but they rely on an RMI connection which can be tricky to set up to a remote machine.   JMiniX avoids RMI by allowing access to the MBeans via a servlet you can add to your web application.

    The instructions were straightforward to follow, and the steps were simply:

    1. Add dependency to your pom
    2. Add a repository to your pom (bummer – needs restlet.org which is not in maven central – if it was I’d consider adding JMiniX to our released test webapp)
    3. Define the servlet in your web.xml
    4. Build and run!

    You can see by the screen shot above that the console gives a nice rendering of the available mbeans from the JVM and Jetty (and cometd if running). Attributes can be viewed and updated, and operations can be called – all the normal stuff.   It only gives direct mbean access and does not provide any higher level management functions, but this is not a big problem if the mbeans are well designed and self documented.

    Also if you wanted to develop more advanced management functions, then the restful nature of JMiniX should make this fairly straight forward.  For example attributes can be retrieved with simple requests like:

    http://localhost:8080/jmx/servers/0/domains/org.eclipse.jetty.server/mbeans/type=server,id=0/attributes/startupTime/

    That returns JSON like:

    {"value":"1339059648877","label":"startupTime"}
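That flat response format is easy to consume from your own tooling: issue a plain HTTP GET against the URL above and pull out the value field. A minimal sketch, using a regex to extract the field rather than assuming any JSON library on the classpath:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JminixAttribute {
    // Extracts the "value" field from a flat JMiniX attribute response,
    // e.g. {"value":"1339059648877","label":"startupTime"}.
    static String extractValue(String json) {
        Matcher m = Pattern.compile("\"value\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String json = "{\"value\":\"1339059648877\",\"label\":\"startupTime\"}";
        System.out.println("startupTime = " + extractValue(json));
    }
}
```

Fetching the JSON itself is an ordinary HTTP GET (e.g. with java.net.HttpURLConnection), so scripted health checks or dashboards can be layered on top without any JMX/RMI plumbing.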

    JMiniX looks like a great tool to improve the management of your servers and applications and to leverage the value already built into the Jetty JMX mbeans.

    We had been working on a similar effort for restful access to JMX, but JMiniX is more advanced.  It does lack some of the features that we had been working on like aggregate access to repeated attributes, but considering the state of JMiniX, we may consider contributing those features to that project instead.

  • Truth in Benchmarking!

    One of my pet peeves is misleading benchmarks, as discussed in my Lies, Damned Lies and Benchmarks blog.  Recently there has been a bit of interest in Vert.x, some of it resulting from apparently good benchmark results against node.js. The author gave a disclaimer that the tests were non-rigorous and just for fun, but they have already led some people to ask if Jetty can scale like Vert.x.

    I know absolutely nothing about Vert.x, but I do know that their benchmark is next to useless to demonstrate any kind of scalability of a server.  So I’d like to analyse their benchmarks and compare them to how we benchmark jetty/cometd to try to give some understanding about how benchmarks should be designed and interpreted.

    The benchmark

    The vert.x benchmark uses 6 clients, each with 10 connections, each with up to 2000 pipelined HTTP requests for a trivial 200 OK or tiny static file. The tests were run for a minute and the average request rate was taken. So let's break this down:

    6 Clients of 10 connections!

    However you look at this (6 users each with a browser with 10 connections, or 60 individual users), 6 or 60 users does not represent any significant scalability.  We benchmark jetty/cometd with 10,000 to 200,000 connections and have production sites that run with similar numbers.

    Testing 60 connections does not tell you anything about scalability. So why do so many benchmarks get performed on low numbers of connections?  It’s because it is really really hard to generate realistic load for hundreds of thousands of connections.  To do so, we use the jetty asynchronous HTTP client, which has been designed specifically for this purpose, and we still need to use multiple load generating machines to achieve high numbers of connections.
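To make the distinction concrete, here is a tiny self-contained sketch of the "open a lot of connections" style of load: it stands up a throwaway local server and opens many mostly idle sockets against it. The numbers are deliberately tiny so the sketch runs anywhere; a real test uses tens or hundreds of thousands of connections, an asynchronous client, and multiple load-generating machines.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ManyConnectionsSketch {
    // Open `target` mostly idle connections against a throwaway local server
    // and return how many were established. Scalability testing is about how
    // a server copes with many such idle sockets, not raw request parsing.
    static int openIdleConnections(int target) throws IOException {
        final ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true)
                        server.accept(); // hold connections open, never read
                } catch (IOException closed) {
                    // server.close() below ends the loop
                }
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();

        List<Socket> connections = new ArrayList<Socket>();
        try {
            for (int i = 0; i < target; i++)
                connections.add(new Socket("localhost", server.getLocalPort()));
            return connections.size();
        } finally {
            for (Socket s : connections)
                s.close();
            server.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("open connections: " + openIdleConnections(100));
    }
}
```

Even this toy version shows where the real cost lies: the hard part is not accepting the sockets, but holding resources for all of them while most sit idle.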

    2000 pipelined requests!

    Really? HTTP pipelining is not turned on by default in most web browsers, and even if it was, I cannot think of any realistic application that would generate 2000 requests in a pipeline. Why is this important?  Because with pipelined requests, a server that does:

    byte[] buffer = new byte[8192];
    int length = socket.getInputStream().read(buffer); // one read may return many pipelined requests

    will read many requests into that buffer in a single read.  A trivial HTTP request is a few tens of bytes (and I'm guessing they didn't send any of the verbose, complex headers that real browsers do), so the vert.x benchmark would be reading 30 or more requests on each read.  Thus this benchmark is not really testing any IO performance, but simply how fast they can iterate over a buffer and parse simple requests. At best it is telling you about the latency of their parsing and request handling.
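A quick back-of-the-envelope check of that claim: 30 minimal pipelined requests, concatenated the way such a benchmark client would send them, fit comfortably inside a single 8192-byte read buffer, so each read() delivers dozens of requests at once. (The request line below is an assumed minimal example, about 35 bytes.)

```java
public class PipelineBufferCheck {
    // A minimal pipelined request, roughly what a trivial benchmark sends
    // (real browsers add many more, much larger headers).
    static final String REQUEST = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";

    static int pipelineBytes(int requests) {
        return requests * REQUEST.length();
    }

    public static void main(String[] args) {
        int bytes = pipelineBytes(30);
        System.out.println("bytes for 30 pipelined requests: " + bytes);
        System.out.println("fits in one 8192-byte read: " + (bytes <= 8192));
    }
}
```

So a single 8KB read can swallow well over 30 such requests, and the "requests per second" figure mostly measures the speed of the parse loop over that buffer.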

    Handling reads is not the hard part of scaling IO; handling the idle pauses between the reads is.  It is these idle periods, present in almost all real load profiles, that require the server to carefully allocate resources so that idle connections do not consume resources that could be better used by non-idle connections.  2000 connections each with 6 pipelined requests would be more realistic, or better yet, 20,000 connections with 6 requests sent with 10ms delays between them.

    Trivial 200 OK or Tiny static resource

    Creating a scalable server for non-trivial applications is all about ensuring that maximal resources are applied to performing real business logic in preparing dynamic responses.   If all the responses are trivial or static, then the server is free to be more wasteful.  Worse still for realistic benchmarks, trivial response generation can probably be in-lined by the hotspot compiler in a way that no real application ever could be.

    Run for a minute

    A minute is insufficient time for a JVM to achieve steady state.  For the first few minutes of a run the Hotspot JIT compiler will be using CPU to analyse and compile code. A trivial application might be fully hotspot compiled within a minute, but any reasonably complex server/application is going to take much longer.  Try watching your application with jvisualvm and watch the perm generation continue to grow for many minutes while more and more classes are compiled. Only after the JVM has warmed up your application, and CPU is no longer being used to compile, can any meaningful results be obtained.

    The other big killer of performance is full garbage collection, which can stop the entire VM for many seconds.  Running fast for 60 seconds does not do you much good if a second later you pause for 10s while collecting the garbage from those fast 60 seconds.

    Benchmark results need to be reported for steady state over longer periods of time, and you need to consider GC performance.  The jetty/cometd benchmark tools specifically measure and report both JIT and GC actions during the benchmark runs, and we can perform many benchmark runs in the same JVM.  Below is example output showing that for a 30s run some JIT was still performed, so the VM is not fully warmed up yet:

    Statistics Started at Mon Jun 21 15:50:58 UTC 2010
    Operative System: Linux 2.6.32-305-ec2 amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server
    VM runtime 16.3-b01 1.6.0_20-b02
    Processors: 2
    System Memory: 93.82409% used of 7.5002174 GiB
    Used Heap Size: 2453.7236 MiB
    Max Heap Size: 5895.0 MiB
    Young Generation Heap Size: 2823.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Testing 2500 clients in 100 rooms
    Sending 3000 batches of 1x50B messages every 8000µs
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Mon Jun 21 15:51:29 UTC 2010
    Elapsed time: 30164 ms
            Time in JIT compilation: 12 ms
            Time in Young Generation GC: 0 ms (0 collections)
            Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 1848.7974 MiB
    Garbage Generated in Survivor Generation: 0.0 MiB
    Garbage Generated in Old Generation: 0.0 MiB
    Average CPU Load: 109.96191/200
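The JIT and GC figures in that report come from the standard platform MXBeans, so any benchmark harness can sample them before and after a run and subtract. A minimal sketch of collecting them:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmActivitySnapshot {
    public static void main(String[] args) {
        // Cumulative time the JIT compiler has spent compiling, in ms.
        // Sample before and after a run to see compilation during the run.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit != null && jit.isCompilationTimeMonitoringSupported())
            System.out.println("Time in JIT compilation: " + jit.getTotalCompilationTime() + " ms");

        // Per-collector GC counts and pause times; a non-zero delta during a
        // benchmark run means GC was eating into the measured throughput.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
    }
}
```

If the JIT delta is non-trivial or the old-generation collection count moved during the run, the numbers are not steady-state and should be discarded.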

    Conclusion

    I'm sure the vert.x guys had every good intent when doing their micro-benchmark, and it may well be that vert.x scales really well.  However, I wish that when developers consider benchmarking servers, instead of thinking "let's send a lot of requests at it", their first thought was "let's open a lot of connections at it".  Better yet, a benchmark (micro or otherwise) should be modelled on some real application and the load that it might generate.

    The jetty/cometd benchmark is of a real chat application that really works and has real features like member lists, private messages, etc.  Thus the results we achieve in benchmarks can be reproduced by real applications in production.