Category: WebSockets

  • Jetty 9 – Updated WebSocket API

    Creating WebSockets in Jetty is even easier with Jetty 9!
    While the networking gurus in Jetty have been working on the awesome improvements to the I/O layers in core Jetty 9, the WebSocket fanatics in the community have been working on making writing WebSockets even easier.
    The initial WebSocket implementation in Jetty was started back in November of 2009, well before the WebSocket protocol was finalized.
    It has grown in response to Jetty's involvement in the WebSocket draft discussions, through the finalization of RFC 6455, and onwards into the changes influenced on our design by WebSocket extension drafts such as x-webkit-perframe-deflate, permessage-deflate, fragment, and the ongoing mux discussions.
    The Jetty 7.x and Jetty 8.x codebases required developers using WebSockets to have a complex set of knowledge about how WebSockets work and how Jetty implemented them. This complexity was a result of the rather organic growth of our WebSocket knowledge, as lessons learned about intermediaries and WebSocket Extensions impacted the original design.
    With Jetty 9.x we were given an opportunity to correct our mistakes.

    The new WebSockets API in Jetty 9.x

    Note: this information represents what is in the jetty-9 branch on git, which has changed in small but important ways since 9.0.0.M0 was released.

    With the growing interest in next generation protocols like SPDY and HTTP/2.0, along with evolving standards being tracked for Servlet API 3.1 and the Java API for WebSockets (JSR-356), the time for Jetty 9.x was at hand.  We dove head first into cleaning up the codebase, performing some needed refactoring, and upgrading the codebase to Java 7.
    Along the way, Jetty 9.x started to shed the old blocking I/O layers, and all of the nasty logic surrounding them, resulting in an Async I/O focused Jetty core.  We love this new layer, and we expect you will too, even if you don't see it directly.  This change gives Jetty a smaller, cleaner, easier to maintain and test codebase, along with various performance improvements in speed, CPU use, and memory use.
    In parallel, the Jetty WebSocket codebase changed to soak up the knowledge gained in our early adoption of WebSockets and to better utilize the new Jetty Async I/O layers.  It is important to note that Jetty 9.x WebSockets is NOT backward compatible with prior Jetty versions.
    The most significant changes:

    • Requires Java 7
    • Only supporting WebSocket version 13 (RFC-6455)
    • Artifact Split

    The monolithic jetty-websocket artifact has been split up into several websocket artifacts so that developers can pick and choose what's important to them.

    The new artifacts are all under the org.eclipse.jetty.websocket groupId on maven central.

    • websocket-core.jar – where the basic API classes reside, plus internal implementation details that are common between server & client.
    • websocket-server.jar – the server specific classes
    • websocket-client.jar – the client specific classes
    • Only 1 Listener now (WebSocketListener)
    • Now Supports Annotated WebSocket classes
    • Focus is on Messages not Frames

    In our prior WebSocket API we assumed, incorrectly, that developers would want to work with the raw WebSocket framing.   This change brings us in line with how every other WebSocket API behaves, working with messages, not frames.
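    To make the distinction concrete: a single message may arrive as several frames, and the implementation assembles them before delivering the message to you. A minimal sketch of that aggregation (plain Java illustrating the concept, not Jetty's actual internals):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: a TEXT message split across several frames is
// buffered until the final (FIN) frame arrives, then delivered whole.
// This mirrors what the Jetty 9 API now does for you.
public class MessageAssembler
{
    private final StringBuilder buffer = new StringBuilder();
    private final List<String> delivered = new ArrayList<>();

    // fin=true marks the last frame of the message
    public void onFrame(byte[] payload, boolean fin)
    {
        buffer.append(new String(payload, StandardCharsets.UTF_8));
        if (fin)
        {
            delivered.add(buffer.toString());
            buffer.setLength(0);
        }
    }

    public List<String> messages()
    {
        return delivered;
    }
}
```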

    • WebSocketServlet only configures for a WebSocketCreator

    This subtle change means that the Servlet no longer creates websockets of its own, and instead this work is done by the WebSocketCreator of your choice (don’t worry, there is a default creator).
    This is important to properly support the mux extension and the future Java API for WebSockets (JSR-356).

    Jetty 9.x WebSockets Quick Start:

    Before we get started, some important WebSocket Basics & Gotchas

    1. A WebSocket Frame is the most fundamental part of the protocol, however it is not really the best way to read/write to websockets.
    2. A WebSocket Message can be 1 or more frames, this is the model of interaction with a WebSocket in Jetty 9.x
    3. A WebSocket TEXT Message can only ever be UTF-8 encoded. (if you need other forms of encoding, use a BINARY Message)
    4. A WebSocket BINARY Message can be anything that will fit in a byte array.
    5. Use the WebSocketPolicy (available in the WebSocketServerFactory) to configure some constraints on what the maximum text and binary message size should be for your socket (to prevent clients from sending massive messages or frames)
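    Gotcha #3 is worth a sketch: a strict decoder is the reliable way to check that a payload really is UTF-8 before sending it as a TEXT message. This is plain Java, independent of the Jetty API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

// Sketch of gotcha #3: a TEXT message must be valid UTF-8.
// A strict decoder (unlike new String(bytes), which silently
// substitutes bad sequences) rejects anything that isn't.
public class Utf8Check
{
    public static boolean isValidUtf8(byte[] payload)
    {
        try
        {
            // newDecoder() defaults to REPORT, so malformed input throws
            StandardCharsets.UTF_8.newDecoder()
                .decode(ByteBuffer.wrap(payload));
            return true;
        }
        catch (CharacterCodingException e)
        {
            return false; // send this payload as a BINARY message instead
        }
    }
}
```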

    First, we need the servlet to provide the glue.
    We’ll be overriding the configure(WebSocketServerFactory) here to configure a basic MyEchoSocket to run when an incoming request to upgrade occurs.

    package examples;
    import org.eclipse.jetty.websocket.server.WebSocketServerFactory;
    import org.eclipse.jetty.websocket.server.WebSocketServlet;
    public class MyEchoServlet extends WebSocketServlet
    {
        @Override
        public void configure(WebSocketServerFactory factory)
        {
            // register a socket class as default
            factory.register(MyEchoSocket.class);
        }
    }

    The responsibility of your WebSocketServlet class is to configure the WebSocketServerFactory.  The most important aspect is describing how WebSocket implementations are to be created when requests for new sockets arrive.  This is accomplished by configuring an appropriate WebSocketCreator object.  In the above example, the default WebSocketCreator is used to register a specific class to instantiate on each new incoming Upgrade request.
    If you wish to use your own WebSocketCreator implementation, you can provide it during this configure step.
    Check the examples/echo to see how this is done with factory.setCreator() and EchoCreator.
    Note that requests for new websockets can arrive from a number of different code paths, not all of which will result in your WebSocketServlet being executed.  Mux, for example, will result in a new WebSocket request arriving as a logical channel within the MuxExtension itself.
    As for implementing the MyEchoSocket, you have 3 choices.

    1. Implementing Listener
    2. Using an Adapter
    3. Using Annotations

    Choice 1: implementing WebSocketListener interface.

    Implementing WebSocketListener is the oldest and most fundamental approach available to you for working with WebSocket in a traditional listener approach (be sure you read the other approaches below before you settle on this approach).
    It is your responsibility to handle the connection open/close events appropriately when using the WebSocketListener. Once you obtain a reference to the WebSocketConnection, you have a variety of NIO/Async based write() methods to write content back out the connection.
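    The general shape of those write() methods is a message plus a completion callback. As a rough illustration of the pattern (a CompletableFuture stands in here for Jetty's FutureCallback, and a StringBuilder stands in for the network; none of these names are Jetty's API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

// Illustrative sketch of the callback-per-write pattern: the caller
// hands over a message and gets back a handle it can block on or
// chain from, completed when the (pretend) network write finishes.
public class AsyncWriteSketch
{
    public static CompletableFuture<Void> write(ExecutorService io,
                                                StringBuilder wire,
                                                String message)
    {
        // the "write" happens on an I/O thread; the future completes
        // once the append (our stand-in for the socket write) is done
        return CompletableFuture.runAsync(() -> wire.append(message), io);
    }
}
```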

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.util.Callback;
    import org.eclipse.jetty.util.FutureCallback;
    import org.eclipse.jetty.websocket.core.api.WebSocketConnection;
    import org.eclipse.jetty.websocket.core.api.WebSocketException;
    import org.eclipse.jetty.websocket.core.api.WebSocketListener;
    public class MyEchoSocket implements WebSocketListener
    {
        private WebSocketConnection outbound;
        @Override
        public void onWebSocketBinary(byte[] payload, int offset,
                                      int len)
        {
            /* only interested in text messages */
        }
        @Override
        public void onWebSocketClose(int statusCode, String reason)
        {
            this.outbound = null;
        }
        @Override
        public void onWebSocketConnect(WebSocketConnection connection)
        {
            this.outbound = connection;
        }
        @Override
        public void onWebSocketException(WebSocketException error)
        {
            error.printStackTrace();
        }
        @Override
        public void onWebSocketText(String message)
        {
            if (outbound == null)
            {
                return;
            }
            try
            {
                String context = null;
                Callback callback = new FutureCallback<>();
                outbound.write(context, callback, message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    Choice 2: extending from WebSocketAdapter

    Using the provided WebSocketAdapter, the management of the Connection is handled for you, and access to a simplified WebSocketBlockingConnection is also available (as well as using the NIO based write signature seen above)

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.websocket.core.api.WebSocketAdapter;
    public class MyEchoSocket extends WebSocketAdapter
    {
        @Override
        public void onWebSocketText(String message)
        {
            if (isNotConnected())
            {
                return;
            }
            try
            {
                // echo the data back
                getBlockingConnection().write(message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    Choice 3: decorating your POJO with @WebSocket annotations.

    This is the easiest WebSocket you can create, and you have some flexibility in the parameters of the methods as well.

    package examples;
    import java.io.IOException;
    import org.eclipse.jetty.util.FutureCallback;
    import org.eclipse.jetty.websocket.core.annotations.OnWebSocketMessage;
    import org.eclipse.jetty.websocket.core.annotations.WebSocket;
    import org.eclipse.jetty.websocket.core.api.WebSocketConnection;
    @WebSocket(maxTextSize = 64 * 1024)
    public class MyEchoSocket
    {
        @OnWebSocketMessage
        public void onText(WebSocketConnection conn, String message)
        {
            if (!conn.isOpen())
            {
                return;
            }
            try
            {
                conn.write(null, new FutureCallback<>(), message);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    The annotations you have available:
    @OnWebSocketMessage: To receive websocket message events.
    Examples:

      @OnWebSocketMessage
      public void onTextMethod(String message) {
         // simple TEXT message received
      }
      @OnWebSocketMessage
      public void onTextMethod(WebSocketConnection connection,
                               String message) {
         // simple TEXT message received, with Connection
         // that it occurred on.
      }
      @OnWebSocketMessage
      public void onBinaryMethod(byte data[], int offset,
                                 int length) {
         // simple BINARY message received
      }
      @OnWebSocketMessage
      public void onBinaryMethod(WebSocketConnection connection,
                                 byte data[], int offset,
                                 int length) {
         // simple BINARY message received, with Connection
         // that it occurred on.
      }

    @OnWebSocketConnect: To receive the websocket connected event (will only occur once).
    Example:

      @OnWebSocketConnect
      public void onConnect(WebSocketConnection connection) {
         // WebSocket is now connected
      }

    @OnWebSocketClose: To receive the websocket closed event (will only occur once).
    Example:

      @OnWebSocketClose
      public void onClose(int statusCode, String reason) {
         // WebSocket is now disconnected
      }
      @OnWebSocketClose
      public void onClose(WebSocketConnection connection,
                          int statusCode, String reason) {
         // WebSocket is now disconnected
      }

    @OnWebSocketFrame: To receive websocket framing events (read only access to the raw Frame details).
    Example:

      @OnWebSocketFrame
      public void onFrame(Frame frame) {
         // WebSocket frame received
      }
      @OnWebSocketFrame
      public void onFrame(WebSocketConnection connection,
                          Frame frame) {
         // WebSocket frame received
      }

    One More Thing … The Future

    We aren’t done with our changes to Jetty 9.x and the WebSocket API; we are actively working on the following features as well…

    • Mux Extension

    The multiplex extension being drafted will allow for multiple virtual WebSocket connections over a single physical TCP/IP connection.  This extension will allow browsers to better utilize their connection limits/counts, and allow web proxy intermediaries to bundle multiple websocket connections to a server together over a single physical connection.
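    Conceptually, a demultiplexer on the receiving side routes each tagged frame to its logical channel. A toy sketch of that routing (the channel numbering and framing here are purely illustrative, not what the mux draft specifies):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Conceptual sketch only: the mux draft tags traffic with a channel
// id so one physical connection can carry many logical WebSockets.
public class MuxSketch
{
    private final Map<Integer, Consumer<String>> channels = new HashMap<>();

    // register a handler for a logical channel
    public void openChannel(int id, Consumer<String> handler)
    {
        channels.put(id, handler);
    }

    // route each incoming payload to its logical channel, if open
    public void onPhysicalFrame(int channelId, String payload)
    {
        Consumer<String> handler = channels.get(channelId);
        if (handler != null)
            handler.accept(payload);
    }
}
```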

    • Streaming APIs

    There has been some interest in supporting reads and writes of messages using the standard Java IO Reader/Writer (for TEXT messages) and InputStream/OutputStream (for BINARY messages) APIs.

    Current plans for streamed reading include new @OnWebSocketMessage method signatures.

      // In the near future, the following streaming forms will
      // also be available.  This is a delicate thing to implement
      // and currently does not work properly, but it is scheduled.
      @OnWebSocketMessage
      public void onTextMethod(Reader stream) {
         // TEXT message received, and reported to your socket as a
         // Reader. (can handle 1 message, regardless of size or
         // number of frames)
      }
      @OnWebSocketMessage
      public void onTextMethod(WebSocketConnection connection,
                               Reader stream) {
         // TEXT message received, and reported to your socket as a
         // Reader. (can handle 1 message, regardless of size or
         // number of frames).  Connection that message occurs
         // on is reported as well.
      }
      @OnWebSocketMessage
      public void onBinaryMethod(InputStream stream) {
         // BINARY message received, and reported to your socket
         // as a InputStream. (can handle 1 message, regardless
         // of size or number of frames).
      }
      @OnWebSocketMessage
      public void onBinaryMethod(WebSocketConnection connection,
                                 InputStream stream) {
         // BINARY message received, and reported to your socket
         // as a InputStream. (can handle 1 message, regardless
         // of size or number of frames).  Connection that
         // message occurs on is reported as well.
      }

    And for streaming writes, we plan to provide Writer and OutputStream implementations that simply wrap the provided WebSocketConnection.
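    Such a Writer might look roughly like this, with a Consumer<String> standing in for the connection's text-message send (an assumed hook, not Jetty's API):

```java
import java.io.IOException;
import java.io.Writer;
import java.util.function.Consumer;

// Sketch: buffer characters and emit the whole buffer as a single
// TEXT message when the writer is closed. The Consumer stands in
// for the real connection's send method.
public class MessageWriter extends Writer
{
    private final StringBuilder buffer = new StringBuilder();
    private final Consumer<String> sendTextMessage;

    public MessageWriter(Consumer<String> sendTextMessage)
    {
        this.sendTextMessage = sendTextMessage;
    }

    @Override
    public void write(char[] cbuf, int off, int len)
    {
        buffer.append(cbuf, off, len);
    }

    @Override
    public void flush()
    {
        // a real implementation might emit a non-final frame here
    }

    @Override
    public void close() throws IOException
    {
        sendTextMessage.accept(buffer.toString()); // one complete message
    }
}
```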

    • Android Compatible Client Library

    While Android is currently not Java 7 compatible, a modified websocket-client library suitable for use with Android is on our TODO list.

    • Support for the Java API for WebSockets (JSR-356)

    We are actively tracking the work being done by this JSR group; it is coming, but is still some way off from being a complete and finished API (heck, the current EDR still doesn’t support extensions). Jetty 9.x will definitely support it, and we have tried to build our Jetty 9.x WebSocket API so that the Java API for WebSockets can live above it.

  • Jetty 9 – Features

    Jetty 9 milestone 0 has landed! We are very excited about getting this release of jetty out and into the hands of everyone. A lot of work has gone into reworking fundamentals and this is going to be the best version of jetty yet!

    Anyway, as promised a few weeks back, here is a list of some of the big features in jetty-9. By no means an authoritative list of things that have changed, these are many of the high points we think are worthy of a bit of initial focus in jetty-9. One of the features (pluggable modules) will land in a subsequent milestone release as it is still being refined somewhat, but the rest of them are largely in place and working in our initial testing.
    We’ll blog in depth on some of these features over the course of the next couple of months. We are targeting a November official release of Jetty 9.0.0 so keep an eye out. The improved documentation is coming along well and we’ll introduce that shortly. In the meantime, give the initial milestones a whirl and give us feedback on the mailing lists, on twitter (#jettyserver hashtag pls) or directly at some of the conferences we’ll be attending over the next couple of months.
    Next Generation Protocols – SPDY, WebSockets, MUX and HTTP/2.0 are actively replacing the venerable HTTP/1.1 protocol. Jetty directly supports these protocols as equals and first class siblings to HTTP/1.1. This means a lighter faster container that is simpler and more flexible to deal with the rapidly changing mix of protocols currently being experienced as HTTP/1.1 is replaced.
    Content Push – SPDY v3 support, including content push within both the client and server. This is a potentially huge optimization for websites that know what a browser will need in terms of javascript files or images, instead of waiting for a browser to ask first.
    Improved WebSocket Server and Client

    • Fast websocket implementation
    • Supporting classic Listener approach and @WebSocket annotations
    • Fully compliant to RFC6455 spec (validated via autobahn test suite http://autobahn.ws/testsuite)
    • Support for latest versions of Draft WebSocket extensions (permessage-compression, and fragment)

    Java 7 – We have removed some areas of abstraction within jetty in order to take advantage of improved APIs in the JVM regarding concurrency and nio, this leads to a leaner implementation and improved performance.
    Servlet 3.1 ready – We actively track this developing spec and will ship with support for it; in fact much of the support is already in place.
    Asynchronous HTTP client – refactored to simplify API, while retaining the ability to run many thousands of simultaneous requests, used as a basis for much of our own testing and http client needs.
    Pluggable Modules – one distribution with integration with libraries, third party technologies, and web applications available for download through a simple command line interface
    Improved SSL Support – the proliferation of mobile devices that use SSL has manifested in many atypical client implementations, support for these edge cases in SSL has been thoroughly refactored such that support is now understandable and maintainable by humans
    Lightweight – Jetty continues its history of having a very small memory footprint while still being able to scale to many tens of thousands of connections on commodity hardware.
    Eminently Embeddable – Years of embedding support pays off in your own application, webapp, or testing. Use embedded jetty to unit test your web projects. Add a web server to your existing application. Bundle your web app as a standalone application.

  • WebSocket over SSL in Jetty

    Jetty has always been in the front line on the implementation of the WebSocket Protocol.
    The CometD project leverages the Jetty WebSocket implementation to its maximum, to achieve great scalability and minimal latencies.
    Until now, however, support for WebSocket over SSL was lacking in Jetty.
    In Jetty 7.6.x, a redesign of the connection layer allows for more pluggability of SSL encryption/decryption and of connection upgrade (from HTTP to WebSocket), and these changes combined made it easy to implement WebSocket over SSL.
    These changes are now merged into Jetty’s master branch, and will be shipped with the next version of Jetty.
    Developers will now be able to use the wss:// protocol in web pages in conjunction with Jetty on the server side, or just rely on the CometD framework to forget about transport details, always have the fastest, most reliable and now also confidential transport available, and concentrate on writing application logic rather than transport logic.
    WebSocket over SSL is of course also available in the Java WebSocket client provided by Jetty.
    Enjoy !

  • Websocket Example: Server, Client and LoadTest

    The websocket protocol specification is approaching final, and the Jetty implementation and API have been tracking the draft and are ready for when the spec and browsers are available.  Moreover, Jetty release 7.5.0 now includes a capable websocket java client that can be used for non-browser applications or load testing. It is fully asynchronous and can create thousands of connections simultaneously.

    This blog uses the classic chat example to introduce a websocket server, client and load test.

    The project

    The websocket example has been created as a maven project with groupId com.example.  The entire project can be downloaded from here.  The pom.xml defines a dependency on org.eclipse.jetty:jetty-websocket:7.5.0.RC1 (you should update to 7.5.0 when the final release is available), which provides the websocket API and, transitively, the jetty implementation.  There is also a dependency on org.eclipse.jetty:jetty-servlet, which provides the ability to create an embedded servlet container to run the server example.

    While the project implements a Servlet, it is not in a typical webapp layout, as I wanted to provide both client and server in the same project.    Instead of a webapp, this project uses embedded jetty in a simple Main class to provide the server and the static content is served from the classpath from src/resources/com/example/docroot.

    Typically developers will want to build a war file containing a webapp, but I leave it as an exercise for the reader to put the servlet and static content described here into a webapp format.

    The Servlet

    The Websocket connection starts with an HTTP handshake.  Thus the websocket API in jetty is also initiated by the handling of an HTTP request, typically by a Servlet.  The advantage of this approach is that websocket connections are terminated in the same rich application space provided by HTTP servers, so a websocket-enabled web application can be developed in a single environment rather than by collaboration between an HTTP server and a separate websocket server.
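    For the curious, the handshake itself is plain HTTP: the server proves it understood the upgrade request by hashing the client's Sec-WebSocket-Key with a fixed GUID and echoing the result back, per RFC 6455. The computation (shown here with the Java 8 Base64 class for brevity; the WebSocketFactory does all of this for you):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Sec-WebSocket-Accept = base64( SHA-1( key + magic GUID ) )
// as defined in RFC 6455, section 1.3
public class HandshakeAccept
{
    private static final String MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    public static String acceptFor(String secWebSocketKey) throws NoSuchAlgorithmException
    {
        byte[] digest = MessageDigest.getInstance("SHA-1")
            .digest((secWebSocketKey + MAGIC).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

    The RFC's own example key "dGhlIHNhbXBsZSBub25jZQ==" must produce "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".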

    We create the ChatServlet with an init() method that instantiates and configures a WebSocketFactory instance:

    public class ChatServlet extends HttpServlet
    {
      private WebSocketFactory _wsFactory;
      private final Set<ChatWebSocket> _members = new CopyOnWriteArraySet<ChatWebSocket>();
      @Override
      public void init() throws ServletException
      {
        // Create and configure WS factory
        _wsFactory=new WebSocketFactory(new WebSocketFactory.Acceptor()
        {
          public boolean checkOrigin(HttpServletRequest request, String origin)
          {
            // Allow all origins
            return true;
          }
          public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
          {
             if ("chat".equals(protocol))
               return new ChatWebSocket();
             return null;
          }
        });
        _wsFactory.setBufferSize(4096);
        _wsFactory.setMaxIdleTime(60000);
      }
      ...

    The WebSocketFactory is instantiated by passing it an Acceptor instance, in this case an anonymous one. The Acceptor must implement two methods: checkOrigin, which in this case accepts all origins; and doWebSocketConnect, which must accept a WebSocket connection by creating and returning an instance of the WebSocket interface to handle incoming messages.  In this case, an instance of the nested ChatWebSocket class is created if the protocol is “chat”.  The other WebSocketFactory fields have been initialised with a hard coded buffer size and timeout, but typically these would be configurable from servlet init parameters.

    The servlet handles GET requests by passing them to the WebSocketFactory to be accepted or not:

      ...
      protected void doGet(HttpServletRequest request,
                           HttpServletResponse response)
        throws IOException
      {
        if (_wsFactory.acceptWebSocket(request,response))
          return;
        response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                           "Websocket only");
      }
      ...

    All that is left for the Servlet is the ChatWebSocket itself.  This is just a POJO that receives callbacks for events.  For this example we have implemented the WebSocket.OnTextMessage interface to restrict the callbacks to only connection management and full messages:

      private class ChatWebSocket implements WebSocket.OnTextMessage
      {
        Connection _connection;
        public void onOpen(Connection connection)
        {
          _connection=connection;
          _members.add(this);
        }
        public void onClose(int closeCode, String message)
        {
          _members.remove(this);
        }
        public void onMessage(String data)
        {
          for (ChatWebSocket member : _members)
          {
            try
            {
              member._connection.sendMessage(data);
            }
            catch(IOException e)
            {
              e.printStackTrace();
            }
          }
        }
      }

    The onOpen callback adds the ChatWebSocket to the set of all members (remembering the Connection object for subsequent sends).  The onClose handling simply removes the member from the set.  The onMessage handling iterates through all the members and sends the received message to each of them (printing any resulting exceptions).
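    The choice of CopyOnWriteArraySet is deliberate: iteration works on a snapshot, so members can join or leave mid-broadcast without a ConcurrentModificationException. The pattern, reduced to plain Java (a Consumer stands in for a connection's send):

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

// Sketch of the chat room's membership/broadcast pattern.
// CopyOnWriteArraySet lets broadcast iterate a stable snapshot
// even while other threads add or remove members.
public class Broadcaster
{
    private final Set<Consumer<String>> members = new CopyOnWriteArraySet<>();

    public void join(Consumer<String> member)  { members.add(member); }
    public void leave(Consumer<String> member) { members.remove(member); }

    public void broadcast(String message)
    {
        for (Consumer<String> member : members)
            member.accept(message); // every current member gets the message
    }
}
```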


    The Server

    To run the servlet, there is a simple Main method that creates an embedded Jetty server with a ServletHandler for the chat servlet, a ResourceHandler for the static content needed by the browser client, and a DefaultHandler to generate errors for all other requests:

    public class Main
    {
      public static void main(String[] arg) throws Exception
      {
        int port=arg.length>0?Integer.parseInt(arg[0]):8080;
        Server server = new Server(port);
        ServletHandler servletHandler = new ServletHandler();
        servletHandler.addServletWithMapping(ChatServlet.class,"/chat/*");
        ResourceHandler resourceHandler = new ResourceHandler();
        resourceHandler.setBaseResource(Resource.newClassPathResource("com/example/docroot/"));
        DefaultHandler defaultHandler = new DefaultHandler();
        HandlerList handlers = new HandlerList();
        handlers.setHandlers(new Handler[] {servletHandler,resourceHandler,defaultHandler});
        server.setHandler(handlers);
        server.start();
        server.join();
      }
    }

    The server can be run from an IDE or via maven using the following command line:

    mvn -Pserver exec:exec

    The Browser Client

    The HTML for the chat room simply imports some CSS and the javascript before creating a few simple divs to contain the chat text, the join dialog and the joined dialog:

    <html>
     <head>
     <title>WebSocket Chat Example</title>
     <script type='text/javascript' src="chat.js"></script>
     <link rel="stylesheet" type="text/css" href="chat.css" />
     </head>
     <body>
      <div id='chat'></div>
      <div id='input'>
       <div id='join' >
        Username:&nbsp;<input id='username' type='text'/>
        <input id='joinB' class='button' type='submit' name='join' value='Join'/>
       </div>
       <div id='joined' class='hidden'>
        Chat:&nbsp;<input id='phrase' type='text'/>
        <input id='sendB' class='button' type='submit' name='join' value='Send'/>
       </div>
      </div>
      <script type='text/javascript'>init();</script>
     </body>
    </html>

    The javascript creates a room object with methods to handle the various operations of a chat room.  The first operation is to join the chat room, which is triggered by entering a user name.  This creates a new WebSocket object pointing to the /chat URL path on the same server the HTML was loaded from:

    var room = {
      join : function(name) {
        this._username = name;
        var location = document.location.toString()
          .replace('http://', 'ws://')
          .replace('https://', 'wss://')+ "chat";
        this._ws = new WebSocket(location, "chat");
        this._ws.onopen = this.onopen;
        this._ws.onmessage = this.onmessage;
        this._ws.onclose = this.onclose;
      },
      onopen : function() {
        $('join').className = 'hidden';
        $('joined').className = '';
        $('phrase').focus();
        room.send(room._username, 'has joined!');
      },
      ...

    The javascript websocket object is initialised with callbacks for onopen, onclose and onmessage. The onopen callback is handled above by switching the join div to the joined div and sending a “has joined” message.

    Sending is implemented by creating a string of username:message and sending that via the WebSocket instance:

      ...
      send : function(user, message) {
        user = user.replace(':', '_');
        if (this._ws)
          this._ws.send(user + ':' + message);
      },
      ...

    If the chat room receives a message, the onmessage callback is called, which sanitises the message, parses out the username and appends the text to the chat div:

      ...
      onmessage : function(m) {
        if (m.data) {
          var c = m.data.indexOf(':');
          var from = m.data.substring(0, c)
            .replace('<', '&lt;')
            .replace('>', '&gt;');
          var text = m.data.substring(c + 1)
            .replace('<', '&lt;')
            .replace('>', '&gt;');
          var chat = $('chat');
          var spanFrom = document.createElement('span');
          spanFrom.className = 'from';
          spanFrom.innerHTML = from + ': ';
          var spanText = document.createElement('span');
          spanText.className = 'text';
          spanText.innerHTML = text;
          var lineBreak = document.createElement('br');
          chat.appendChild(spanFrom);
          chat.appendChild(spanText);
          chat.appendChild(lineBreak);
          chat.scrollTop = chat.scrollHeight - chat.clientHeight;
        }
      },
      ...
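    The same user:message framing the javascript uses can be captured in a small Java sketch (illustrative, not code from the example project): stripping ':' from the username guarantees that the first ':' in the wire string always separates the two fields.

```java
// Wire format: "user:message". The username has ':' replaced so the
// first ':' in the data reliably splits username from message text.
public class ChatWireFormat
{
    public static String encode(String user, String message)
    {
        return user.replace(':', '_') + ":" + message;
    }

    // returns { username, message }; the message may itself contain ':'
    public static String[] decode(String data)
    {
        int c = data.indexOf(':');
        return new String[] { data.substring(0, c), data.substring(c + 1) };
    }
}
```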

    Finally, the onclose handling empties the chat div and switches back to the join div so that a new username may be entered:

      ...
      onclose : function(m) {
        this._ws = null;
        $('join').className = '';
        $('joined').className = 'hidden';
        $('username').focus();
        $('chat').innerHTML = '';
      }
    };

    With this simple client being served from the server, you can now point your websocket capable browsers at http://localhost:8080 and interact with the chat room. Of course this example glosses over a lot of detail and complications a real chat application would need, so I suggest you read my blog “is websocket chat simpler” to learn what else needs to be handled.

    The Load Test Client

    The jetty websocket java client is an excellent tool for both functional and load testing of a websocket based service.  It uses the same endpoint API as the server side, and for this example we create a simple implementation of the OnTextMessage interface that keeps track of all the open connections and counts the number of messages sent and received:

    import java.io.IOException;
    import java.net.URI;
    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;
    import java.util.concurrent.atomic.AtomicLong;
    import org.eclipse.jetty.websocket.WebSocket;
    import org.eclipse.jetty.websocket.WebSocket.Connection;
    import org.eclipse.jetty.websocket.WebSocketClient;

    public class ChatLoadClient implements WebSocket.OnTextMessage
    {
      private static final AtomicLong sent = new AtomicLong(0);
      private static final AtomicLong received = new AtomicLong(0);
      private static final Set<ChatLoadClient> members = new CopyOnWriteArraySet<ChatLoadClient>();

      private final String name;
      private final Connection connection;

      public ChatLoadClient(String username, WebSocketClient client, String host, int port)
        throws Exception
      {
        name=username;
        connection=client.open(new URI("ws://"+host+":"+port+"/chat"),this).get();
      }

      public void send(String message) throws IOException
      {
        connection.sendMessage(name+":"+message);
        sent.incrementAndGet();
      }

      public void onOpen(Connection connection)
      {
        members.add(this);
      }

      public void onClose(int closeCode, String message)
      {
        members.remove(this);
      }

      public void onMessage(String data)
      {
        received.incrementAndGet();
      }

      public void disconnect() throws IOException
      {
        connection.disconnect();
      }
    }

    The Websocket is initialized by calling open on the WebSocketClient instance passed to the constructor.  The WebSocketClient instance is shared by multiple connections and contains the thread pool and other common resources for the client.

    This load test example comes with a main method that creates a WebSocketClient from command line options and then creates a number of ChatLoadClient instances:

    public static void main(String... arg) throws Exception
    {
      String host=arg.length>0?arg[0]:"localhost";
      int port=arg.length>1?Integer.parseInt(arg[1]):8080;
      int clients=arg.length>2?Integer.parseInt(arg[2]):1000;
      int mesgs=arg.length>3?Integer.parseInt(arg[3]):1000;
      WebSocketClient client = new WebSocketClient();
      client.setBufferSize(4096);
      client.setMaxIdleTime(30000);
      client.setProtocol("chat");
      client.start();
      // Create client serially
      ChatLoadClient[] chat = new ChatLoadClient[clients];
      for (int i=0;i<chat.length;i++)
        chat[i]=new ChatLoadClient("user"+i,client,host,port);
      ...

    Once the connections are opened, the main method loops around, picking a random client to speak in the chat room:

      ...
      // Send messages
      Random random = new Random();
      for (int i=0;i<mesgs;i++)
      {
        ChatLoadClient c = chat[random.nextInt(chat.length)];
        String msg = "Hello random "+random.nextLong();
        c.send(msg);
      }
      ...

    Once all the messages have been sent and all the replies have been received, the connections are closed:

      ...
      // close all connections
      for (int i=0;i<chat.length;i++)
        chat[i].disconnect();

    The project is setup so that the load client can be run with the following maven command:

    mvn -Pclient exec:exec

    And the resulting output should look something like:

    Opened 1000 of 1000 connections to localhost:8080 in 1109ms
    Sent/Received 10000/10000000 messages in 15394ms: 649603msg/s
    Closed 1000 connections to localhost:8080 in 45ms

    Yes, that is 649,603 messages per second!!! This is a pretty simple test, but it is still scheduling 1000 local sockets plus generating and parsing all the websocket frames. Real applications on real networks are unlikely to achieve anything close to this level, but the indications are good for the capability of high throughput, so stand by for more rigorous benchmarks shortly.
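    To unpack that number: the 10,000 published messages are each broadcast to all 1,000 connected clients, which is why the output above reports 10,000,000 received messages, and dividing by the elapsed time gives the quoted rate:

    ```javascript
    // Sanity-check of the throughput figure reported above.
    const clients = 1000;     // connections opened
    const sent = 10000;       // messages published
    const elapsedMs = 15394;  // elapsed time from the output

    // Every published message is broadcast to every connected client.
    const received = sent * clients;                           // 10,000,000
    const msgPerSec = Math.floor(received / elapsedMs * 1000); // 649,603

    console.log(received + ' messages at ' + msgPerSec + ' msg/s');
    ```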


  • Prelim Cometd WebSocket Benchmarks

    I have done some very rough preliminary benchmarks on the latest cometd-2.4.0-SNAPSHOT with the latest Jetty-7.5.0-SNAPSHOT and the results are rather impressive.  The features that these two releases have added are:

    • Optimised Jetty NIO with latest JVMs and JITs considered.
    • Latest websocket draft implemented and optimised.
    • Websocket client implemented.
    • Jackson JSON parser/generator used for cometd
    • Websocket cometd transport for the server improved.
    • Websocket cometd transport for the bayeux client implemented.

    The benchmarks that I’ve done have all been on my notebook using the localhost network, which is not the most realistic of environments, but it still tells us a lot about the raw performance of cometd/jetty.  Specifically:

    • Both the server and the client are running on the same machine, so they are effectively sharing the 8 CPUs available.   The client typically takes 3x more CPU than the server (for the same load), so this is kind of like running the server on a dual core and the client on a 6 core machine.
    • The local network has very high throughput which would only be matched by gigabit networks.  It also has practically no latency, which is unlike any real network.  The long polling transport is more dependent on good network latency than the websocket transport, so the true comparison between these transports will need testing on a real network.

    The Test

    The cometd load test is a simulated chat application.  For this test I tried long-polling and websocket transports for 100, 1000 and 10,000 clients that were each logged into 10 randomly selected chat rooms from a total of 100 rooms.   The messages sent were all 50 characters long and were published in batches of 10 messages at once, each to randomly selected rooms.  There was a pause between batches that was adjusted to find a good throughput that didn’t have bad latency.  However little effort was put into finding the optimal settings to maximise throughput.

    The runs were all done on JVMs that had been warmed up, but the runs were moderately short (approx 30s), so steady state was not guaranteed and the margin of error on these numbers will be pretty high.  However, I also did a long-run test at one setting just to make sure that steady state can be achieved.

    The Results

    The bubble chart above plots messages per second against the number of clients for both long-polling and websocket transports.   The size of each bubble is the maximal latency of the test, with the smallest bubble being 109ms and the largest 646ms.  Observations from the results are:

    • Regardless of transport we achieved hundreds of thousands of messages per second!  These are great numbers and show that we can cycle the cometd infrastructure at high rates.
    • The long-polling throughput is probably over-reported, because many messages are being queued into each HTTP response.   The most HTTP responses I saw was 22,000 per second, so for many applications it will be the HTTP rate that limits the throughput rather than the cometd rate.  The websocket throughput, however, did not benefit from any such batching.
    • The maximal latency for all websocket measurements was significantly better than long polling, with all websocket messages being delivered in < 200ms and the average was < 1ms.
    • The websocket throughput increased with connections, which probably indicates that at low numbers of connections we were not generating a maximal load.

    A Long Run

    The throughput tests above need to be redone on a real network with longer runs. However, I did do one long run (3 hours) of 1,000,013,657 messages at 93,856/sec. The results suggest no immediate problems with long runs. Neither the client nor the server needed to do an old-generation collection, and all young-generation collections took on average only 12ms.

    The output from the client is below:

    Statistics Started at Fri Aug 19 15:44:48 EST 2011
    Operative System: Linux 2.6.38-10-generic amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
    Processors: 8
    System Memory: 55.35461% used of 7.747429 GiB
    Used Heap Size: 215.7406 MiB
    Max Heap Size: 1984.0 MiB
    Young Generation Heap Size: 448.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Testing 1000 clients in 100 rooms, 10 rooms/client
    Sending 1000000 batches of 10x50 bytes messages every 10000 µs
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Fri Aug 19 18:42:23 EST 2011
    Elapsed time: 10654717 ms
    	Time in JIT compilation: 57 ms
    	Time in Young Generation GC: 118473 ms (8354 collections)
    	Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 2576746.8 MiB
    Garbage Generated in Survivor Generation: 336.53125 MiB
    Garbage Generated in Old Generation: 532.35156 MiB
    Average CPU Load: 433.23907/800
    ----------------------------------------
    Outgoing: Elapsed = 10654716 ms | Rate = 938 msg/s = 93 req/s =   0.4 Mbs
    All messages arrived 1000013657/1000013657
    Messages - Success/Expected = 1000013657/1000013657
    Incoming - Elapsed = 10654716 ms | Rate = 93856 msg/s = 90101 resp/s(96.00%) =  35.8 Mbs
    Thread Pool - Queue Max = 972 | Latency avg/max = 3/62 ms
    Messages - Wall Latency Min/Ave/Max = 0/8/135 ms

    Note that the client was using 433/800 of the available CPU, while you can see that the server (below) was using only 170/800.  This suggests that the server has plenty of spare capacity if it were given the entire machine.

    Statistics Started at Fri Aug 19 15:44:47 EST 2011
    Operative System: Linux 2.6.38-10-generic amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
    Processors: 8
    System Memory: 55.27913% used of 7.747429 GiB
    Used Heap Size: 82.58406 MiB
    Max Heap Size: 2016.0 MiB
    Young Generation Heap Size: 224.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Fri Aug 19 18:42:23 EST 2011
    Elapsed time: 10655706 ms
    	Time in JIT compilation: 187 ms
    	Time in Young Generation GC: 140973 ms (12073 collections)
    	Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 1652646.0 MiB
    Garbage Generated in Survivor Generation: 767.625 MiB
    Garbage Generated in Old Generation: 1472.6484 MiB
    Average CPU Load: 170.20532/800

    Conclusion

    These results are preliminary, but excellent nonetheless!   The final releases of jetty 7.5.0 and cometd 2.4.0 will be out within a week or two, and we will be working to bring you some more rigorous benchmarks with those releases.


  • Is WebSocket Chat Simpler?

    A year ago I wrote an article asking Is WebSocket Chat Simple?, where I highlighted the deficiencies of this much touted protocol for implementing simple comet applications like chat. After a year of intense debate there have been many changes and there are new drafts of both the WebSocket protocol and WebSocket API. Thus I thought it worthwhile to update my article with comments to see how things have improved (or not) in the last year.

    The text in italics is my wishful thinking from a year ago
    The text in bold italics is my updated comments

    Is WebSocket Chat Simple (take II)?

    The WebSocket protocol has been touted as a great leap forward for bidirectional web applications like chat, promising a new era of simple Comet applications. Unfortunately there is no such thing as a silver bullet and this blog will walk through a simple chat room to see where WebSocket does and does not help with Comet applications. In a WebSocket world, there is even more need for frameworks like cometD.

    Simple Chat

    Chat is the “helloworld” application of web-2.0, and a simple WebSocket chat room is included with jetty-7, which now supports WebSockets. The source of the simple chat can be seen in svn for the client-side and server-side.

    The key part of the client-side is to establish a WebSocket connection:

    join: function(name) {
       this._username=name;
       var location = document.location.toString()
           .replace('http:','ws:');
       this._ws=new WebSocket(location);
       this._ws.onopen=this._onopen;
       this._ws.onmessage=this._onmessage;
       this._ws.onclose=this._onclose;
    },

    It is then possible for the client to send a chat message to the server:

    _send: function(user,message){
       user=user.replace(':','_');
       if (this._ws)
           this._ws.send(user+':'+message);
    },

    and to receive a chat message from the server and to display it:

    _onmessage: function(m) {
       if (m.data){
           var c=m.data.indexOf(':');
           var from=m.data.substring(0,c)
               .replace('<','&lt;')
               .replace('>','&gt;');
           var text=m.data.substring(c+1)
               .replace('<','&lt;')
               .replace('>','&gt;');
           var chat=$('chat');
           var spanFrom = document.createElement('span');
           spanFrom.className='from';
           spanFrom.innerHTML=from+': ';
           var spanText = document.createElement('span');
           spanText.className='text';
           spanText.innerHTML=text;
           var lineBreak = document.createElement('br');
           chat.appendChild(spanFrom);
           chat.appendChild(spanText);
           chat.appendChild(lineBreak);
           chat.scrollTop = chat.scrollHeight - chat.clientHeight;
      }
    },

    For the server-side, we simply accept incoming connections as members:

    public void onConnect(Connection connection)
    {
        _connection=connection;
        _members.add(this);
    }

    and then for all messages received, we send them to all members:

    public void onMessage(byte frame, String data)
    {
       for (ChatWebSocket member : _members)
       {
           try
           {
               member._connection.sendMessage(data);
           }
           catch(IOException e)
           {
               Log.warn(e);
           }
       }
    }

    So we are done, right? We have a working chat room – let’s deploy it and we’ll be the next Google GChat!! Unfortunately, reality is not that simple, and this chat room falls a long way short of the kind of functionality that you expect from a chat room – even a simple one.

    Not So Simple Chat

    On Close?

    With a chat room, the standard use-case is that once you establish your presence in the room, it remains until you explicitly leave the room. In the context of webchat, that means that you can send and receive chat messages until you close the browser or navigate away from the page. Unfortunately the simple chat example does not implement this semantic, because the WebSocket protocol allows for an idle timeout of the connection. So if nothing is said in the chat room for a short while, then the WebSocket connection will be closed, either by the client, the server or even an intermediary. The application will be notified of this event by the onClose method being called.

    So how should the chat room handle onClose? The obvious thing to do is for the client to simply call join again and open a new connection back to the server:

    _onclose: function() {
       this._ws=null;
       this.join(this._username);
    }

    This indeed maintains the user’s presence in the chat room, but it is far from an ideal solution, since every few idle minutes the user will leave the room and rejoin. For the short period between connections, they will miss any messages sent and will not be able to send any chat themselves.

    Keep-Alives

    In order to maintain presence, the chat application can send keep-alive messages on the WebSocket to prevent it from being closed due to an idle timeout. However, the application has no idea at all about what the idle timeouts are, so it will have to pick some arbitrary frequent period (e.g. 30s) to send keep-alives and hope that is less than any idle timeout on the path (more or less as long-polling does now).
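    As a rough sketch of what such application-level keep-alives could look like: the helper below sends a no-op message whenever the connection has been idle longer than the chosen period. The `keepalive:` message format and the injected clock are inventions for this example, not part of the chat protocol above.

    ```javascript
    // Minimal keep-alive helper. The clock is injected so the logic can be
    // exercised without real timers; in a browser it defaults to Date.now.
    function KeepAlive(sendFn, periodMs, now) {
      this._send = sendFn;
      this._period = periodMs;
      this._now = now || Date.now;
      this._lastActivity = this._now();
    }

    // Call on every real message sent or received.
    KeepAlive.prototype.touch = function() {
      this._lastActivity = this._now();
    };

    // Call periodically (e.g. from setInterval); sends a keep-alive only
    // if the connection has been idle for the whole period.
    KeepAlive.prototype.check = function() {
      if (this._now() - this._lastActivity >= this._period) {
        this._send('keepalive:');   // assumed no-op message the server ignores
        this.touch();
      }
    };
    ```

    In the chat client above this could be driven by a one-second setInterval calling check(), with touch() called from _send and _onmessage; the 30 second period is just a guess that must be shorter than any idle timeout on the path.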

    Ideally a future version of WebSocket will support timeout discovery, so it can either tell the application the period for keep-alive messages or it could even send the keep-alives on behalf of the application.

    The latest drafts of the websocket protocol do include control packets for ping and pong, which can effectively be used as messages to keep alive a connection. Unfortunately this mechanism is not actually usable because: a) there is no javascript API to send pings; b) there is no API to communicate to the infrastructure if the application wants the connection kept alive or not; c) the protocol does not require that pings are sent; d) neither the websocket infrastructure nor the application knows the frequency at which pings would need to be sent to keep alive the intermediaries and other end of the connection. There is a draft proposal to declare timeouts in headers, but it remains to be seen if that gathers any traction.

    Unfortunately keep-alives don’t avoid the need for onClose to initiate new WebSockets, because the internet is not a perfect place and especially with wifi and mobile clients, sometimes connections just drop. It is a standard part of HTTP that if a connection closes while being used, the GET requests are retried on new connections, so users are mostly insulated from transient connection failures. A WebSocket chat room needs to work with the same assumption and even with keep-alives, it needs to be prepared to reopen a connection when onClose is called.

    Queues

    With keep-alives, the WebSocket chat connection should mostly be a long-lived entity, with only the occasional reconnect due to transient network problems or server restarts. Occasional loss of presence might not be seen as a problem – unless you’re the dude that just typed a long chat message on the tiny keyboard of your vodafone360 app, or instead of chat you are playing on chess.com and you don’t want to abandon a game due to transient network issues. So for any reasonable level of quality of service, the application is going to need to “pave over” any small gaps in connectivity by providing some kind of message queue in both client and server. If a message is sent during the period of time that there is no WebSocket connection, it needs to be queued until such time as the new connection is established.
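    A minimal sketch of such a client-side queue might look like the following; the QueuedSender name and its attach/detach hooks are invented for illustration:

    ```javascript
    // Buffers messages while there is no open connection and flushes them
    // once a new one is established.
    function QueuedSender() {
      this._ws = null;       // the live WebSocket, or null while reconnecting
      this._queue = [];
    }

    QueuedSender.prototype.send = function(message) {
      if (this._ws) {
        this._ws.send(message);
      } else {
        this._queue.push(message);   // pave over the connectivity gap
      }
    };

    // Called from onopen with the freshly opened WebSocket.
    QueuedSender.prototype.attach = function(ws) {
      this._ws = ws;
      while (this._queue.length > 0) {
        this._ws.send(this._queue.shift());
      }
    };

    // Called from onclose.
    QueuedSender.prototype.detach = function() {
      this._ws = null;
    };
    ```

    The server side needs the mirror image: a per-user queue that holds broadcasts until the user’s replacement connection arrives.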

    Timeouts

    Unfortunately, some failures are not transient and sometimes a new connection will not be established. We can’t allow queues to grow forever and pretend that a user is present long after their connection is gone. Thus both ends of the chat application will also need timeouts and the user will not be seen to have left the chat room until they have no connection for the period of the timeout or until an explicit leaving message is received.

    Ideally a future version of WebSocket will support an orderly close message, so the application can distinguish between a network failure (and keep the user’s presence for a time) and an orderly close as the user leaves the page (and remove the user’s presence).

    Both the protocol and API have been updated with the ability to distinguish an orderly close from a failed close. The WebSocket API now has a CloseEvent, passed to the onclose method, that contains the close code and reason string sent with an orderly close, and this will allow simpler handling in the endpoints and avoid pointless client retries.
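    The server-side bookkeeping described above might be sketched like this, with an injected clock so the grace-period logic can be tested without waiting; the Presence class and its method names are invented for this example:

    ```javascript
    // A user only leaves the room once their connection has been gone for
    // longer than the grace period, or an explicit leave message arrives.
    function Presence(graceMs, now) {
      this._grace = graceMs;
      this._now = now || Date.now;
      this._disconnectedAt = {};   // username -> time the connection dropped
      this._present = {};          // username -> true
    }

    Presence.prototype.connected = function(user) {
      this._present[user] = true;
      delete this._disconnectedAt[user];   // reconnected in time
    };

    Presence.prototype.disconnected = function(user) {
      this._disconnectedAt[user] = this._now();
    };

    Presence.prototype.leave = function(user) {  // explicit leave message
      delete this._present[user];
      delete this._disconnectedAt[user];
    };

    // Periodic sweep: expire users whose grace period has elapsed.
    Presence.prototype.sweep = function() {
      for (const user in this._disconnectedAt) {
        if (this._now() - this._disconnectedAt[user] > this._grace) {
          this.leave(user);
        }
      }
    };

    Presence.prototype.isPresent = function(user) {
      return this._present[user] === true;
    };
    ```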

    Message Retries

    Even with message queues, there is a race condition that makes it difficult to completely close the gaps between connections. If the onClose method is called very soon after a message is sent, then the application has no way to know if that close event happened before or after the message was delivered. If quality of service is important, then the application currently has no option but to have some kind of per message or periodic acknowledgment of message delivery.

    Ideally a future version of WebSocket will support orderly close, so that delivery can be known for non-failed connections and the complication of acknowledgements can be avoided unless the highest quality of service is required.

    Orderly close is now supported (see above.)
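    One possible shape for the per-message acknowledgement described above, sketched with invented message framing (an id prefix on each message and a resend-on-reconnect hook):

    ```javascript
    // Each outgoing message gets an id and stays in the unacknowledged map
    // until the server confirms delivery, so anything in flight when the
    // connection drops can be resent once a new connection is open.
    function AckSender(sendFn) {
      this._send = sendFn;
      this._nextId = 1;
      this._unacked = {};          // id -> message text
    }

    AckSender.prototype.send = function(message) {
      const id = this._nextId++;
      this._unacked[id] = message;
      this._send(id + ':' + message);
    };

    // Call when the server confirms delivery of a given id.
    AckSender.prototype.ack = function(id) {
      delete this._unacked[id];
    };

    // Call after reopening a connection: resend everything still in flight.
    AckSender.prototype.resend = function() {
      for (const id in this._unacked) {
        this._send(id + ':' + this._unacked[id]);
      }
    };
    ```

    A scheme like this can deliver a message twice (the close may have raced the ack, not the message), so receivers would also need to de-duplicate by id.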

    Backoff

    With onClose handling, keep-alives, message queues, timeouts and retries, we finally will have a chat room that can maintain a user’s presence while they remain on the web page. But unfortunately the chat room is still not complete, because it needs to handle errors and non-transient failures. Some of the circumstances that need to be avoided include:

    • If the chat server is shut down, the client application is notified of this simply by a call to onClose rather than an onOpen call. In this case, onClose should not just reopen the connection, as a 100% CPU busy loop will result. Instead the chat application has to infer that there was a connection problem and at least pause a short while before trying again – potentially with a retry backoff algorithm to reduce retries over time.

      Ideally a future version of WebSocket will allow more access to connection errors, as the handling of no-route-to-host may be entirely different to handling of a 401 unauthorized response from the server.

      The WebSocket protocol is now fully HTTP compliant before the 101 of the upgrade handshake, so responses like 401 can legally be sent. Also the WebSocket API now has an onerror callback, but unfortunately it is not yet clear under what circumstances it is called, nor is there any indication that information like a 401 response or a 302 redirect would be available to the application.

    • If the user types a large chat message, then the WebSocket frame sent may exceed some resource level on the client, server or intermediary. Currently the WebSocket response to such resource issues is to simply close the connection. Unfortunately for the chat application, this may look like a transient network failure (coming after a successful onOpen call), so it may just reopen the connection and naively retry sending the message, which will again exceed the max message size and we can lather, rinse and repeat! Again it is important that any automatic retries performed by the application will be limited by a backoff timeout and/or max retries.

      Ideally a future version of WebSocket will be able to send an error status as something distinct from a network failure or idle timeout, so the application will know not to retry errors.

      While there is no general error control frame, there is now a reason code defined in the orderly close, so that for any errors serious enough to force the connection to be closed the following can be communicated: 1000 – normal closure; 1001 – shutdown or navigate away; 1002 – protocol error; 1003 – data type cannot be handled; 1004 – message is too large. These are a great improvement, but it would be better if such errors could be sent in control frames, so that the connection does not need to be sacrificed in order to reject a single large message or unknown type.
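    A sketch of a retry policy that combines the close codes listed above with an exponential backoff; the choice of which codes are treated as fatal and the delay formula are assumptions for illustration, not part of the protocol:

    ```javascript
    // Codes for which retrying cannot succeed: protocol error, bad data
    // type, message too large (resending the same message would fail again).
    const NO_RETRY_CODES = [1002, 1003, 1004];

    // Returns the delay in ms before the next reconnect attempt, or -1 to
    // give up. attempt is 0 for the first retry, 1 for the second, etc.
    function reconnectDelay(closeCode, attempt, baseMs, maxMs) {
      if (NO_RETRY_CODES.indexOf(closeCode) >= 0) {
        return -1;
      }
      // Exponential backoff: base, 2*base, 4*base, ... capped at maxMs.
      return Math.min(baseMs * Math.pow(2, attempt), maxMs);
    }
    ```

    A real client would pass the CloseEvent’s code, track the attempt count, reset it on a successful onopen, and probably add some random jitter so a restarted server is not hit by every client at once.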

    Does it have to be so hard?

    The above scenario is not the only way that a robust chat room could be developed. With some compromises on quality of service and some good user interface design, it would certainly be possible to build a chat room with less complex usage of a WebSocket. However, the design decisions represented by the above scenario are not unreasonable even for chat, and certainly are applicable to applications needing a better QoS than most chat rooms.

    What this blog illustrates is that there is no silver bullet and that WebSocket will not solve many of the complexities that need to be addressed when developing robust Comet web applications. Hopefully some features such as keep-alives, timeout negotiation, orderly close and error notification can be built into a future version of WebSocket, but it is not the role of WebSocket to provide the more advanced handling of queues, timeouts, reconnections, retries and backoffs. If you wish to have a high quality of service, then either your application or the framework that it uses will need to deal with these features.

    CometD with WebSocket

    CometD version 2 will soon be released with support for WebSocket as an alternative transport to the currently supported JSON long-polling and JSONP callback-polling. cometD supports all the features discussed in this blog and makes them available transparently to browsers with or without WebSocket support. We are hopeful that WebSocket usage will be able to give us even better throughput and latency for cometD than the already impressive results achieved with long-polling.

    Cometd 2 has been released and we now have even more impressive results. Websocket support is built into both Jetty and cometd, but uptake has been somewhat hampered by the multiple versions of the protocol in the wild and patchy/changing browser support.

    Programming to a framework like cometd remains the easiest way to build a comet application, as well as to retain portability over “old” techniques like long polling and emerging technologies like websockets.