Blog

  • NoSql Sessions with Jetty7 and Jetty8

    When Jetty 7.5.0 is released we will have officially started to dabble in the area of distributed session handling and storage. To start this out we have created a set of abstract classes around the general concept of NoSQL support, and have prepared an initial implementation using MongoDB. We will also be working on Ehcache and perhaps Cassandra implementations over time to round out the offering, but it is overall a pretty exciting time for these sorts of things.

NoSQL sessions are a good idea for a number of usage scenarios, but as with NoSQL solutions in general, it is not a one-size-fits-all technology. The Jetty NoSQL session implementation should be good for scenarios that require decentralization, highly parallel workloads, and scalability, while also supporting session migration from one machine to the next for load balancing purposes. While we are initially releasing with just the MongoDB session manager, it is important to make clear that all the different distributed NoSQLish solutions out there have their own positives and negatives that you need to balance when choosing a storage medium. This is an interesting and diverse area of development, and since there is little standardization at the moment, it is not a simple matter of exporting data from one system to the next if you want to change back ends.

    Before jumping in and embracing this solution for your session management, ask yourself some questions:

    • Do I require a lot of write behavior on my session objects?

    When you’re dealing with anything that touches the network to perform an action, you have an entirely different set of issues than if you can keep all your logic on one machine. The hash session manager is the fastest solution for this use profile, but the JDBC session manager is not a bad solution if you need to operate over the network. With that in mind, there is an optimization in the NoSQL session managers where tight write loops queue up a bit before an actual write to the back end MongoDB server occurs. In general, if you have a session profile that involves a lot of writes all the time, you might want to shy away from this approach.

    • Am I bouncing sessions across lots of machines all the time?

    If you are, then you might be better off getting rid of sessions entirely and being more RESTful; a networked session manager is going to be difficult to scale to this approach while remaining consistent. By consistent I mean writing data into your session on one node and having that same data present within the session on another node. If you’re looking at using MongoDB to increase the number of sessions you’re able to support, it is vitally important to remember that the network is not an inexhaustible resource, and keeping sessions localized is good practice, especially if you want consistent behavior. But if you want non-sticky sessions, or mostly sticky sessions that can scale, this sort of NoSQL session manager is certainly an option, especially for lightweight, mostly read sessions.

    • Do I want to scale to crazy amounts of sessions that are relatively small and largely contain write-once read-often data?

    Great! Use this!  You are the people we had in mind when we developed the distributed session handling.

    On the topic of configuring the new session managers, it is much like other traditional ones: add them to the context.xml or set up with the regular jetty.xml route. There are, however, a couple of important options to keep in mind for the session ID manager.

    • scavengeDelay–How often will a scavenge operation occur looking for sessions to invalidate?
    • scavengePeriod–How much time after a scavenge has completed should you wait before doing it again?
    • purge (Boolean)–Do you want to purge (delete) sessions that are invalid from the session store completely?
    • purgeDelay–How often do you want to perform this purge operation?
    • purgeInvalidAge–How old should an invalid session be before it is eligible to be purged?
    • purgeValidAge–How old should a valid session be before it is eligible to be marked invalid and purged? Should this occur at all?
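As a sketch of what wiring these options up might look like in a jetty.xml or context.xml: the property names below are taken from the list above, but the class name and constructor arguments are assumptions that should be verified against the jetty-nosql module's javadoc before use.

```xml
<!-- Hypothetical sketch: wiring a MongoDB session ID manager.
     Verify the class name and arguments against jetty-nosql. -->
<New id="mongoSessionIdManager" class="org.eclipse.jetty.nosql.mongodb.MongoSessionIdManager">
  <Arg><Ref id="Server"/></Arg>
  <Set name="scavengeDelay">30000</Set>      <!-- scan for sessions to invalidate -->
  <Set name="scavengePeriod">300000</Set>    <!-- wait between scavenges -->
  <Set name="purge">true</Set>               <!-- delete invalid sessions from the store -->
  <Set name="purgeDelay">86400000</Set>      <!-- how often to purge -->
</New>
```

The illustrative values here (30s scavenge, daily purge) are not recommendations, just placeholders to show the shape of the configuration.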

    A guide for detailed configuration can be found on our wiki on the Session Clustering with MongoDB page.

    The new MongoDB session manager and session ID manager are located in the jetty-nosql module.  Since we plan to have multiple offerings we have made the mongodb dependency optional, so if you’re planning to use embedded Jetty, make sure you declare a hard dependency in Maven. You can also download the mongodb jar file and place it into a lib/mongodb directory within the jetty distribution itself; then you must add mongodb to the OPTIONS on the command line or in the start.ini file you’re starting Jetty with.
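For embedded use, the hard dependency might look something like the following pom.xml fragment. The MongoDB driver coordinates and versions shown are illustrative assumptions; match them to what your jetty-nosql release was built against.

```xml
<!-- Sketch: explicit dependencies for embedded use of the NoSQL sessions.
     Versions are illustrative; align them with your Jetty release. -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-nosql</artifactId>
  <version>7.5.0</version>
</dependency>
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version>2.6.3</version>
</dependency>
```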

    There were a number of different ways to go in implementing session ID management. While we are wholly tolerant of a user request being moved from one server to another, we chose to keep normal session operations localized to the machine where the session originates.  If the request bounces from one machine to another, the latest known session is loaded. If it is saved and then bounces back, Jetty notices the change in the version of the session and reloads, but these operations are heavyweight: they require pulling all of a session's data back across the network, as opposed to a field or two of MongoDB goodness.  One side effect of this approach is that the scavenge operation executes only on the known session IDs of a given node. In this scenario, if your happy cluster of Jetty instances has a problem and one of them crashes (not our fault!), there is potential for previously valid session IDs to remain in your MongoDB session store, never to be seen again, but also never cleaned up. That is where purge comes in: the purge process can perform a passive sweep through the MongoDB cluster to delete really old valid sessions.  You can also delete the invalid sessions that are over a week old, or a month old, or whatever you like. If you have hoarding instincts, you can turn purge off (it is on by default), and your MongoDB cluster will grow… and grow.

    We have also added some additional JMX support to the MongoDB session manager. When you enable JMX, you can access all the normal session statistics, but you also have the option to force execution of the purge and scavenge operations on a single node, or purge fully, which executes the purge logic for everything in the MongoDB store.  In this mode you can disable purge on your nodes and schedule the actions for when you are comfortable they will not cause issues on the network.  For tips on configuring JMX support for jetty see our tutorial on JMX.

    Lastly I’ll just mention that MongoDB is really a treat to work with. I love how easy it is to print the data being returned from MongoDB, and it’s in happy JSON.  It has a rich query language that allowed us to easily craft queries for the exact information we were looking for, reducing the footprint on the network the session work imposes.

     

  • Websocket Example: Server, Client and LoadTest

    The websocket protocol specification is approaching final, and the Jetty implementation and API have been tracking the draft and will be ready when the spec and browsers are available. Moreover, Jetty release 7.5.0 now includes a capable websocket java client that can be used for non-browser applications or load testing. It is fully asynchronous and can create thousands of connections simultaneously.

    This blog uses the classic chat example to introduce a websocket server, client and load test.

    The project

    The websocket example has been created as a maven project with groupId com.example.  The entire project can be downloaded from here.  The pom.xml defines a dependency on org.eclipse.jetty:jetty-websocket:7.5.0.RC1 (you should update to 7.5.0 when the final release is available), which provides the websocket API and, transitively, the jetty implementation.  There is also a dependency on org.eclipse.jetty:jetty-servlet, which provides the ability to create an embedded servlet container to run the server example.
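Expressed as pom.xml fragments, the two dependencies named above would look like this (coordinates and versions as stated in this post):

```xml
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-websocket</artifactId>
  <version>7.5.0.RC1</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-servlet</artifactId>
  <version>7.5.0.RC1</version>
</dependency>
```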

    While the project implements a Servlet, it is not in a typical webapp layout, as I wanted to provide both client and server in the same project.    Instead of a webapp, this project uses embedded jetty in a simple Main class to provide the server and the static content is served from the classpath from src/resources/com/example/docroot.

    Typically developers will want to build a war file containing a webapp, but I leave it as an exercise for the reader to put the servlet and static content described here into a webapp format.

    The Servlet

    The Websocket connection starts with an HTTP handshake.  Thus the websocket API in jetty is also initiated by the handling of an HTTP request, typically by a Servlet.  The advantage of this approach is that websocket connections are terminated in the same rich application space provided by HTTP servers, so a websocket-enabled web application can be developed in a single environment rather than by collaboration between an HTTP server and a separate websocket server.

    We create the ChatServlet with an init() method that instantiates and configures a WebSocketFactory instance:

    public class ChatServlet extends HttpServlet
    {
      private WebSocketFactory _wsFactory;
      private final Set<ChatWebSocket> _members = new CopyOnWriteArraySet<ChatWebSocket>();
      @Override
      public void init() throws ServletException
      {
        // Create and configure WS factory
        _wsFactory=new WebSocketFactory(new WebSocketFactory.Acceptor()
        {
          public boolean checkOrigin(HttpServletRequest request, String origin)
          {
            // Allow all origins
            return true;
          }
          public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
          {
             if ("chat".equals(protocol))
               return new ChatWebSocket();
             return null;
          }
        });
        _wsFactory.setBufferSize(4096);
        _wsFactory.setMaxIdleTime(60000);
      }
      ...

    The WebSocketFactory is instantiated by passing it an Acceptor instance, which in this case is an anonymous instance. The Acceptor must implement two methods: checkOrigin, which in this case accepts all origins; and doWebSocketConnect, which must accept a WebSocket connection by creating and returning an instance of the WebSocket interface to handle incoming messages.  In this case, an instance of the nested ChatWebSocket class is created if the protocol is “chat”.  The other WebSocketFactory fields have been initialised with a hard-coded buffer size and timeout, but typically these would be configurable from servlet init parameters.

    The servlet handles get requests by passing them to the WebSocketFactory to be accepted or not:

      ...
      protected void doGet(HttpServletRequest request,
                           HttpServletResponse response)
        throws IOException
      {
        if (_wsFactory.acceptWebSocket(request,response))
          return;
        response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                           "Websocket only");
      }
      ...

    All that is left for the Servlet is the ChatWebSocket itself.  This is just a POJO that receives callbacks for events.  For this example we have implemented the WebSocket.OnTextMessage interface to restrict the callbacks to only connection management and full messages:

      private class ChatWebSocket implements WebSocket.OnTextMessage
      {
        Connection _connection;
        public void onOpen(Connection connection)
        {
          _connection=connection;
          _members.add(this);
        }
        public void onClose(int closeCode, String message)
        {
          _members.remove(this);
        }
        public void onMessage(String data)
        {
          for (ChatWebSocket member : _members)
          {
            try
            {
              member._connection.sendMessage(data);
            }
            catch(IOException e)
            {
              e.printStackTrace();
            }
          }
        }
      }

    The handling of the onOpen callback is to add the ChatWebSocket to the set of all members (remembering the Connection object for subsequent sends).  The onClose handling simply removes the member from the set.  The onMessage handling iterates through all the members and sends the received message to each of them (printing any resulting exceptions).

     

    The Server

    To run the servlet, there is a simple Main class that creates an embedded Jetty server with a ServletHandler for the chat servlet, a ResourceHandler for the static content needed by the browser client, and a DefaultHandler to generate errors for all other requests:

    public class Main
    {
      public static void main(String[] arg) throws Exception
      {
        int port=arg.length>0?Integer.parseInt(arg[0]):8080;
        Server server = new Server(port);
        ServletHandler servletHandler = new ServletHandler();
        servletHandler.addServletWithMapping(ChatServlet.class,"/chat/*");
        ResourceHandler resourceHandler = new ResourceHandler();
        resourceHandler.setBaseResource(Resource.newClassPathResource("com/example/docroot/"));
        DefaultHandler defaultHandler = new DefaultHandler();
        HandlerList handlers = new HandlerList();
        handlers.setHandlers(new Handler[] {servletHandler,resourceHandler,defaultHandler});
        server.setHandler(handlers);
        server.start();
        server.join();
      }
    }

    The server can be run from an IDE or via maven using the following command line:

    mvn -Pserver exec:exec

    The Browser Client

    The HTML for the chat room simply imports some CSS and the javascript before creating a few simple divs to contain the chat text, the join dialog and the joined dialog:

    <html>
     <head>
     <title>WebSocket Chat Example</title>
     <script type='text/javascript' src="chat.js"></script>
     <link rel="stylesheet" type="text/css" href="chat.css" />
     </head>
     <body>
      <div id='chat'></div>
      <div id='input'>
       <div id='join' >
        Username:&nbsp;<input id='username' type='text'/>
        <input id='joinB' class='button' type='submit' name='join' value='Join'/>
       </div>
       <div id='joined' class='hidden'>
        Chat:&nbsp;<input id='phrase' type='text'/>
        <input id='sendB' class='button' type='submit' name='join' value='Send'/>
       </div>
      </div>
      <script type='text/javascript'>init();</script>
     </body>
    </html>

    The javascript creates a room object with methods to handle the various operations of a chat room.  The first operation is to join the chat room, which is triggered by entering a username.  This creates a new WebSocket object pointing to the /chat URL path on the same server the HTML was loaded from:

    var room = {
      join : function(name) {
        this._username = name;
        var location = document.location.toString()
          .replace('http://', 'ws://')
          .replace('https://', 'wss://')+ "chat";
        this._ws = new WebSocket(location, "chat");
        this._ws.onopen = this.onopen;
        this._ws.onmessage = this.onmessage;
        this._ws.onclose = this.onclose;
      },
      onopen : function() {
        $('join').className = 'hidden';
        $('joined').className = '';
        $('phrase').focus();
        room.send(room._username, 'has joined!');
      },
      ...

    The javascript websocket object is initialised with callbacks for onopen, onclose and onmessage. The onopen callback is handled above by switching the join div to the joined div and sending a “has joined” message.

    Sending is implemented by creating a string of username:message and sending that via the WebSocket instance:

      ...
      send : function(user, message) {
        user = user.replace(':', '_');
        if (this._ws)
          this._ws.send(user + ':' + message);
      },
      ...

    If the chat room receives a message, the onmessage callback is called, which sanitises the message, parses out the username and appends the text to the chat div:

      ...
      onmessage : function(m) {
        if (m.data) {
          var c = m.data.indexOf(':');
          var from = m.data.substring(0, c)
            .replace('<', '&lt;')
            .replace('>', '&gt;');
          var text = m.data.substring(c + 1)
            .replace('<', '&lt;')
            .replace('>', '&gt;');
          var chat = $('chat');
          var spanFrom = document.createElement('span');
          spanFrom.className = 'from';
          spanFrom.innerHTML = from + ': ';
          var spanText = document.createElement('span');
          spanText.className = 'text';
          spanText.innerHTML = text;
          var lineBreak = document.createElement('br');
          chat.appendChild(spanFrom);
          chat.appendChild(spanText);
          chat.appendChild(lineBreak);
          chat.scrollTop = chat.scrollHeight - chat.clientHeight;
        }
      },
      ...
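The framing convention used on both sides of this chat (a “username:message” string, with ':' stripped from usernames on send and angle brackets escaped on receipt) can be sketched in plain Java. The ChatFraming class and its method names are hypothetical, not part of the example project; note also that Java's String.replace replaces all occurrences, unlike the single-occurrence string form of javascript's replace.

```java
public class ChatFraming
{
    // Encode: usernames may not contain ':', so it is replaced before framing
    public static String frame(String user, String message)
    {
        return user.replace(':', '_') + ":" + message;
    }

    // Decode: split at the first ':' and escape HTML-sensitive characters
    public static String[] parse(String data)
    {
        int c = data.indexOf(':');
        String from = escape(data.substring(0, c));
        String text = escape(data.substring(c + 1));
        return new String[]{from, text};
    }

    private static String escape(String s)
    {
        return s.replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args)
    {
        String framed = frame("alice:1", "hi <b>all</b>");
        String[] parts = parse(framed);
        System.out.println(framed);                      // alice_1:hi <b>all</b>
        System.out.println(parts[0] + " | " + parts[1]); // alice_1 | hi &lt;b&gt;all&lt;/b&gt;
    }
}
```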

    Finally, the onclose handling empties the chat div and switches back to the join div so that a new username may be entered:

      ...
      onclose : function(m) {
        this._ws = null;
        $('join').className = '';
        $('joined').className = 'hidden';
        $('username').focus();
        $('chat').innerHTML = '';
      }
    };

    With this simple client being served from the server, you can now point your websocket-capable browsers at http://localhost:8080 and interact with the chat room. Of course this example glosses over a lot of detail and complications a real chat application would need, so I suggest you read my blog post “is websocket chat simpler?” to learn what else needs to be handled.

    The Load Test Client

    The jetty websocket java client is an excellent tool for both functional and load testing of a websocket based service.  It uses the same endpoint API as the server side, and for this example we create a simple implementation of the OnTextMessage interface that keeps track of all the open connections and counts the number of messages sent and received:

    public class ChatLoadClient implements WebSocket.OnTextMessage
    {
      private static final AtomicLong sent = new AtomicLong(0);
      private static final AtomicLong received = new AtomicLong(0);
      private static final Set<ChatLoadClient> members = new CopyOnWriteArraySet<ChatLoadClient>();
      private final String name;
      private final Connection connection;
      public ChatLoadClient(String username,WebSocketClient client,String host, int port)
      throws Exception
      {
        name=username;
        connection=client.open(new URI("ws://"+host+":"+port+"/chat"),this).get();
      }
      public void send(String message) throws IOException
      {
        connection.sendMessage(name+":"+message);
      }
      public void onOpen(Connection connection)
      {
        members.add(this);
      }
      public void onClose(int closeCode, String message)
      {
        members.remove(this);
      }
      public void onMessage(String data)
      {
        received.incrementAndGet();
      }
      public void disconnect() throws IOException
      {
        connection.disconnect();
      }

    The Websocket is initialized by calling open on the WebSocketClient instance passed to the constructor.  The WebSocketClient instance is shared by multiple connections and contains the thread pool and other common resources for the client.

    This load test example comes with a main method that creates a WebSocketClient from command line options and then creates a number of ChatLoadClient instances:

    public static void main(String... arg) throws Exception
    {
      String host=arg.length>0?arg[0]:"localhost";
      int port=arg.length>1?Integer.parseInt(arg[1]):8080;
      int clients=arg.length>2?Integer.parseInt(arg[2]):1000;
      int mesgs=arg.length>3?Integer.parseInt(arg[3]):1000;
      WebSocketClient client = new WebSocketClient();
      client.setBufferSize(4096);
      client.setMaxIdleTime(30000);
      client.setProtocol("chat");
      client.start();
      // Create client serially
      ChatLoadClient[] chat = new ChatLoadClient[clients];
      for (int i=0;i<chat.length;i++)
        chat[i]=new ChatLoadClient("user"+i,client,host,port);
      ...

    Once the connections are opened, the main method loops around, picking a random client to speak in the chat room:

      ...
      // Send messages
      Random random = new Random();
      for (int i=0;i<mesgs;i++)
      {
        ChatLoadClient c = chat[random.nextInt(chat.length)];
        String msg = "Hello random "+random.nextLong();
        c.send(msg);
      }
      ...

    Once all the messages have been sent and all the replies have been received, the connections are closed:

      ...
      // close all connections
      for (int i=0;i<chat.length;i++)
        chat[i].disconnect();

    The project is set up so that the load client can be run with the following maven command:

    mvn -Pclient exec:exec

    And the resulting output should look something like:

    Opened 1000 of 1000 connections to localhost:8080 in 1109ms
    Sent/Received 10000/10000000 messages in 15394ms: 649603msg/s
    Closed 1000 connections to localhost:8080 in 45ms

    Yes, that is 649,603 messages per second! This is a pretty simple test, but it is still scheduling 1000 local sockets plus generating and parsing all the websocket frames. Real applications on real networks are unlikely to achieve close to this level, but the indications are good for the capability of high throughput, so stand by for more rigorous benchmarks shortly.


  • Prelim Cometd WebSocket Benchmarks

    I have done some very rough preliminary benchmarks on the latest cometd-2.4.0-SNAPSHOT with the latest Jetty-7.5.0-SNAPSHOT and the results are rather impressive.  The features that these two releases have added are:

    • Jetty NIO optimised with the latest JVMs and JITs in mind.
    • Latest websocket draft implemented and optimised.
    • Websocket client implemented.
    • Jackson JSON parser/generator used for cometd.
    • Websocket cometd transport for the server improved.
    • Websocket cometd transport for the bayeux client implemented.

    The benchmarks that I’ve done have all been on my notebook using the localhost network, which is not the most realistic of environments, but it still tells us a lot about the raw performance of cometd/jetty.  Specifically:

    • Both the server and the client are running on the same machine, so they are effectively sharing the 8 CPUs available.   The client typically takes 3x more CPU than the server (for the same load), so this is kind of like running the server on a dual core and the client on a 6 core machine.
    • The local network has very high throughput which would only be matched by gigabit networks.  It also has practically no latency, which is unlike any real network.  The long polling transport is more dependent on good network latency than the websocket transport, so the true comparison between these transports will need testing on a real network.

    The Test

    The cometd load test is a simulated chat application.  For this test I tried long-polling and websocket transports for 100, 1000 and 10,000 clients that were each logged into 10 randomly selected chat rooms from a total of 100 rooms.   The messages sent were all 50 characters long and were published in batches of 10 messages at once, each to randomly selected rooms.  There was a pause between batches that was adjusted to find a good throughput that didn’t have bad latency.  However little effort was put into finding the optimal settings to maximise throughput.

    The runs were all done on JVMs that had been warmed up, but the runs were moderately short (approx 30s), so steady state was not guaranteed and the margin of error on these numbers will be pretty high.  However, I also did a long run test at one setting just to make sure that steady state can be achieved.

    The Results

    The bubble chart above plots messages per second against number of clients for both long-polling and websocket transports.  The size of the bubble is the maximal latency of the test, with the smallest bubble being 109ms and the largest being 646ms.  Observations from the results are:

    • Regardless of transport we achieved hundreds of thousands of messages per second!  These are great numbers and show that we can cycle the cometd infrastructure at high rates.
    • The long-polling throughput is probably over-reported because many messages are queued into each HTTP response.  The highest HTTP response rate I saw was 22,000 responses per second, so for many applications it will be the HTTP rate that limits the throughput rather than the cometd rate.  However, the websocket throughput did not benefit from any such batching.
    • The maximal latency for all websocket measurements was significantly better than long polling, with all websocket messages being delivered in < 200ms and the average was < 1ms.
    • The websocket throughput increased with connections, which probably indicates that at low numbers of connections we were not generating a maximal load.

    A Long Run

    The throughput tests above need to be redone on a real network with longer runs. However, I did do one long run (~3 hours) of 1,000,013,657 messages at 93,856/sec. The results suggest no immediate problems with long runs. Neither the client nor the server needed to do an old-generation collection, and all young-generation collections took on average only 12ms.

    The output from the client is below:

    Statistics Started at Fri Aug 19 15:44:48 EST 2011
    Operative System: Linux 2.6.38-10-generic amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
    Processors: 8
    System Memory: 55.35461% used of 7.747429 GiB
    Used Heap Size: 215.7406 MiB
    Max Heap Size: 1984.0 MiB
    Young Generation Heap Size: 448.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Testing 1000 clients in 100 rooms, 10 rooms/client
    Sending 1000000 batches of 10x50 bytes messages every 10000 µs
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Fri Aug 19 18:42:23 EST 2011
    Elapsed time: 10654717 ms
    	Time in JIT compilation: 57 ms
    	Time in Young Generation GC: 118473 ms (8354 collections)
    	Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 2576746.8 MiB
    Garbage Generated in Survivor Generation: 336.53125 MiB
    Garbage Generated in Old Generation: 532.35156 MiB
    Average CPU Load: 433.23907/800
    ----------------------------------------
    Outgoing: Elapsed = 10654716 ms | Rate = 938 msg/s = 93 req/s =   0.4 Mbs
    All messages arrived 1000013657/1000013657
    Messages - Success/Expected = 1000013657/1000013657
    Incoming - Elapsed = 10654716 ms | Rate = 93856 msg/s = 90101 resp/s(96.00%) =  35.8 Mbs
    Thread Pool - Queue Max = 972 | Latency avg/max = 3/62 ms
    Messages - Wall Latency Min/Ave/Max = 0/8/135 ms

    Note that the client was using 433/800 of the available CPU, while you can see that the server (below) was using only 170/800.  This suggests that the server has plenty of spare capacity if it were given the entire machine.

    Statistics Started at Fri Aug 19 15:44:47 EST 2011
    Operative System: Linux 2.6.38-10-generic amd64
    JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 17.1-b03 1.6.0_22-b04
    Processors: 8
    System Memory: 55.27913% used of 7.747429 GiB
    Used Heap Size: 82.58406 MiB
    Max Heap Size: 2016.0 MiB
    Young Generation Heap Size: 224.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    - - - - - - - - - - - - - - - - - - - -
    Statistics Ended at Fri Aug 19 18:42:23 EST 2011
    Elapsed time: 10655706 ms
    	Time in JIT compilation: 187 ms
    	Time in Young Generation GC: 140973 ms (12073 collections)
    	Time in Old Generation GC: 0 ms (0 collections)
    Garbage Generated in Young Generation: 1652646.0 MiB
    Garbage Generated in Survivor Generation: 767.625 MiB
    Garbage Generated in Old Generation: 1472.6484 MiB
    Average CPU Load: 170.20532/800

    Conclusion

    These results are preliminary, but excellent nonetheless!  The final releases of jetty 7.5.0 and cometd 2.4.0 will be out within a week or two, and we will be working to bring you some more rigorous benchmarks with those releases.


  • CometD JSON library pluggability

    It all started when my colleague Joakim showed me the results of some JSON library benchmarks he was doing, which showed Jackson to be the clear winner among many libraries.
    So I decided that for the upcoming CometD 2.4.0 release it would be good to make CometD independent of the JSON library used, so that Jackson or other libraries could be plugged in.
    Historically, CometD made use of Jetty’s JSON library, and this is still the default if no other library is configured.
    Running a CometD specific benchmark using Jetty’s JSON library and Jackson (see this test case) shows, on my laptop, this sample output:

    Parsing:
    ...
    jackson context iteration 1: 946 ms
    jackson context iteration 2: 949 ms
    jackson context iteration 3: 944 ms
    jackson context iteration 4: 922 ms
    jetty context iteration 1: 634 ms
    jetty context iteration 2: 634 ms
    jetty context iteration 3: 636 ms
    jetty context iteration 4: 639 ms
    Generating:
    ...
    jackson context iteration 1: 548 ms
    jackson context iteration 2: 549 ms
    jackson context iteration 3: 552 ms
    jackson context iteration 4: 561 ms
    jetty context iteration 1: 788 ms
    jetty context iteration 2: 796 ms
    jetty context iteration 3: 798 ms
    jetty context iteration 4: 805 ms
    

    Jackson is roughly 45% slower in parsing and 45% faster in generating, so not bad for Jetty’s JSON compared to the best in class.
    Apart from efficiency, Jackson certainly has more features than Jetty’s JSON library with respect to serializing/deserializing custom classes, so having a pluggable JSON library in CometD is only better for end users, who can now choose the solution that fits them best.
    Unfortunately, I could not integrate the Gson library, which does not seem to have the capability of deserializing arbitrary JSON into java.util.Map object graphs, like Jetty’s JSON and Jackson are able to do in one line of code.
    If you have insights on how to make Gson work, I’ll be glad to hear.
    The documentation on how to configure CometD’s JSON library can be found here.
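From memory, plugging in a different library amounts to an init parameter on the CometD servlet in web.xml; the parameter and class names below are assumptions and should be verified against the documentation linked above.

```xml
<servlet>
  <servlet-name>cometd</servlet-name>
  <servlet-class>org.cometd.server.CometdServlet</servlet-class>
  <init-param>
    <!-- Assumed parameter and class names; verify against the CometD docs -->
    <param-name>jsonContext</param-name>
    <param-value>org.cometd.server.JacksonJSONContextServer</param-value>
  </init-param>
</servlet>
```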
    UPDATE
    After a suggestion from Tatu Saloranta of Jackson, the Jackson parsing is now faster than Jetty’s JSON library by roughly 20%:

    ...
    jackson context iteration 1: 555 ms
    jackson context iteration 2: 506 ms
    jackson context iteration 3: 506 ms
    jackson context iteration 4: 532 ms
    jetty context iteration 1: 632 ms
    jetty context iteration 2: 637 ms
    jetty context iteration 3: 639 ms
    jetty context iteration 4: 635 ms
    
  • Jetty JMX Webservice

    Jetty JMX Webservice is a webapp providing a RESTful API to query JMX mbeans and invoke mbean operations without the hassle that comes with RMI. No more arguments with your firewall admin, just a single http port.
    That alone might not be a killer feature, but Jetty JMX Webservice also aggregates multiple mbeans having the same ObjectName in the same JVM (e.g. jmx beans for multiple webapps) as well as from multiple jetty instances. That way you have a single REST api aggregating JMX mbeans for one or more jetty instances, which you can use, for example, to feed your favourite monitoring system.
    The whole module is in an early development phase and may contain some rough edges. But we wanted to gauge community interest early and get some feedback.
    We’ve started a very simple JQuery based web frontend as a showcase of what the REST api can be used for.
    Instance Overview:
    A table showing two jetty instances.
    Or a realtime memory graph gathering memory consumption from the REST API. This is an accordion-like view: you see an accordion line for each node showing the current heap used, and you can open each line to get a realtime graph of memory consumption. The memory figures in the accordion and the graphs are updated in realtime, which is hard to show in a picture:
    Realtime memory graph
    Pretty cool. Note that this is just a showcase of how the REST API can be used.
    URL paths of the webservice
    /ws/ – index page
    /ws/nodes – aggregated basic node information
    /ws/mbeans – list of all aggregated mbeans
    /ws/mbeans/[mbean objectName] – detailed information about all attributes and operations this mbean offers
    /ws/mbeans/[mbean objectName]/attributes – aggregate page containing the values of all attributes of the given mbean
    /ws/mbeans/[mbean objectName]/attributes/[attributeName] – aggregated values of a specific attribute
    /ws/mbeans/[mbean objectName]/operations/[operationName] – invoke the specified operation
    Example URLs:
    /ws/mbeans/java.lang:type=Memory
    /ws/mbeans/java.lang:type=Memory/operations/gc
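    Under the hood, each of these resources maps onto standard platform JMX calls on each node. As a rough, hypothetical sketch (this is not the webservice’s actual code, just the stdlib JMX calls it wraps), reading /attributes/HeapMemoryUsage and invoking /operations/gc amount to:

    ```java
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import javax.management.openmbean.CompositeData;

    public class JmxQuery
    {
        // Read an mbean attribute, as the /attributes/[attributeName] resource does per node
        public static Object readAttribute(String objectName, String attribute) throws Exception
        {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            return server.getAttribute(new ObjectName(objectName), attribute);
        }

        public static void main(String[] args) throws Exception
        {
            CompositeData usage = (CompositeData)readAttribute("java.lang:type=Memory", "HeapMemoryUsage");
            System.out.println("used=" + usage.get("used"));
            // Invoke a no-argument operation, as the /operations/gc resource does
            ManagementFactory.getPlatformMBeanServer()
                .invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
        }
    }
    ```

    The webservice’s job is then to perform these calls on every configured node and aggregate the results per ObjectName.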
    How to get it running

    Here’s all you need to do to get jetty-jmx-ws running in your jetty instance, plus some examples of what the REST API looks like and how it can be used. It should take less than 15 minutes.

    1. Checkout the sandbox project
      svn co https://svn.codehaus.org/jetty-contrib/sandbox/jetty-jmx-ws
    2. cd into the new directory and build the project
      cd jetty-jmx-ws && mvn clean install
    3. Make sure you got the [INFO] BUILD SUCCESS message
    4. Copy the war file you’ll find in the project’s target directory into the webapps directory of your jetty instance
      cp target/jetty-jmx-ws-[version].war [pathToJetty]/webapps
    5. Access the webapp by browsing to:
      http://[jettyinstanceurl]/jetty-jmx-ws-[version]/ws/
      e.g.:
      http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/
    6. You’re done. 🙂

    How to use it: Starting point

    As it is a RESTful api it will guide you from the base URL to more detailed pages. The base URL will return:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Index>
    <mBeans>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans</mBeans>
    <nodes>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/nodes</nodes>
    </Index>

    It shows you two URLs.
    The first one will guide you through an aggregated list of mbeans. This means it will show you only mbeans which exist on ALL configured instances; mbeans which exist only on a single instance are filtered out.

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <MBeans>
    <MBean>
    <ObjectName>JMImplementation:type=MBeanServerDelegate</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/JMImplementation:type=MBeanServerDelegate</URL>
    </MBean>
    <MBean>
    <ObjectName>com.sun.management:type=HotSpotDiagnostic</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/com.sun.management:type=HotSpotDiagnostic</URL>
    </MBean>
    <MBean>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory</URL>
    </MBean>
    SNIPSNAP - lots of mbeans
    </MBeans>

    The second URL shows you basic node information for all configured nodes:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Index>
    <nodes>
    <name>localhost:1099</name>
    <jettyVersion>7.4.1-SNAPSHOT</jettyVersion>
    <threadCount>42</threadCount>
    <peakThreadCount>45</peakThreadCount>
    <heapUsed>41038176</heapUsed>
    <heapInit>0</heapInit>
    <heapCommitted>85000192</heapCommitted>
    <heapMax>129957888</heapMax>
    <jmxServiceURL>service:jmx:rmi:///jndi/rmi://localhost:1099/jettyjmx</jmxServiceURL>
    </nodes>
    <nodes>
    <name>localhost:1100</name>
    <jettyVersion>7.4.1-SNAPSHOT</jettyVersion>
    <threadCount>45</threadCount>
    <peakThreadCount>47</peakThreadCount>
    <heapUsed>73915872</heapUsed>
    <heapInit>0</heapInit>
    <heapCommitted>129957888</heapCommitted>
    <heapMax>129957888</heapMax>
    <jmxServiceURL>service:jmx:rmi:///jndi/rmi://localhost:1100/jettyjmx</jmxServiceURL>
    </nodes>
    </Index>

    How to query a single mbean
    This example shows how the REST API guides you to a specific mbean – in this case, the memory mbean.

    1. From the base URL follow the link to the mbeans list:
      http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans
    2. Search for the mbean name you’re looking for:
      <MBean>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory</URL>
      </MBean>
    3. Open the link inside the URL tag
      http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory
    4. You’ll get a list of all operations which can be executed on that mbean and all attributes which can be queried:
      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <MBean>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Operations>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Operation>
      <Name>gc</Name>
      <Description>gc</Description>
      <ReturnType>void</ReturnType>
      <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/operations/gc</URL>
      </Operation>
      </Operations>
      <Attributes>
      <Attribute>
      <Name>HeapMemoryUsage</Name>
      <description>HeapMemoryUsage</description>
      <type>javax.management.openmbean.CompositeData</type>
      <isReadable>true</isReadable>
      <isWritable>false</isWritable>
      <uri>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes/HeapMemoryUsage</uri>
      </Attribute>
      SNIPSNAP - lots of attributes cut
      </Attributes>
      </MBean>
    5. Besides some information about all operations and attributes, you’ll find URLs to invoke operations, like:
      http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/operations/gc
      which will invoke a garbage collection. And URLs to display the attributes’ values, like:
      http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes/HeapMemoryUsage
      which will show you: 

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <mBeanAttributeValueJaxBeans>
      <Attribute>
      <AttributeName>HeapMemoryUsage</AttributeName>
      <NodeName>localhost:1099</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85000192, init=0, max=129957888, used=28073528})</Value>
      </Attribute>
      <Attribute>
      <AttributeName>HeapMemoryUsage</AttributeName>
      <NodeName>localhost:1100</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=129957888, init=0, max=129957888, used=69793976})</Value>
      </Attribute>
      </mBeanAttributeValueJaxBeans>
    6. You can also get an aggregated view of all attributes for an mbean by simply appending /attributes to the mbean’s URL: http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes
      which will return: 

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <mBeanAttributeValueJaxBeans>
      <Attribute>
      <AttributeName>HeapMemoryUsage</AttributeName>
      <NodeName>localhost:1099</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85000192, init=0, max=129957888, used=30005472})</Value>
      </Attribute>
      <Attribute>
      <AttributeName>HeapMemoryUsage</AttributeName>
      <NodeName>localhost:1100</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=129957888, init=0, max=129957888, used=68043064})</Value>
      </Attribute>
      <Attribute>
      <AttributeName>NonHeapMemoryUsage</AttributeName>
      <NodeName>localhost:1099</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85356544, init=24317952, max=136314880, used=52749944})</Value>
      </Attribute>
      <Attribute>
      <AttributeName>NonHeapMemoryUsage</AttributeName>
      <NodeName>localhost:1100</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=92868608, init=24317952, max=136314880, used=78705952})</Value>
      </Attribute>
      <Attribute>
      <AttributeName>ObjectPendingFinalizationCount</AttributeName>
      <NodeName>localhost:1099</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>0</Value>
      </Attribute>
      <Attribute>
      <AttributeName>ObjectPendingFinalizationCount</AttributeName>
      <NodeName>localhost:1100</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>0</Value>
      </Attribute>
      <Attribute>
      <AttributeName>Verbose</AttributeName>
      <NodeName>localhost:1099</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>false</Value>
      </Attribute>
      <Attribute>
      <AttributeName>Verbose</AttributeName>
      <NodeName>localhost:1100</NodeName>
      <ObjectName>java.lang:type=Memory</ObjectName>
      <Value>false</Value>
      </Attribute>
      </mBeanAttributeValueJaxBeans>

    Security
    It’s a webapp with some servlets. So secure it the same way you would any servlet.
    What’s next?
    There are a few more features which I didn’t describe. I will write a follow-up soon describing how to use them.

    1. Filter by nodes with query params
    2. Invoke operations with parameters
    3. Configuration for multiple instances
    4. Get JSON instead of XML by setting the Accept header

    If there’s interest in this project, I will also write some manuals and add them to the wiki pages.

  • Jetty Overlayed WebApp Deployer

    The Jetty Overlay Deployer allows multiple WAR files to be overlayed so that a web application can be customised, configured and deployed without the need to unpack, modify and repack the WAR file. This has the following benefits:

    • The WAR file may be kept immutable, even signed, so that it is clear which version has been deployed.
    • All modifications made to customise/configure the web application are kept in separate WARs and thus are easily identifiable for review and migration to new versions.
    • A parameterised template overlay can be created that contains common customisations and configuration that apply to many instances of the web application (eg for multi-tenant deployment).
    • Because the layered deployment clearly identifies the common and instance-specific components, Jetty is able to share classloaders and static resource caches for the template, greatly reducing the memory footprint of multiple instances.

    This blog is a tutorial on how to configure Jetty to use the Overlay deployer, and how to deploy multiple instances of the JTrac web application.

    Overview

    The customisation, configuration and deployment of a web application bundled as a WAR file frequently includes some or all of:

    • Editing the WEB-INF/web.xml file to set init parameters, add filters/servlets or to configure JNDI resources.
    • Editing other application specific configuration files in WEB-INF/*
    • Editing container specific configuration files in WEB-INF/* (eg jetty-web.xml or jboss-web.xml)
    • Adding/modifying static content such as images and css to style/theme the webapplication
    • Adding jars to the container classpath for Datasource and other resources
    • Modifying the container configuration to provide JNDI resources

    The result is that the customisations and configurations are blended into both the container and the WAR file. If either the container or the base WAR file is upgraded to a new version, it can be a very difficult and error-prone task to identify all the changes that have been made and to reapply them to the new version.

    Overlays

    To solve the problems highlighted above, jetty 7.4 introduces WAR overlays (a concept borrowed from the maven war plugin). An overlay is basically just another WAR file whose contents are merged on top of the original WAR, so that files may be added or replaced.

    However, jetty overlays also allow mixin fragments of web.xml, so the configuration can be modified without being replaced.
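    The merge semantics are easy to picture with plain maps: for each file path, the topmost layer that provides it wins. A minimal, hypothetical sketch (real overlays operate on WAR contents, not maps):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class OverlayMerge
    {
        // Merge an overlay on top of a base: overlay entries add to or replace base entries
        public static Map<String, String> merge(Map<String, String> base, Map<String, String> overlay)
        {
            Map<String, String> result = new LinkedHashMap<>(base);
            result.putAll(overlay);
            return result;
        }

        public static void main(String[] args)
        {
            Map<String, String> base = new LinkedHashMap<>();
            base.put("index.html", "base");
            base.put("favicon.ico", "base");
            Map<String, String> overlay = new LinkedHashMap<>();
            overlay.put("favicon.ico", "red");          // replaced by the overlay
            overlay.put("resources/jtrac.css", "red");  // added by the overlay
            System.out.println(merge(base, overlay));
        }
    }
    ```

    The web.xml mixin fragments are the one exception to this file-level replacement: they are merged into the base web.xml rather than replacing it.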

    Jtrac Overlay Example

    The jtrac issue-tracking web application is a good example of a typical web application, as it uses the usual suspects of libs: spring, hibernate, dom4j, commons-*, wicket, etc. So I’ve used it as the basis of this example.

    The files for this demonstration are available in overlays-demo.tar.gz. This could be expanded on top of the jetty distribution, but for this tutorial we will expand it to /tmp and install the components step by step:

    cd /tmp
    wget http://webtide.intalio.com/wp-content/uploads/2011/05/overlays-demo.tar.gz
    tar xfvz overlays-demo.tar.gz
    export OVERLAYS=/tmp/overlays

    Configuring Jetty for Overlays

    Overlays support is included in jetty distributions from 7.4.1-SNAPSHOT onwards, so you can download a distribution from oss.sonatype.org or maven central (once 7.4.1 is released) and unpack it into a directory. The start.ini file then needs to be edited so that it includes the overlay option and configuration file. The resulting file should look like:

    OPTIONS=Server,jsp,jmx,resources,websocket,ext,overlay
    etc/jetty.xml
    etc/jetty-deploy.xml
    etc/jetty-overlay.xml

    The smarts of this are in the etc/jetty-deploy.xml file, which installs the OverlayedAppProvider into the DeploymentManager. Jetty can then be started normally:

    java -jar start.jar

    Jetty will now be listening on port 8080, but with no webapp deployed. The rest of the tutorial should be conducted in another window with the JETTY_HOME environment variable set to the jetty distribution directory.

    Installing the WebApp

    The WAR file for this demo can be downloaded and deployed using the following commands, which essentially download and extract the WAR file to the $JETTY_HOME/overlays/webapps directory:

    cd /tmp
    wget -O jtrac.zip http://sourceforge.net/projects/j-trac/files/jtrac/2.1.0/jtrac-2.1.0.zip/download
    jar xfv jtrac.zip jtrac/jtrac.war
    mv jtrac/jtrac.war $JETTY_HOME/overlays/webapps

    When you have run these commands (or equivalent), you will see in the jetty server window a message saying that the OverlayedAppProvider has extracted and loaded the war file:

    2011-05-06 10:31:54.678:INFO:OverlayedAppProvider:Extract jar:file:/tmp/jetty-distribution-7.4.1-SNAPSHOT/overlays/webapps/jtrac-2.1.0.war!/ to /tmp/jtrac-2.1.0_236811420856825222.extract
    2011-05-06 10:31:55.235:INFO:OverlayedAppProvider:loaded jtrac-2.1.0@1304641914666

    Unlike the normal webapps dir, loading a war file from the overlays/webapps dir does not deploy the web application. It simply makes it available to be used as the basis for templates and overlays.

    Installing a Template Overlay

    A template overlay is a WAR-structured directory/archive that contains just the files that have been added or modified to customise/configure the web application for all the instances that will be deployed.

    The demo template can be installed from the downloaded files with the command:

    mv $OVERLAYS/jtracTemplate=jtrac-2.1.0 $JETTY_HOME/overlays/templates/

    In the Jetty server window, you should see the template loaded with a message like:

    2011-05-06 11:00:08.716:INFO:OverlayedAppProvider:loaded jtracTemplate=jtrac-2.1.0@1304643608715

    The contents of the loaded template are as follows:

    templates/jtracTemplate=jtrac-2.1.0
    └── WEB-INF
        ├── classes
        │   └── jtrac-init.properties
        ├── log4j.properties
        ├── overlay.xml
        ├── template.xml
        └── web-overlay.xml

    The name of the template directory (or it could be a war) uses the ‘=’ character in jtracTemplate=jtrac-2.1.0 to separate the name of the template from the name of the WAR file in webapps that it applies to. If = is a problem, then -- may also be used.
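    The naming convention can be sketched as a simple split on the separator (a hypothetical helper, not the deployer’s actual code):

    ```java
    public class TemplateName
    {
        // Split "templateName=warName" (or "templateName--warName") into its two parts
        public static String[] parse(String name)
        {
            int i = name.indexOf('=');
            if (i < 0)
                i = name.indexOf("--");
            if (i < 0)
                throw new IllegalArgumentException("No separator in " + name);
            String template = name.substring(0, i);
            String war = name.substring(i + (name.charAt(i) == '=' ? 1 : 2));
            return new String[]{template, war};
        }

        public static void main(String[] args)
        {
            String[] parts = parse("jtracTemplate=jtrac-2.1.0");
            System.out.println(parts[0] + " applies to " + parts[1] + ".war");
        }
    }
    ```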

    WEB-INF/classes/jtrac-init.properties – replaces the jtrac properties file with an empty file, as the properties contained within it are configured elsewhere

    WEB-INF/log4j.properties – configures the logging for all instances of the template.

    WEB-INF/overlay.xml – a Jetty XML formatted IoC file that is used to inject/configure the ContextHandler for each instance. In this case it just sets up the context path:

    <?xml version="1.0"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
    <Configure class="org.eclipse.jetty.server.handler.ContextHandler">
      <Set name="contextPath">/</Set>
    </Configure>

    WEB-INF/template.xml – a Jetty XML formatted IoC file that is used to inject/configure the resource cache and classloader that are shared by all instances of the template. It is run only once per load of the template. (The original listing was not preserved; it configured the shared resource cache with values including true, 10000000, 1000 and 64000000.)

    WEB-INF/web-overlay.xml – a web.xml fragment that is overlayed on top of the web.xml from the base WAR file, and that can set init parameters and add/modify filters and servlets. In this case it sets the application home and spring’s rootKey:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee">
      <context-param>
        <param-name>jtrac.home</param-name>
        <param-value>/tmp/jtrac-${overlay.instance.classifier}</param-value>
      </context-param>
      <context-param>
        <param-name>webAppRootKey</param-name>
        <param-value>jtrac-${overlay.instance.classifier}</param-value>
      </context-param>
    </web-app>

    Note the use of parameterised values such as ${overlay.instance.classifier}, as this allows the configuration to be made in the template and not customised for each instance.
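    The parameter expansion can be pictured as a simple ${...} substitution against per-instance properties. A hypothetical sketch (not Jetty’s actual implementation), where the classifier is the part of the instance name after the ‘=’:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class OverlayParams
    {
        // Replace ${key} references in a template string with per-instance values
        public static String expand(String text, Map<String, String> props)
        {
            for (Map.Entry<String, String> e : props.entrySet())
                text = text.replace("${" + e.getKey() + "}", e.getValue());
            return text;
        }

        public static void main(String[] args)
        {
            Map<String, String> props = new HashMap<>();
            props.put("overlay.instance.classifier", "red"); // from instance "jtracTemplate=red"
            System.out.println(expand("/tmp/jtrac-${overlay.instance.classifier}", props));
            // prints /tmp/jtrac-red
        }
    }
    ```

    Each instance deployed from the template thus gets its own expanded values (eg its own jtrac.home directory) from a single shared configuration file.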

    Without the overlayed deployer, all the configurations above would still need to have been made, but rather than being in a single clear structure they would have been either in the server’s common directory, the server’s webdefault.xml, or baked into the WAR file of each application instance using copied/modified files from the original. The overlayed deployer allows us to make all these changes in one structure; moreover, it allows some of the configuration to be parameterised to facilitate easy multi-tenant deployment.

    Installing an Instance Overlay

    Now that we have installed a template, we can install one or more instance overlays, which deploy the actual web applications:

    mv /tmp/overlays/instances/jtracTemplate=blue $JETTY_HOME/overlays/instances/
    mv /tmp/overlays/instances/jtracTemplate=red $JETTY_HOME/overlays/instances/
    

    As each instance is moved into place, you will see the jetty server window react and deploy that instance. Each instance has the structure:

    instances/jtracTemplate=red/
    ├── WEB-INF
    │   └── overlay.xml
    ├── favicon.ico
    └── resources
        └── jtrac.css
    

    WEB-INF/overlay.xml – a Jetty XML format IoC file that injects/configures the context for the instance. In this case it sets up the virtual hosts for the instance:

    <?xml version="1.0"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
    <Configure class="org.eclipse.jetty.server.handler.ContextHandler">
      <Set name="virtualHosts">
        <Array type="String">
          <Item>127.0.0.2</Item>
          <Item>red.myVirtualDomain.com</Item>
        </Array>
      </Set>
    </Configure>

    favicon.ico – replaces the icon in the base war with one themed for the instance colour.

    resources/jtrac.css – replaces the style sheet from the base war with one themed for the instance colour.

    The deployed instances can now be viewed by pointing your browser at http://127.0.0.1:8080, http://127.0.0.2:8080 and http://127.0.0.3:8080. The default username/password for jtrac is admin/admin.

    Things to know and notice

    • Each instance is themed with the images and style sheets from its instance overlay.
    • Each instance runs with its own application directory (eg /tmp/jtrac-red), which is set in the template’s web-overlay.xml.
    • The instances are distinguished by the virtual host set in each instance’s overlay.xml.
    • The static content from the base war and template is shared between all instances. Specifically, there is a shared ResourceCache, so only a single copy of each static resource is loaded into memory.
    • The classloader at the base war and template level is shared between all instances, so that only a single copy of common classes is loaded into memory. Classes with non-shared statics can be configured to load in the instance’s classloader.
    • All overlays are hot deployed and their dependencies tracked. If an XML file is touched in an instance, that instance is redeployed. If an XML file is touched in a template, all instances using it are redeployed. If a WAR file is touched, all templates and all instances dependent on it are redeployed.
    • New versions can easily be deployed. Eg when jtrac-2.2.0.war becomes available, it can just be dropped into overlays/webapps and jtracTemplate=jtrac-2.1.0 renamed to jtracTemplate=jtrac-2.2.0.
    • There is a fuller version of this demo in overlays-demo-jndi.tar.gz, that uses JNDI (needs options=jndi,annotations and jetty-plus.xml in start.ini) and shows how extra jars can be added in the overlays.
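    The redeployment rules above follow from the dependency chain instance → template → war: touching an artifact redeploys everything that transitively depends on it. A toy sketch of that propagation (hypothetical, not the deployer’s code):

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RedeployGraph
    {
        // child -> parent it depends on (instance -> template, template -> war)
        private final Map<String, String> dependsOn = new HashMap<>();

        public void add(String child, String parent)
        {
            dependsOn.put(child, parent);
        }

        // Everything that must be redeployed when the given artifact is touched
        public List<String> touched(String artifact)
        {
            List<String> redeploy = new ArrayList<>();
            for (Map.Entry<String, String> e : dependsOn.entrySet())
            {
                if (e.getValue().equals(artifact))
                {
                    redeploy.addAll(touched(e.getKey())); // transitive dependents first
                    redeploy.add(e.getKey());
                }
            }
            return redeploy;
        }

        public static void main(String[] args)
        {
            RedeployGraph g = new RedeployGraph();
            g.add("jtracTemplate=jtrac-2.1.0", "jtrac-2.1.0.war");
            g.add("jtracTemplate=red", "jtracTemplate=jtrac-2.1.0");
            g.add("jtracTemplate=blue", "jtracTemplate=jtrac-2.1.0");
            // Touching the war redeploys the template and both instances
            System.out.println(g.touched("jtrac-2.1.0.war"));
        }
    }
    ```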
  • CometD Message Flow Control with Listeners

    In the last blog entry I talked about message flow control using CometD‘s lazy channels.
    Now I want to show how it is possible to achieve similar flow control using specialized listeners that allow you to manipulate the ServerSession message queue.
    The ServerSession message queue is a data structure that is accessed concurrently when messages are published and delivered to clients, so it needs appropriate synchronization when accessed.
    In order to simplify these synchronization requirements, CometD allows you to add DeQueueListeners to ServerSessions, with the guarantee that these listeners will be called with the appropriate locks acquired, allowing user code to freely modify the queue’s content.
    Below you can find an example of a DeQueueListener that keeps only the first message of a series of messages published to the same channel within a tolerance period of 1000 ms, and removes the others (it relies on the timestamp extension):

    // Must be final to be referenced from the anonymous listener class
    final String channelName = "/stock/GOOG";
    final long tolerance = 1000;
    ServerSession session = ...;
    session.addListener(new ServerSession.DeQueueListener()
    {
        public void deQueue(ServerSession session, Queue<ServerMessage> queue)
        {
            long lastTimeStamp = 0;
            for (Iterator<ServerMessage> iterator = queue.iterator(); iterator.hasNext();)
            {
                ServerMessage message = iterator.next();
                if (channelName.equals(message.getChannel()))
                {
                    // The timestamp field is added by the timestamp extension
                    long timeStamp = Long.parseLong(message.get(Message.TIMESTAMP_FIELD).toString());
                    if (timeStamp <= lastTimeStamp + tolerance)
                    {
                        // Within the tolerance window of the last kept message: drop it
                        System.err.println("removed " + message);
                        iterator.remove();
                    }
                    else
                    {
                        // First message of a new window: keep it
                        System.err.println("kept " + message);
                        lastTimeStamp = timeStamp;
                    }
                }
            }
        }
    });

    Other possibilities include keeping the last message (instead of the first), coalescing the message fields following particular logic, or even clearing the queue completely.
    DeQueueListeners are called when CometD is about to deliver messages to the client, so clearing the queue completely results in an empty response being sent to the client.
    This is different from the behavior of lazy channels, which allow delaying the message delivery until a configurable timeout expires.
    However, lazy channels do not alter the number of messages being sent, while DeQueueListeners can manipulate the message queue.
    Therefore, CometD message flow control is often best accomplished by using both mechanisms: lazy channels to delay message delivery, and DeQueueListeners to reduce/coalesce the messages sent.
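    The “keep the last message” variant mentioned above amounts to walking the queue from newest to oldest and dropping the older duplicates. The trimming logic itself is plain collection work; here is a stdlib-only sketch over a Deque of hypothetical (channel, payload) pairs standing in for real ServerMessages:

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Iterator;

    public class KeepLast
    {
        // Keep only the newest queued message for the given channel, removing older ones
        public static void keepLast(Deque<String[]> queue, String channel)
        {
            boolean seen = false;
            // descendingIterator walks from newest to oldest
            for (Iterator<String[]> it = queue.descendingIterator(); it.hasNext();)
            {
                String[] message = it.next();
                if (channel.equals(message[0]))
                {
                    if (seen)
                        it.remove(); // an older message for the same channel
                    seen = true;
                }
            }
        }

        public static void main(String[] args)
        {
            Deque<String[]> queue = new ArrayDeque<>();
            queue.add(new String[]{"/stock/GOOG", "100"});
            queue.add(new String[]{"/other", "1"});
            queue.add(new String[]{"/stock/GOOG", "200"});
            keepLast(queue, "/stock/GOOG");
            System.out.println(queue.size()); // the stale /stock/GOOG quote was removed
        }
    }
    ```

    Inside a real DeQueueListener the same loop would run over the Queue<ServerMessage>, with the lock guarantees described above making the mutation safe.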

  • Jetty with Spring XML

    Since the very beginning, Jetty has been IoC friendly and thus has been able to be configured with spring. But injecting and assembling the jetty container is not Jetty’s only configuration need, and there are several other configuration files (eg contexts/yourapp.xml, jetty-web.xml, jetty-env.xml) that have needed to be in the Jetty XML configuration format.

    With the release of Jetty-7.4, the jetty-spring module has been enhanced with an XmlConfiguration provider, so now any jetty XML file can be replaced with a spring XML file, and an all-spring configuration is now possible. [But note that there is no plan to use spring as the default configuration mechanism. For one, the 2.9MB size of the spring jar is too large for Jetty’s footprint aspirations (currently only 1.5MB for everything).]

    Starting with spring Jetty

    First you will need a download of jetty-hightide, which includes the spring module:

    wget --user-agent=other http://repo2.maven.org/maven2/org/mortbay/jetty/jetty-hightide/7.4.0.v20110414/jetty-hightide-7.4.0.v20110414.tar.gz
    tar xfz jetty-hightide-7.4.0.v20110414.tar.gz
    cd jetty-hightide-7.4.0.v20110414/

    You then need to augment this with a spring jar and commons logging:

    cd lib/spring
    wget --user-agent=other http://repo2.maven.org/maven2/org/springframework/spring/2.5.6/spring-2.5.6.jar
    wget --user-agent=other http://repo2.maven.org/maven2/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar
    cd ../..

    and then add spring to the Jetty options by editing start.ini and adding “spring” to the OPTIONS set there:

    OPTIONS=Server,jsp,jmx,resources,websocket,ext,jta,plus,jdbc,annotations,spring

    and that’s it! Jetty is now ready to be configured with spring.

    Example Jetty XML

    We can now replace the main etc/jetty.xml file with a spring version that defines a bean with id “Server” and alias “Main” of class org.eclipse.jetty.server.Server, together with a “Contexts” bean. (The original listing was not preserved.)

    Note that the Server bean is given the name (or alias) of “Main” to identify it as the primary bean configured by this file. This equates to the Configure element of the Jetty XML format. Note also that both the Server and Contexts ids are used by subsequent config files (eg etc/jetty-deploy) to reference the beans created here, and that the ID space is shared between the configuration formats. Thus you can mix and match configuration formats.

    Example Context XML

    As another example, you can replace the contexts/test.xml file with a spring version that, among other things, sets the virtual hosts www.myVirtualDomain.com, localhost and 127.0.0.1 on the context. (The original listing was not preserved.)

    Note that unlike jetty XML, spring does not have a Get element that allows a bean to be obtained from another bean and then configured. So the structure of this context file is somewhat different to that of the corresponding jetty XML file.

    Running Spring Jetty

    Running spring jetty is now exactly as for normal jetty:

    java -jar start.jar

    This uses the start.ini file and the lib directory to construct a classpath and to execute the configuration files specified (including the jetty.xml we have converted to spring). Use java -jar start.jar --help to learn more about the jetty start mechanism.

    Of course, with spring, you can also start jetty by running spring directly and using a more spring-like mechanism for aggregating multiple configuration files.

    Conclusion

    While spring and jetty XML are roughly equivalent, they each have their idiosyncrasies. The Jetty API has been developed with the jetty XML format in mind, so if you examine the full suite of jetty XML files, you will see getters and method calls used to configure the server. These can be done in spring (AFAIK using helper classes), but it is a little more clunky than jetty XML. This can be improved over time by a) having spring config files written by somebody more spring literate than me; b) improving the API to be more spring friendly; c) adapting the style of configuration aggregation to be more spring-like. I’m receptive to all three and would welcome spring users collaborating with me to improve the all-spring configuration of jetty.

  • Getting Started With Websockets

    The WebSockets protocol and API is an emerging standard to provide better bidirectional communication between a browser (or other web client) and a server. It is intended eventually to replace comet techniques like long polling. Jetty has supported the various websocket drafts in the 7.x and 8.x releases, and this blog tells you how to get started with websockets.

    You don’t want to do this!

    This blog will show you how to use websockets from the lowest levels, but I would not advise that any application programmer follow these examples to build an application. WebSockets is not a silver bullet and on its own it will never be simple to use for non-trivial applications (see Is WebSocket Chat Simpler?), so my recommendation is that application programmers look toward frameworks like cometd, which provide a higher level of abstraction, hide the technicalities and allow either comet long polling or websockets to be used transparently.

    So instead this blog is aimed at framework developers who want to use websockets in their own frameworks and application developers who can’t stand not knowing what is under the hood.

    Test Client and Server

    The simplest way to get started is to download a jetty aggregate jar that comes complete with a test websocket client and server. You can do this with a browser or with the following command-line wgets:

    wget -O jetty-all.jar --user-agent=demo \
      http://repo2.maven.org/maven2/org/eclipse/jetty/aggregate/jetty-all/7.4.0.v20110414/jetty-all-7.4.0.v20110414.jar
    wget --user-agent=demo \
      http://repo2.maven.org/maven2/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar

    To run a simple test server (use --help to see more options):

    java -cp jetty-all.jar:servlet-api-2.5.jar \
      org.eclipse.jetty.websocket.TestServer \
      --port 8080 \
      --docroot . \
      --verbose

    You can test the server with the test client (use --help to see more options):

    java -cp jetty-all.jar:servlet-api-2.5.jar \
      org.eclipse.jetty.websocket.TestClient \
      --port 8080 \
      --protocol echo

    The output from the test client is similar to ping, and you can use the options discovered by --help to try out different types of tests, including fragmentation and aggregation of websocket frames.

    Using a Browser

    A java client is not much use unless you want to write a desktop application that uses websockets (a viable use case). Most users of websockets will want to use the browser as a client, so point your browser at the TestServer at http://localhost:8080.

    The Websocket TestServer also runs an HTTP file server rooted at the directory given by --docroot, so in this case you should see in the browser a listing of the directory in which you ran the test server.

    To turn the browser into a websocket client, we will need to serve some HTML and javascript that will execute in the browser and talk back to the server using websockets. So create the file index.html in the same directory you ran the server from and put into it the following contents, which you can download from here. This index file contains the HTML, CSS and javascript for a basic chat room.

    You should now be able to point your browser(s) at the test server and see a chat room and join it.  If your browser does not support websockets, you’ll be given a warning.

    How does the Client work?

    The initial HTML view has a prompt for a user name. When a name is entered the join method is called, which creates the websocket to the server. The URI for the websocket is derived from the document's location, and callback functions are registered for open, message and close events. The org.ietf.websocket.test-echo-broadcast sub-protocol is specified, as this echoes all received messages to all other broadcast connections, giving us the semantics needed for a chat room:

    join: function(name) {
      this._username=name;
      var location = document.location.toString().replace('http://','ws://').replace('https://','wss://');
      this._ws=new WebSocket(location,"org.ietf.websocket.test-echo-broadcast");
      this._ws.onopen=this._onopen;
      this._ws.onmessage=this._onmessage;
      this._ws.onclose=this._onclose;
    },

    When the websocket is successful at connecting to the server, it calls the onopen callback, which we have implemented to change the appearance of the chat room to prompt for a chat message.  It also sends a message saying the user has joined the room:

    _onopen: function(){
      $('join').className='hidden';
      $('joined').className='';
      $('phrase').focus();
      room._send(room._username,'has joined!');
    },

    Sending a message is done by simply formatting a string as “username:chat text” and calling the websocket send method:

    _send: function(user,message){
      user=user.replace(':','_');
      if (this._ws)
        this._ws.send(user+':'+message);
    },
    chat: function(text) {
      if (text != null && text.length>0 )
         room._send(room._username,text);
    },
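    The “username:chat text” wire format used above can be sketched as a small standalone Java helper. The ChatFrame class name is mine, not part of the test code; it mirrors the replace and indexOf logic of the javascript:

```java
// Sketch of the chat wire format: "username:chat text".
// Colons in the user name are replaced so that the first ':'
// always delimits the user from the message.
public class ChatFrame {
    static String encode(String user, String message) {
        return user.replace(':', '_') + ':' + message;
    }

    static String[] decode(String data) {
        int c = data.indexOf(':'); // first colon ends the user name
        return new String[] { data.substring(0, c), data.substring(c + 1) };
    }

    public static void main(String[] args) {
        String wire = encode("alice:1", "hello");
        System.out.println(wire); // prints "alice_1:hello"
    }
}
```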

    When the browser receives a websocket message over the connection, the onmessage callback is called with a message object. Our implementation looks for the username and colon, strips out any markup and then appends the message to the chat room:

    _onmessage: function(m) {
      if (m.data){
        var c=m.data.indexOf(':');
        var from=m.data.substring(0,c).replace('<','&lt;').replace('>','&gt;');
        var text=m.data.substring(c+1).replace('<','&lt;').replace('>','&gt;');
        var chat=$('chat');
        var spanFrom = document.createElement('span');
        spanFrom.className='from';
        spanFrom.innerHTML=from+': ';
        var spanText = document.createElement('span');
        spanText.className='text';
        spanText.innerHTML=text;
        var lineBreak = document.createElement('br');
        chat.appendChild(spanFrom);
        chat.appendChild(spanText);
        chat.appendChild(lineBreak);
        chat.scrollTop = chat.scrollHeight - chat.clientHeight;
      }
    },

    If the server closes the connection, or if the browser times it out, then the onclose callback is called.  This simply nulls out the chat room and reverts to the starting position:

    _onclose: function(m) {
      this._ws=null;
      $('join').className='';
      $('joined').className='hidden';
      $('username').focus();
      $('chat').innerHTML='';
    }

    How Does the Server Work?

    The server-side code for this chat room uses an embedded Jetty server and is written against the jetty websocket APIs, which are not part of the websocket standard. There is not yet even a proposed standard for server-side websocket APIs, but it is a topic for consideration in the servlet 3.1 JSR.

    The test server is an extension of an embedded Jetty server, and the constructor adds a connector at the required port, creates a WebSocketHandler and a ResourceHandler and chains them together:

    public TestServer(int port)
    {
        _connector = new SelectChannelConnector();
        _connector.setPort(port);
        addConnector(_connector);
        _wsHandler = new WebSocketHandler()
        {
            public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
            {
                ...
                return _websocket;
            }
        };
        setHandler(_wsHandler);
        _rHandler=new ResourceHandler();
        _rHandler.setDirectoriesListed(true);
        _rHandler.setResourceBase(_docroot);
        _wsHandler.setHandler(_rHandler);
    }

    The resource handler is responsible for serving the static content like HTML and javascript. The WebSocketHandler looks for WebSocket handshake requests and handles them by calling the doWebSocketConnect method, which we have extended to create a WebSocket depending on the sub-protocol passed:

    _wsHandler = new WebSocketHandler()
    {
        public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
        {
            if ("org.ietf.websocket.test-echo".equals(protocol) || "echo".equals(protocol) || "lws-mirror-protocol".equals(protocol))
                _websocket = new TestEchoWebSocket();
            else if ("org.ietf.websocket.test-echo-broadcast".equals(protocol))
                _websocket = new TestEchoBroadcastWebSocket();
            else if ("org.ietf.websocket.test-echo-assemble".equals(protocol))
                _websocket = new TestEchoAssembleWebSocket();
            else if ("org.ietf.websocket.test-echo-fragment".equals(protocol))
                _websocket = new TestEchoFragmentWebSocket();
            else if (protocol==null)
                _websocket = new TestWebSocket();
            return _websocket;
        }
    };

    Below is a simplification of the test WebSocket from the test server, excluding the shared code for the other protocols supported. Like the javascript API, there are onOpen, onClose and onMessage callbacks. The onOpen callback is passed a Connection instance that is used to send messages. The implementation of onOpen adds the websocket to a collection of all known websockets, and onClose removes it. The implementation of onMessage simply iterates through that collection and sends the received message to each websocket:

    ConcurrentLinkedQueue<TestEchoBroadcastWebSocket> _broadcast =
        new ConcurrentLinkedQueue<TestEchoBroadcastWebSocket>();
    class TestEchoBroadcastWebSocket implements WebSocket.OnTextMessage
    {
        protected Connection _connection;
        public void onOpen(Connection connection)
        {
            _connection=connection;
            _broadcast.add(this);
        }
        public void onClose(int code,String message)
        {
            _broadcast.remove(this);
        }
        public void onMessage(final String data)
        {
            for (TestEchoBroadcastWebSocket ws : _broadcast)
            {
                try
                {
                    ws._connection.sendMessage(data);
                }
                catch (IOException e)
                {
                    _broadcast.remove(ws);
                    e.printStackTrace();
                }
            }
        }
    }
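    Note that the catch block above removes an entry from _broadcast while onMessage may still be iterating the queue. This is safe because ConcurrentLinkedQueue has a weakly consistent iterator that never throws ConcurrentModificationException. A minimal standalone illustration:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    // Remove an element while iterating; a weakly consistent iterator
    // tolerates this without ConcurrentModificationException.
    static ConcurrentLinkedQueue<String> removeDuringIteration() {
        ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<String>();
        q.add("a");
        q.add("b");
        q.add("c");
        for (String s : q) {
            if (s.equals("b"))
                q.remove(s);
        }
        return q;
    }

    public static void main(String[] args) {
        System.out.println(removeDuringIteration()); // prints "[a, c]"
    }
}
```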

    Don’t do it this way!

    Now that you know the basics of how websockets work, I repeat my warning that you should not do it this way unless you are a framework developer. Even then, you are probably going to want to use the WebSocketServlet and a non-embedded jetty, but the basic concepts are the same. Note that the strength of the jetty solution is that it terminates both WebSocket connections and HTTP requests in the same environment, so that mixed frameworks and applications are easy to create.
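    For reference, a WebSocketServlet subclass in a non-embedded jetty is registered in a webapp's web.xml like any other servlet. In this sketch the servlet class and url-pattern are hypothetical:

```xml
<!-- web.xml sketch: servlet class and url-pattern are hypothetical -->
<servlet>
  <servlet-name>chat</servlet-name>
  <servlet-class>com.example.ChatWebSocketServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>chat</servlet-name>
  <url-pattern>/chat/*</url-pattern>
</servlet-mapping>
```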

    Application developers should really look to a framework like cometd rather than directly coding to websockets themselves. It is not that the mechanics of websockets are hard, just that they don't solve all of the problems that you will encounter in a real-world comet application.


  • CometD Codemotion Slides

    The Codemotion conference slides of my talk on Comet and WebSocket web applications are available here: slideshare, download.