Author: admin

  • Guide to Jetty Webinar (Thu 8 April)

    Jan Bartel and I will be presenting a "Guide to Jetty" webinar on Thu, Apr 8, 2010, 8:00 AM – 9:00 AM PDT. We’ll present an overview of Jetty and then show some hands-on examples of running Jetty, deploying webapps, and coding against the embedded API, plus a cometd demo. We’ll also take questions from the attendees.

    Please register at  http://gotomeeting.com/429603674

  • Websocket Chat

    The websocket protocol has been touted as a great leap forward for bidirectional web applications like chat, promising a new era of simple comet applications. Unfortunately there is no such thing as a silver bullet and this blog will walk through a simple chat room to see where websocket does and does not help with comet applications. In a websocket world, there is even more need for frameworks like cometd.

    Simple Chat

    A chat is the “helloworld” application of web-2.0, and a simple websocket chat room is included with jetty-7, which now supports websockets. The source of the simple chat can be seen in svn for the client side and server side. The key part of the client side is to establish a WebSocket connection:

        join: function(name) {
          this._username=name;
          var location = document.location.toString().replace('http:','ws:');
          this._ws=new WebSocket(location);
          this._ws.onopen=this._onopen;
          this._ws.onmessage=this._onmessage;
          this._ws.onclose=this._onclose;
        },

    It is then possible for the client to send a chat message to the server:

        _send: function(user,message){
          user=user.replace(':','_');
          if (this._ws)
            this._ws.send(user+':'+message);
        },

    and to receive a chat message from the server and to display it:

        _onmessage: function(m) {
          if (m.data){
            var c=m.data.indexOf(':');
            var from=m.data.substring(0,c).replace('<','&lt;').replace('>','&gt;');
            var text=m.data.substring(c+1).replace('<','&lt;').replace('>','&gt;');
            var chat=$('chat');
            var spanFrom = document.createElement('span');
            spanFrom.className='from';
            spanFrom.innerHTML=from+': ';
            var spanText = document.createElement('span');
            spanText.className='text';
            spanText.innerHTML=text;
            var lineBreak = document.createElement('br');
            chat.appendChild(spanFrom);
            chat.appendChild(spanText);
            chat.appendChild(lineBreak);
            chat.scrollTop = chat.scrollHeight - chat.clientHeight;
          }
        },

    For the server side, we simply accept incoming connections as members:

        public void onConnect(Outbound outbound)
        {
            _outbound=outbound;
            _members.add(this);
        }

    and then for all messages received, we send them to all members:

        public void onMessage(byte frame, String data)
        {
            for (ChatWebSocket member : _members)
            {
                try
                {
                    member._outbound.sendMessage(frame,data);
                }
                catch(IOException e)
                {
                    Log.warn(e);
                }
            }
        }

    So we are done, right? We have a working chat room – let’s deploy it and we’ll be the next Google gchat!! Unfortunately reality is not that simple, and this chat room is a long way short of the kind of functionality that we expect from a chat room – even a simple one.

    Not So Simple Chat

    On Close?

    With a chat room, the standard use-case is that once you establish your presence in the room, it remains until you explicitly leave the room. In the context of webchat, that means that you can send and receive chat messages until you close the browser or navigate away from the page. Unfortunately the simple chat example does not implement this semantic, because the websocket protocol allows for an idle timeout of the connection. So if nothing is said in the chat room for a short while, then the websocket connection will be closed, either by the client, the server or even an intermediary. The application will be notified of this event by the onClose method being called.
    So how should the chat room handle onClose?  The obvious thing to do is for the client to simply call join again and open a new connection back to the server:

        _onclose: function() {
          this._ws=null;
          this.join(this._username);
        }

    This indeed maintains the user’s presence in the chat room, but is far from an ideal solution since every few idle minutes the user will leave the room and rejoin. For the short period between connections, they will miss any messages sent and will not be able to send any chat themselves.

    Keep Alives

    In order to maintain presence, the chat application can send keep-alive messages on the websocket to prevent it being closed due to an idle timeout. However, the application has no idea at all what the idle timeouts are, so it will have to pick some arbitrary, frequent period (e.g. 30s) to send keep-alives and hope that this is less than any idle timeout on the path (more or less as long-polling does now). Ideally a future version of websocket will support timeout discovery, so that it can either tell the application the period for keep-alive messages, or even send the keep-alives on behalf of the application.
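The keep-alive policy above can be sketched in client-side JavaScript. This is a hypothetical helper, not part of the chat example: the 30s period is an arbitrary guess at a value shorter than any idle timeout on the path, and the 'keep-alive:' message is an assumed in-band convention that the server would ignore.

```javascript
// Arbitrary period, hopefully shorter than any idle timeout on the path.
const KEEP_ALIVE_MS = 30000;

// Decide whether a keep-alive should be sent, given when the last frame
// was sent on the connection.
function needsKeepAlive(lastSentMs, nowMs, period = KEEP_ALIVE_MS) {
  return nowMs - lastSentMs >= period;
}

// Wrap a send function so that real traffic resets the timer and idle
// periods trigger a keep-alive message. tick() would be driven by a
// setInterval in a real page; the clock is injectable for clarity.
function makeKeepAliveSender(rawSend, now = Date.now) {
  let lastSent = now();
  return {
    send(msg) { lastSent = now(); rawSend(msg); },
    tick() {
      if (needsKeepAlive(lastSent, now())) {
        lastSent = now();
        rawSend('keep-alive:');   // hypothetical in-band keep-alive message
      }
    },
  };
}
```

Note that real chat traffic resets the timer, so keep-alives are only sent when the room is actually quiet.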
    Unfortunately keep-alives don’t avoid the need for onClose to initiate new websockets, because the internet is not a perfect place, and especially with wifi and mobile clients, sometimes connections just drop. It is a standard part of HTTP that if a connection closes while being used, GET requests are retried on new connections, so users are mostly insulated from transient connection failures. A websocket chat room needs to work with the same assumption, and even with keep-alives it needs to be prepared to reopen a connection when onClose is called.

    Queues

    With keep-alives, the websocket chat connection should mostly be a long-lived entity, with only the occasional reconnect due to transient network problems or server restarts. Occasional loss of presence might not be seen as a problem, unless you’re the dude that just typed a long chat message on the tiny keyboard of your vodafone360 app, or instead of chat you are playing on chess.com and you don’t want to abandon a game due to transient network issues. So for any reasonable level of quality of service, the application is going to need to “pave over” any small gaps in connectivity by providing some kind of message queue in both client and server. If a message is sent during a period when there is no websocket connection, it needs to be queued until the new connection is established.
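A minimal sketch of such a client-side queue (the names and structure are illustrative, not from the chat example): messages sent while there is no connection are buffered, and flushed once the replacement connection opens.

```javascript
// Sketch of a send queue that pave-overs gaps between websocket connections.
function makeQueuedSender() {
  let ws = null;          // the live connection, or null between connections
  const pending = [];     // messages awaiting a connection
  return {
    onOpen(newWs) {       // call from the websocket onopen handler
      ws = newWs;
      while (pending.length) ws.send(pending.shift());
    },
    onClose() { ws = null; },
    send(msg) {
      if (ws) ws.send(msg);
      else pending.push(msg);   // queue until a new connection is established
    },
    queued() { return pending.length; },
  };
}
```

The server side would need the symmetric structure, holding messages destined for a member whose connection has momentarily dropped.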

    Timeouts

    Unfortunately, some failures are not transient, and sometimes a new connection will not be established. We can’t allow queues to grow forever, nor pretend that a user is present long after their connection is gone. Thus both ends of the chat application will also need timeouts, and a user will not be seen to have left the chat room until they have had no connection for the period of the timeout, or until an explicit leaving message is received. Ideally a future version of websocket will support an orderly close message, so the application can distinguish between a network failure (and keep the user’s presence for a time) and an orderly close as the user leaves the page (and remove the user’s presence).

    Message Retries

    Even with message queues, there is a race condition that makes it difficult to completely close the gaps between connections. If the onClose method is called very soon after a message is sent, then the application has no way to know if that close happened before or after the message was delivered. If quality of service is important, then the application currently has no option but to have some kind of per-message or periodic acknowledgement of message delivery. Ideally a future version of websocket will support orderly close, so that delivery can be known for non-failed connections and the complication of acknowledgements can be avoided unless the highest quality of service is required.
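One possible shape for such per-message acknowledgements, sketched in JavaScript; the "id|payload" wire format and the helper names are invented for illustration, not taken from the chat example:

```javascript
// Sketch of per-message acknowledgement. Each message is kept in an
// unacknowledged map until the peer confirms it; after a reconnect,
// anything unacknowledged is resent, closing the race described above
// (at the cost of possible duplicates, which the peer must tolerate).
function makeReliableSender(rawSend) {
  let nextId = 1;
  const unacked = new Map();   // id -> payload awaiting acknowledgement
  return {
    send(payload) {
      const id = nextId++;
      unacked.set(id, payload);
      rawSend(id + '|' + payload);
    },
    onAck(id) { unacked.delete(id); },     // peer confirmed delivery
    resend() {                             // call after reopening a connection
      for (const [id, p] of unacked) rawSend(id + '|' + p);
    },
    pending() { return unacked.size; },
  };
}
```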

    Backoff

    With onClose handling, keep-alives, message queues, timeouts and retries, we finally have a chat room that can maintain a user’s presence while they remain on the web page. But unfortunately the chat room is still not complete, because it needs to handle errors and non-transient failures. Some of the circumstances that need to be avoided include:

    • If the chat server is shut down, the client application is notified of this simply by a call to onClose rather than an onOpen call. In this case, onClose should not just reopen the connection, as a 100% CPU busy loop will result. Instead the chat application has to infer that there was a connection problem and at least pause a short while before trying again – potentially with a retry backoff algorithm to reduce retries over time. Ideally a future version of websocket will allow more access to connection errors, as the handling of no-route-to-host may be entirely different to the handling of a 401 unauthorized response from the server.
    • If the user types a large chat message, then the websocket frame sent may exceed some resource level on the client, server or intermediary. Currently the websocket response to such resource issues is simply to close the connection. Unfortunately for the chat application this may look like a transient network failure (coming after a successful onOpen call), so it may just reopen the connection and naively retry sending the message, which will again exceed the max message size – lather, rinse, repeat! Again it is important that any automatic retries performed by the application are limited by a backoff timeout and/or max retries. Ideally a future version of websocket will be able to send an error status as something distinct from a network failure or idle timeout, so the application will know not to retry errors.
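The retry backoff mentioned in both points might be sketched as follows; all three constants are illustrative choices for a chat room, not values from Jetty or cometd:

```javascript
// Sketch of a capped exponential backoff for reconnect attempts.
const BASE_DELAY_MS = 1000;   // first retry after 1s
const MAX_DELAY_MS = 60000;   // never wait more than a minute
const MAX_RETRIES = 10;       // then give up and tell the user

// Delay before the n-th consecutive failed reconnect (n starts at 0),
// or -1 once the retry budget is exhausted. A successful onOpen would
// reset the attempt counter.
function reconnectDelay(attempt) {
  if (attempt >= MAX_RETRIES) return -1;
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}
```

Doubling the delay on each failure keeps a dead server from being hammered, while the cap keeps a recovering server from being missed for too long.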

    Does it have to be so hard?

    The above scenario is not the only way that a robust chat room could be developed. With some compromises on quality of service and some good user interface design, it would certainly be possible to build a chat room with less complex usage of a WebSocket. However, the design decisions represented by the above scenario are not unreasonable even for chat, and are certainly applicable to applications needing a better QoS than most chat rooms.
    What this blog illustrates is that there is no silver bullet, and that WebSocket will not solve many of the complexities that need to be addressed when developing robust comet web applications. Hopefully features such as keep-alives, timeout negotiation, orderly close and error notification can be built into a future version of websocket, but it is not the role of websocket to provide the more advanced handling of queues, timeouts, reconnections, retries and backoffs. If you wish to have a high quality of service, then either your application or the framework it uses will need to deal with these features.

    Cometd with Websocket

    Cometd version 2 will soon be released with support for websocket as an alternative transport to the currently supported JSON long polling and JSONP callback polling. Cometd supports all the  features discussed in this blog and makes them available transparently to browsers with or without websocket support.  We are hopeful that websocket usage will be able to give us even better throughput and latency for cometd than the already impressive results achieved with long polling.


  • Webinar on reliable messaging with Jetty, Cometd and ActiveMQ

    Jan Bartel (Intalio) and Daan Van Santeen (Progress FUSE) will be giving a series of live webinars on how Jetty, Cometd and ActiveMQ can be used to provide a reliable messaging platform to the browser.

    • What Jetty is and how its CometD Bayeux implementation implements messaging to the browser
    • What Apache ActiveMQ is and how its JMS implementation allows for reliable messaging
    • What the JMS and Bayeux messaging protocols are and how they complement each other
    • How these technologies can be combined to create a flexible and reliable platform to implement messaging from a back-end system to a browser

    The webinar includes a simple demonstration of fault tolerance failover from the browser.


    Click to register:


  • Websockets – IETF v WHATWG?

    There is a jurisdictional issue brewing over the future of internet standards – I know because I’m stirring the pot. The dispute is between the WHATWG and the IETF regarding the specification process for the websocket protocol (which I have some concerns about, but which is nonetheless supported by Jetty).

    The IETF is the body that has been responsible for developing and/or standardizing the vast majority of protocols which run the internet: HTTP, FTP, SMTP, etc. It has an open, collaborative process based on working code and rough consensus, and is overseen by the Internet Society, a non-profit organization with membership open to all.

    The WHATWG was formed in response to concerns about the W3C’s evolution of HTML and has been instrumental in developing the HTML5 standards. It is essentially a browser vendor consortium that is governed by an invitation-only committee and led by a Google employee. While its process is conducted openly and all are invited to participate, only the appointed editor has any power in the actual decision making process. The editor is appointed by the browser vendors.

    The majority of the WHATWG efforts have been about HTML5, and most welcome the advances they are driving in the browsers. However, the websocket API and protocol have also come out of the HTML5 work and specify a new protocol that will run over ports 80/443, that will start off looking kind of like HTTP, but is expressly not HTTP.

    Making the internet work well by producing quality standards is exactly the mission statement of the IETF. So a new protocol running over port 80 is definitely something that falls within the scope of the IETF mission. The WHATWG were invited to submit their protocol as an IETF draft document, which they did, and the IETF after due process has formed the hybi working group to “take on prime responsibility for the specification of the WebSockets protocol”. This appears to have shocked the WHATWG, and they are saying that they do not wish to relinquish editorial control of the protocol. It appears they were hoping for a rubber stamp from the IETF.

    Meanwhile, Google’s Chrome browser has started shipping with the websocket protocol enabled and it is expected that other browser vendors in the WHATWG consortium will soon follow.  The argument has been made that it’s “already shipping”, so it’s too late to make any significant changes to the protocol.

    The problem is that the protocol has been developed by only a fragment of the internet industry, and essentially by a single company within that fragment. There has been no consensus sought or obtained from the wider internet community – i.e. servers, routers, bridges, proxies, firewalls, caches, load balancers, aggregators, offloaders, ISPs, filters, corporate security policies, traffic monitoring, billing, accounting, shaping, application frameworks, etc. These communities and vendors are waking up to a world where the traffic they expected over port 80/443 isn’t what it used to be. Their products and services will be broken, bypassed or at best co-opted for unintended usage. They had no real voice in this change. Many would not have even realized that the HTML5 effort was going to substantially change the wire protocol.

    It is easy to present this state of affairs as a takeover of port 80 by Google so that they can get Wave to work better – that Google expects the rest of the industry to scramble to make the changes necessary to allow websockets to tunnel through the infrastructure, unhindered by any concerns other than connectivity to Google. I know that this characterization of the situation will be taken as personally insulting by the individuals involved, who I’m sure are acting in good faith and not as part of some conspiracy. However, the power of group-think is significant, and individuals are greatly affected by the environment that they operate in. Conflicts of interest are avoided not by people’s best intentions, but by not creating processes that are inherently conflicted.

    I don’t mean to be too Machiavellian about this, but if the IETF does not assert its role as the primary internet standards body, then the outcome will essentially be that a Google-led consortium has taken over port 80. Note that Google are also doing some great research on an HTTP replacement protocol called SPDY, which is showing some excellent promise. SPDY might be the way of the future, but do we really want it to arrive by having Google simply start shipping it in Chrome? If we let port 80 be taken by websockets without consensus, then it could happen with HTTP as well (mwah ha ha ha)!

    The websocket protocol as specified by the WHATWG might indeed be wonderful, but unless we follow due process, we will not really know that it is. The IETF has a truly open process based on rough consensus in which all are welcome to participate. They have a proven track record and have overseen the standards that have withstood the unprecedented growth of the internet. The IETF are the natural body to oversee standardization of internet protocols, and there is no evidence that this task would be better handled by a closed industry consortium led by Google.

    My suggestion for how to break this impasse is for the WHATWG to continue to be the editor of the current specification and to push forward with the deployment of 1.0, which essentially ignores intermediaries and proxies anyway. In parallel, the IETF should continue with their working group to develop the 1.1 specification based on 1.0, but with an all-of-industry rough consensus.


  • Jetty WebSocket Server

    Jetty-7.0.1 has been extended with a WebSocket server implementation based on the same scalable asynchronous IO infrastructure of Jetty and integrated into the Jetty Servlet container.
    WebSocket came out of work on HTML5 by the WHAT Working Group to specify a mechanism to allow two-way communications to browsers. It is currently being standardized at the W3C for the WebSocket API and by the IETF for the WebSocket protocol, and is soon to be supported by releases of Firefox and Chromium. While I have significant concerns about the websocket protocol, it is important that server concerns are considered in the standardization process. Thus, to follow the IETF model of “rough consensus and working code”, it is important that Jetty has a working implementation of the protocol.
    The key feature of the Jetty WebSocket implementation is that it is not another separate server. Instead it is fully integrated into the Jetty HTTP server and servlet container, so a Servlet or Handler can process and accept a request to upgrade an HTTP connection to a WebSocket connection. Application components created by standard web applications can then send and receive datagrams over the WebSocket with non-blocking sends and receives.
    Below is an example of a simple “Chat” application written using the Jetty WebSocketServlet, which can handle normal doGet style requests, but will call doWebSocketConnect if an upgrade request is received:

    public class WebSocketChatServlet extends WebSocketServlet
    {
        private final Set<ChatWebSocket> _members = new CopyOnWriteArraySet<ChatWebSocket>();

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException
        {
            getServletContext().getNamedDispatcher("default").forward(request,response);
        }

        protected WebSocket doWebSocketConnect(HttpServletRequest request, String protocol)
        {
            return new ChatWebSocket();
        }

        class ChatWebSocket implements WebSocket
        {
            Outbound _outbound;

            public void onConnect(Outbound outbound)
            {
                _outbound=outbound;
                _members.add(this);
            }

            public void onMessage(byte frame, byte[] data, int offset, int length)
            {}

            public void onMessage(byte frame, String data)
            {
                for (ChatWebSocket member : _members)
                {
                    try
                    {
                        member._outbound.sendMessage(frame,data);
                    }
                    catch(IOException e)
                    {
                        Log.warn(e);
                    }
                }
            }

            public void onDisconnect()
            {
                _members.remove(this);
            }
        }
    }



    The client side of this chatroom is implemented by this script, whose key sections include:

    join: function(name) {
      this._username=name;
      var location = document.location.toString().replace('http:','ws:');
      this._ws=new WebSocket(location);
      this._ws.onopen=this._onopen;
      this._ws.onmessage=this._onmessage;
      this._ws.onclose=this._onclose;
    },

    _onopen: function(){
      $('join').className='hidden';
      $('joined').className='';
      $('phrase').focus();
      room._send(room._username,'has joined!');
    },

    _send: function(user,message){
      user=user.replace(':','_');
      if (this._ws)
        this._ws.send(user+':'+message);
    },

    _onmessage: function(m) {
      if (m.data){
        var c=m.data.indexOf(':');
        var from=m.data.substring(0,c).replace('<','&lt;').replace('>','&gt;');
        var text=m.data.substring(c+1).replace('<','&lt;').replace('>','&gt;');
        ...
      }
    }



    This example is included in the test webapp shipped with Jetty-7.0.1,  and has been tested with the websocket client in dev releases of Google Chromium 4.0.
    The intention for the jetty-websocket server is to be the focus of trial and experimentation with the protocol, its implementation, and frameworks like cometd that might use it. All should be considered alpha and highly likely to change with feedback received. Hopefully using the protocol with real servers, clients and applications will result in the experience required to feed back to the IETF the requirements that will drive the improvement of the protocol.
    One example of an area that needs to be improved is the discovery and/or negotiation of idle timeouts for WebSocket connections. Currently the jetty server inherits the idle timeout of the HTTP connection, which is 30s by default. This means that the demo chat application drops its connection after 30 seconds of no conversation. This is not exactly what you want in a chat room, but because there is no way to discover or configure the idle timeouts of other parties to a websocket connection (including proxies), the application has no choice but to handle the idle close event itself.

  • How to improve Websocket

    Background

    The W3C has developed the Websocket API proposal for HTML5, which enables web pages to perform two-way communication with a remote host. There is also a proposed IETF draft websocket protocol to transport the websocket messages.
     
    I believe that there are significant deficiencies in the proposed websocket protocol and this paper looks at how they can be rectified.

    Specification Style

    The websocket protocol document has adopted an algorithmic specification style: rather than describing the structure of websocket data framing, the document describes an algorithm that parses websocket data framing. This raises esoteric questions like: is an implementation that parses websocket data framing using a different algorithm a compliant implementation or not? But a more practical problem with this style of specification is that the spec is impenetrable, as it is full of text like:

    Let /b_v/ be the integer corresponding to the low 7 bits of /b/ (the value you would get by _and_ing /b/ with 0x7F). Multiply /length/ by 128, add /b_v/ to that result, and store the final result in /length/. If the high-order bit of /b/ is set (i.e. if /b/ _and_ed with 0x80 returns 0x80), then return to the step above labeled _length_.

    I challenge the reader to confirm that the  client side framing and the server side framing are symmetric and implement the same data framing!
    Rather than such verbose means, IETF specifications typically use the precise language of Augmented Backus-Naur Form (ABNF, RFC 5234) to formally describe protocols in a way that is not open to confusion or mis-implementation. To illustrate the clarity possible, I’ve translated section 4.2 into ABNF:

    ws0-frame           = sentinel-frame 
                        / length-frame

    sentinel-frame      = %x00-7F      ; frame type
                          *( %x00-FE ) ; utf-8 data
                          %xFF         ; the sentinel

    length-frame        = %x80-FF      ; frame type
                          frame-length 
                          octet-data   ; binary data

    frame-length        = unlimited-integer
    unlimited-integer   = *( %x80-FF ) %x00-7F 
                        ; concatenate 7 low order bits from each octet
                        ; to form a binary integer

    octet-data          = *( %x00-FF )
                        ; the number of octets is exactly the length determined
                        ; by the frame-length

     
    This is a precise specification and requires only an interpretation of the frame length field to provide an implementation independent definition of websocket data framing.
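To make the framing concrete, here is a sketch of that frame-length rule ("unlimited-integer": 7 bits per octet, high bit set on every octet except the last) in JavaScript. These are assumed helper names, not code from the websocket draft or from Jetty:

```javascript
// Encode a non-negative integer as a websocket frame-length:
// 7 payload bits per octet, high bit set on all but the final octet.
function encodeLength(n) {
  const out = [];
  do { out.unshift(n & 0x7f); n = Math.floor(n / 128); } while (n > 0);
  for (let i = 0; i < out.length - 1; i++) out[i] |= 0x80;
  return out;
}

// The inverse, matching the algorithmic prose quoted earlier: multiply the
// running length by 128 and add the low 7 bits of each octet, stopping at
// the first octet whose high-order bit is clear.
function decodeLength(octets) {
  let length = 0, i = 0;
  for (;; i++) {
    length = length * 128 + (octets[i] & 0x7f);
    if ((octets[i] & 0x80) === 0) break;
  }
  return { length, consumed: i + 1 };
}
```

For example, 300 encodes as the two octets 0x82 0x2C, because 300 = 2×128 + 44.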

    Simplified Framing

    Now that we can clearly see and understand websocket data framing, we can see that it really has two types of data framing: one for UTF-8 content and one for binary content. As the binary framing is perfectly able to carry the utf-8 data, then the protocol can be greatly simplified by removing the sentinel framing as follows:

    ws1-frame           = frame-length
                          frame-type
                          octet-data

    frame-type          = %x00    ; utf8 frame
                        / %x01-FF ; undefined binary frame

    frame-length        = unlimited-integer
    unlimited-integer   = *( %x80-FF ) %x00-7F
                        ; concatenate 7 low order bits from each octet
                        ; to form a binary integer

    octet-data          = *( %x00-FF )
                        ; the number of octets is exactly the length determined
                        ; by the frame-length

    Learn from HTTP

    It is always wise to consider the past when looking to the future. HTTP/1.1 introduced pipelining as a mechanism to reduce the latency and improve the throughput when using a TCP/IP connection. The mechanism allows multiple HTTP messages to be sent on a connection, without waiting for any response or state change resulting from one message to the next. Unfortunately HTTP pipelines have several shortcomings that have prevented their widespread adoption, and more unfortunately the websocket protocol proposal has made similar mistakes:

    Orderly Close

    An HTTP/1.1 connection may be closed by either end or by an intermediary as part of normal operation, leaving a degree of uncertainty about the delivery status of messages in the pipeline. Messages can be resent on another connection only if they are known to be idempotent (e.g. GET/HEAD methods). Similarly, a websocket may be closed by either end or an intermediary as part of normal operation, leaving a degree of uncertainty about the delivery status of messages that have been sent. But websocket has no knowledge of any message type, so it is unable to know that any messages are idempotent, and thus it is unable to retransmit any messages on a new connection. Worse still, websocket has no concept of an idle connection, and thus an implementation will either keep connections open forever (DOS risk) or risk closing an in-use connection. Note also that the burden of handling disconnection and message retries falls to the application with websocket.
    So if a connection closes, a websocket application does not know which sent messages have been received, short of acknowledging every message (which is a significant overhead and thus not practicable as a solution for all). However, if websocket is improved with a mechanism for orderly close of connections, then the delivery status of messages can be well known for normal operation, and will only be uncertain if there is a real failure of a network or node. Orderly close requires a connection life-cycle to be defined and maintained by exchanging control messages between the end points:

    ws2-frame          = frame-length
                         frame-type
                         octet-data

    frame-type         = %x00    ; utf8 frame
                       / %x01    ; control frame
                       / %x02-FF ; undefined binary frame

    frame-length       = unlimited-integer
    unlimited-integer  = *( %x80-FF ) %x00-7F
                       ; concatenate 7 low order bits from each octet
                       ; to form a binary integer

    octet-data         = *( %x00-FF )
                       ; the number of octets is exactly the length
                       ; determined by the frame-length

    This improvement creates a control frame type that will allow messages about the lifecycle of a connection to be exchanged. To gloss over the detail, the control messages will need semantics of closing and closed, so an end point or intermediary can know if it is safe to send a message, and so that once a connection has been orderly closed, it is safe to assume that all messages sent previously have been delivered.
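A sketch of the closing/closed exchange glossed over above; the state names and control-message strings are invented for illustration:

```javascript
// Sketch of an orderly-close state machine for one endpoint.
// An endpoint that wants to close sends "closing", stops sending
// application messages, and once "closed" is received may safely assume
// all previously sent messages were delivered.
function makeCloser(sendControl) {
  let state = 'open';
  return {
    state: () => state,
    close() {
      if (state === 'open') { state = 'closing'; sendControl('closing'); }
    },
    onControl(msg) {
      if (msg === 'closing') { sendControl('closed'); state = 'closed'; }
      else if (msg === 'closed') state = 'closed';
    },
    canSend: () => state === 'open',   // no application messages after close starts
  };
}
```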

    Message Fragmentation

    Another issue with HTTP/1.1 pipelining is that the time taken to transmit/receive/process one message in the pipeline can unreasonably delay the handling of subsequent messages.  While websocket is not hampered in this regard by request response semantics, it still suffers from the issue that the sending of a large websocket message may unreasonably delay the transmission of other messages. 
    A common protocol technique to deal with this issue is to implement message fragmentation, where a single message is transmitted in several frames and the frames of unrelated messages can be interleaved on a single connection. Either a message ID or a channel (== virtual connection) ID is needed to determine which fragments are part of the same message.   The following improvement adds fragmentation and a channel ID to websocket:

    ws3-message       = 1*( ws3-frame )
    ws3-frame         = frame-length
                        message-length
                        channel-id
                        frame-type
                        octet-data

    frame-type        = %x00    ; control frame
                      / %x01    ; utf8 frame
                      / %x02-FF ; undefined binary frame

    frame-length      = %x00              ; last frame of message
                      / unlimited-integer ; frame length

    message-length    = %x00              ; unknown message length
                      / unlimited-integer ; known message length

    channel-id        = unlimited-integer
    unlimited-integer = *( %x80-FF ) %x00-7F
                      ; concatenate 7 low order bits from each octet
                      ; to form a binary integer

    octet-data        = *( %x00-FF )
                      ; the number of octets is exactly the length determined
                      ; by the frame-length

    A message is terminated when all octets have been sent for a known message length, or when a zero length frame is sent for an unknown message length. Related messages are sent on the same channel id and are strictly ordered. The creation and orderly close of channel-ids can be coordinated by control frames sent on the channel. The implementations of the protocol end points will be responsible for fragmenting and interleaving messages. A simple endpoint may choose not to fragment messages sent, but should be capable of assembling fragmented messages received.
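The reassembly rules just described might be sketched like this. The names are hypothetical, and for simplicity a boolean stands in for the two termination conditions (a zero-length terminating frame, or all octets of a known message length having arrived):

```javascript
// Sketch of per-channel fragment reassembly: frames on the same channel-id
// are concatenated in order until the message terminates, at which point
// the completed message is delivered. Frames from unrelated messages may
// be interleaved because they carry different channel ids.
function makeReassembler(onMessage) {
  const partial = new Map();   // channel-id -> fragments received so far
  return function onFrame(channelId, payload, lastFrame) {
    const frags = partial.get(channelId) || [];
    if (payload.length) frags.push(payload);
    if (lastFrame) {
      partial.delete(channelId);
      onMessage(channelId, frags.join(''));   // deliver the whole message
    } else {
      partial.set(channelId, frags);
    }
  };
}
```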

    Multiplexing

    It is frequently desirable to aggregate (aka multiplex) message streams from multiple clients and/or components into a single stream of messages, so that resources can be shared and/or load from a single source can be policed as a single entity.  Luckily, the machinery needed for multiplexing over a transport protocol is exactly the machinery needed for message fragmentation and channels.  Thus the improvements already proposed can accommodate multiplexing.

    Flexibility and Extensibility

    Another issue with the websocket protocol is that it lacks flexibility and extensibility when it comes to different content encodings.  Currently UTF-8 data has been allocated a frame-type byte, so it can be easily transmitted.  However, this is done with a fixed mapping of an integer to a content encoding, so it cannot easily be extended to send alternate content encodings.  Examples of likely content encodings needed include:

    • compressed content using compress, gzip, or some future compression algorithm
    • UTF-16 may be more predictable and/or efficient if messages contain significant numbers of multi-byte characters

     
    These additional content encodings could be handled by allocating additional frame-type octets. However, such an approach would need coordination by a standards body for every new type, and could rapidly consume the available octet space once the product of content encodings, transport encodings, character sets and/or content types is calculated.
    Luckily there already exists a standard extensible system for describing content encodings, transport encodings, character sets and/or content types. The IETF standards for Mime Types are widely used by web protocols and have good mappings to existing software components that can encode, decode, compress, decompress, validate, sign and/or display an unlimited and growing family of media types.
    Mime types and associated encodings are typically represented by one or more name-value pairs of ISO-8859-1 strings (aka meta-data). It would be possible to extend websocket by replacing the fixed octet mapping of content encodings with a per-message set of mime-type name-value pairs.  However, to do so would be to repeat another mistake of HTTP, namely to have verbose, highly redundant meta-data transmitted with every message.
    A more efficient and equally flexible solution is to associate meta-data fields such as mime type with a channel rather than with a message, so that the meta data need only be sent once and will apply to all subsequent messages in a channel, or until it is replaced by updated meta data:

    ws5-message       = 1*(ws5-frame)
    ws5-frame         = frame-length
                        message-length
                        channel-id
                        frame-type
                        octet-data
    frame-type        = 0x00    ; control frame
                      / 0x01    ; meta-data name+value headers
                      / 0x02    ; data frame
                      / 0x03-FF ; undefined frame
    frame-length      = 0x00              ; last frame of message
                      / unlimited-integer ; frame-length
    message-length    = 0x00              ; unknown message length
                      / unlimited-integer ; known message length
    channel-id        = unlimited-integer
    unlimited-integer = *( %x80-FF ) %x00-7F
                        ; concatenate 7 low order bits from each octet
                        ; to form a binary integer
    octet-data        = *( %x00-FF )
                        ; the number of octets is exactly the length
                        ; determined by the frame-length

    An application using this transport would be free to send as little or as much meta data as appropriate.  If the content types and encodings can be assumed or known in advance, then only control and data frames need be sent.
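As an illustration of how little framing overhead this layout implies, the following sketch encodes a single-frame ws5 message. The class and constant names are hypothetical; the frame types are the ones listed in the grammar above:

```java
import java.io.ByteArrayOutputStream;

// Illustrative encoder for the ws5-frame layout above. A meta-data frame
// (type 0x01) carries name+value headers for a channel; data frames
// (type 0x02) then carry content interpreted under that meta-data.
public class Ws5FrameEncoder {
    public static final byte CONTROL   = 0x00;
    public static final byte META_DATA = 0x01;
    public static final byte DATA      = 0x02;

    // Encodes a single-frame message: frame-length, message-length,
    // channel-id, frame-type, octet-data.
    public static byte[] encodeFrame(long channelId, byte frameType, byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeUnlimited(out, data.length); // frame-length
        writeUnlimited(out, data.length); // message-length (known, one frame)
        writeUnlimited(out, channelId);   // channel-id
        out.write(frameType);             // frame-type
        out.write(data, 0, data.length);  // octet-data
        return out.toByteArray();
    }

    // unlimited-integer: 7 bits per octet, high bit set on all but the last
    static void writeUnlimited(ByteArrayOutputStream out, long v) {
        int shift = 0;
        while ((v >>> (shift + 7)) != 0)
            shift += 7;
        for (; shift > 0; shift -= 7)
            out.write((int) (0x80 | ((v >>> shift) & 0x7F)));
        out.write((int) (v & 0x7F));
    }
}
```

A channel's mime type could then be sent once as a META_DATA frame, eg `encodeFrame(1, Ws5FrameEncoder.META_DATA, "Content-Type: application/json".getBytes())`, followed by any number of DATA frames interpreted under that meta-data.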

    Other Websocket improvements

    Semantic Specification

    Another poor aspect of the websocket protocol proposal is its use of strict ordering and binary equivalence, rather than semantic equivalence, when handshaking to establish a websocket connection.  The specification expects implementations to send/receive exact sequences of bytes, for example:

    5. Send the following bytes to the remote side (the server):
    47 45 54 20
    Send the /resource name/ value, encoded as US-ASCII.
    Send the following bytes:
    20 48 54 54 50 2F 31 2E  31 0D 0A 55 70 67 72 61
    64 65 3A 20 57 65 62 53  6F 63 6B 65 74 0D 0A 43
    6F 6E 6E 65 63 74 69 6F  6E 3A 20 55 70 67 72 61
    64 65 0D 0A

     

    This binary sequence represents the exactly ordered and spaced HTTP request of:

    GET /resource name/ HTTP/1.1 CRLF
    Upgrade: WebSocket CRLF
    Connection: Upgrade CRLF

    A semantically equivalent HTTP request could have the headers ordered differently, an absolute URL for the resource, or additional headers for authentication, load balancing and/or cookies. There is no reason to reject semantically equivalent messages, because an HTTP request that happens to have the correct semantics is correct.  The insistence on strict binary ordering and equivalence will break many proxies, load balancers and server implementations that are used to more flexible interpretations of network specifications.
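A handshake check based on semantics rather than bytes might look like the following sketch, which parses the headers and accepts any order, any case, and any additional headers. This is illustrative only, not the algorithm from any specification:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a semantic (rather than byte-exact) check of an upgrade request:
// header names are case-insensitive and may appear in any order, possibly
// interleaved with additional headers such as cookies or authentication.
public class UpgradeCheck {
    public static boolean isWebSocketUpgrade(String request) {
        Map<String, String> headers = new HashMap<>();
        String[] lines = request.split("\r\n");
        for (int i = 1; i < lines.length; i++) { // skip the request line
            int colon = lines[i].indexOf(':');
            if (colon > 0)
                headers.put(lines[i].substring(0, colon).trim().toLowerCase(),
                            lines[i].substring(colon + 1).trim());
        }
        return "websocket".equalsIgnoreCase(headers.get("upgrade"))
            && "upgrade".equalsIgnoreCase(headers.get("connection"));
    }
}
```

Under this check, a request with a Cookie header between Connection and Upgrade is still accepted, as a semantically tolerant intermediary would expect.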

    HTTP transport

    This paper has highlighted several existing issues with HTTP/1.1, such as pipeline limitations and verbose redundant headers. The improved websocket protocol addresses the main limitations of HTTP while providing equivalent or superior capabilities to carry both the data and the meta-data needed for HTTP.  It is entirely possible that, with some additional minor improvements, improved websocket (or something similar) could transport HTTP messages and become the basis of HTTP/2.0.
     

    BWTP IETF draft

    As well as proposing incremental improvements to websocket, I have also proposed an entirely new protocol. The Bidirectional Web Transfer Protocol (BWTP) is an IETF draft protocol designed to be a transport for the websocket API as well as to be useful for other web clients.  BWTP and the improved websocket protocol are more or less semantically equivalent, and the main differences are mostly stylistic.
    Either approach significantly improves upon the current websocket proposal and provides a transport protocol that would truly be a step forward.
  • Urbanization in the noosphere – Intalio acquires Webtide

    In his Homesteading in the Noosphere essay, Eric S. Raymond likened the creation of open source projects to homesteading on a frontier, via a process of “mixing one’s labor with the unowned land, fencing it, and defending one’s title”, in contrast to the lawful transfer of title that occurs in settled areas.  While the lands of the web servers still border a few wildernesses (eg asynchronous), the surrounding urban sprawl and industrialization reveal that the frontier days are mostly gone with the wild west. So the Webtide team could have continued on in our homesteading ways, kind of like a retro Wild Bill Cody show, or we could grasp our future and settle down to some serious urban planning.  With the acquisition of Webtide by Intalio, we’re all urbanites now.

    To take a metaphor way, way too far, my utopian ideal of the natural evolution of a landscape formed by homesteading would be that of the Roman villa. Each homestead would grow into a villa that would attract the labour it needs to work the existing resources, but that would remain essentially rural and moderately independent, operating under the law and order created by a centralized government (apache, eclipse, etc.).  These villas would form the basis of an extensive yet flexible supply chain, so that enterprises could purchase produce from a variety of villas. An enterprise might use produce from the villas of Spring, Webtide, Hibernate and Sonatype to produce a product, while another may use Webtide, Tapestry and Atomikos.
    But that utopia was not to be (as all utopias are fated to be dystopias if actually enacted). Instead, some homesteads looked towards the smoking industrial cities on the horizon and saw the productivity that could result from aggregation, mass production and industrialization.  Instead of being the base of the supply chains, these homesteads became factories producing complete pre-fab homesteads for new settlers – the open source application server was born! These one-size-fits-all pre-fab enterprises come complete with everything you need including a shop front just waiting for your logo above the door and the shop window to be filled with your products.  To demonstrate the viability of the pre-fab enterprise, they built a pet shop (albeit one where no actual animals, customers or transactions were involved). The JBoss, Gluecode and Spring homesteads have all gone into application server production and I wish them well.
    Meanwhile, the Webtide/Jetty homestead has grown from a little house on the prairie to a self-sufficient village that supplies custom open source services to many of the new townships, as well as to established cities and to new homesteads created further out on the frontier. It is a prosperous and vital lifestyle in the village, and there was no great need to change.  But as any country boy knows… the bright lights of the big city can be very attractive.  Is the only choice of city life application server production?  Was the future of Jetty only going to be the creation of yet another pre-fab bungalow factory?  Luckily no!  In Intalio, we’ve found a growing township with a different economic model and an urban plan more to our liking.
    Rather than pre-fab infrastructure, Intalio town builds real BPM and CRM solutions running on public or private clouds, which they supply to small, large and huge enterprises.  Intalio is not an infrastructure vendor, but a solutions vendor. To build these solutions, Intalio needs a supply chain of components, and they have chosen to use open source components and to take ownership in much of the means of production of those components. However, the most important aspect of the Intalio urban plan is that the component providers must remain profitable and competitive producers in their own right.  Intalio needs our components to build their solutions, and their requirements will certainly be a driving force for us. But we will continue to supply and support our quality components to many enterprises and new homesteads. Intalio knows that only by continued exposure to market forces will their components continue to grow, adapt and make their solutions competitive. Webtide in Intalio is not a Detroit metal works making bumper bars for GM and only GM, and there is no protectionist agenda.
    Like Jetty’s move to the Eclipse Foundation, I believe that our choice of acquisition partner reflects our continued commitment to the development of quality component-based software that will be accessible and suitable for many and varied consumers. Our clients and users should only see an improvement to the resources and services that we offer. Our stewardship of the Jetty/cometd projects will continue to be inclusive, as it remains in our interests to see these projects used as widely and diversely as possible. Also, Intalio will provide us with a comfortable urban base from which we can plan our next expeditions into the noosphere wilderness.

  • Asynchronous BlazeDS Polling with Jetty 7 Continuations

    Jetty now has available an asynchronous implementation for BlazeDS which uses Jetty 7 portable continuations.
    BlazeDS is an open source server-side web messaging technology. It provides real-time data push to flex/flash clients using techniques such as polling and streaming, to provide a richer and more responsive experience. The asynchronous implementation works for HTTP polling, and was tested against BlazeDS 3.2.0.
    While the techniques BlazeDS use make clients more responsive, they also increase the load on the server by forcing it to hold a thread for idle clients. The advantage of using Jetty continuations with BlazeDS is that it lets your flash clients wait for a response without holding a thread the entire time, greatly increasing scalability. Jetty 7 style continuations are also portable; webapps coded to use the continuations work async on Jetty or any Servlet 3.0 container, and blocking on any Servlet 2.5 container. Greg explains the benefits of (Jetty 7) continuations better than I could.
    To use the asynchronous BlazeDS implementation with one of your applications, go through these quick steps:

    1. Drop jetty-blazeds.jar into your webapp’s classpath
    2. Enable continuations, if you’re using a non-Jetty-7 servlet container:
      • Make sure jetty-continuation-7.jar is on your classpath. Download the latest Jetty distribution from http://www.eclipse.org/jetty/downloads.php and drop lib/jetty-continuation-7*.jar into your webapp’s classpath.
      • Place org.eclipse.jetty.continuation.ContinuationFilter in front of your MessageBrokerServlet. The ContinuationFilter makes it possible for other containers, and even servlet 2.5 containers, to use jetty-7-style portable continuations, which we use as a portability layer on top of asynchronous servlets.

        <!-- Continuation Filter, to enable jetty-7 continuations -->
        <filter>
          <filter-name>ContinuationFilter</filter-name>
          <filter-class>org.eclipse.jetty.continuation.ContinuationFilter</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>ContinuationFilter</filter-name>
          <url-pattern>/messagebroker/*</url-pattern>
        </filter-mapping>

         

    3. Modify your services-config.xml to use Jetty’s AsyncAMFEndpoint instead of AMFEndpoint. AsyncAMFEndpoint uses the same options as AMFEndpoint, e.g.,

      <channel-definition id="my-async-amf" class="mx.messaging.channels.AMFChannel">
        <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfasync"
                  class="org.mortbay.jetty.asyncblazeds.AsyncAMFEndpoint"/>
        <properties>
          <polling-enabled>true</polling-enabled>
          <polling-interval-seconds>0</polling-interval-seconds>
          <max-waiting-poll-requests>10</max-waiting-poll-requests>
          <wait-interval-millis>30000</wait-interval-millis>
          <client-wait-interval-millis>250</client-wait-interval-millis>
        </properties>
      </channel-definition>

       

    Source code is available in svn, and you can check it out and build it:

    $ svn co http://svn.codehaus.org/jetty/jetty/trunk/jetty-blazeds/
    $ cd jetty-blazeds
    $ mvn install

  • Cometd Features and Extensions

    The cometd project is nearing a 1.0 release, and thus we are making a bit of a push to improve the project documentation. As part of this effort, we have realized that there are many cool features and extensions of cometd that have been under-publicized.  So this blog is an attempt to give a whirlwind tour of cometd features and extensions.

    Clients and Servers

    The cometd project provides many implementations of the Bayeux protocol.  The javascript and java implementations are the most advanced, but there are also perl and python implementations under development within the project. There are also other implementations of Bayeux available outside the cometd project for groovy, flex, .net and atmosphere.

    Javascript Client

    There is now a common cometd-javascript client implementation used as the basis of the cometd-dojo and cometd-jquery implementations (dojox in 1.3.1 still contains a dojo-specific client, but this will eventually be replaced with the common code base).  This common code base should make it easy to create implementations for other frameworks (eg ext.js or prototype).
    For simplicity, our documentation gets around the details of which javascript implementation you are using by assuming that your code is written in the context of a:

    // Dojo style
    var cometd = dojox.cometd;

    or

    // jQuery style
    var cometd = $.cometd;

    Java Server

    The cometd-java server was originally written as part of the Jetty-6 servlet container and included support for asynchronous scaling.  While still based on Jetty utility components, the cometd-java server is now portable: it will run on most servlet containers and will use the asynchronous features of Jetty or any servlet 3.0 container.

    Java Client

    The cometd-java client is based on the Jetty asynchronous HTTP client, thus it is an excellent basis for developing scalable load generators for testing your cometd application.  It can also be used in rich java UIs that wish to use cometd to communicate over the internet to a server behind firewalls and proxies.

    Basic Operation

    Publish/Subscribe Messaging

    The core operation of cometd is as a publish/subscribe messaging framework. A message is published to a channel with a URI-like name (eg /chat/room/demo) and cometd will arrange to deliver that message to all subscribers for that channel, whether local in the server, remote in a client, or in a client of a clustered server.  The subscription may be for the channel itself (/chat/room/demo), a simple wildcard (/chat/room/*) or a deep wildcard (/chat/**).
    Subscription in javascript needs to provide a callback function to handle the messages:

    // Some initialization code
    var subscription1 = cometd.addListener('/meta/connect', function() { ... });
    var subscription2 = cometd.subscribe('/foo/bar/', function() { ... });

    // Some de-initialization code
    cometd.unsubscribe(subscription2);
    cometd.removeListener(subscription1);

    Publishing a message in javascript is simply a matter of passing the channel name and the message itself:

    cometd.publish('/mychannel', { wibble: { foo: 'bar' }, wobble: 2 });

    Similar APIs for publish/subscribe are available via a semi-standard cometd java API, and several java implementations are now using this.
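The wildcard subscription rules described above are simple to state precisely. A sketch of the channel matching logic (illustrative only, not cometd's actual implementation):

```java
// Illustrative Bayeux channel matcher: a subscription may be the channel
// itself, a simple wildcard /foo/* (matching exactly one more segment) or
// a deep wildcard /foo/** (matching any number of further segments).
public class ChannelMatcher {
    public static boolean matches(String subscription, String channel) {
        if (subscription.endsWith("/**"))
            // deep wildcard: any channel below the prefix matches
            return channel.startsWith(subscription.substring(0, subscription.length() - 2));
        if (subscription.endsWith("/*")) {
            // simple wildcard: one more segment, with no further '/'
            String prefix = subscription.substring(0, subscription.length() - 1);
            return channel.startsWith(prefix)
                && channel.indexOf('/', prefix.length()) < 0;
        }
        return subscription.equals(channel);
    }
}
```

So /chat/room/* matches /chat/room/demo but not /chat/room/demo/1, while /chat/** matches both.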

    Service Channels

    With publish/subscribe, the basic feature set for a chat room is available. But non-trivial applications cannot be implemented with all communication broadcast on publicly accessible channels.  Thus cometd has a convention that any channel in the /meta/** or /service/** name space is a non-broadcast service channel (meta channels are used by the protocol itself and service channels are available to the application).  This means that a message published to a service channel will only be delivered to server-side clients, listeners and extensions. A message to a service channel will never be distributed to a remote client unless an application explicitly delivers or publishes a message to a particular client.
    This allows a client to publish a message to a service channel and know that it will only be received by the server.

    Private Message Delivery

    A cometd application often needs to deliver a message to a specific client.  Thus as an alternative to publishing to a channel, a java server side application can deliver a message to a specific user:

    Client client = bayeux.getClient(someClientId);
    client.deliver(fromClient, "/some/channel", aMsg, msgId);

    Note that a private message delivery still identifies a channel. This is not to broadcast on that channel, but to identify the message handler within the client. This channel may be a service channel, so the client will know it is a private message, or it can be an ordinary channel, in which case the client cannot tell if the message was published or delivered to it.
    Such private deliveries are often used to tell a newly subscribed client the latest state message. For example, consider a client that has subscribed to /portfolio/stock/GOOG.  That client needs to know the current price of the stock and should not have to wait until the price changes. Thus the portfolio application can detect the subscription server side and deliver a private message to the subscriber to tell them the latest price.

    Lazy Messages

    One of the key features of comet is delivering messages to clients from the server with low latency,  but not all messages need low latency. Consider a system status message, sent to all users, telling them something non urgent (eg maintenance scheduled for later in the day).  There is no need for that message to be sent to every single user on the system with minimal latency and there is a significant cost to try to do so.  If you have 10,000 users, then waking up 10k long polls will take a few seconds of server capacity which might be better used for urgent application events.
    Thus the cometd-java server has the concept of lazy messages.  A message may be flagged as lazy by publishing it to a channel that is flagged as lazy (ChannelImpl#setLazy(boolean)), or by publishing to any channel with the ChannelImpl#publishLazy(…) method. [Note: these methods are not yet on the standard API, but should be before 1.0. Until they are, you must cast to ChannelImpl.]
    A lazy message will be queued for a client, but it will not wake up the long poll of that client.  So a lazy message will only be delivered when another non-lazy message is sent to that client, or when the long poll naturally times out (in 30 to 200 seconds).  Thus low priority messages can be delivered with minimal additional load on the server.
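The wake-up semantics can be modelled with a toy queue. This is a simulation of the idea only, not the cometd server's internals:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of lazy-message queueing: lazy messages are queued but do not
// wake a waiting long poll; any non-lazy message flushes everything queued.
public class LazyQueue {
    static class Message {
        final String data;
        final boolean lazy;
        Message(String data, boolean lazy) { this.data = data; this.lazy = lazy; }
    }

    private final Queue<Message> queue = new ArrayDeque<>();
    private boolean pollWoken;

    public void offer(String data, boolean lazy) {
        queue.add(new Message(data, lazy));
        if (!lazy)
            pollWoken = true; // only non-lazy messages wake the long poll
    }

    public boolean isPollWoken() { return pollWoken; }
    public int queued() { return queue.size(); }
}
```

A lazy system-status message sits in the queue at no extra cost, and rides out with the next urgent message (or the natural poll timeout).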

    Message Batching

    Comet applications will often need to send several related messages in response to the same action (for example, subscribing to a chat room and sending a hello message).  To maximize communication efficiency, it is desirable that these multiple messages are transported in the same HTTP message.  Thus both the cometd client and server APIs support the concept of batching.  Once a batch is started, messages for a client are queued but not delivered until such time as the batch is ended. Batches may be nested, so it is safe to start a batch and call some other code that may do its own batching.
    On the javascript client side, batching can be achieved with code like:

    cometd.startBatch();
    cometd.unsubscribe(myChatSubscription);
    cometd.publish("/chat/demo", {text:'Elvis has left the building', from:'Elvis'});
    cometd.endBatch();

    On the java server side, batching can be achieved with code like:

    public void handleMessage(Client from, Message message)
    {
        from.startBatch();
        processMessageForAll(from, message);
        from.deliver(from, message.getChannel(), processResponseForOne(message), null);
        from.endBatch();
    }

    This will send any message published for all users and the private reply to the client in a single HTTP response.

    Listeners, Data Filters and Extensions

    There are several different ways that application code can attach to the cometd clients and servers in order to receive events about cometd and to modify cometd behaviour:

    • Listeners are available on both client and server implementations and can inform the application of cometd events such as handshake, connections lost, channel subscriptions as well as message delivery.
    • DataFilters are available in the java server and can be mapped to channels so that they filter the data of any messages published to those channels. This allows a 3rd party to review an application and to apply validation and verification logic as an aspect rather than being baked in (which application developers never do). There are several utility data filters available.
    • Extensions are available on both client and server implementations and allow inbound and outbound messages to be intercepted, validated, modified, injected and/or deleted.  The utility extensions provided are detailed in the next section.

    Security Policy

    The SecurityPolicy API is available in the java server and is used to authorize handshakes, channel creation, channel subscription and publishing.  If a SecurityPolicy implementation is constructed with a reference to the Bayeux instance, then it can call the getCurrentRequest() method to access the current HttpServletRequest, and thus use standard web authentication and/or HttpSessions when authorizing actions.

    Extensions

    Timestamp Extension

    The timestamp extension simply adds the optional timestamp field to every message sent.

    Timesync Extension

    The timesync extension implements NTP-like time synchronization, so that a client can be aware of the offset between its local clock and the server's clock. This allows an application (eg an auction site) to send a single message with the semantic “the auction closes at 18:45 EST”, and then each client can use its own local clock to count down the auction, without the need for wasteful tick messages from the server.
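The underlying calculation is essentially the classic NTP offset formula over four timestamps. A sketch (the class and method names here are hypothetical, not the extension's API):

```java
// NTP-style clock offset estimate from four timestamps:
// tc1 = client send, ts1 = server receive, ts2 = server send, tc2 = client receive.
public class TimeSync {
    // Estimated offset of the server clock relative to the client clock.
    public static long offset(long tc1, long ts1, long ts2, long tc2) {
        return ((ts1 - tc1) + (ts2 - tc2)) / 2;
    }

    // Estimated one-way network lag.
    public static long lag(long tc1, long ts1, long ts2, long tc2) {
        return ((tc2 - tc1) - (ts2 - ts1)) / 2;
    }
}
```

For example, if the client sends at 1000ms, the server receives at 1600ms and replies at 1610ms, and the client receives at 1210ms, the estimated offset is 500ms and the one-way lag is 100ms.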

    Acknowledged Message Extension

    The cometd/Bayeux protocol is carried over TCP/IP, so it is intrinsically reliable and messages will not get corrupted. However, with comet there are some edge cases where messages might be lost (dropped connections) or might arrive out of order (over multiple connections).
    The acknowledge extension piggybacks message acknowledgements onto the long polling transports of cometd, so that messages are not lost or delivered out of order.

    Reload Extension

    The client-side-only reload extension is provided to allow a comet-enabled web page to be reloaded without needing to re-handshake.  The existing client ID can be passed from page to page, so that a Comet/Ajax style of user interface can be merged with a more traditional page-based approach.

    Clustering

    Cometd servers may be aggregated into a cluster using Oort, which is a collection of extensions that use the java cometd client to link the servers together. Currently Oort is under-documented, so it is best understood by reading this summary and then looking at the Auction example.

    Observing Comets

    The Oort class allows cometd servers to observe each other, which means to open bayeux connections in both directions.  Observations are set up with the Oort#observeComet method and can be used to set up arbitrary topologies of servers (although fully connected is the norm and is implemented by the Oort cloud).
    Once observed, Oort comets may cluster particular channels by calling Oort#observeChannel, which will cause the local server to subscribe to that channel on all observed comet servers. Any messages received on those subscriptions will be re-published to the local instance of that channel (with loop prevention). Thus messages published to an observed channel will be published on all observed comet servers.

    The Oort Cloud

    The Oort cloud is a self-organizing cluster of Oort comet servers that use the observed /oort/cloud channel to publish information about discovered Oort comets.  Once an Oort comet is told of another via the /oort/cloud channel, it will observe it and then publish its own list of known Oort comets.  This allows a fully connected cluster of Oort comets to self-organize with only one or two comet nodes known in common.

    Seti

    Once an Oort cloud has been established, a load balancer will be needed to spread load over the cluster, so a user might be connected to any node in the cloud.  Seti (as in the Search for Extra-Terrestrial Intelligence) is a mechanism that allows a private message to be sent to a particular user who may be located anywhere in the cloud. Sharding and location caching can be used to make this more efficient.
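The sharding and location-caching idea can be sketched as follows. The names here are hypothetical and this is not Seti's API; Seti's actual mechanism is message based and richer than this:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of sharded user lookup across cluster nodes: a stable hash of
// the user id picks the node responsible for knowing that user's location,
// and successful lookups are cached locally to avoid repeated queries.
public class UserLocator {
    private final int nodes;
    private final Map<String, Integer> cache = new HashMap<>();

    public UserLocator(int nodes) { this.nodes = nodes; }

    // Stable shard: the same user id always maps to the same node.
    public int shardFor(String userId) {
        return Math.floorMod(userId.hashCode(), nodes);
    }

    public int locate(String userId) {
        return cache.computeIfAbsent(userId, this::shardFor);
    }
}
```

Any node can then route a private message for a user towards the shard node without broadcasting a "where is this user?" question to the whole cloud.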

    Examples

    Chat

    Chat is the hello world of web-2.0 and the introductory demo for cometd.  The server side of the chat monitors the join messages to maintain and distribute a membership list for each room. A service channel is used to provide a private message service.
    There is both a dojo chat client and jquery chat client provided.

    Auction

    The Auction demonstration provided shows the Oort and timesync extensions in use to provide a moderately complete example of a cometd application.


    Note that this example uses cometd-dojo for the client, but the UI is implemented in a mashup of prototype, behaviour and other js libs.  Volunteers are desperately needed to make this all dojo or all jquery.

    Archetypes

    Assembling the components needed for a cometd web application can be a little complex, as the server components need to be mixed with the javascript framework and the cometd client. To make this process easier, cometd now supports maven archetypes that can build a blank cometd war project in a few lines.
    So what are you waiting for? Go comet!

  • Continuations to Continue

    Jetty-6 Continuations introduced the concept of asynchronous servlets, to provide scalability and quality of service to web 2.0 applications such as chat, collaborative editing and price publishing, as well as powering HTTP-based frameworks like cometd, apache camel, openfire XMPP and flex BlazeDS.
    With the introduction of similar asynchronous features in Servlet-3.0, some have suggested that the Continuation API would be deprecated.  Instead, the Continuation API has been updated to provide a simplified portability layer that runs asynchronously on any servlet 3.0 container as well as on Jetty (6, 7 & 8).  Continuations will work synchronously (blocking) on any 2.5 servlet container. Thus programming to the Continuations API allows your application to achieve asynchronicity today, without waiting for the release of stable 3.0 containers (and without needing to upgrade all your associated infrastructure).

    Continuation Improvements

    The old continuation API threw an exception when the continuation was suspended, so that the thread would exit the service method of the servlet/filter. This caused a potential race condition, as a continuation needed to be registered with the asynchronous service before the suspend, so that service could attempt a resume before the actual suspend had occurred, unless a common mutex was used.
    Also, the old continuation API had a waiting continuation that would work on non-jetty servers.  However, the behaviour of this waiting continuation was a little different to the normal continuation, so code had to be carefully written to work for both.
    The new continuation API does not throw an exception from suspend, so the continuation can be suspended before it is registered with any services and the mutex is no longer needed. With the use of a ContinuationFilter for non-asynchronous containers, the continuation will now behave identically in all servers.

    Continuations and Servlet 3.0

    The servlet 3.0 asynchronous API introduced some additional asynchronous features not supported by jetty 6 continuations, including:

    • The ability to complete an asynchronous request without dispatching
    • Support for wrapped requests and responses.
    • Listeners for asynchronous events
    • Dispatching asynchronous requests to specific contexts and/or resources

    While powerful, these additional features may also be very complicated and confusing. Thus the new Continuation API has cherry picked the good ideas and represents a good compromise between power and complexity.  The servlet 3.0 features adopted are:

    • Completing a continuation without resuming.
    • Support for response wrappers.
    • Optional listeners for asynchronous events.

     

    Using The Continuation API

    The new continuation API is available in Jetty-7 and is not expected to significantly change in future releases.  Also, the continuation library is intended to be deployed in WEB-INF/lib and is portable.  Thus the jetty-7 continuation jar will work asynchronously when deployed in jetty-6, jetty-7, jetty-8 or any servlet 3.0 container.

    Obtaining a Continuation

    The ContinuationSupport factory class can be used to obtain a continuation instance associated with a request: 

        Continuation continuation = ContinuationSupport.getContinuation(request);

    Suspending a Request

    To suspend a request, the suspend method is called on the continuation:

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          ...
          continuation.suspend();
          ...
      }

    After this method has been called, the lifecycle of the request will be extended beyond the return to the container from the Servlet.service(…) and Filter.doFilter(…) methods. After these dispatch methods return, a suspended request will not be committed and a response will not be sent to the HTTP client.

    Once a request is suspended, the continuation should be registered with an asynchronous service so that it may be used by an asynchronous callback once the waited for event happens.
    The request will be suspended until either continuation.resume() or continuation.complete() is called. If neither is called, then the continuation will time out after a default period, or after a period set before the suspend by a call to continuation.setTimeout(long). If no timeout listener resumes or completes the continuation, then the continuation is resumed with continuation.isExpired() returning true.
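The resume-versus-timeout race described above can be modelled in a few lines of plain Java. This is a toy sketch, not Jetty code: a real continuation frees the thread while suspended rather than blocking it, but the rule that a timeout marks the request expired unless resume arrives first is the same.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Toy model of continuation timeout semantics (hypothetical class,
// not part of the Jetty API).
class ToyContinuation {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile boolean expired;

    // Analogous to continuation.resume()/complete().
    void resume() { done.countDown(); }

    // Wait up to timeoutMs for a resume; mark expired otherwise.
    // (A real continuation does not block a thread here.)
    boolean await(long timeoutMs) {
        try {
            expired = !done.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            expired = true;
        }
        return !expired;
    }

    // Analogous to continuation.isExpired() in the redispatch.
    boolean isExpired() { return expired; }
}
```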
    There is a variation of suspend for use with request wrappers and the complete lifecycle (see below):

        continuation.suspend(response);

    Suspension is analogous to the servlet 3.0 request.startAsync() method. Unlike jetty-6 continuations, an exception is not thrown by suspend and the method should return normally. This allows the registration of the continuation to occur after suspension and avoids the need for a mutex. If an exception is desirable (to bypass code that is unaware of continuations and may try to commit the response), then continuation.undispatch() may be called to exit the current thread from the current dispatch by throwing a ContinuationThrowable.

    Resuming a Request

    Once an asynchronous event has occurred, the continuation can be resumed: 

      void myAsyncCallback(Object results)
      {
          continuation.setAttribute("results", results);
          continuation.resume();
      }

    Once a continuation is resumed, the request is redispatched to the servlet container, almost as if the request had been received again. However during the redispatch, the continuation.isInitial() method returns false and any attributes set by the asynchronous handler are available.

    Continuation resume is analogous to Servlet 3.0 AsyncContext.dispatch().

    Completing a Request

    As an alternative to resuming a request, an asynchronous handler may write the response itself. After writing the response, the handler must indicate that request handling is complete by calling the complete
    method:

      void myAsyncCallback(Object results)
      {
          writeResults(continuation.getServletResponse(), results);
          continuation.complete();
      }

    After complete is called, the container schedules the response to be committed and flushed.

    Continuation complete is analogous to Servlet 3.0 AsyncContext.complete().

    Continuation Listeners

    An application may monitor the status of a continuation by using a ContinuationListener:

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          ...
          Continuation continuation = ContinuationSupport.getContinuation(request);
          continuation.addContinuationListener(new ContinuationListener()
          {
              public void onTimeout(Continuation continuation) { ... }
              public void onComplete(Continuation continuation) { ... }
          });
          continuation.suspend();
          ...
      }

    Continuation listeners are analogous to Servlet 3.0 AsyncListeners.
     

    Continuation Patterns

    Suspend Resume Pattern

    The suspend/resume style is used when a servlet and/or filter is used to generate the response after an asynchronous wait that is terminated by an asynchronous handler. Typically a request attribute is used to pass the results and to indicate whether the request has already been suspended.

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          // check for asynchronous results from a previous dispatch
          Object results = request.getAttribute("results");
          if (results == null)
          {
              final Continuation continuation = ContinuationSupport.getContinuation(request);

              // if this dispatch is the result of a timeout
              if (continuation.isExpired())
              {
                  sendMyTimeoutResponse(response);
                  return;
              }

              // suspend the request
              continuation.suspend(); // always suspend before registration

              // register with the async service.  The code here will depend on
              // the service used (see Jetty HttpClient for an example)
              myAsyncHandler.register(new MyHandler()
              {
                  public void onMyEvent(Object result)
                  {
                      continuation.setAttribute("results", result);
                      continuation.resume();
                  }
              });
              return; // or continuation.undispatch();
          }

          // send the results
          sendMyResultResponse(response, results);
      }

    This style is very good when the response needs the facilities of the servlet container (e.g. it uses a web framework), or when one event may resume many requests, so that the container's thread pool can be used to handle each of them.
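The "one event resumes many requests" case can be sketched with a small self-contained model (plain Java, hypothetical names, not Jetty code): suspended requests are parked, and a single event hands every one of them back for redispatch.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a broadcast event resuming many suspended requests.
class Broadcaster {
    private final List<Runnable> suspended = new ArrayList<>();

    // Analogous to registering a suspended continuation with a service.
    void park(Runnable redispatch) { suspended.add(redispatch); }

    // One event resumes every parked request; returns how many.
    int fire() {
        int n = suspended.size();
        for (Runnable r : suspended)
            r.run(); // in Jetty: continuation.resume()
        suspended.clear();
        return n;
    }
}
```

In Jetty each `Runnable` here would be a redispatch onto the container's thread pool, so one event fans out across many pooled threads rather than being written serially by the event thread.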

    Suspend Complete Pattern

    The suspend/complete style is used when an asynchronous handler is used to generate the response: 

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          final Continuation continuation = ContinuationSupport.getContinuation(request);

          // if this dispatch is the result of a timeout
          if (continuation.isExpired())
          {
              sendMyTimeoutResponse(request, response);
              return;
          }

          // suspend the request
          continuation.suspend(response); // response may be wrapped

          // register with the async service.  The code here will depend on
          // the service used (see Jetty HttpClient for an example)
          myAsyncHandler.register(new MyHandler()
          {
              public void onMyEvent(Object result)
              {
                  sendMyResultResponse(continuation.getServletResponse(), result);
                  continuation.complete();
              }
          });
      }

    This style is very good when the response does not need the facilities of the servlet container (e.g. it does not use a web framework) and when an event will resume only one continuation. If many responses are to be sent (e.g. a chat room), then writing one response may block and delay the other responses.
     

    Continuation Examples

    Chat Servlet

    The ChatServlet example shows how the suspend/resume style can be used to directly code a chat room. The same principles are applied in frameworks like cometd.org, which provide a richer environment for such applications, based on Continuations.

    Quality of Service Filter

    The QoSFilter (javadoc) uses the suspend/resume style to limit the number of requests simultaneously inside the filter. This can be used to protect a JDBC connection pool or other limited resource from too many simultaneous requests.

    If too many requests are received, the extra requests wait for a short time on a semaphore before being suspended. As requests within the filter return, a priority queue is used to resume the suspended requests. This allows your authenticated or priority users to get a better share of your server's resources when the machine is under load.
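The admission idea behind the filter can be sketched in a few lines of self-contained Java. This is an illustrative model with hypothetical names, not the actual QoSFilter source: a fixed number of passes guards the resource, overflow requests are parked by priority, and each departing request hands its pass to the highest-priority waiter.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.Semaphore;

// Simplified model of semaphore-plus-priority-queue admission.
class QoSGate {
    private final Semaphore passes;
    // highest priority first
    private final PriorityBlockingQueue<Integer> parked =
            new PriorityBlockingQueue<>(11, Comparator.reverseOrder());

    QoSGate(int maxRequests) { passes = new Semaphore(maxRequests); }

    // Try to enter: true if admitted, false if parked.
    boolean enter(int priority) {
        if (passes.tryAcquire())
            return true;
        parked.add(priority); // in Jetty: continuation.suspend()
        return false;
    }

    // Leave: resume the highest-priority waiter, or return the pass.
    Integer exit() {
        Integer next = parked.poll(); // in Jetty: continuation.resume()
        if (next == null)
            passes.release();
        return next;
    }
}
```

The real filter additionally bounds how long a request waits on the semaphore and how long it stays suspended; this sketch keeps only the ordering behaviour.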

    Denial of Service Filter

    The DoSFilter (javadoc) is similar to the QoSFilter, but protects a web application from a denial of service attack (as best you can from within a web application). If too many requests are detected coming from one source, then those requests are suspended and a warning is generated. This works on the assumption that the attacking client may be written in a simple blocking style, so by suspending you are hopefully consuming its resources. True protection from DOS can only be achieved by network devices (or eugenics :).

    Proxy Servlet

    The ProxyServlet uses the suspend/complete style and the jetty asynchronous client to implement a scalable Proxy server (or transparent proxy).

    Gzip Filter

    The Jetty GzipFilter implements dynamic compression by wrapping the response objects. This filter has been enhanced to understand continuations, so that if a request is suspended in the suspend/complete style and the wrapped response is passed to the asynchronous handler, then a ContinuationListener is used to finish the wrapped response. This allows the GzipFilter to work with the asynchronous ProxyServlet and to compress the proxied responses.
     

    Where do you get it?

    You can read about it, or download it with jetty or include it in your maven project like this pom.xml.
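For Maven users, a dependency along the following lines pulls in the continuation library (the group and artifact ids are as published for Jetty 7; the version shown is only an illustrative 7.x release, so substitute whichever release you actually use):

```xml
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-continuation</artifactId>
  <version>7.0.1.v20091125</version>
</dependency>
```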