Blog

  • CometD Message Flow Control with Lazy Channels

    In the CometD introduction post, I explained how the CometD project provides a solution for writing low-latency server-side event-driven web applications.
    Examples of this kind of application are financial applications that provide stock quote price updates, online games, or position-tracking systems for fast-moving objects (think of a motorbike on a circuit).
    These applications have in common the fact that they generate a high rate of server-side events, say in the order of 10 events per second.
    With such an event rate, you soon start wondering whether it is really appropriate to send every event to clients (and therefore 10 events/s), or whether it is better to save bandwidth and computing resources by sending events to clients at a lower rate.
    For example, even if the stock quote price changes 10 times a second, delivering changes once a second will probably be enough for a web application conceived to be used by humans: I would be surprised if a person could make any use of (or even see and remember) a stock price that was updated two tenths of a second ago and that has in the meanwhile already changed 2 or 3 times. (Disclaimer: I am not involved in financial applications, I am just making a hypothesis here for the sake of explaining the concept.)
    The CometD project provides lazy channels to implement this kind of message flow control (it also provides other message flow control means, of which I’ll speak in a future entry).
    A channel can be marked as lazy during its initialization on server-side:

    BayeuxServer bayeux = ...;
    bayeux.createIfAbsent("/stock/GOOG", new ConfigurableServerChannel.Initializer()
    {
        public void configureChannel(ConfigurableServerChannel channel)
        {
            channel.setLazy(true);
        }
    });

    Any message sent to that channel will be marked as a lazy message, and will be delivered lazily: either when a timeout (the max lazy timeout) expires, or when the long poll returns, whichever comes first.
    It is possible to configure the duration of the max lazy timeout, for example to be 1 second, in web.xml:

    <web-app>
        <servlet>
            <servlet-name>cometd</servlet-name>
            <servlet-class>org.cometd.server.CometdServlet</servlet-class>
            <init-param>
                <param-name>maxLazyTimeout</param-name>
                <param-value>1000</param-value>
            </init-param>
            ...
        </servlet>
        ...
    </web-app>

    With this configuration, lazy channels will have a max lazy timeout of 1000 ms and messages published to a lazy channel will be delivered in a batch once a second.
    Assuming, for example, that you have a steady rate of 8 messages per second arriving to server-side that update the GOOG stock quote, you will be delivering a batch of 8 messages to clients every second, instead of delivering 1 message every 125 ms.
    Lazy channels do not immediately reduce bandwidth consumption (since no messages are discarded), but combined with a GZip filter that compresses the output they allow bandwidth savings, because more messages are compressed together for each delivery (in general it is better to compress one larger text than many small ones).
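    The effect can be seen with a rough, self-contained sketch (plain Java with hypothetical message strings, not CometD code): compressing one concatenated batch generally yields fewer total bytes than compressing each message separately, because the per-stream overhead is paid once and redundancy across messages can be exploited.

    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class BatchCompression
    {
        // Returns the gzipped size in bytes of the given text.
        static int gzipSize(String text) throws IOException
        {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bytes))
            {
                gzip.write(text.getBytes(StandardCharsets.UTF_8));
            }
            return bytes.size();
        }

        public static void main(String[] args) throws IOException
        {
            // Eight similar stock quote messages, as a lazy channel might batch them
            int separate = 0;
            StringBuilder batch = new StringBuilder();
            for (int i = 0; i < 8; i++)
            {
                String message = "{\"channel\":\"/stock/GOOG\",\"data\":{\"price\":" + (500 + i) + "}}";
                separate += gzipSize(message);       // cost of compressing each message alone
                batch.append(message);               // accumulate the once-a-second batch
            }
            int batched = gzipSize(batch.toString());
            System.out.println(batched < separate);  // batching compresses better
        }
    }
    ```
    
    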
    You can browse the CometD documentation for more information, look at the online javadocs, post to the mailing list or pop up in the IRC channel #cometd on irc.freenode.org.

  • Jetty 7.4 new features

    A release candidate of Jetty 7.4 is now available as both Jetty@eclipse and Jetty-Hightide@codehaus distributions. This release contains a number of new features which I will briefly introduce now, and make the target of more detailed blogs, webinars and wiki pages over the next few weeks:

    Jetty Overlay Deployer

    Jetty now includes a deployer designed to allow a WAR file to be customised and deployed without modifying the original WAR file, which is kept immutable (or even signed) so that you know it is exactly as delivered. A WAR file is configured by applying a series of overlays, each of which may contain:

    • A web.xml fragment to modify or add filter, servlet and other web.xml configuration
    • Static content to add or replace static content in the WAR
    • Classes and JARs to be added to the classpath
    • Context configuration

    Overlays can be created that are application specific, node specific or instance specific and multiple instances of the same application will share the common war and overlays.  This sharing includes classloaders and static resource caches and greatly reduces the memory footprint of multi-tenanted deployments.

    Overlays also allow easy identification of what configuration has changed in a deployed WAR, so that an overlay can be applied to an updated WAR and the configuration changes will be preserved.

    Jetty Spring

    Since almost the beginning, Jetty has had its own dependency injection (aka IoC) XML configuration format, which we often refer to as the jetty.xml format. This is essentially equivalent to more popular IoC frameworks like Spring XML, and Jetty has also long been able to be configured and run with Spring. However, that approach did not allow the context.xml and jetty-env.xml files to be written in Spring format, so deployments still had a mix of IoC formats. With Jetty 7.4, the Jetty XML configuration mechanism can now detect other formats and will use the Java services mechanism to look for a provider for that format. The jetty-spring module now provides a SpringConfigurationProcessor, so that anywhere a Jetty XML syntax file is expected, a Spring XML file can be used.
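    For example, a context deployment descriptor that would previously have been written in Jetty XML syntax could now be written in standard Spring beans format; a sketch (the bean id, context path and WAR location are illustrative):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd">
        <!-- Equivalent of a Jetty XML context.xml: configure a WebAppContext -->
        <bean id="webAppContext" class="org.eclipse.jetty.webapp.WebAppContext">
            <property name="contextPath" value="/myapp"/>
            <property name="war" value="/opt/webapps/myapp.war"/>
        </bean>
    </beans>
    ```
    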

    Jetty Reverse HTTP

    There is an increasing desire to serve content from client machines, or to run servers from behind restrictive firewalls. An example of this is running a Jetty server on an Android phone, where the 3G network prevents inbound connections.

    Jetty Reverse HTTP uses comet long polling techniques to allow a jetty server to make an outbound connection to a gateway, over which it will receive inbound requests.

    Jetty Nested

    Jetty has some compelling features, but sometimes it is just not possible to deploy Jetty. Often this is due to non-technical reasons, such as a corporate policy that says the only allowable container is LogicalGlassSphere or similar.

    Jetty nested makes it possible to deploy Jetty within another servlet container, by adding a Jetty connector that takes requests/responses from the outer container and turns them into Jetty requests and responses. If permissions allow, Jetty may also open other connectors on other ports, so that as well as serving outer container requests, Jetty can directly serve async HTTP and/or WebSockets.

    This allows your jetty application to be consumable by corporate policies and deployment procedures, while still giving you access to most of the feature set of Jetty. [ Currently the jetty nested connector does not support non blocking asynchronous requests, but it will eventually support that when deployed in servlet 3.0 containers. ]

    Half Close support

    Jetty has long used half close when sending content bounded by EOF, to avoid the situation where intermediaries handle a TCP/IP RST by discarding all buffered data. But since Jetty is also frequently used in a client and/or proxy role, it has become important that half close is supported inbound as well as outbound. Jetty now supports inbound half closes without an immediate close, so that outbound data can continue to be generated and flushed. This is also integrated with the SSL and WebSocket closing handshakes.

  • CometD Introduction

    The CometD project provides tools to write server-side event-driven web applications.
    This kind of web application is becoming more popular, thanks to the fact that browsers have become truly powerful (JavaScript performance problems are now a relic of the past) and are widely deployed, so they are a very good platform for no-install applications.
    Point to the URL, done.
    Server-side event-driven web applications are those web applications that receive events from third-party systems on the server side, and need to deliver those events with very low latency to clients (mostly browsers).
    Examples of such applications are chat applications, monitoring consoles, financial applications that provide stock quotes, online collaboration tools (e.g. document writing, code review), online games (e.g. chess, poker), social network applications, latest news information, mail applications, messaging applications, etc.
    The key point of these applications is low latency: you cannot play a one-minute chess game if your application polls the chess server every 5-10 seconds to download your opponent’s moves.
    These applications can be written using Comet techniques, but the moment you think it’s simple using those techniques, you’ll be faced with browsers glitches, nasty race conditions, scalability issues and in general with the complexity of asynchronous, multi-threaded programming.
    For example, Comet techniques do not specify how to identify a specific client. How can browserA tell the server to send a message to browserB ?
    It soon turns out that you need some sort of client identifier, and perhaps you want to support multiple clients in the same browser (so no, the HTTP session is not enough).
    Add to that connection heartbeats, error detection, authentication and disconnection and other features, and you realize you are building a protocol on top of HTTP.
    And this is where the CometD project comes to a rescue, providing that protocol on top of HTTP (the Bayeux protocol), and easy-to-use libraries that shield developers from said complexities.
    In a nutshell, CometD enables publish/subscribe web messaging: it makes it possible to send a message from one browser to another browser (or to several other browsers), to send a message to the server only, or to have the server send messages to a browser (or to several browsers).
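    The publish/subscribe routing idea can be sketched in plain Java (a toy in-memory model of the channel concept, not CometD code; all names are illustrative): subscribers register against a channel name, and every message published to that channel is fanned out to them.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Toy publish/subscribe router: models how messages published to a
    // channel reach every subscriber of that channel and nobody else.
    public class ChannelRouter
    {
        private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

        public void subscribe(String channel, Consumer<String> listener)
        {
            subscribers.computeIfAbsent(channel, k -> new ArrayList<>()).add(listener);
        }

        public void publish(String channel, String message)
        {
            // Fan the message out to every subscriber of this channel
            for (Consumer<String> listener : subscribers.getOrDefault(channel, List.of()))
                listener.accept(message);
        }
    }
    ```
    
    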
    Below you can find an example of the JavaScript API, used in conjunction with the Dojo Toolkit (a typical initialization; the CometD URL is illustrative):

    dojo.require("dojox.cometd");
    dojox.cometd.init("http://localhost:8080/cometd");

    You can subscribe to a channel, which represents the topic for which you want to receive messages.
    For example, a stock quote web application may publish quote updates for Google to channel /stock/GOOG on server-side, and all browsers that subscribed to that channel will receive the message with the updated stock quote (and whatever other information the application puts in the message):

    dojox.cometd.subscribe("/stock/GOOG", function(message)
    {
      // Update the DOM with the content from the message
    });

    It is equally easy to publish messages to the server on a particular channel:

    dojox.cometd.publish("/game/chess/12345", {
      move: "e4"
    });

    And at the end, you can disconnect:

    dojox.cometd.disconnect();

    You can find more information on the CometD site, and in the documentation section.
    You can have a skeleton CometD project setup in seconds using Maven archetypes, as explained in the CometD primer. The Maven archetypes support Dojo, jQuery and (optionally) integrate with Spring.
    Download and try out the latest CometD 2.1.1 release.

  • CometD 2.1.1 Released

    CometD 2.1.1 has been released.
    This is a minor bug fix release that updates the JavaScript toolkits to Dojo 1.6.0 and jQuery 1.5.1, and Jetty to 7.3.1 and 6.1.26.
    Enjoy !

  • Is WebSocket Chat Simpler?

    A year ago I wrote an article asking Is WebSocket Chat Simple?, where I highlighted the deficiencies of this much touted protocol for implementing simple comet applications like chat. After a year of intense debate there have been many changes and there are new drafts of both the WebSocket protocol and WebSocket API. Thus I thought it worthwhile to update my article with comments to see how things have improved (or not) in the last year.

    The text in italics is my wishful thinking from a year ago
    The text in bold italics is my updated comments

    Is WebSocket Chat Simple (take II)?

    The WebSocket protocol has been touted as a great leap forward for bidirectional web applications like chat, promising a new era of simple Comet applications. Unfortunately there is no such thing as a silver bullet and this blog will walk through a simple chat room to see where WebSocket does and does not help with Comet applications. In a WebSocket world, there is even more need for frameworks like cometD.

    Simple Chat

    Chat is the “hello world” application of web 2.0, and a simple WebSocket chat room is included with Jetty 7, which now supports WebSockets. The source of the simple chat can be seen in svn for the client side and server side.

    The key part of the client-side is to establish a WebSocket connection:

    join: function(name) {
       this._username=name;
       var location = document.location.toString()
           .replace('http:','ws:');
       this._ws=new WebSocket(location);
       this._ws.onopen=this._onopen;
       this._ws.onmessage=this._onmessage;
       this._ws.onclose=this._onclose;
    },

    It is then possible for the client to send a chat message to the server:

    _send: function(user,message){
       user=user.replace(':','_');
       if (this._ws)
           this._ws.send(user+':'+message);
    },

    and to receive a chat message from the server and to display it:

    _onmessage: function(m) {
       if (m.data){
           var c=m.data.indexOf(':');
       var from=m.data.substring(0,c)
           .replace('<','&lt;')
           .replace('>','&gt;');
       var text=m.data.substring(c+1)
           .replace('<','&lt;')
           .replace('>','&gt;');
           var chat=$('chat');
           var spanFrom = document.createElement('span');
           spanFrom.className='from';
           spanFrom.innerHTML=from+': ';
           var spanText = document.createElement('span');
           spanText.className='text';
           spanText.innerHTML=text;
           var lineBreak = document.createElement('br');
           chat.appendChild(spanFrom);
           chat.appendChild(spanText);
           chat.appendChild(lineBreak);
           chat.scrollTop = chat.scrollHeight - chat.clientHeight;
      }
    },

    For the server-side, we simply accept incoming connections as members:

    public void onConnect(Connection connection)
    {
        _connection=connection;
        _members.add(this);
    }

    and then for all messages received, we send them to all members:

    public void onMessage(byte frame, String data)
    {
        for (ChatWebSocket member : _members)
        {
            try
            {
                member._connection.sendMessage(data);
            }
            catch (IOException e)
            {
                Log.warn(e);
            }
        }
    }

    So we are done, right? We have a working chat room – let’s deploy it and we’ll be the next Google GChat! Unfortunately, reality is not that simple, and this chat room falls a long way short of the kind of functionality that you expect from a chat room – even a simple one.

    Not So Simple Chat

    On Close?

    With a chat room, the standard use-case is that you establish your presence in the room once, and it remains until you explicitly leave the room. In the context of web chat, that means that you can send and receive chat messages until you close the browser or navigate away from the page. Unfortunately the simple chat example does not implement this semantic, because the WebSocket protocol allows for an idle timeout of the connection. So if nothing is said in the chat room for a short while, the WebSocket connection will be closed, either by the client, the server or even an intermediary. The application is notified of this event by the onClose method being called.

    So how should the chat room handle onClose? The obvious thing to do is for the client to simply call join again and open a new connection back to the server:

    _onclose: function() {
       this._ws=null;
       this.join(this._username);
    }

    This indeed maintains the user’s presence in the chat room, but is far from an ideal solution, since every few idle minutes the user will leave the room and rejoin. For the short period between connections, they will miss any messages sent and will not be able to send any chat themselves.

    Keep-Alives

    In order to maintain presence, the chat application can send keep-alive messages on the WebSocket to prevent it from being closed due to an idle timeout. However, the application has no idea what the idle timeouts are, so it will have to pick some arbitrary, frequent period (e.g. 30s) to send keep-alives and hope that it is less than any idle timeout on the path (more or less as long-polling does now).

    Ideally a future version of WebSocket will support timeout discovery, so it can either tell the application the period for keep-alive messages or it could even send the keep-alives on behalf of the application.

    The latest drafts of the WebSocket protocol do include control packets for ping and pong, which can effectively be used as messages to keep a connection alive. Unfortunately this mechanism is not actually usable, because: a) there is no JavaScript API to send pings; b) there is no API to tell the infrastructure whether the application wants the connection kept alive; c) the protocol does not require that pings are sent; d) neither the WebSocket infrastructure nor the application knows the frequency at which pings would need to be sent to keep alive the intermediaries and the other end of the connection. There is a draft proposal to declare timeouts in headers, but it remains to be seen if that gathers any traction.

    Unfortunately keep-alives don’t avoid the need for onClose to initiate new WebSockets, because the internet is not a perfect place, and especially with wifi and mobile clients, sometimes connections just drop. It is a standard part of HTTP that if a connection closes while being used, GET requests are retried on new connections, so users are mostly insulated from transient connection failures. A WebSocket chat room needs to work with the same assumption: even with keep-alives, it needs to be prepared to reopen a connection when onClose is called.

    Queues

    With keep-alives, the WebSocket chat connection should mostly be a long-lived entity, with only the occasional reconnect due to transient network problems or server restarts. Occasional loss of presence might not be seen as a problem, unless you’re the dude that just typed a long chat message on the tiny keyboard of your vodafone360 app, or instead of chat you are playing on chess.com and you don’t want to abandon a game due to transient network issues. So for any reasonable level of quality of service, the application is going to need to “pave over” any small gaps in connectivity by providing some kind of message queue on both client and server. If a message is sent during a period when there is no WebSocket connection, it needs to be queued until the new connection is established.
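    A minimal sketch of that queueing idea in plain Java (hypothetical names, no WebSocket library involved): messages sent while disconnected are buffered, and the buffer is drained as soon as a new connection is established.

    ```java
    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.function.Consumer;

    // Buffers outgoing messages across connection gaps: a sketch of the
    // "pave over" behaviour, not the actual chat example's implementation.
    public class ReconnectingSender
    {
        private final Queue<String> pending = new ArrayDeque<>();
        private Consumer<String> connection; // null while disconnected

        public synchronized void send(String message)
        {
            if (connection != null)
                connection.accept(message);   // connected: deliver immediately
            else
                pending.add(message);         // disconnected: queue for later
        }

        public synchronized void onOpen(Consumer<String> newConnection)
        {
            connection = newConnection;
            while (!pending.isEmpty())        // drain messages queued during the gap
                connection.accept(pending.remove());
        }

        public synchronized void onClose()
        {
            connection = null;
        }
    }
    ```

    A real implementation would also bound the queue and time it out, which is exactly the subject of the next section.
    
    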

    Timeouts

    Unfortunately, some failures are not transient, and sometimes a new connection will not be established. We can’t allow queues to grow forever and pretend that a user is present long after their connection is gone. Thus both ends of the chat application will also need timeouts, and the user will not be seen to have left the chat room until they have had no connection for the period of the timeout, or until an explicit leaving message is received.
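    That presence rule can be sketched in plain Java (illustrative names, single-threaded model): a disconnect only starts a grace period, and the user is considered gone once the timeout elapses or an explicit leave arrives.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Tracks chat presence with a grace period: a user is only considered
    // gone once they have had no connection for the timeout, while an
    // explicit "leave" removes the presence immediately. Sketch only.
    public class PresenceTracker
    {
        private final long timeoutMillis;
        private final Map<String, Long> lastSeen = new HashMap<>();

        public PresenceTracker(long timeoutMillis)
        {
            this.timeoutMillis = timeoutMillis;
        }

        public void onConnect(String user, long now)
        {
            lastSeen.put(user, now);
        }

        public void onDisconnect(String user, long now)
        {
            lastSeen.put(user, now); // start the grace period, don't remove yet
        }

        public void onLeave(String user)
        {
            lastSeen.remove(user);   // orderly close: drop presence immediately
        }

        public boolean isPresent(String user, long now)
        {
            Long seen = lastSeen.get(user);
            return seen != null && now - seen < timeoutMillis;
        }
    }
    ```
    
    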

    Ideally a future version of WebSocket will support an orderly close message, so the application can distinguish between a network failure (and keep the user’s presence for a time) and an orderly close as the user leaves the page (and remove the user’s presence).

    Both the protocol and API have been updated with the ability to distinguish an orderly close from a failed close. The WebSocket API now has a CloseEvent that is passed to the onclose method, containing the close code and reason string sent with an orderly close; this will allow simpler handling in the endpoints and avoid pointless client retries.

    Message Retries

    Even with message queues, there is a race condition that makes it difficult to completely close the gaps between connections. If the onClose method is called very soon after a message is sent, then the application has no way to know if that close event happened before or after the message was delivered. If quality of service is important, then the application currently has no option but to have some kind of per message or periodic acknowledgment of message delivery.

    Ideally a future version of WebSocket will support orderly close, so that delivery can be known for non-failed connections and the complication of acknowledgements can be avoided unless the highest quality of service is required.

    Orderly close is now supported (see above.)

    Backoff

    With onClose handling, keep-alives, message queues, timeouts and retries, we finally will have a chat room that can maintain a user’s presence while they remain on the web page. But unfortunately the chat room is still not complete, because it needs to handle errors and non-transient failures. Some of the circumstances that need to be avoided include:

    • If the chat server is shut down, the client application is notified of this simply by a call to onClose rather than an onOpen call. In this case, onClose should not just reopen the connection, or the result will be a 100% CPU busy loop. Instead the chat application has to infer that there was a connection problem and at least pause a short while before trying again – potentially with a retry backoff algorithm to reduce retries over time.

      Ideally a future version of WebSocket will allow more access to connection errors, as the handling of no-route-to-host may be entirely different to handling of a 401 unauthorized response from the server.

      The WebSocket protocol is now fully HTTP compliant before the 101 of the upgrade handshake, so responses like 401 can legally be sent. Also, the WebSocket API now has an onerror callback, but unfortunately it is not yet clear under what circumstances it is called, nor is there any indication that information like a 401 response or a 302 redirect would be available to the application.

    • If the user types a large chat message, the WebSocket frame sent may exceed some resource limit on the client, server or an intermediary. Currently the WebSocket response to such resource issues is to simply close the connection. Unfortunately for the chat application, this may look like a transient network failure (coming after a successful onOpen call), so it may just reopen the connection and naively retry sending the message, which will again exceed the maximum message size: lather, rinse and repeat! Again, it is important that any automatic retries performed by the application are limited by a backoff timeout and/or a maximum number of retries.

      Ideally a future version of WebSocket will be able to send an error status as something distinct from a network failure or idle timeout, so the application will know not to retry errors.

      While there is no general error control frame, there is now a reason code defined in the orderly close, so that for any error serious enough to force the connection to be closed the following can be communicated: 1000 – normal closure; 1001 – shutdown or navigate away; 1002 – protocol error; 1003 – data type cannot be handled; 1004 – message is too large. These are a great improvement, but it would be better if such errors could be sent in control frames, so that the connection does not need to be sacrificed in order to reject a single large message or an unknown type.
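    The retry/backoff policy described above can be sketched in plain Java (names are illustrative): the delay doubles on each failed attempt and is capped, so a dead server is not hammered in a busy loop.

    ```java
    // Computes the delay before the n-th reconnect attempt: exponential
    // backoff capped at a maximum. A sketch of the policy discussed above,
    // not library code.
    public class RetryBackoff
    {
        private final long initialDelayMillis;
        private final long maxDelayMillis;

        public RetryBackoff(long initialDelayMillis, long maxDelayMillis)
        {
            this.initialDelayMillis = initialDelayMillis;
            this.maxDelayMillis = maxDelayMillis;
        }

        public long delayFor(int attempt)
        {
            // Double the delay each attempt (shift clamped to avoid overflow)
            long delay = initialDelayMillis << Math.min(attempt, 20);
            return Math.min(delay, maxDelayMillis);
        }
    }
    ```
    
    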

    Does it have to be so hard?

    The above scenario is not the only way that a robust chat room could be developed. With some compromises on quality of service and some good user interface design, it would certainly be possible to build a chat room with less complex usage of a WebSocket. However, the design decisions represented by the above scenario are not unreasonable even for chat, and certainly are applicable to applications needing a better QoS than most chat rooms.

    What this blog illustrates is that there is no silver bullet, and that WebSocket will not solve many of the complexities that need to be addressed when developing robust Comet web applications. Hopefully some features such as keep-alives, timeout negotiation, orderly close and error notification can be built into a future version of WebSocket, but it is not the role of WebSocket to provide the more advanced handling of queues, timeouts, reconnections, retries and backoffs. If you wish to have a high quality of service, then either your application or the framework that it uses will need to deal with these features.

    CometD with WebSocket

    CometD version 2 will soon be released with support for WebSocket as an alternative transport to the currently supported JSON long-polling and JSONP callback-polling. CometD supports all the features discussed in this blog and makes them available transparently to browsers with or without WebSocket support. We are hopeful that WebSocket usage will give us even better throughput and latency for CometD than the already impressive results achieved with long-polling.

    CometD 2 has been released, and we now have even more impressive results. WebSocket support is built into both Jetty and CometD, but uptake has been somewhat hampered by the multiple versions of the protocol in the wild and by patchy, changing browser support.

    Programming to a framework like CometD remains the easiest way to build a Comet application, as well as to retain portability across “old” techniques like long-polling and emerging technologies like WebSockets.

  • CometD 1.1.4 Released

    CometD 1.1.4 has been released.
    This is a minor release that updates the JavaScript toolkits to Dojo 1.6.0 and jQuery 1.5.1, and Jetty to 7.3.1 and 6.1.26.
    Enjoy !

  • Cometd with annotations

    CometD 2.1 now supports annotations to define CometD services and clients. Annotations greatly reduce the boilerplate code required to write a CometD service, and also link well with new CometD 2.x features such as channel initializers and Authorizers, so that all the code for a service can be grouped in one POJO class rather than spread over several derived entities. The annotations are some CometD-specific ones, plus some standard Spring annotations.

    Server Side

    This blog looks at the annotated ChatService example bundled with the 2.1.0 cometd release.

    Creating a Service

    A POJO (Plain Old Java Object) can be turned into a cometd service by the addition of the @Service class annotation:

    package org.cometd.examples;

    import org.cometd.java.annotation.Service;

    @Service("chat")
    public class ChatService
    {
        ...
    }

    The service name passed is used in the service’s session ID, to assist with debugging.

    The annotated version of the CometdServlet then needs to be used, and to be told which classes it should instantiate as services and scan for annotations. This is done with a comma-separated list of class names in the “services” init-parameter in web.xml (or similar) as follows:

    <servlet>
      <servlet-name>cometd</servlet-name>
      <servlet-class>org.cometd.java.annotation.AnnotationCometdServlet</servlet-class>
      ...
      <init-param>
        <param-name>services</param-name>
        <param-value>org.cometd.examples.ChatService</param-value>
      </init-param>
    </servlet>

    Configuring a Channel

    A service will frequently need to create, configure and listen or subscribe to a channel. This can now be done atomically in CometD 2.x, so that messages will not be received before the channel is fully created and configured. For example, the chat service configures one absolute channel and two wildcard channels using the @Configure annotation:

    @Configure ({"/chat/**","/members/**"})
    protected void configureChatStarStar(ConfigurableServerChannel channel)
    {
        DataFilterMessageListener noMarkup = 
            new DataFilterMessageListener(_bayeux,new NoMarkupFilter(),new BadWordFilter());
        channel.addListener(noMarkup);
        channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
    }
    
    @Configure ("/service/members")
    protected void configureMembers(ConfigurableServerChannel channel)
    {
        channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
        channel.setPersistent(true);
    }

    The @Configure annotation is roughly equivalent to calling the BayeuxServer#createIfAbsent method, with the annotated method invoked as the Initializer; the method must take a ConfigurableServerChannel as an argument. The @Configure annotation can also take two boolean arguments, errorIfExists and configureIfExists, to determine how to handle the channel if it already exists.

    The configuration methods for the chat service use the new Authorizer mechanism to define fine grained authorization of what clients can publish and subscribe to a channel. This is similar to the existing SecurityPolicy mechanism, but without the need for a centralized policy instance. An operation on a channel is permitted if it is granted by at least one Authorizer and denied by none, giving black/white list style semantics.
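    The grant/deny rule can be modeled in plain Java (this sketches the decision logic only; the real CometD Authorizer interface takes the operation, channel, session and message as arguments):

    ```java
    import java.util.List;

    // Models the Authorizer decision rule described above: an operation is
    // permitted if at least one authorizer grants it and none denies it.
    public class AuthorizerRule
    {
        public enum Result { GRANT, DENY, IGNORE }

        public static boolean isPermitted(List<Result> results)
        {
            boolean granted = false;
            for (Result result : results)
            {
                if (result == Result.DENY)
                    return false;       // any deny wins (black list semantics)
                if (result == Result.GRANT)
                    granted = true;
            }
            return granted;             // at least one grant required (white list)
        }
    }
    ```
    
    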

    The configuration of the chat wildcard channels installs DataFilterMessageListeners for all /chat/** and all /members/** channels. These filters ensure that no markup or bad words are published to these channels. To construct the listener, a reference to the BayeuxServer must be passed to the constructor (used only for logging in this case). A service may obtain a reference to the BayeuxServer using the @Inject annotation:

    @Inject
    private BayeuxServer _bayeux;

    Adding a ChannelListener

    A method of a service may be registered as a listener of a channel with the @Listener annotation:

    @Listener("/service/members")
    public void handleMembership(ServerSession client, ServerMessage message)
    {
        ...
    }

    The @Listener annotation may also be passed the boolean argument receiveOwnPublishes, to control whether messages published by the service session are filtered out. Note that a listener is different from a subscription, in that the service does not subscribe to the channel, so it will not trigger any subscription listeners nor be counted as a subscriber. There is also a @Subscription annotation available, but it is not used by the ChatService (it is typically more applicable to client-side CometD annotations).

    Client Side

    Annotations can also be used on the client side, if the java BayeuxClient is used, either for service testing or for the creation of a rich non-browser client UI:

    @Service
    class MyClient
    {
        @Session
        private ClientSession session;
    
        @PostConstruct
        private void init()
        {
            ...
        }
        @PreDestroy
        private void destroy()
        {
            ...
        }
        @Listener("/meta/*")
        public void handleMetaMessage(Message connect)
        {
            ...
        }
        @Subscription("/foo")
        public void handeFoo(Message message)
        {
            ...
        }
    }

    Note the use of @Session to inject the session used by the service and @PostConstruct and @PreDestroy for lifecycle events.  These annotations are also available on the server side. On the client, the annotations are activated by an explicit call to an annotation processor:

    ClientAnnotationProcessor processor = new ClientAnnotationProcessor(bayeuxClient);
    ...
    MyClient mc = new MyClient();
    processor.process(mc);

    Conclusion

    Annotations have made CometD services much simpler to create and much easier to understand. Normally I’m not a big fan of annotations, as they frequently put too much configuration into the “code”, but in this case they are a perfect match for the semantic needed. In future, we’ll also look at making JAXB annotations work simply with the JSON mechanisms of CometD.

  • Webtide blogs @ Intalio

    The webtide blogs are moving to http://webtide.intalio.com.  All new postings from the jetty  & cometd team will be made here and over time we will move the content from the old site as well.

  • Cometd with Annotations

Cometd 2.1 now supports annotations to define cometd services and clients.  Annotations greatly reduce the boilerplate code required to write a cometd service, and also link well with new cometd 2.x features such as channel initializers and Authorizers, so that all the code for a service can be grouped in one POJO class rather than spread over several derived entities.  The annotations are a mix of cometd-specific ones and standard Java annotations such as @Inject, @PostConstruct and @PreDestroy.

    Server Side

    This blog looks at the annotated ChatService example bundled with the 2.1.0 cometd release.

    Creating a Service

    A POJO (Plain Old Java Object) can be turned into a cometd service by the addition of the @Service class annotation:

    package org.cometd.examples;
    import org.cometd.java.annotation.Service;

    @Service("chat")
    public class ChatService
    {
    ...
    }

The service name passed to the annotation is used in the service's session ID, to assist with debugging.

The annotated version of the CometdServlet then needs to be used, and needs to be told which classes it should instantiate as services and scan for annotations. This is done with a comma-separated list of class names in the "services" init-parameter in the web.xml (or similar) as follows:

    <servlet>
      <servlet-name>cometd</servlet-name>
      <servlet-class>org.cometd.java.annotation.AnnotationCometdServlet</servlet-class>

    ...
      <init-param>
        <param-name>services</param-name>
        <param-value>org.cometd.examples.ChatService</param-value>
      </init-param>
    </servlet>
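
Since the init-parameter accepts a comma-separated list, several services can be registered at once. A sketch (MembersService is a hypothetical second service class, for illustration only):

```xml
<init-param>
  <param-name>services</param-name>
  <!-- comma-separated list of service classes to instantiate and scan -->
  <param-value>org.cometd.examples.ChatService, org.cometd.examples.MembersService</param-value>
</init-param>
```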

    Configuring a Channel

A service will frequently need to create, configure and listen or subscribe to a channel. This can now be done atomically in cometd 2.x, so that messages will not be received before the channel is fully created and configured. For example, the chat service configures one absolute channel and two wildcard channels using the @Configure annotation:

@Configure({"/chat/**", "/members/**"})
protected void configureChatStarStar(ConfigurableServerChannel channel)
{
    DataFilterMessageListener noMarkup = new DataFilterMessageListener(
        _bayeux, new NoMarkupFilter(), new BadWordFilter());
    channel.addListener(noMarkup);
    channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
}

@Configure("/service/members")
protected void configureMembers(ConfigurableServerChannel channel)
{
    channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
    channel.setPersistent(true);
}

The @Configure annotation is roughly equivalent to calling the BayeuxServer#createIfAbsent method, with the annotated method used as the Initializer; the annotated method must take a ConfigurableServerChannel as an argument.  The @Configure annotation can also take two boolean arguments, errorIfExists and configureIfExists, to determine how to handle the channel if it already exists.
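
For comparison, the /service/members configuration above could be written without annotations by calling createIfAbsent directly, in the style of the channel Initializer shown in the earlier lazy-channels post (a sketch):

```java
BayeuxServer bayeux = ...;
bayeux.createIfAbsent("/service/members", new ConfigurableServerChannel.Initializer()
{
    public void configureChannel(ConfigurableServerChannel channel)
    {
        channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
        channel.setPersistent(true);
    }
});
```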

The configuration methods for the chat service use the new Authorizer mechanism to define fine-grained authorization of which clients can publish or subscribe to a channel. This is similar to the existing SecurityPolicy mechanism, but without the need for a centralized policy instance. An operation on a channel is permitted if it is granted by at least one Authorizer and denied by none, giving black/white list style semantics.
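
The grant/deny combination rule can be illustrated with a small standalone model. To be clear, this is not the CometD Authorizer API, just the decision logic described above (class and method names are mine):

```java
import java.util.Arrays;
import java.util.List;

public class AuthorizerRule {
    public enum Result { GRANT, DENY, IGNORE }

    // Combination rule: an operation is permitted if at least one
    // authorizer grants it and none deny it; authorizers that ignore
    // the operation do not influence the outcome.
    public static boolean permitted(List<Result> results) {
        boolean granted = false;
        for (Result r : results) {
            if (r == Result.DENY)
                return false;      // any deny (blacklist) wins
            if (r == Result.GRANT)
                granted = true;    // at least one grant (whitelist) needed
        }
        return granted;
    }

    public static void main(String[] args) {
        System.out.println(permitted(Arrays.asList(Result.GRANT, Result.IGNORE))); // true
        System.out.println(permitted(Arrays.asList(Result.GRANT, Result.DENY)));   // false
        System.out.println(permitted(Arrays.asList(Result.IGNORE)));               // false
    }
}
```

Note that a channel with no grants at all denies everything, which is why the chat channels explicitly add GrantAuthorizer.GRANT_ALL.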

The configuration of the chat wildcard channels installs DataFilterMessageListeners for all /chat/** and all /members/** channels.  These filters ensure that no markup or bad words are published to these channels.  To construct the listener, a reference to the BayeuxServer needs to be passed to the constructor (used only for logging in this case).  A service may obtain a reference to the BayeuxServer using the @Inject annotation:

    @Inject
    private BayeuxServer _bayeux;
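
The kind of transformation a markup filter applies can be pictured with a standalone sketch. This is not the actual NoMarkupFilter implementation (escapeMarkup is a hypothetical helper), just the general idea of escaping HTML so chat input cannot inject tags:

```java
public class NoMarkupSketch {
    // Escape the characters that would otherwise be interpreted as
    // markup; '&' must be escaped first so the entities we introduce
    // are not themselves re-escaped.
    public static String escapeMarkup(String text) {
        return text.replace("&", "&amp;")
                   .replace("<", "&lt;")
                   .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escapeMarkup("<b>hi</b>")); // &lt;b&gt;hi&lt;/b&gt;
    }
}
```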

    Adding a ChannelListener

    A method of a service may be registered as a listener of a channel with the @Listener annotation:

    @Listener("/service/members")
    public void handleMembership(ServerSession client, ServerMessage message)
    {
    ...
    }

The @Listener annotation may also be passed the boolean argument receiveOwnPublishes, to control whether messages published by the service session are filtered out. Note that a Listener is different from a subscription in that the service does not subscribe to the channel, so it will not trigger any subscription listeners nor be counted as a subscriber. There is also a @Subscription annotation available, but it is not used by the ChatService (it is typically more applicable to client side cometd annotations).

    Client Side

    Annotations can also be used on the client side, if the java BayeuxClient is used, either for service testing or for the creation of a rich non-browser client UI:

@Service
class MyClient
{
    @Session
    private ClientSession session;

    @PostConstruct
    private void init()
    {
        ...
    }

    @PreDestroy
    private void destroy()
    {
        ...
    }

    @Listener("/meta/*")
    public void handleMetaMessage(Message connect)
    {
        ...
    }

    @Subscription("/foo")
    public void handleFoo(Message message)
    {
        ...
    }
}

    Note the use of @Session to inject the session used by the service and @PostConstruct and @PreDestroy for lifecycle events.  These annotations are also available on the server side. On the client, the annotations are activated by an explicit call to an annotation processor:

    ClientAnnotationProcessor processor = new ClientAnnotationProcessor(bayeuxClient);
    ...
    MyClient mc = new MyClient();
    processor.process(mc);
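
On shutdown the wiring can be undone symmetrically; the annotation processor also offers a deprocess step that invokes the @PreDestroy callbacks (a sketch; I am assuming the method name here):

```java
// Undo the annotation wiring and invoke @PreDestroy callbacks
processor.deprocess(mc);
```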

    Conclusion

Annotations have made Cometd services much simpler to create and much easier to understand.  Normally I’m not a big fan of annotations, as they frequently put too much configuration into the "code", but in this case they are a perfect match for the semantics needed.  In future, we’ll also look at making JAXB annotations work simply with the JSON mechanisms of cometd.

  • Jetty WTP Adaptor

    Not too long ago we had a contribution from Angelo Zerr that gave jetty a native WTP adaptor. We are happy to announce its availability now!
    Shockingly, there is some documentation for this plugin, based on the original documentation provided by Angelo…it was a model contribution, code _and_ docs.
    Jetty WTP Plugin Documentation
    The documentation contains installation instructions and we’ll have it available through the WTP Server Adaptor discovery mechanism soon hopefully.
The plugin itself is largely based on the Tomcat version of the plugin, with the addition of a websocket wizard of Angelo’s.
Feedback on the plugin is welcome, and we have a bugzilla component ‘wtp’ for the plugin to which I encourage people to report issues.
Bugzilla
We plan to add additional versions of the runtime over time and to keep it up to date with the latest jetty releases.