Category: Uncategorized

  • Asynchronous BlazeDS Polling with Jetty 7 Continuations

    Jetty now has an asynchronous implementation for BlazeDS that uses Jetty 7 portable continuations.
    BlazeDS is an open source server-side web messaging technology. It provides real-time data push to flex/flash clients using techniques such as polling and streaming to provide a richer and more responsive experience. The asynchronous implementation works for HTTP polling, and was tested against BlazeDS 3.2.0.
    While the techniques BlazeDS uses make clients more responsive, they also increase the load on the server by forcing it to hold a thread for each idle client. The advantage of using Jetty continuations with BlazeDS is that it lets your flash clients wait for a response without holding a thread the entire time, greatly increasing scalability. Jetty 7 style continuations are also portable: webapps coded to use the continuations work asynchronously on Jetty or any Servlet 3.0 container, and blocking on any Servlet 2.5 container. Greg explains the benefits of (Jetty 7) continuations better than I could.
    To use the asynchronous BlazeDS implementation with one of your applications, go through these quick steps:

    1. Drop jetty-blazeds.jar into your webapp’s classpath
    2. Enable continuations, if you’re using a non-Jetty-7 servlet container:
      • Make sure jetty-continuation-7.jar is on your classpath. Download the latest Jetty distribution from http://www.eclipse.org/jetty/downloads.php and drop lib/jetty-continuation-7*.jar into your webapp’s classpath.
      • Place org.eclipse.jetty.continuation.ContinuationFilter in front of your MessageBrokerServlet. The ContinuationFilter makes it possible for other containers, and even servlet 2.5 containers, to use jetty-7-style portable continuations, which we use as a portability layer on top of asynchronous servlets.

        <!-- Continuation Filter, to enable jetty-7 continuations -->
        <filter>
          <filter-name>ContinuationFilter</filter-name>
          <filter-class>org.eclipse.jetty.continuation.ContinuationFilter</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>ContinuationFilter</filter-name>
          <url-pattern>/messagebroker/*</url-pattern>
        </filter-mapping>

         

    3. Modify your services-config.xml to use Jetty’s AsyncAMFEndpoint instead of AMFEndpoint. AsyncAMFEndpoint uses the same options as AMFEndpoint, e.g.,

      <channel-definition id="my-async-amf" class="mx.messaging.channels.AMFChannel">
        <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfasync"
                  class="org.mortbay.jetty.asyncblazeds.AsyncAMFEndpoint"/>
        <properties>
          <polling-enabled>true</polling-enabled>
          <polling-interval-seconds>0</polling-interval-seconds>
          <max-waiting-poll-requests>10</max-waiting-poll-requests>
          <wait-interval-millis>30000</wait-interval-millis>
          <client-wait-interval-millis>250</client-wait-interval-millis>
        </properties>
      </channel-definition>

       

    Source code is available in svn, and you can check it out and build it:

    $ svn co http://svn.codehaus.org/jetty/jetty/trunk/jetty-blazeds/
    $ cd jetty-blazeds
    $ mvn install

  • The Webtide Experience

    I had a great conversation recently with Benjamin Kuo at socalTECH.com. Webtide is an extremely distributed organization with people around the world, but I am indeed located in the heart of Southern California, Los Angeles. It was a good time to talk about current business trends, the business climate, and how things have changed since the other times I’ve worked in smaller company settings. Overall, these are great days for Webtide: we’re growing, we’re hiring, and we continue to be profitable and stable. Thank you! And have a read here for the interview.

  • Cometd Features and Extensions

    The cometd project is nearing a 1.0 release, and thus we are making a bit of a push to improve the project documentation. As part of this effort, we have realized that there are many cool features and extensions to cometd that have been under-publicized. So this blog is an attempt to give a whirlwind tour of cometd features and extensions.

    Clients and Servers

    The cometd project provides many implementations of the Bayeux protocol.  The javascript and java implementations are most advanced, but there are also perl and python implementations under development within the project. There are also other implementations of Bayeux available outside the cometd project for groovy, flex, .net and atmosphere.

    Javascript Client

    There is now a common cometd-javascript client implementation used as the basis of the cometd-dojo and cometd-jquery implementations (dojox in 1.3.1 still contains a dojo-specific client, but this will eventually be replaced with the common code base). This common code base should make it easy to create implementations for other frameworks (eg ext.js or prototype).
    For simplicity, our documentation gets around the details of which javascript implementation you are using by assuming that your code is written in the context of a:

    // Dojo style
    var cometd = dojox.cometd;

    or

    // jQuery style
    var cometd = $.cometd;

    Java Server

    The cometd-java server was written originally as part of the Jetty-6 servlet container and included support for asynchronous scaling. While still based on jetty utility components, the cometd-java server is now portable: it will run on most servlet containers, and will use the asynchronous features of Jetty or any servlet 3.0 container.

    Java Client

    The cometd-java client is based on the Jetty asynchronous HTTP client, so it is an excellent basis for developing scalable load generators for testing your cometd application. It can also be used in rich java UIs that wish to use cometd to communicate over the internet to a server behind firewalls and proxies.

    Basic Operation

    Publish/Subscribe Messaging

    The core operation of cometd is as a publish/subscribe messaging framework. A message is published to a channel with a URI-like name (eg /chat/room/demo), and cometd will arrange to deliver that message to all subscribers for that channel, whether locally in the server, remotely in a client, or in a client of a clustered server. The subscription may be for the channel itself (/chat/room/demo), a simple wildcard (/chat/room/*) or a deep wildcard (/chat/**).
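    The wildcard rules can be sketched as a small matcher (an illustration of the matching semantics described above, not cometd's actual implementation):

```java
public class ChannelMatch {
    // Returns true if a subscription pattern matches a channel name.
    // "*" matches exactly one segment; "**" matches one or more trailing segments.
    static boolean matches(String pattern, String channel) {
        String[] p = pattern.split("/");
        String[] c = channel.split("/");
        for (int i = 1; i < p.length; i++) {           // index 0 is the empty leading segment
            if (i >= c.length) return false;           // channel is shorter than the pattern
            if (p[i].equals("**")) return true;        // deep wildcard matches the rest
            if (!p[i].equals("*") && !p[i].equals(c[i])) return false;
        }
        return p.length == c.length;                   // "*" does not match extra segments
    }

    public static void main(String[] args) {
        System.out.println(matches("/chat/room/demo", "/chat/room/demo")); // true
        System.out.println(matches("/chat/room/*", "/chat/room/demo"));    // true
        System.out.println(matches("/chat/**", "/chat/room/demo"));        // true
        System.out.println(matches("/chat/room/*", "/chat/room/a/b"));     // false
    }
}
```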
    Subscription in javascript needs to provide a callback function to handle the messages:

    // Some initialization code
    var subscription1 = cometd.addListener('/meta/connect', function() { ... });
    var subscription2 = cometd.subscribe('/foo/bar/', function() { ... });

    // Some de-initialization code
    cometd.unsubscribe(subscription2);
    cometd.removeListener(subscription1);

    Publishing a message in javascript is simply a matter of passing the channel name and the message itself:

    cometd.publish('/mychannel', { wibble: { foo: 'bar' }, wobble: 2 });

    Similar APIs for publish/subscribe are available via a semi-standard cometd API, and several java implementations are now using it.

    Service Channels

    With publish/subscribe, the basic feature set for a chat room is available. But non-trivial applications cannot be implemented with all communication broadcast on publicly accessible channels.  Thus cometd has a convention that any channel in the /meta/** or /service/** name space is a non-broadcast service channel (meta channels are used by the protocol itself and service channels are available to the application).  This means that a message published to a service channel will only be delivered to server side clients, listeners and extensions. A message to a service channel will never be remotely distributed to a remote client unless an application explicitly delivers or publishes a message to a particular client.
    This allows a client to publish a message to a service channel and know that it will only be received by the server.
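    As a sketch, the routing rule amounts to a simple predicate on the channel name (a simplification; the real server also handles meta channels specially, as they belong to the protocol itself):

```java
public class ChannelPolicy {
    // Broadcast messages are forwarded to all remote subscribers; messages on
    // /meta/** and /service/** channels stay on the server side.
    static boolean isBroadcast(String channel) {
        return !(channel.startsWith("/meta/") || channel.startsWith("/service/"));
    }

    public static void main(String[] args) {
        System.out.println(isBroadcast("/chat/room/demo"));       // true
        System.out.println(isBroadcast("/service/chat/private")); // false
        System.out.println(isBroadcast("/meta/connect"));         // false
    }
}
```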

    Private Message Delivery

    A cometd application often needs to deliver a message to a specific client.  Thus as an alternative to publishing to a channel, a java server side application can deliver a message to a specific user:

    Client client = bayeux.getClient(someClientId);
    client.deliver(fromClient, "/some/channel", aMsg, msgId);

    Note that a private message delivery still identifies a channel. This is not to broadcast on that channel, but to identify the message handler within the client. This channel may be a service channel, so the client will know it is a private message, or it can be an ordinary channel, in which case the client cannot tell if the message was published or delivered to it.
    Such private deliveries are often used to tell a newly subscribed client the latest state message. For example, consider a client that has subscribed to /portfolio/stock/GOOG.  That client needs to know the current price of the stock and should not have to wait until the price changes. Thus the portfolio application can detect the subscription server side and deliver a private message to the subscriber to tell them the latest price.

    Lazy Messages

    One of the key features of comet is delivering messages to clients from the server with low latency, but not all messages need low latency. Consider a system status message, sent to all users, telling them something non-urgent (eg maintenance scheduled for later in the day). There is no need for that message to be sent to every single user on the system with minimal latency, and there is a significant cost in trying to do so. If you have 10,000 users, then waking up 10k long polls will take a few seconds of server capacity which might be better used for urgent application events.
    Thus the cometd-java server has the concept of lazy messages.  A message may be flagged as lazy by publishing it to a channel that is flagged as lazy (ChannelImpl#setLazy(boolean)) or by publishing to any channel with the ChannelImpl#publishLazy(…) method. [ Note these methods are not yet on the standard API, but should be before 1.0. Until they are, you must cast to ChannelImpl ].
    A lazy message will be queued for a client, but it will not wake up that client's long poll. So a lazy message will only be delivered when another non-lazy message is sent to that client, or when the long poll naturally times out (in 30 to 200 seconds). Thus low priority messages can be delivered with minimal additional load on the server.
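    The behaviour can be sketched as a per-client queue whose long poll is only woken by non-lazy messages (a simplification of the real implementation, which of course deals with real transports and timeouts):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class LazyQueueSketch {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean pollWoken = false;

    // Queue a message for this client; only non-lazy messages wake the long poll.
    void queueMessage(String msg, boolean lazy) {
        queue.add(msg);
        if (!lazy) pollWoken = true;
    }

    // Called when the long poll is woken or times out: everything queued
    // (lazy and non-lazy alike) is flushed in one response.
    List<String> flush() {
        List<String> out = new ArrayList<>(queue);
        queue.clear();
        pollWoken = false;
        return out;
    }

    boolean isPollWoken() { return pollWoken; }

    public static void main(String[] args) {
        LazyQueueSketch client = new LazyQueueSketch();
        client.queueMessage("maintenance at 18:00", true);  // lazy: poll stays parked
        System.out.println(client.isPollWoken());           // false
        client.queueMessage("price update", false);         // non-lazy: wakes the poll
        System.out.println(client.isPollWoken());           // true
        System.out.println(client.flush().size());          // 2: delivered together
    }
}
```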

    Message Batching

    Comet applications will often need to send several related messages in response to the same action (for example subscribing to a chat room and sending a hello message). To maximize communication efficiency, it is desirable that these multiple messages are transported in the same HTTP message. Thus both the cometd client and server APIs support the concept of batching. Once a batch is started, messages for a client are queued but not delivered until the batch is ended. Batches may be nested, so it is safe to start a batch and call some other code that may do its own batching.
    On the javascript client side, batching can be achieved with code like:

    cometd.startBatch();
    cometd.unsubscribe(myChatSubscription);
    cometd.publish("/chat/demo", {text: 'Elvis has left the building', from: 'Elvis'});
    cometd.endBatch();

    On the java server side, batching can be achieved with code like:

    public void handleMessage(Client from, Message message)
    {
        from.startBatch();
        processMessageForAll(from, message);
        from.deliver(from, message.getChannel(), processResponseForOne(message), null);
        from.endBatch();
    }

    This will send any messages published for all users, and the private reply to the client, in a single HTTP response.
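    The nesting semantics can be sketched with a depth counter: messages queue while any batch is open and are flushed together when the outermost batch ends (an illustration of the behaviour, not the real cometd API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BatchSketch {
    private int depth = 0;
    private final Queue<String> pending = new ArrayDeque<>();
    final List<List<String>> flushes = new ArrayList<>();  // each entry = one HTTP response

    void startBatch() { depth++; }

    void endBatch() {
        // only the outermost endBatch triggers a flush
        if (--depth == 0 && !pending.isEmpty()) flushNow();
    }

    void publish(String msg) {
        pending.add(msg);
        if (depth == 0) flushNow();  // no batch active: send immediately
    }

    private void flushNow() {
        flushes.add(new ArrayList<>(pending));  // all queued messages go out together
        pending.clear();
    }

    public static void main(String[] args) {
        BatchSketch c = new BatchSketch();
        c.startBatch();
        c.publish("unsubscribe");
        c.startBatch();            // nested batch is safe
        c.publish("goodbye");
        c.endBatch();              // inner end: nothing sent yet
        c.endBatch();              // outer end: one flush with both messages
        System.out.println(c.flushes.size());  // 1
        System.out.println(c.flushes.get(0));  // [unsubscribe, goodbye]
    }
}
```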

    Listeners, Data Filters and Extensions

    There are several different ways that application code can attach to the cometd clients and servers in order to receive events about cometd and to modify cometd behaviour:

    • Listeners are available on both client and server implementations and can inform the application of cometd events such as handshake, connections lost, channel subscriptions as well as message delivery.
    • DataFilters are available in the java server and can be mapped to channels so that they filter the data of any messages published to those channels. This allows a 3rd party to review an application and to apply validation and verification logic as an aspect rather than being baked in (which application developers never do). There are several utility data filters available.
    • Extensions are available on both client and server implementations and allow inbound and outbound messages to be intercepted, validated, modified, injected and/or deleted.  The utility extensions provided are detailed in the next section.

    Security Policy

    The SecurityPolicy API is available in the java server and is used to authorize handshakes, channel creation, channel subscription and publishing. If a SecurityPolicy implementation is constructed with a reference to the Bayeux instance, then it can call the getCurrentRequest() method to access the current HttpServletRequest, and thus use standard web authentication and/or HttpSessions when authorizing actions.

    Extensions

    Timestamp Extension

    The timestamp extension simply adds the optional timestamp field to every message sent.

    Timesync Extension

    The timesync extension implements NTP-like time synchronization, so a client can be aware of the offset between its local clock and the server's clock. This allows an application (eg an auction site) to send a single message with the semantics “the auction closes at 18:45 EST”, and then each client can use its own local clock to count down the auction, without the need for wasteful tick messages from the server.
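    The NTP-like arithmetic uses four timestamps: client send (t0), server receive (t1), server send (t2) and client receive (t3). A sketch of the calculation (the names here are illustrative, not the extension's actual message fields):

```java
public class TimesyncSketch {
    // Estimated offset of the server clock relative to the client clock,
    // averaging out the network transit time in each direction.
    static long offset(long t0, long t1, long t2, long t3) {
        return ((t1 - t0) + (t2 - t3)) / 2;
    }

    // Estimated one-way network lag: round trip minus server processing, halved.
    static long lag(long t0, long t1, long t2, long t3) {
        return ((t3 - t0) - (t2 - t1)) / 2;
    }

    public static void main(String[] args) {
        // Client clock is 1000ms behind the server; 40ms each way on the network.
        long t0 = 0, t1 = 1040, t2 = 1060, t3 = 100;
        System.out.println(offset(t0, t1, t2, t3)); // 1000
        System.out.println(lag(t0, t1, t2, t3));    // 40
    }
}
```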

    Acknowledged Message Extension

    The Cometd/Bayeux protocol is carried over TCP/IP, so it is intrinsically reliable and messages will not get corrupted. However, with cometd, there are some edge cases where messages might get lost (dropped connections) or might arrive out of order (over multiple connections).
    The acknowledged message extension piggybacks message acknowledgements onto the long polling transports of cometd, so that messages are not lost or delivered out of order.
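    The idea can be sketched as a server-side resend queue keyed by an ack id: batches are kept until the client echoes the id, and unacknowledged batches are replayed in order on reconnect (a simplification of the real extension, which carries the ids in the /meta/connect exchange):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class AckSketch {
    private final SortedMap<Long, List<String>> unacked = new TreeMap<>();
    private long nextId = 1;

    // Deliver a batch of messages tagged with an ack id; keep it until acked.
    long deliver(List<String> batch) {
        long id = nextId++;
        unacked.put(id, new ArrayList<>(batch));
        return id;
    }

    // Client acknowledges everything up to and including ackId.
    void ack(long ackId) {
        unacked.headMap(ackId + 1).clear();
    }

    // On reconnect, resend anything the client never acknowledged, in order.
    List<String> resend() {
        List<String> out = new ArrayList<>();
        unacked.values().forEach(out::addAll);
        return out;
    }

    public static void main(String[] args) {
        AckSketch server = new AckSketch();
        long id1 = server.deliver(List.of("a", "b"));
        server.deliver(List.of("c"));
        server.ack(id1);                     // "a","b" confirmed; "c" was lost
        System.out.println(server.resend()); // [c]
    }
}
```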

    Reload Extension

    The client-side-only reload extension is provided to allow a comet-enabled web page to be reloaded without needing to re-handshake. The existing client ID can be passed from page to page, so that a Comet/Ajax style of user interface can be merged with a more traditional page-based approach.

    Clustering

    Cometd servers may be aggregated into a cluster using Oort, which is a collection of extensions that use the java cometd client to link the servers together. Currently Oort is under-documented, so it is best understood by reading this summary and then looking at the Auction example.

    Observing Comets

    The Oort class allows cometd servers to observe each other, which means opening bayeux connections in both directions. Observations are set up with the Oort#observeComet method and can be used to set up arbitrary topologies of servers (although fully connected is the norm and is implemented by the Oort cloud).
    Once observed, Oort comets may cluster particular channels by calling Oort#observeChannel, which will cause the local server to subscribe to that channel on all observed comet servers. Any messages received on those subscriptions will be re-published to the local instance of that channel (with loop prevention). Thus messages published to an observed channel will be published on all observed comet servers.

    The Oort Cloud

    The Oort cloud is a self-organizing cluster of Oort comet servers that use the Oort-observed /oort/cloud channel to publish information about discovered Oort comets. Once an Oort comet is told of another via the /oort/cloud channel, it will observe it and then publish its own list of known Oort comets. This allows a fully connected cluster of Oort comets to self-organize with only one or two comet nodes known in common.

    Seti

    Once an Oort cloud has been established, a load balancer will be needed to spread load over the cluster, so a user might be connected to any node in the cloud. Seti (as in the Search for Extra-Terrestrial Intelligence) is a mechanism that allows a private message to be sent to a particular user who may be located anywhere in the cloud. Sharding and location caching can be used to make this more efficient.

    Examples

    Chat

    Chat is the hello world of web-2.0 and the introductory demo for cometd. The server side of the chat monitors the join messages to maintain and distribute a membership list for each room. A service channel is used to provide a private message service.
    There is both a dojo chat client and jquery chat client provided.

    Auction

    The Auction demonstration provided shows the Oort and timesync extensions in use to provide a moderately complete example of a cometd application.


    Note that this example uses cometd-dojo for the client, but the UI is implemented in a mashup of prototype, behaviour and other js libs. Volunteers are desperately needed to make this all dojo or all jquery.

    Archetypes

    Assembling the components needed for a cometd web application can be a little complex, as the server components need to be mixed with the javascript framework and the cometd client. To make this process easier, cometd now supports maven archetypes that can build a blank cometd war project in a few lines.
    So what are you waiting for? Go comet!

  • Continuations to Continue

    Jetty-6 Continuations introduced the concept of asynchronous servlets to provide scalability and quality of service to web 2.0 applications such as chat, collaborative editing and price publishing, as well as powering HTTP based frameworks like cometd, apache camel, openfire XMPP and flex BlazeDS.
    With the introduction of similar asynchronous features in Servlet-3.0, some have suggested that the Continuation API would be deprecated. Instead, the Continuation API has been updated to provide a simplified, portable API that runs asynchronously on any servlet 3.0 container as well as on Jetty (6, 7 & 8). Continuations will work synchronously (blocking) on any 2.5 servlet container. Thus programming to the Continuations API allows your application to achieve asynchronicity today, without waiting for the release of stable 3.0 containers (and needing to upgrade all your associated infrastructure).

    Continuation Improvements

    The old continuation API threw an exception when the continuation was suspended, so that the thread would exit the service method of the servlet/filter. This caused a potential race condition: a continuation needed to be registered with the asynchronous service before the suspend, so the service could do a resume before the actual suspend unless a common mutex was used.
    Also, the old continuation API had a waiting continuation that would work on non-jetty servers. However, the behaviour of this waiting continuation was a little different to the normal continuation, so code had to be carefully written to work for both.
    The new continuation API does not throw an exception from suspend, so the continuation can be suspended before it is registered with any services, and the mutex is no longer needed. With the use of a ContinuationFilter for non-asynchronous containers, the continuation will now behave identically in all servers.

    Continuations and Servlet 3.0

    The servlet 3.0 asynchronous API introduced some additional asynchronous features not supported by jetty 6 continuations, including:

    • The ability to complete an asynchronous request without dispatching
    • Support for wrapped requests and responses.
    • Listeners for asynchronous events
    • Dispatching asynchronous requests to specific contexts and/or resources

    While powerful, these additional features may also be very complicated and confusing. Thus the new Continuation API has cherry picked the good ideas and represents a good compromise between power and complexity.  The servlet 3.0 features adopted are:

    • Completing a continuation without resuming.
    • Support for response wrappers.
    • Optional listeners for asynchronous events.

     

    Using The Continuation API

    The new continuation API is available in Jetty-7 and is not expected to significantly change in future releases. Also, the continuation library is intended to be deployed in WEB-INF/lib and is portable. Thus the jetty-7 continuation jar will work asynchronously when deployed in jetty-6, jetty-7, jetty-8 or any servlet 3.0 container.

    Obtaining a Continuation

    The ContinuationSupport factory class can be used to obtain a continuation instance associated with a request: 

        Continuation continuation = ContinuationSupport.getContinuation(request);

    Suspending a Request

    To suspend a request, the suspend method is called on the continuation:

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          ...
          continuation.suspend();
          ...
      }

    After this method has been called, the lifecycle of the request will be extended beyond the return to the container from the Servlet.service(…) and Filter.doFilter(…) calls. After these dispatch methods return, a suspended request will not be committed and a response will not be sent to the HTTP client.

    Once a request is suspended, the continuation should be registered with an asynchronous service so that it may be used by an asynchronous callback once the waited for event happens.
    The request will be suspended until either continuation.resume() or continuation.complete() is called. If neither is called, then the continuation will timeout after a default period, or after a time set before the suspend by a call to continuation.setTimeout(long). If no timeout listener resumes or completes the continuation, then the continuation is resumed with continuation.isExpired() returning true.
    There is a variation of suspend for use with request wrappers and the complete lifecycle (see below):

        continuation.suspend(response);

    Suspension is analogous to the servlet 3.0 request.startAsync() method. Unlike jetty-6 continuations, an exception is not thrown by suspend and the method should return normally. This allows the registration of the continuation to occur after suspension and avoids the need for a mutex. If an exception is desirable (to bypass code that is unaware of continuations and may try to commit the response), then continuation.undispatch() may be called to exit the current thread from the current dispatch by throwing a ContinuationThrowable.

    Resuming a Request

    Once an asynchronous event has occurred, the continuation can be resumed: 

      void myAsyncCallback(Object results)
      {
          continuation.setAttribute("results", results);
          continuation.resume();
      }

    Once a continuation is resumed, the request is redispatched to the servlet container, almost as if the request had been received again. However during the redispatch, the continuation.isInitial() method returns false and any attributes set by the asynchronous handler are available.

    Continuation resume is analogous to Servlet 3.0 AsyncContext.dispatch().

    Completing a Request

    As an alternative to resuming a request, an asynchronous handler may write the response itself. After writing the response, the handler must indicate that request handling is complete by calling the complete method:

      void myAsyncCallback(Object results)
      {
          writeResults(continuation.getServletResponse(), results);
          continuation.complete();
      }

    After complete is called, the container schedules the response to be committed and flushed.

    Continuation complete is analogous to Servlet 3.0 AsyncContext.complete().

    Continuation Listeners

    An application may monitor the status of a continuation by using a ContinuationListener:

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          ...
          Continuation continuation = ContinuationSupport.getContinuation(request);
          continuation.addContinuationListener(new ContinuationListener()
          {
              public void onTimeout(Continuation continuation) { ... }
              public void onComplete(Continuation continuation) { ... }
          });
          continuation.suspend();
          ...
      }

    Continuation listeners are analogous to Servlet 3.0 AsyncListeners.
     

    Continuation Patterns

    Suspend Resume Pattern

    The suspend/resume style is used when a servlet and/or filter is used to generate the response after an asynchronous wait that is terminated by an asynchronous handler. Typically a request attribute is used to pass results and to indicate whether the request has already been suspended.

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          // if we need to get asynchronous results
          Object results = request.getAttribute("results");
          if (results == null)
          {
              final Continuation continuation = ContinuationSupport.getContinuation(request);

              // if this is a timeout
              if (continuation.isExpired())
              {
                  sendMyTimeoutResponse(response);
                  return;
              }

              // suspend the request
              continuation.suspend(); // always suspend before registration

              // register with the async service.  The code here will depend on
              // the service used (see Jetty HttpClient for example)
              myAsyncHandler.register(new MyHandler()
              {
                  public void onMyEvent(Object result)
                  {
                      continuation.setAttribute("results", result);
                      continuation.resume();
                  }
              });
              return; // or continuation.undispatch();
          }

          // Send the results
          sendMyResultResponse(response, results);
      }

    This style is very good when the response needs the facilities of the servlet container (eg it uses a web framework), or if one event may resume many requests, so that the container's thread pool can be used to handle each of them.

    Suspend Complete Pattern

    The suspend/complete style is used when an asynchronous handler is used to generate the response: 

      void doGet(HttpServletRequest request, HttpServletResponse response)
      {
          final Continuation continuation = ContinuationSupport.getContinuation(request);

          // if this is a timeout
          if (continuation.isExpired())
          {
              sendMyTimeoutResponse(request, response);
              return;
          }

          // suspend the request
          continuation.suspend(response); // response may be wrapped.

          // register with the async service.  The code here will depend on
          // the service used (see Jetty HttpClient for example)
          myAsyncHandler.register(new MyHandler()
          {
              public void onMyEvent(Object result)
              {
                  sendMyResultResponse(continuation.getServletResponse(), result);
                  continuation.complete();
              }
          });
      }

    This style is very good when the response does not need the facilities of the servlet container (eg it does not use a web framework) and when an event will resume only one continuation. If many responses are to be sent (eg a chat room), then writing one response may block and cause a DOS on the other responses.
     

    Continuation Examples

    Chat Servlet

    The ChatServlet example shows how the suspend/resume style can be used to directly code a chat room. The same principles are applied in frameworks like cometd.org, which provide a richer environment for such applications, based on Continuations.

    Quality of Service Filter

    The QoSFilter (javadoc) uses the suspend/resume style to limit the number of requests simultaneously within the filter. This can be used to protect a JDBC connection pool or other limited resource from too many simultaneous requests.

    If too many requests are received, the extra requests wait for a short time on a semaphore before being suspended. As requests within the filter return, they use a priority queue to resume the suspended requests. This allows your authenticated or priority users to get a better share of your server's resources when the machine is under load.
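    The admission control at the heart of this pattern can be sketched with a plain java.util.concurrent.Semaphore (the continuation suspend/resume machinery and the priority queues are omitted; this only shows the bounded-pass idea):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class QoSSketch {
    private final Semaphore passes;
    private final long waitMs;

    QoSSketch(int maxConcurrent, long waitMs) {
        this.passes = new Semaphore(maxConcurrent, true);  // fair: FIFO among waiters
        this.waitMs = waitMs;
    }

    // Returns true if the request may proceed now; false means the real
    // filter would suspend the continuation and queue it by priority.
    boolean tryEnter() throws InterruptedException {
        return passes.tryAcquire(waitMs, TimeUnit.MILLISECONDS);
    }

    // On exit a permit is released; the real filter resumes a suspended request.
    void exit() { passes.release(); }

    public static void main(String[] args) throws InterruptedException {
        QoSSketch qos = new QoSSketch(2, 10);
        System.out.println(qos.tryEnter()); // true
        System.out.println(qos.tryEnter()); // true
        System.out.println(qos.tryEnter()); // false: over the limit, would be suspended
        qos.exit();
        System.out.println(qos.tryEnter()); // true again after a request completes
    }
}
```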

    Denial of Service Filter

    The DosFilter (javadoc) is similar to the QoSFilter, but protects a web application from a denial of service attack (as best you can from within a web application). If too many requests are detected coming from one source, then those requests are suspended and a warning is generated. This works on the assumption that the attacker's client may be written in a simple blocking style, so by suspending you are hopefully consuming their resources. True protection from DOS can only be achieved by network devices (or eugenics :).

    Proxy Servlet

    The ProxyServlet uses the suspend/complete style and the jetty asynchronous client to implement a scalable Proxy server (or transparent proxy).

    Gzip Filter

    The jetty GzipFilter is a filter that implements dynamic compression by wrapping the response objects. This filter has been enhanced to understand continuations, so that if a request is suspended in suspend/complete style and the wrapped response is passed to the asynchronous handler, then a ContinuationListener is used to finish the wrapped response. This allows the GzipFilter to work with the asynchronous ProxyServlet and to compress the proxied responses.
     

    Where do you get it?

    You can read about it, or download it with jetty or include it in your maven project like this pom.xml.
     

  • Roadmap for Jetty-6, Jetty-7 and Jetty-8

    This blog updates the roadmap for jetty-6, jetty-7 and jetty-8 with the latest plans resulting from the move to the Eclipse Foundation and the delay in the servlet-3.0 specification. Previously jetty-7 was intended to be the servlet-3.0 release, but with the move to eclipse and the delay of JSR-315, it was decided to defer servlet-3.0 support to jetty-8, later this year. Thus the current active branches of jetty are:

    Jetty-6 @ codehaus & mortbay

    The current stable branch is jetty-6, for servlet-2.5 and java-1.4 (some modules are 1.5). It is in the org.mortbay.* package space and is licensed under the apache 2.0 license. However, it is now mostly in maintenance mode, and new features will not be added to jetty-6 unless there are compelling reasons to do so. It includes support for both an HTTP server and client, and comes bundled with a cometd server.
    Jetty-6 is the release for established, production-ready projects.

    Jetty-7 @ eclipse

    The current development branch is jetty-7 for servlet-2.5 and java-1.5.  It is in the org.eclipse.jetty.* package space and is licensed under both the apache 2.0 and eclipse 1.0 licenses and may be distributed under the terms of either license.  Jetty-7 represents a moderate refactor of the jetty code base:

    • Moved to the org.eclipse.jetty packages
    • Remodularized so that dependencies for client, server and servlet container are more separable
    • Updated architecture to anticipate the needs of servlet-3.0
    • Support for some servlet-3.0 features, including
      • asynchronous servlets (updated continuations)
      • web-fragment.xml
      • META-INF/resource static content in jars
    • Improved OSGi integration and availability of OSGi bundles as well as maven artefacts

    The intent of jetty-7 is to allow users to transition to the updated architecture and to access some servlet-3.0 features within a servlet 2.5 container, without the need to update to java 1.6 or to wait for the final specification later this year. There are milestone builds of jetty-7 available already, and we hope to have an official eclipse release in the next month or two.
    The cometd client and server are now in the cometd.org project and are built against jetty-7. Some jetty integrations (e.g. jetty-maven-plugin, terracotta, wadi, etc.) and distributions (e.g. deb, rpm, hightide) will remain at codehaus and are now built from codehaus trunk.
    Jetty-7 is the release for projects starting development now.

    Jetty-8 @ eclipse

    The current experimental branch is jetty-8 for servlet-3.0 and java-1.6. It is in the org.eclipse.jetty.* package space and is licensed under both the apache 2.0 and eclipse 1.0 licenses and may be distributed under the terms of either license. Jetty-8 is being kept in lock-step with jetty-7 as much as possible, so that it represents essentially the same server, but rebuilt with java-1.6 and using the standard servlet-3.0 APIs to access the features already available in jetty-7.
    Jetty-8 is the branch for people who wish to experiment with the emerging APIs now.

    Webtide @ JavaOne

    If you want more information about what exactly these jetty and servlet-3.0 features are, why not come to JavaOne 2009?! Webtide will have a small booth in the expo (where you will mostly find me), and Sun have invited me to participate in their technical session on Servlet 3.0 at JavaOne, together with Rajiv Mordani and Jan Luehe. I’ll be presenting a section on the Asynchronous Servlets API and giving a demonstration that uses some ease-of-deployment features to deploy a webapp on glassfish using the Jetty asynchronous HTTP client in a 3.0 asynchronous servlet. The session is TS-3790.

  • Bidirectional Web Transfer Protocol – BWTP

    I really like the idea behind the HTML5 Websocket API – namely that a datagram model should be used for web application communication rather than a request/response paradigm (this is also the idea behind cometd). But unfortunately, the proposed protocol to carry websocket traffic is neither a good protocol nor well specified.
    After failing in an attempt to get the WebSocket protocol improved, I decided to try to define a better solution.  I had intended to work privately on this for a while, but the twittersphere has pointed out an early draft, so I’ve put the work-in-progress on http://bwtp.wikidot.com and I invite review, feedback and collaborators.
    So what’s so bad about the Websocket protocol proposal? The main things I dislike are that the document is impenetrable, the protocol is inflexible, and it is entirely different from other IETF application protocols for no good reason. But rather than throw (more) mud, I’d rather sing the praises of the approach that I have taken:

    • The BWTP protocol is very much an IETF-style application protocol. It is not trying to be a revolution in web protocols, but simply to solve the problems at hand, without discarding decades of protocol experience.
    • The protocol document is written very much in IETF style: in fact it is just RFC 2616 with anything non-bidirectional ripped out. BNF is used to specify the protocol, and unlike the WebSocket proposal there are no binary blobs or algorithmic specifications.
    • The principle of “be strict in what you generate and forgiving in what you parse” is adhered to.
    • Because of its similarity to HTTP, it is intended that existing HTTP clients, servers and intermediaries will be able to be minimally updated to support this protocol. This will not require entirely new protocol libraries to be developed.
    • Existing development and analysis tools will also be able to be easily updated, plus the protocol is mostly human readable.
    • It supports full MIME-encapsulated payloads, so non-text and/or compressed payloads can be sent without the client and server needing to make assumptions about content.
    • It has a default metadata mechanism, so that it can carry detailed per-message metadata, but without the redundant information sent in normal HTTP.
    • The minimal overhead per message is 11 bytes, which is a little more than the websocket proposal’s, but hardly significant.
    • There is no formal channel mechanism like BEEP has, but each message may be to/from a different URI if need be.  This makes multiplexing easy to support.
    • There is no formal segmentation mechanism, but Content-Ranges are supported so that large content can be sent in smaller bits if desired.
    • The protocol recognizes that intermediaries (proxies) may wish to be an active party on a bidirectional connection. For example, this proposal allows an intermediary to initiate an orderly, lossless close of the connection. I’m sure innovative proxy applications will be developed over time, just as they have been for HTTP.
    • BWTP supports the current HTML WebSocket API well, but is also flexible and extensible, so that non-browser clients may use it and future APIs will not need protocol changes.

    If you are interested, I encourage you to join the IETF Hybi mailing list and to join the discussion regarding the bidirectional web.

  • Google Wave – A new paradigm?

    The announcement of Google Wave is a bold declaration of where Google sees the future of the web. Google, unsurprisingly enough, sees the future of the web as a server side paradigm, with dynamic updates being used to drive the thin client model to capture even more of the tasks that were once done client side. Google are extending the server side model of webmail to apply to applications that have been fundamentally client side, such as document authoring, IM and chat.

    Some have said that Wave’s use of XMPP represents the death of HTTP, but I think they’ve got the wrong end of the banana! Wave is using XMPP to federate servers together, not clients.  When it comes to client/server communications, Wave is using GWT over good old HTTP, with some push extensions so that a client can get a dynamic view onto a Wave document, which is a fundamentally server side entity.  If anything, Wave has declared that HTTP is king and a near immortal one at that.

    Google’s use of XMPP is roughly equivalent to the existing use of SMTP between mail servers. Instead of passing mail documents between servers using a store and forward model, Wave has the servers dynamically collaborating to maintain a live Wave document that contains content, style, history, permissions and private content. The protocols that Wave might put on the endangered species list are SMTP, POP and IMAP (but have any protocols gone extinct? Has a gopher been sighted in the wild recently or only in captivity?).

    If Wave is successful (and it certainly looks pretty compelling), then more traditionally client side state is going to be captured on the server side. This is a great model for Google, as it lets them use their massive server-side databases to power server-side robots like spelly and rosey, which access the vast databases of Google to do contextual spell checking and translation. You will never get such robots running client side, and it is services like these that make Google confident that they can offer better wave servers than anybody else; hence they do not fear opening up their Wave servers to competition. So Google’s webmail competitors had better start thinking of compelling reasons that people will want to host their waves on non-Google servers.

    Of course for Jetty, Google Wave is just a brilliant story. To implement a Wave server, you will need a flexible, performant web server that can support dynamic push content well and will affordably scale into your wave clouds (should they be called oceans rather than clouds now?). Jetty is the ideal Wave server! In fact, because Wave uses GWT, Google AppEngine and links to shindig, it is already based on and/or using Google services that themselves are based on or use Jetty.

    For our other key project, cometd.org, the picture is a little less clear. Google Wave does its own comet implementation based on long polling using GWT RPC. But Wave reinforces that comet is now a core web paradigm. Any alternative implementations of Wave that do not use the Google GWT code base would do well to look to cometd.org as a core technology.

     

  • Webtide/Jetty gathering at JavaOne

    For SnoracleZero (aka Java One) this year, we are planning a social get-together of Jetty users and Webtide clients at 8pm on Tuesday, June 2.

    If you’d like to come along, email javaone@webtide.com and we’ll pick a venue depending on the estimated numbers.  See you there!

     

  • Servlet 3.0 Proposed "Final" Draft

    In my December 2008 blog, I strongly criticised the Servlet 3.0 JSR-315 process and the resulting Public Review Draft, describing it as a “poor document and the product of a discordant expert group (EG) working within a flawed process”, and as producing a “Frankenstein monster, cobbled together from eviscerated good ideas and misguided best intentions”.
    Perhaps because of these harsh words (or more probably in spite of them), JSR-315 has become significantly less discordant and some good technical progress has been made. While I remain somewhat concerned about the process (e.g. we have a Proposed Final Draft while some significant issues have yet to be resolved and/or prototyped), I’d like to focus on the improved spirit of the group and highlight some of the technical achievements that have resulted.

    Asynchronous Servlets

    JSR-315 has made significant progress on asynchronous servlets. The proposal, identified in my update on the Public Review Draft, to define a specific dispatch type for async requests has been adopted, and that has resulted in a very workable asynchronous servlet proposal. Once asynchronous dispatches were separated from normal dispatches, most of the differing opinions about how filters should apply, and whether forward semantics should apply, became irrelevant. As a result, the methods previously named AsyncContext.forward(…) have been renamed to AsyncContext.dispatch(…), and there is general agreement on and support for the different asynchronous styles of usage possible with this API.
    While I think the final async proposal is far from perfect and perhaps over-complex, I don’t think any of the proposals (including my own) could perfectly retrofit asynchronous behaviour to the servlet spec. The benefit of the complexity is that the proposal supports multiple asynchronous paradigms and usage styles well. Most of my prior complaints were about specific usage styles, and are thus not so important if multiple styles are well supported. Without a single imposed asynchronous model, there will be significant opportunity for frameworks to innovate in providing various asynchronous models to the developer community. To this end, the Jetty continuation style of jetty-6 has been updated in jetty-7 with ideas from servlet-3.0, and should now be seen as a framework that builds upon the servlet-3.0 capabilities.
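
    To make the adopted API concrete, here is a rough Java sketch of the asynchronous dispatch pattern. Only startAsync, AsyncContext.dispatch/setTimeout, asyncSupported and the separate async dispatch type come from the Proposed Final Draft; the event source and its listener are hypothetical glue, and the snippet is illustrative rather than a drop-in implementation.

    ```java
    // Sketch only: a servlet-3.0 asynchronous servlet as described in the PFD.
    // "eventSource" and EventListener are hypothetical; the AsyncContext
    // calls are from the draft specification.
    @WebServlet(urlPatterns = "/events", asyncSupported = true)
    public class EventServlet extends HttpServlet {
        protected void doGet(final HttpServletRequest request,
                             final HttpServletResponse response)
                throws ServletException, IOException {
            Object event = request.getAttribute("event");
            if (event == null) {
                // First (normal) dispatch: suspend without holding a thread.
                final AsyncContext async = request.startAsync(request, response);
                async.setTimeout(30000L);
                eventSource.addListener(new EventListener() {
                    public void onEvent(Object e) {
                        request.setAttribute("event", e);
                        async.dispatch(); // re-dispatched with the async dispatch type
                    }
                });
            } else {
                // Second (async) dispatch: generate the response normally.
                response.getWriter().println(event);
            }
        }
    }
    ```

    Because the re-dispatch carries its own dispatch type, filters configured for normal requests are not re-applied unless they opt in, which is exactly the separation that resolved most of the disagreements described above.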

    Annotations and pluggability

    Some of the key new features of servlet 3.0 provide increased support for ease of deployment, with new ways to discover, configure and deploy Filters and Servlets:

    • Annotated filters and servlets may be deployed without the need for a web.xml entry.
    • Jars may contain /META-INF/web-fragment.xml files with a subset of web.xml configuration.
    • Programmatic configuration of Filters and Servlets from ServletContextListeners, which are potentially discovered in /META-INF/*.tld files within jars.
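
    As an illustration of the web-fragment mechanism, a library jar might carry a META-INF/web-fragment.xml such as the following sketch (the fragment name, filter name and class are hypothetical):

    ```xml
    <!-- META-INF/web-fragment.xml inside a library jar -->
    <web-fragment>
      <name>MyLibFragment</name>
      <filter>
        <filter-name>AuditFilter</filter-name>
        <filter-class>com.example.AuditFilter</filter-class>
      </filter>
      <filter-mapping>
        <filter-name>AuditFilter</filter-name>
        <url-pattern>/*</url-pattern>
      </filter-mapping>
    </web-fragment>
    ```

    The container merges this configuration with the webapp’s web.xml, so the jar can be dropped into WEB-INF/lib without any edits to the webapp itself.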

    Since the Public Review Draft, an additional feature has been added for automatic discovery of web application configuration:

    • ServletContainerInitializers are discovered via the jar services API and can specify a list of types that they handle. Any classes of those types discovered in any jar contained in WEB-INF/lib are passed to the ServletContainerInitializer, which is able to use the same programmatic configuration APIs as ServletContextListeners.
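
    Because discovery uses the standard jar services API, a framework jar only needs to ship a provider-configuration file naming its initializer, for example (the class name here is hypothetical):

    ```
    # File: META-INF/services/javax.servlet.ServletContainerInitializer
    com.example.MyFrameworkInitializer
    ```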

    I believe these mechanisms are good improvements to the specification and I support their inclusion. I previously expressed concerns about the flexibility and optionality of their usage, specifically:

    • Accidental Deployment: Web applications can contain many third-party jars, and deployers may not be willing to trust all of them to the same degree to deploy and configure arbitrary filters and servlets.
    • Slow Deployment: Web applications can contain many jars, and scanning the classes of all of them could slow deployments.
    • Ordering: There was no mechanism to specify ordering, thus limiting the usefulness of the features for modularization.
    • Parameterization: The configuration baked into a jar cannot be parameterized, so unpacking is needed to discover/change default configuration.

    The Proposed Final Draft (8.2.2 & 8.2.3) has addressed all but the last of these concerns, with the ability to specify in web.xml an absolute ordering of the jars within WEB-INF/lib that also allows jars to be excluded from the ordering. Each jar in WEB-INF/lib may be given a name by having a <name> element within its META-INF/web-fragment.xml file. The webapp’s WEB-INF/web.xml file can then have an <absolute-ordering> element that lists the fragment names in the order they will be applied, together with the optional <others/> element to specify if and when unnamed jars are included.
    As well as my ordering concern, this feature addresses accidental deployment, as a deployer can list only known, well-trusted jars; and slow deployment, as an ordering can exclude jars that need not be scanned (other than to discover any web-fragment.xml file).
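
    For example, a webapp could pin the order of two trusted fragments and exclude every other jar from scanning with a web.xml entry along these lines (the fragment names are hypothetical):

    ```xml
    <!-- In WEB-INF/web.xml: apply FragmentA then FragmentB only -->
    <absolute-ordering>
      <name>FragmentA</name>
      <name>FragmentB</name>
      <!-- omitting <others/> excludes all unnamed jars -->
    </absolute-ordering>
    ```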
    However, my lingering concern is that the PFD as written does not make clear that Filters and Servlets cannot be configured by annotations or TLD listeners from jars excluded from the ordering, nor that ServletContainerInitializers can be excluded in this fashion. For the purposes of avoiding accidental and/or slow deployment, it does not matter which of the ease-of-deployment mechanisms a jar uses: exclusion should mean exclusion. The responses from the servlet expert group have generally been in agreement with this, but I think the specification needs to be clearer. I have proposed that the following text be added to section 8.2.3:

    If the web.xml contains an <absolute-ordering> that does not include the <others/> element, then only the jars containing the fragments listed in the ordering will be able to instantiate Filters, Listeners and Servlets using the Annotations and Pluggability features. Specifically:

    • The web-fragment.xml of excluded jars is not processed.
    • Excluded jars are not scanned for annotated servlets, filters or listeners. However, if a servlet, filter or listener from an excluded jar is listed in web.xml or a non-excluded web-fragment.xml, then its annotations will apply unless otherwise excluded by metadata-complete.
    • ServletContextListeners discovered in TLD files of excluded jars are not able to configure filters and servlets using the programmatic APIs. Any attempt to do so will result in an IllegalStateException.
    • If a discovered ServletContainerInitializer is loaded from an excluded jar, it will be ignored.
    • Excluded jars are not scanned for classes to be handled by ServletContainerInitializers.

    If the exclusion of jars from the configuration discovery mechanism is made explicit, then my main concerns will have been addressed. Parameterization will not be addressed, but I think that is something for consideration in 3.1.

    Conclusion

    Despite being at Proposed Final Draft, I think we are not quite at the conclusion stage. However, excellent progress has been made and work is continuing. I hope that the P in the current PFD is significant and that there will be at least one more draft before we are final. There is still a little time to send your own thoughts to jsr-315-comments@jcp.org.
    Sun Microsystems has invited me to participate in their technical session on Servlet 3.0 at JavaOne, together with Rajiv Mordani and Jan Luehe. I’ll be presenting a section on the Asynchronous Servlets API and giving a demonstration that uses some ease-of-deployment features to deploy a webapp on glassfish using the Jetty asynchronous HTTP client in a 3.0 asynchronous servlet. The session is TS-3790.

  • Jetty: eclipse update site

    For the last couple of jetty @ eclipse releases I have been getting a bit more used to the eclipse way of doing things. One of those ‘eclipse way’ deals is the p2 update site, and I am announcing here the availability of a certain subset of jetty features for download from within eclipse. What does this actually mean?
    Well, I am also working on an eclipse plugin or two that would ultimately make use of these features, so I went through the process of figuring out this deployment mechanism. I also worked on updating the OSGi HttpService to use the jetty-7 artifacts, so that was another use I had for it. But ultimately, part of joining eclipse was to bring the jetty components to a larger audience, and these update sites seem to be one of the major deployment mechanisms for software within eclipse.

    Pulling from this update site will not give the average user a new deployment mechanism for their webapp development; that is one of the plugins I have knocking around on my machine, which I’ll try to get out pretty soon. What it will give you is either the jetty server, the asynchronous jetty-client, or the jetty servlet components built on top of the jetty server feature. These are functionally repackaged artifacts from the maven repository where we have traditionally deployed our artifacts, renamed to the more conventional eclipse naming and packaged as ‘features’.

    I am very interested in getting feedback on this, and I would like to engage directly with developers that make use of these artifacts, both to make sure that the update site itself is of the right format (and working against the correct target platform; ouch, what a pain that was) and to make sure that the feature breakdown makes sense. If this preliminary setup works, then I’ll run with it, get the rest of the jetty artifacts we ship (JMX, JNDI, etc.) into separate features, and deploy them to the update site.
    Anyway, the update site for the last 7.0.0.M2 release is located at:

    http://download.eclipse.org/jetty/stable-7/update/

    Feel free to email feedback directly to ‘jesse DOT mcconnell AT gmail.com’, or leave it as a comment on this blog. Your feedback is welcome and will help us get this right for the eclipse developer community.