Category: Uncategorized

  • Jetty Release 6.1.5

    This release is only available from Codehaus and the MortBay site. For more information, please see:

    http://docs.codehaus.org/display/JETTY/Downloading+and+Installing

    This release includes a few minor fixes, some good improvements and a few new features, including: RPMs, OSGi manifests, graceful shutdown and a new gzip filter.

    The ant module jar, the RPMs and the JBoss sar are also available for download by following the link above.

    jetty-6.1.5 - 19 Jul 2007
    + Fixed GzipFilter for dispatchers
    + Fixed reset of reason
    + JETTY-392 - updated LikeJettyXml example

    jetty-6.1.5rc0 - 15 Jul 2007
    + update terracotta session clustering to terracotta 2.4
    + SetUID option to only open connectors before setUID.
    + Protect SslSelectChannelConnector from exceptions during close
    + Improved Request log configuration options
    + Added GzipFilter and UserAgentFilter
    + make OSGi manifests for jetty jars
    + update terracotta configs for tc 2.4 stable1
    + remove call to open connectors in jetty.xml
    + update links on website
    + make jetty plus example webapps use ContextDeployer
    + Dispatch SslEngine expiry (non atomic)
    + Make SLF4JLog impl public, add mbean descriptors
    + SPR-3682 - dont hide forward attr in include.
    + Upgrade to Jasper 2.1 tag SJSAS-9_1-B50G-BETA3-27_June_2007
    + JETTY-253 - Improved graceful shutdown
    + JETTY-373 - Stop all dependent lifecycles
    + JETTY-374 - HttpTesters handles large requests/responses
    + JETTY-375 - IllegalStateException when committed.
    + JETTY-376 - allow spaces in reason string
    + JETTY-377 - allow sessions to be wrapped with AbstractSessionManager.SessionIf
    + JETTY-378 - handle JVMs with non ISO/UTF default encodings
    + JETTY-380 - handle pipelines of more than 4 requests!
    + JETTY-385 - EncodeURL for new sessions from dispatch
    + JETTY-386 - Allow // in file resources
  • Comet Performance

    In response to the recent discussion of push v pull Ajax performance, I decided to do some performance testing of the Jetty implementation of Bayeux for the cometd project. This was also a great way to test the asynchronous HTTP client that is now included with Jetty.
    The test scenario was simple: 1000, 2000, 5000 and 10,000 users simultaneously connect to the cometd server and subscribe to a chat room. We publish a fixed number of messages to the rooms and vary the number of users per room while measuring message throughput and latency. The software was written using dojo-0.9 for the cometd client and Jetty 6.1.5 for the server. A long-polling transport was used, which means that each client has an outstanding request parked in the server waiting for an event, so that a response may be sent to the client.
    The server machine was an Intel 1.83GHz Core Duo Mac mini with 1GB RAM running OS X. The client machine was an Intel 2GHz Centrino Duo ThinkPad with 512MB RAM running Ubuntu Linux. Sun's Java 1.5.11 was used on both. These are pretty small machines to be testing 10,000 simultaneous users, but they managed well enough to draw some conclusions. The crucial configuration for the server was:

    <Set name="ThreadPool">
      <New class="org.mortbay.thread.BoundedThreadPool">
        <Set name="minThreads">10</Set>
        <Set name="maxThreads">250</Set>
        <Set name="lowThreads">25</Set>
      </New>
    </Set>
    <Call name="addConnector">
      <Arg>
        <New class="org.mortbay.jetty.nio.SelectChannelConnector">
          <Set name="port">8080</Set>
          <Set name="maxIdleTime">240000</Set>
          <Set name="Acceptors">2</Set>
          <Set name="acceptQueueSize">1000</Set>
          <Set name="lowResourcesConnections">11000</Set>
          <Set name="lowResourcesMaxIdleTime">1000</Set>
        </New>
      </Arg>
    </Call>

    The Jetty asynchronous HTTP client is based on the same NIO technology as the server, but instead of parsing requests and generating responses, it generates requests and parses responses. Thus the latency due to NIO scheduling will be measured twice in this test: once for the server and once for the client. In reality the 10,000 clients would be running on 10,000 machines, each using blocking IO for the 1 or 2 TCP/IP connections they maintain to the server. So the actual results for Bayeux can be expected to be better (perhaps significantly) than the numbers here:
    Results
    The results show that large numbers of simultaneous users can indeed be handled with low latency. It must be remembered that each connected user has 1 or 2 TCP/IP connections and will have at least 1 outstanding HTTP request. With a non-NIO or non-Continuation-based server, this would require around 11,000 threads to handle 10,000 simultaneous users. Jetty handles this number of connections with only 250 threads.
    Below 1000 message deliveries per second, the average latency is small and almost constant for 1000, 2000 and 5000 users, but for 10,000 users the latency starts creeping up to a few hundred milliseconds, which is still highly interactive and sufficient for chat, collaborative editing and many games.
    Above 1000 messages per second, the latency starts to suffer, and at 3000 messages per second it reaches 1.5 seconds for 1000 users and 7 seconds for 5000 users. For fewer than 10,000 users, this degradation can be described as graceful and is a reflection of the Bayeux protocol's ability to batch more messages when under duress, in a classic latency vs throughput tradeoff.
    Above all, it must be recognized that all results in this test have superior latency to the pull solutions referred to in the link above. To achieve 1 second average latency, pull solutions would need to poll every 2 seconds, generating 5000 requests per second for a server with 10,000 idle users!
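    The arithmetic behind that claim is easy to check. The helper below is a hypothetical illustration, not code from the test: an idealised poll generates users/period requests per second, and an event waits on average half a period before it is picked up.

```java
// Back-of-envelope check of the polling numbers above.
// PollingMath is an illustrative helper, not part of the benchmark code.
public class PollingMath {
    // Requests per second generated by idle users polling at a fixed period.
    static double requestsPerSecond(int users, double pollPeriodSeconds) {
        return users / pollPeriodSeconds;
    }

    // Average event latency for an idealised poll: half the polling period.
    static double averageLatencySeconds(double pollPeriodSeconds) {
        return pollPeriodSeconds / 2.0;
    }

    public static void main(String[] args) {
        // 10,000 idle users polling every 2 seconds -> 5,000 requests/second,
        // for a 1 second average latency, as stated above.
        System.out.println(requestsPerSecond(10_000, 2.0)); // 5000.0
        System.out.println(averageLatencySeconds(2.0));     // 1.0
    }
}
```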
    The conclusion from these tests is that Bayeux + Jetty + Continuations does indeed provide scalable low latency communications for Ajax applications.

  • Async Servlet 3.0 Proposal

    I recently blogged about my thoughts on the JSR-315 asynchronous servlet 3.0 concerns. I have now worked those thoughts into a proposal that I have submitted to the expert group.

    This proposal is clearly influenced by Jetty Continuations in that it allows request handling to be suspended and resumed. However, it avoids the contentious RetryException thrown by Jetty.

    This proposal also goes beyond continuations by including an extensible mechanism for containers to handle content on behalf of the servlets. This will allow containers to use the most efficient IO and reduce the complexity that a servlet developer needs to contend with in an asynchronous world.

  • Maven Archetypes for Web Applications

    Webtide’s freely downloadable collection of Maven archetypes for popular web frameworks has been updated to reflect the most recent release of each framework.

    The frameworks supported are:

    • ActiveMQ
    • DOJO
    • DWR
    • JSF
    • SiteMesh
    • Spring
    • Spring JPA
    • Struts
    • Tapestry
    • WebWork
    • Wicket     **New**

    Contact us if you’d like to suggest a framework to add, or if you’d like to contribute an archetype.

  • TAE: Ajax Comet Communications

    Greg Wilkins will be speaking at the Ajax Experience in San Francisco 25-27 July on
    the subject of Ajax Comet Communications: The Bayeux protocol and standardization efforts from the Open Ajax Alliance.

    Communications for Comet (or Ajax Push) remain a problematic issue for deploying scalable Ajax applications. This talk looks at two related efforts to deal with the many concerns of Ajax Comet communications.

    The Bayeux protocol from the Dojo foundation is a multi-channel event bus that spans client and server over a variety of Ajax transports. The protocol has multiple implementations and aims to become a defacto standard for Ajax push communications. This talk examines the protocol and its scalable implementation in the Jetty web server.

    The Open Ajax Alliance is an industry organization formed to deal with the interoperability issues of Ajax. Through their communications task force, the alliance is investigating common API solutions that will allow the semantics of Ajax communications to be captured without mandating a protocol solution or preventing continuing innovation in Ajax transports, interoperability and browser support.

    The session is scheduled for 14:20 July 27.


  • Hightide 6.1H.4-beta Release

    Release 6.1H.4-beta of Hightide is available for download at:

    http://www.webtide.com/downloads.jsp.

    Hightide is an open source, versioned and optimized distribution of Jetty providing a comprehensive toolset for the development of scalable, state-of-the-art web 2.0/JavaEE applications.

    New features in this release include:

    • More performance optimizations.
    • Updated Bayeux (cometd) implementation.
    • Updated Jetty to 6.1.4.
    • Update of all library jars to most recent versions.
    • Maven plugin for Hightide providing the same style of rapid webapp development as the Jetty maven plugin, but with all J2EE services instantiated and available. Targets are:
    •   mvn hightide:run
    •   mvn hightide:run-war
    •   mvn hightide:run-war-exploded
    • Maven api dependency for simplified webapp development. The following single dependency will ensure that all J2EE apis supported by the Hightide runtime are transitively included, reducing the number of dependencies you need to keep track of:

     <dependency>
        <groupId>com.webtide.hightide</groupId>
        <artifactId>hightide-provided-apis</artifactId>
        <version>6.1H.4-beta</version>
        <type>pom</type>
        <scope>provided</scope>
     </dependency>

    • Maven dependency for all jars used by Hightide runtime. The following single dependency will ensure that all jars that are present on the runtime classpath will be transitively available to your webapp, reducing the complexity and tedium of maintaining your pom:

      <dependency>
        <groupId>com.webtide.hightide</groupId> 
        <artifactId>hightide-server-dependencies</artifactId>
        <version>6.1H.4-beta</version>
        <type>pom</type>
        <scope>provided</scope>
      </dependency>

  • Push vs Pull

    The Ajaxian has covered a study of push vs pull Ajax communication techniques. The original report was produced by Delft University. I welcome this report as a good start on objectively measuring the communication techniques needed for web 2.0 applications, but unfortunately the report contains a couple of minor misconceptions and is somewhat misdirected in the type of application studied.
    One minor misconception is that long polling and the Bayeux protocol disconnect between long polls. While it is true that long polls return if they are idle for a period, the underlying TCP/IP connection is kept open and no reconnection is needed. The HTTP response and the new request that follows it just keep the connection open, much like the heartbeat messages sent over a streamed connection. The only cost is some small additional latency for events that occur while the long poll is being renewed.
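    The long-poll cycle described above can be sketched with a BlockingQueue standing in for the server-held request. The names here are illustrative only, not Bayeux or Jetty API: the server parks the poll until an event arrives or an idle timeout expires, and either way the client immediately issues the next poll over the same kept-open connection.

```java
import java.util.Optional;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal sketch of one long-polled channel (illustrative names, not real API).
public class LongPollSketch {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Server side: an event is published to the channel.
    void publish(String event) {
        events.add(event);
    }

    // Client side: one long poll. Blocks until an event or the idle timeout.
    // An empty result models the "idle" response that merely renews the poll;
    // the client re-polls at once and no TCP reconnection happens.
    Optional<String> longPoll(long timeoutMillis) throws InterruptedException {
        return Optional.ofNullable(events.poll(timeoutMillis, TimeUnit.MILLISECONDS));
    }

    public static void main(String[] args) throws Exception {
        LongPollSketch channel = new LongPollSketch();

        // First poll times out idle: empty response, connection stays open.
        System.out.println(channel.longPoll(50).isPresent());   // false

        // An event published while the next poll is parked is delivered at once.
        channel.publish("hello room");
        System.out.println(channel.longPoll(50).orElse(""));    // hello room
    }
}
```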

    The report could also be read as implying that the Bayeux protocol is only long polling. While long polling is the default, it can also support polling or streaming. In fact, with clever use of timeout settings, Bayeux can support a combination of polling, long polling and streaming (credit to Joe Walker of DWR for this idea). A client can pause between requests (normal polling), the server can hold onto a request while waiting for an event (long polling) and then keep the response open after an event (streaming). This is not yet fully implemented, but the protocol certainly allows this and other transport techniques.

    Other than these minor issues, the main issue I have with the report is that it does not include event latency as a degree of freedom. The pull implementation they test has a 15 second period, which means that events will have an average 7.5 second latency even for a perfect implementation. While there are many, many applications that can live with such latency (or longer), they are not the target applications for Ajax Comet techniques. A 15 second latency is simply too much for chat, for collaborative editing, for help-line operators manipulating the pages of calling clients, for gaming, and for the applications that have not yet been thought of that can take advantage of low-latency client-to-server and server-to-client messaging.

    So while the report finds that long polling can take 7 times more CPU than polling, if the polling period were reduced from 15s to 1.5s to provide low latency, then the CPU usage for polling would be increased 10 times. (Moreover, I think the 7 times figure is at least partially due to an early implementation of Bayeux and a buggy release of Jetty.)
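    That factor of 10 is just the inverse relationship between polling period and request rate. A hypothetical check (the helper name is illustrative, not from the report):

```java
// Shortening the polling period multiplies the request rate proportionally.
// PollScaling is an illustrative helper, not code from the Delft study.
public class PollScaling {
    static double requestRate(int users, double periodSeconds) {
        return users / periodSeconds;
    }

    public static void main(String[] args) {
        double at15s  = requestRate(10_000, 15.0);
        double at1_5s = requestRate(10_000, 1.5);
        // 15s -> 1.5s multiplies the polling load by 10.
        System.out.println(at1_5s / at15s); // 10.0
    }
}
```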

    To be realistic, as well as varying the event rate and the number of clients, the study should have varied the target latency. For applications that can tolerate long latency, push/comet techniques are probably not applicable and polling is an ideal and efficient technique. (This is NOT to say that the Bayeux protocol and messaging paradigm are not also applicable in such cases.)

    As the required latency is reduced, there will be a cut-over point where push is more efficient than pull. However, that cut-over point will depend greatly on the message rate, the message distribution over clients and the message size.

    So while the Delft report is a welcome first data point, it is only a single data point in a wide spectrum of application load profiles. I hope they follow up with more broadly based studies (or lend me their supercomputer to do the same 🙂)

  • JSR-315: Servlet 3.0 Specification – part I

    The Java Community Process has proposed JSR-315 to consider the 3.0 servlet specification. In this blog entry, I look at the Async and Comet considerations listed in JSR-315 and discuss how 3.0 servlets might address them.

    From JSR-315 – Async and Comet support:

    • Non-blocking input – The ability to receive data from a client without blocking if the data is slow arriving.
    • Non-blocking output – The ability to send data to a client without blocking if the client or network is slow.
    • Delay request handling – The comet style of Ajax web application can require that a request handling is delayed until either a timeout or an event has occurred. Delaying request handling is also useful if a remote/slow resource must be obtained before servicing the request or if access to a specific resource needs to be throttled to prevent too many simultaneous accesses.
    • Delay response close – The comet style of Ajax web application can require that a response is held open to allow additional data to be sent when asynchronous events occur.
    • Blocking – Non-blocking notification – The ability to push event notifications in either a blocking or non-blocking manner.
    • Channels concept – The ability to subscribe to a channel and get asynchronous events from that channel. This implies being able to create, subscribe, unsubscribe and also apply some security restrictions on who can join and who cannot.

    Non-blocking Input

    Undoubtedly modern HTTP servers need to be able to receive request content without blocking. Blocking while waiting for request content consumes threads and memory that can be better used servicing requests that have already been received.

    But is non-blocking a capability that needs to be exposed to the servlet developer? Are developers going to be able to do anything valuable with 10% of an XML document, 31% of a SOAP request or 1 byte of a multi-byte character? Do we really want servlet developers to have to deal with all the complexities of asynchronous event handling?

    Instead of exposing asynchronous IO to the developers, I believe this concern is better addressed by allowing servlet containers to do the asynchronous IO and only call the servlet once the entire content is available. Jetty already has this option for content of known size that will fit within the header buffer. Jetty can receive small requests asynchronously and will only dispatch the request to a servlet once the entire content is available; thus the servlet will never block while reading input.

    To standardize this approach, there would need to be a way for a servlet to indicate that it wanted the container to perform the IO of request content. The container would only dispatch to the filters and servlets when all the content is available. The content could be made available to the servlet via either the standard getInputStream() API or perhaps via a new getContent() API that could return a byte array, a CharSequence, a File, a DOM document or even an object representing multipart form content (ie file upload).
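    The container-side aggregation described above can be sketched in plain Java. The aggregate() helper and the getContent() idea it illustrates are hypothetical, not part of any released servlet API: the container drains the whole body (asynchronously, in a real server) and only then dispatches, so servlet code never sees a partial read.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of container-side content aggregation (hypothetical, not servlet API).
public class AggregatingContainer {
    // The container reads the request body completely before dispatching.
    static byte[] aggregate(InputStream requestBody) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = requestBody.read(chunk)) != -1) {
            // In a real container this read would be driven by NIO events,
            // not a blocking loop; the servlet is only called afterwards.
            buffer.write(chunk, 0, n);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = aggregate(new ByteArrayInputStream("<soap:Envelope/>".getBytes("UTF-8")));
        // The servlet would receive the complete content in one call.
        System.out.println(new String(body, "UTF-8")); // <soap:Envelope/>
    }
}
```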

    The concern with this approach would be that large content could consume too much memory if it is not streamed. However, if the container was performing the IO, it could decide to store large content as a file. If there really is a use-case for a servlet to handle streamed data, then I would suggest that the container should aggregate bytes into partial char sequences and parse them into a stream of SAX events passed to the servlet. A servlet developer could handle a SAX event far better than 3 bytes of a 6-byte Unicode character!

    In summary, to deal with the issues raised by blocking IO, I think that capabilities should be added to the container rather than lower level IO events be exposed to the servlet developer.

    Non-blocking Output

    My comments about non-blocking input all apply to output, only more so! Servlets are application components, and application developers think procedurally for the most part when generating content. If a write returns saying 0 bytes were written, then what is application code going to do? Wait? Retry? Do something else? More importantly, if that content is being generated from a DOM, JSF, XSL or any framework, then the servlet developer will not have the opportunity to do anything if the write returns with zero bytes written.

    Again I would advocate trying to make the container do the asynchronous IO rather than the servlet developer. This can be done now in Jetty simply by providing big buffers that get flushed asynchronously after the Servlet.service(...) method has completed.

    To generalize this, perhaps there should be a Response.sendContent(Object) method, with which a servlet could pass a File, a DOM document or similar. Once the servlet has completed, the container would then take responsibility for generating bytes and asynchronously flushing them to the client.
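    As a sketch only, such a Response.sendContent(Object) might choose a delivery strategy by content type once the servlet has returned. None of these names or strategies are real servlet API; they only illustrate the dispatch the paragraph above proposes:

```java
import java.io.File;

// Hypothetical content-type dispatch for a Response.sendContent(Object) style
// API (not a real servlet API): the container picks an efficient transfer
// strategy per content type after the servlet has completed.
public class ContentDispatch {
    static String strategyFor(Object content) {
        if (content instanceof File)         return "sendfile/async transfer";
        if (content instanceof byte[])       return "async buffer flush";
        if (content instanceof CharSequence) return "encode then async flush";
        return "serialize via framework, then async flush";
    }

    public static void main(String[] args) {
        System.out.println(strategyFor(new byte[0]));  // async buffer flush
        System.out.println(strategyFor("hello"));      // encode then async flush
    }
}
```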

    Again in summary, to deal with the issues raised by blocking IO, I think that capabilities should be added to the container rather than lower level IO events be exposed to the servlet developer.

    Delay Request Handling

    The ability to delay request handling is very important for dealing with use-cases such as:

    • Ajax Comet applications that wait for client specific events before generating a response.
    • Servlets that must wait for limited resources such as connections from a datasource.
    • Servlets that must wait for asynchronous events such as webservice call responses or proxied request responses.

    The key word in these use-cases is “wait” and thus I think “delayed request handling” is not the correct description of this concern. You cannot delay handling of a request without first partially handling the request:

    • The request can be authenticated and authorized
    • The state can be examined to determine if further request handling must wait for an event
    • Requests to slow services may be issued/sent
    • Allocation of scarce resources may be registered

    So in reality, request handling is not delayed, but is suspended. Handling of the request needs to commence and progress to a point where it is suspended, and then resume when the criteria for the wait are met. The asynchronous servlet APIs from BEA and Tomcat address this by special calls to the servlet that allow request handling to be commenced and then completed. I have blogged about the problems with this approach, which in summary is that these new APIs are not existing servlet APIs, so no existing servlets or servlet frameworks can be used to handle these requests or generate content!

    The Jetty server has addressed this concern with Continuations that allow request handling to be suspended (or delayed) from within a call to the normal Servlet.service(...); method. Thus existing frameworks and servlets may easily be used with applications that suspend (or delay) request handling.

    I strongly believe that the semantics Servlet 3.0 needs are suspend/resume of normal request handling. The API for this could be Continuations, but with support of the servlet API, a less controversial approach could be to simply add ServletRequest.suspend(long timeout) and ServletRequest.resume() methods. After suspend is called, the response object could be disabled until the service method is exited (similar to after a RequestDispatcher.forward() call). When the timeout expires, or the resume method is called, the service method is simply recalled with the same request and response objects. All the currently specified security, session and JEE JNDI context facilities are available for handling and do not need to be redefined and reimplemented for any new APIs.
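    The suspend/resume contract can be modelled with plain java.util.concurrent primitives. The method names mirror the hypothetical ServletRequest additions above; they are not part of any released servlet API, and a real container would park the request and re-call service() rather than block a thread, but the observable contract is the same: handling continues once resume() is called or the timeout expires.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Plain-Java model of the hypothetical suspend(timeout)/resume() semantics.
public class SuspendableRequest {
    private final CountDownLatch resumed = new CountDownLatch(1);

    // Returns true if resumed by an event, false if the timeout expired.
    boolean suspend(long timeoutMillis) throws InterruptedException {
        return resumed.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void resume() {
        resumed.countDown();
    }

    public static void main(String[] args) throws Exception {
        SuspendableRequest request = new SuspendableRequest();

        // Some other thread delivers the awaited event shortly afterwards.
        new Thread(() -> {
            try { Thread.sleep(20); } catch (InterruptedException ignored) {}
            request.resume();
        }).start();

        // The "service" call suspends until the event arrives, then proceeds.
        System.out.println(request.suspend(5000)); // true: resumed by the event
    }
}
```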

    An indication of the power of suspend/resume semantics for service calls is that this approach can be used to emulate asynchronous servlet APIs, but the converse is not true.

    Delay Response Close

    If suspend/resume semantics are adopted for delayed request handling, then delayed response close is also addressed. A response is closed when a service call exits without a suspend call.

    Note that the style of Ajax that relies on delayed close is actually a bit of protocol abuse and is not guaranteed to transit proxies. However it is widely deployed and it should be supported, but not encouraged.

    Blocking – Non blocking notification

    I’m not clear exactly what this concern means. It could be referring to Channel as in an NIO channel, in which case my comments about asynchronous IO above apply. Alternately, it could be referring to Channel in terms of a publish/subscribe messaging bus like bayeux/cometd, in which case I am hugely in favour of a standard event mechanism being adopted.

    In Summary

    The async considerations listed in JSR-315 have indeed captured some significant use-cases that need to be addressed by modern servlet containers. Furthermore, I believe that only relatively minor changes in the API are needed to address the majority of these concerns. My next blog entry will expand on the API changes that I have alluded to in this entry.

  • Servlets 3.0

    I’m just fresh out of a session at JavaOne where Sun have revealed their road map for the servlet 3.0 specification. My initial reaction is that it contains both some good and bad items as well as quite a few concerns in between.
    The 10 words or less version is: Annotations, JSF, Ajax, Comet, REST, scripts, security and Misc.

    Java Community Process

    My first concern with the road map is that I, as a member of the JCP servlet expert group, had to go to a JavaOne session to see it for the first time. I hope that this does not signal a return to the bad old days when the JCP was used only as a post-process rubber stamp for Sun’s internal design process. The talk did mention consultation with the expert group, but I would have been less concerned if we had been involved in setting the agenda as well as helping find a solution for it.

    Annotations

    I’m not a huge fan of annotations, but given that they are already in 2.5 the additional annotations shown all looked reasonable. The aim is to make web.xml either redundant or at least only used for deployment overrides of reasonable defaults.

    Java Server Faces

    There was a lot about improved integration with JSF. This may be good or bad, but I don’t really care either way. I continue to not understand why the servlet specification should be closely tied to any one web framework. The servlet spec should be web-framework neutral, and having a “favored son” will just lead to more special cases – like the need for all webapps to be scanned for JSP and JSF descriptors even if they do not use JSP or JSF!

    Ajax Comet

    The good news is that support for Ajax Comet (Ajax Push) is on the agenda. The bad news is that the talk appeared to describe the only use case for asynchronous servlets to be Comet Ajax, and then only the forever-response flavor of that. As I have described in my blog, Ajax Comet is only one of many use cases for asynchronous servlets. Any servlet that might block for a JDBC connection pool, a remote web service or any other slow resource could benefit from threadless waiting support. So I hope the agenda revealed was only a subset of the use-cases to be addressed and not the limit of the vision.

    REST

    The integration of REST support will be welcomed. Well-designed REST support will allow developers to focus on content rather than the HTTP protocol. The annotations shown looked like a good step in the right direction, with my only concern being that the use of streams to define content could restrict servers from efficient non-blocking handling of content. Hopefully other ways to provide content can be included.

    Scripts

    An exciting development: allowing languages other than Java to run on the JVM and produce content efficiently through servlets. One size does not fit all, and one language does not suit all. I'm looking forward to this bit.

    Security and Misc

    Web security has always been lumped in with Misc in the servlet spec and thus is half-baked. In the road map it got its own heading, as it deserves, and hopefully this blog will be the last time you see security filed under Misc.
    Most of the other sore points in the servlet spec (eg welcome files) were listed as needing attention in 3.0.

    Summary

    It is important that we keep the servlet spec relevant to the way developers produce content. If servlets do not grow to support most of the issues mentioned above, then developers will continue to find ways to bypass standard servlets. Servlets are a key foundation of the whole JEE stack, and if they are bypassed, much of the work done to achieve interoperability and standards will be lost as different foundations are used by different web innovations.
    So while progress on a 3.0 servlet spec is well overdue, I welcome Sun’s statement of intent, look forward to the months ahead, and hope that some good solutions may quickly be agreed and specified.