Greg Wilkins, the lead developer of Jetty and CEO of Webtide, will be speaking at this year's The Ajax Experience in Boston. Greg's presentation is about the challenges of scalably serving Ajax Comet from Java Servlets. The various asynchronous servlet extensions are evaluated and several case studies examined, including the Jetty implementation of Cometd.
-
Jetty 6.0.0 Stable Release
Jetty release 6.0.0 is now available via http://jetty.mortbay.org
Jetty 6.x now becomes the stable release of Jetty, and the 5.1.x series will be in security-fix-only mode after its next maintenance release.
Jetty 6.x is a major refactoring of the Jetty code base that combines the best from Jetty -
Greg Wilkins to speak at AjaxWorld
Greg Wilkins, the lead developer of Jetty and CEO of
Webtide, will be speaking at this years AJAXWorld Conference. Greg’s presentation is about the challenges of scalably serving Ajax Comet from Java Servlets. The various asynchronous servlet extensions are evaluated and several case studies examined. -
Webtide joins the Open Ajax Alliance
Webtide has joined the Open Ajax Alliance.
The chief goal of the alliance is to accelerate customer success with Ajax by promoting a customer's ability to mix and match solutions from Ajax technology providers and by helping to drive the future of the Ajax ecosystem. Among the organizations that have joined so far: IBM, Sun, Yahoo, Google, Mozilla, Opera, Adobe, Oracle, SAP, BEA, TIBCO, SoftwareAG, Eclipse Foundation, Intel, Novell, RedHat, Borland, Dojo Foundation, Zimbra (leaders behind the Kabuki toolkit), Zend (the PHP company), Backbase, Jackbe, Icesoft, Laszlo, and Nexaweb.
Webtide's involvement in the alliance reflects its focus on open standards and its desire to bring standardization and interoperability to the Java servlet extensions currently used to scalably serve Ajax web applications.
-
Cometd with Jetty
Cometd is a scalable HTTP-based event routing bus that uses a push technology pattern known as Comet. The term 'Comet' was coined by Alex Russell in his post 'Comet: Low Latency Data for the Browser'. Cometd consists of a protocol spec called Bayeux, JavaScript libraries (the Dojo toolkit), and an event server.
Jetty now has an implementation of the Cometd event server that uses the Continuation mechanism for asynchronous servlets.
Jetty already has Comet implementations for DWR and ActiveMQ, but both of these use custom protocols, which can lead to the interoperability problems that Cometd intends to solve.
Because browsers commonly permit only two connections to each server, it is not possible for a web page to use more than one Ajax library that uses Comet techniques. The intent of Cometd is to define a common protocol that can be shared between libraries and thus encourage interoperability.
Cometd provides a two-way, multi-channel communications paradigm that allows asynchronous message delivery from server to client as well as client to server. The multi-channel nature of the protocol will eventually allow a single Comet connection to be shared between multiple Ajax toolkits.
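The channel-based routing that makes this sharing possible can be illustrated with a minimal, framework-free sketch. This is not the Bayeux API; the class and method names here are purely illustrative. The idea is that a single bus routes published messages to subscribers by channel name, so several toolkits can multiplex over one connection:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Illustrative channel-routing bus in the style of Bayeux (not the real API). */
class ChannelBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    /** Register a client callback on a named channel, e.g. "/chat/demo". */
    void subscribe(String channel, Consumer<String> client) {
        subscribers.computeIfAbsent(channel, c -> new ArrayList<>()).add(client);
    }

    /** Deliver a message to every subscriber of the channel. */
    void publish(String channel, String message) {
        subscribers.getOrDefault(channel, List.of())
                   .forEach(client -> client.accept(message));
    }
}

public class ChannelBusDemo {
    public static void main(String[] args) {
        ChannelBus bus = new ChannelBus();
        List<String> chatLog = new ArrayList<>();
        List<String> stockLog = new ArrayList<>();
        // Two independent "toolkits" share the one bus (i.e. one connection).
        bus.subscribe("/chat/demo", chatLog::add);
        bus.subscribe("/stock/ACME", stockLog::add);
        bus.publish("/chat/demo", "hello");
        bus.publish("/stock/ACME", "42.0");
        System.out.println(chatLog);   // [hello]
        System.out.println(stockLog);  // [42.0]
    }
}
```

Because each message carries its channel, neither subscriber ever sees the other's traffic, which is what lets unrelated libraries coexist on one Comet connection.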
Jetty has implemented the server side of this protocol, which will allow it to be used with whatever client side implementations emerge (currently only Dojo, but I plan to port ActiveMQ once it is stable). However, to achieve true interoperability, we will need to develop standardized APIs on both the client and server side. Having a standard protocol is a start, as it defines the capabilities that will need to be expressed in the APIs. -
Async Servlets – take II
I have reconsidered the API for asynchronous servlets that I proposed in my recent blog: it is not good! It makes some of the same mistakes that WebLogic and Tomcat have made with their asynchronous extensions.
So let's have a brief review of all the available and proposed solutions.
BEA WebLogic
BEA added AbstractAsyncServlet in WebLogic 9.2 to support threadless waiting for events (be they Ajax Comet events or otherwise, e.g. an available JDBC connection from a pool). Their API separates the handling of a request and the production of a response into doRequest and doResponse methods. A call to the notify method, or a timeout, triggers the invocation of doResponse.
This API can certainly be used to handle most of the important use-cases that I have previously discussed, but it suffers from several major flaws:
- It's not really a servlet – The user cannot implement doGet or service methods to generate content, so there is limited benefit from tools or programmer familiarity. Furthermore, the javadoc states an AbstractAsyncServlet "cannot be used as the target of request dispatching includes or forwards... Servlet filters which get applied before AbstractAsyncServlets, will not be able to take advantage of post processing of the response."
So an AbstractAsyncServlet cannot be used like a servlet, the URLs it serves cannot be re-used (e.g. in portlets), and it cannot live behind common filters that may apply aspects (e.g. security, authentication, compression). It is a cuckoo in a nest of servlets, only pretending to be a servlet and breaking all the other eggs in the process.
- There can only be one – There are many reasons that a wait may be required while handling a request, and all are candidates for asynchronous waiting. But as this solution is tied to a single servlet, either that servlet must implement all the waiting concerns or there can only be one efficient wait. For example, it would be unreasonable for a single servlet to implement async waits for a remote authentication server, the arrival of a JMS message, and an available JDBC connection from a limited pool.
- It is not portable – servlets that implement this API will fail on containers that do not support it.
I believe AbstractAsyncServlet is a good solution for a particular async use-case, but it is not a candidate as a general approach.
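The general shape of this request/response split can be simulated without a container. The sketch below uses simplified, hypothetical signatures (BEA's actual methods take container-specific argument types not reproduced here): handling is split into a doRequest that parks the request without producing a response, and a doResponse that a later notify call drives.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Container-free simulation of the request/response split (simplified, not BEA's real signatures). */
abstract class AsyncStyleServlet {
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    /** Called when the request arrives; must not produce the response. */
    abstract void doRequest(String requestId);

    /** Called after notifyRequest() (or a timeout) to produce the response. */
    abstract String doResponse(String requestId);

    /** Park the request without holding a thread. */
    final void park(String requestId) {
        pending.put(requestId, "PARKED");
    }

    /** An asynchronous event wakes the request and drives doResponse. */
    final String notifyRequest(String requestId) {
        if (pending.remove(requestId) == null)
            throw new IllegalStateException("not parked: " + requestId);
        return doResponse(requestId);
    }
}

public class AsyncStyleDemo extends AsyncStyleServlet {
    @Override void doRequest(String id) { park(id); }
    @Override String doResponse(String id) { return "response for " + id; }

    public static void main(String[] args) {
        AsyncStyleDemo servlet = new AsyncStyleDemo();
        servlet.doRequest("req-1");                   // request parked; thread returns
        String out = servlet.notifyRequest("req-1");  // later, an event produces the response
        System.out.println(out);                      // response for req-1
    }
}
```

The sketch also makes the "not really a servlet" objection concrete: nothing here is a doGet or service method, so existing filters and tools have no hook into either phase.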
Tomcat 6.x
After my initial blogging on this issue, the Tomcat developers added CometProcessor and CometServlet (unfortunately without engaging in an open discussion as I had encouraged). It is essentially the same solution as BEA's, but with a few extras, a few gotchas, and the same major issues.
It is still a special type of servlet, and the begin and end methods take the place of BEA's doRequest and notify calls. Asynchronous code calls the response object directly until a call to end indicates the end of the handling of the request.
The badly named CometProcessor (Comet is only one use-case for async servlets) does support asynchronous IO, but I've argued there is little need for this, and that it would be better to have the container do the IO if there were.
The begin method does execute in the context of the Filters mapped to the servlet/URL. However, there is no support for the asynchronous code itself to operate within the context of the filter chain. Thus any non-trivial filters that are unaware of the Tomcat mechanism are unlikely to work. Any authentication information or other actions taken by filters will not apply to the code that generates the response. So, like the BEA solution, it is not a servlet except by name, and cannot be used with arbitrary dispatches or generic filters.
There still can only be one, and multiple asynchronous aspects may not be combined with this API.
The implementation makes a naive attempt at portability and will call the begin and end methods in a container that does not support the mechanism. But if the implementation of begin schedules asynchronous writes to the response object (as it should), then this breaks the servlet contract and simply will not work, as the response will be committed long before any asynchronous handling.
ServletCoordinator
My proposed ServletCoordinator suffers from many of these same issues. It does meet one of my main concerns, in that responses are generated by normal servlet code using normal techniques and within the scope of the applicable filter chain. But there still can only be one, and there is no support for multiple asynchronous aspects. It avoids being an ugly duckling servlet, but only by not being called a servlet. It is still a new, non-portable mechanism that is unlikely to work with arbitrary dispatchers. It's not a cuckoo, it's a dodo!
Jetty 6 Continuations
The Jetty 6 Continuation mechanism is not an extension to the Servlet API. Instead it is a suspend/resume mechanism that operates within the context of a servlet container to allow threadless waiting and reaction to asynchronous events by retrying requests. Continuations address the concerns I have raised above well:
- Request handling and response generation are done within normal servlets, and always within the scope of the applicable filter chain. Common tools and frameworks can be used without modification.
- If RuntimeExceptions are propagated and the stateless nature of HTTP is respected, then there is a reasonable expectation that arbitrary filters and dispatchers may be applied.
- There can be multiple asynchronous concerns applied, as each may independently use a continuation. For example, it is possible to apply the ThrottlingFilter in front of the ActiveMQ AjaxServlet; while both use Continuations, neither will interfere with the other.
- It is truly portable: if run within a container that does not support Continuations, it will fall back to waiting with a blocked thread.
While some (including myself) are a little perturbed by the way RuntimeExceptions are used by Continuations, I argue that this should be seen as an implementation detail and that the semantics of the API are correct for the purpose. There are already byte-code manipulating continuation solutions available, and rumours of future JVM support, so the implementation can be improved.
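The retry-by-exception mechanism can be sketched in plain Java. The classes below are a simplified stand-in for Jetty's Continuation, not its actual API: suspend throws a special RuntimeException that the container catches, parking the request without a thread; resume makes the container re-run the handler, which this time finds its event and completes normally.

```java
/** Stand-in for the exception Jetty's container catches to park a request. */
class RetryRequest extends RuntimeException {}

/** Simplified stand-in for a Jetty 6 continuation (illustrative, not the real API). */
class FakeContinuation {
    private boolean pending = true;
    private Object event;

    /** First pass: no event yet, so escape the handler via an exception. */
    void suspend() {
        if (pending) throw new RetryRequest();
    }

    /** The asynchronous event arrives; the container would now retry the request. */
    void resume(Object evt) {
        this.event = evt;
        this.pending = false;
    }

    Object getEvent() { return event; }
}

public class ContinuationDemo {
    /** A "servlet" handler, written as ordinary sequential code. */
    static String handle(FakeContinuation c) {
        c.suspend();                  // throws on the first call; a no-op after resume
        return "got " + c.getEvent(); // only reached on the retried call
    }

    public static void main(String[] args) {
        FakeContinuation c = new FakeContinuation();
        String result;
        try {
            result = handle(c);       // first dispatch: suspends, no thread is held
        } catch (RetryRequest parked) {
            c.resume("comet-event");  // later: the event arrives
            result = handle(c);       // the container retries the request
        }
        System.out.println(result);   // got comet-event
    }
}
```

This is why the handler code stays a normal servlet: the "asynchrony" lives entirely in the container's catch-and-retry loop, while the application code just calls suspend and is re-run.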
Thus I have not yet seen a better API than Continuations for the majority of the asynchronous use-cases within the servlet model. Moreover, I believe that the async APIs of BEA and Tomcat can be trivially implemented in a container supporting Continuations, but that the inverse is not true. -
Introducing Webtide
Mort Bay has been a successful small open source services company for over 10 years. But the business of open source is changing, and there is more demand for support and services from larger organizations. Thus Mort Bay has partnered with Exist to form Webtide, and joined a family of open source companies based around Simula Labs that includes LogicBlaze (ActiveMQ/ServiceMix) and Mergere (Maven).
We will continue to provide the same training, development and support services, but with Webtide we will be able to scale our offerings to a higher level of professionalism.
If you are at OSCon Portland, then please come to our briefing and get-together at the Red Lion this Friday evening.
Webtide is a new services company that provides training, development and support for Web 2.0 applications, with a particular focus on the server side of scalable Ajax Comet solutions. Webtide can train or mentor your engineers in these emerging technologies, provide open outsourced development, and give you the assurance of 24×7 support.
Webtide is a joint venture between Mort Bay Consulting, the creator of the highly regarded Jetty open source Java web container, and Exist, a premier software development company supporting open source technology. Webtide will be formally launched during the O'Reilly Open Source Convention in Portland, Oregon on July 24-28, 2006.
To learn more about Webtide and its products, please visit the Webtide booth (Booth 723) at the OSCon Exhibit Area. There will also be a Webtide Product Briefing on July 27, 2006, 6:15-7:15pm at the Broadway Room, 6th Floor of the Red Lion Hotel, 1021 NE Grand Ave., Portland, OR. The first 30 registered participants will get a chance to win an HP iPAQ running Jetty. For more info, please email training@webtide.com or visit http://www.webtide.com
-
Jetty for AJAX Released During Webtide's Launch
Greg Wilkins and Jan Bartel, the lead developers of Jetty, recently launched Webtide at the O'Reilly Open Source Convention 2006, held at the Oregon Convention Center. Webtide, a company specializing in Web 2.0 and AJAX technologies, is a partnership between Mortbay, the creator of Jetty, the highly regarded open source Java web container, and Exist, a premier software development company supporting open source technology.
The launch was followed by an introduction to Webtide’s latest product called Hightide, a versioned distribution of Jetty that is optimized for Ajax. The event was held at the Red Lion Hotel in Portland, Oregon where thumb drives containing Hightide were distributed to guests. One lucky guest
-
Webtide Gears Up for OSCon
Webtide, a global expert in implementing Ajax Technology, is gearing up for the O’Reilly Open Source Convention (OSCon) in Portland, Oregon from July 24 to 28, 2006.
At OSCon, Webtide will present its latest product, Hightide. Hightide is a versioned distribution of Jetty, which provides a comprehensive toolset for the development of highly scalable, state-of-the-art web applications. It is optimized for Ajax, and ships with DWR and ActiveMQ Ajax libraries to help you get started quickly. Implementations of J2EE services, such as JNDI, JTA, JMS, JDBC, and web services are pre-integrated.
To learn more about Webtide and its products, please visit the Webtide booth (Booth 723) at the OSCon Exhibit Area. A Hightide Product Briefing is scheduled on July 27, 2006, from 6:15 to 7:15pm at the Broadway Room, 6th Floor, Red Lion Hotel, 1021 NE Grand Ave., Portland, Oregon. The first 30 registered participants will get a chance to win an HP iPAQ.
Webtide will also conduct Ajax training at its LA Training Room from August 2 to 4, 2006. Greg Wilkins, the Lead Developer of Jetty and the CEO of Webtide, will lead the training team. For more information, please email training@webtide.com or visit www.webtide.com.
Webtide is a joint venture between Exist and Mortbay Consulting. Exist is a premier software development company supporting open source technology, while Mortbay is the creator of Jetty, the highly regarded open source Java web container.
-
An Asynchronous Servlet API?
Now that the 2.5 servlet specification is final, we must start thinking
about the next revision and what is needed. I believe that the most
important change needed is that the Servlet API must be evolved to
support an asynchronous model.
I see 5 main use-cases for asynchronous servlets:
- Non-blocking input – The ability to receive data from a client without blocking if the data is slow arriving. This is actually not a significant driver for an asynchronous API, as most requests arrive in a single packet, or handling can be delayed until the arrival of the first content packet. Moreover, I would like to see the servlet API evolve so that applications do not have to do any IO.
- Non-blocking output – The ability to send data to a client without blocking if the client or network is slow. While the need for asynchronous output is much greater than for asynchronous input, I also believe this is not a significant driver. Large buffers can allow the container to flush most responses asynchronously, and for larger responses it would still be better to avoid the application code handling IO.
- Delay request handling – The Comet style of Ajax web application can require that request handling is delayed until either a timeout or an event has occurred. Delaying request handling is also useful if a remote/slow resource must be obtained before servicing the request, or if access to a specific resource needs to be throttled to prevent too many simultaneous accesses. Currently the only compliant option to support this is to wait within the servlet, consuming a thread and other resources.
- Delay response close – The Comet style of Ajax web application can require that a response is held open to allow additional data to be sent when asynchronous events occur. Currently the only compliant option to support this is to wait within the servlet, consuming a thread and other resources.
- 100 Continue handling – A client may request a handshake from the server before sending a request body. If this is sent automatically by the container, it prevents this mechanism being meaningfully used. If the application is able to decide whether a 100-Continue is to be sent, then an asynchronous API would prevent a thread being consumed during the round trip to the client.
All these use cases can be summarized as "sometimes you just have to wait for something", with the perspective that waiting within the Servlet.service method is an expensive place to park a request, as:
- A thread must be allocated.
- If IO has begun, then buffers must be allocated.
- If Readers/Writers are obtained, then character converters are allocated.
- The session cannot be passivated.
- Anything else allocated by the filter chain is held.
These are all resources that are frequently pooled or passivated
when a request is idle. Because comet style Ajax applications require
a waiting request for every user, this invalidates the use of
pools for these resources and requires maximal resource usage.
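The cost difference can be illustrated with a toy sketch (all names here are hypothetical, not any container's API): instead of blocking a thread, with its buffers and converters, for every idle user, a parked request is just a cheap map entry that a later event completes.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model: parked requests are map entries, not blocked threads (illustrative names). */
public class ParkingDemo {
    private final Map<String, String> parked = new HashMap<>();
    private final Map<String, String> responses = new HashMap<>();

    /** Instead of waiting inside service(), record the request and release the thread. */
    void park(String requestId, String user) {
        parked.put(requestId, user);
    }

    /** When the event for a request arrives, complete it; no thread was held meanwhile. */
    void onEvent(String requestId, String data) {
        String user = parked.remove(requestId);
        if (user != null) responses.put(requestId, user + ":" + data);
    }

    String responseFor(String requestId) {
        return responses.get(requestId);
    }

    public static void main(String[] args) {
        ParkingDemo container = new ParkingDemo();
        // 10,000 idle Comet users cost 10,000 map entries here, rather than
        // 10,000 threads plus their buffers and character converters.
        for (int i = 0; i < 10_000; i++)
            container.park("req-" + i, "user-" + i);
        container.onEvent("req-7", "message");
        System.out.println(container.responseFor("req-7")); // user-7:message
    }
}
```

The point of the sketch is only the bookkeeping: whatever form a standard API takes, an idle request should cost a small record, not a full set of pooled resources.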
To avoid this resource crisis, the servlet spec requires some low cost, short term parking for requests.
The Current Solutions
Given the need for a solution, the servlet container implementations have started providing it with an assortment of non-compliant extensions:
- Jetty has Continuations, which are targeted at Comet applications
- BEA has a future-response mechanism, also targeted at Comet applications
- Glassfish has an extensible NIO layer for async IO below the servlet model
- The Tomcat developers have just started developing Comet support in Tomcat 6
It is ironic that just as the 2.5 specification resolves most of the outstanding portability issues, new portability issues are being created. A standard solution is needed if web applications are to remain portable, and if Ajax framework developers are not to be forced to support multiple servers as well as multiple browsers.
A Proposed Standard Solution?
I am still not exactly sure how a standard solution should look, but I'm already pretty sure how it should NOT look:
- It should not be an API on a specific servlet. By the time a container has identified a specific servlet, much of the work has been done. Moreover, as filters and dispatchers give the ability to redirect a request, any asynchronous API on a servlet would have to follow the same path.
- It probably will not be based on Continuations. While Continuations are a useful abstraction (and will continue to be so), a lower level solution can offer greater efficiencies and solve additional use-cases.
- It should not expose Channels or other NIO mechanisms to the servlet programmer. These are details that the container should implement and hide, and NIO may not be the actual mechanism used.
An approach that I’m currently considering is based around a Coordinator
entity that can be defined and mapped to URL patterns just like filters
and servlets. A Coordinator would be called by the container in response
to asynchronous events and would coordinate the call of the synchronous
service method.
The default coordinator would provide the normal servlet style of scheduling, and could look like:

```java
class DefaultCoordinator implements ServletCoordinator
{
    void doRequest(ServletRequest request)
    {
        request.continue();
        request.service();
    }

    void doResponse(Response response)
    {
        response.complete();
    }
}
```

The ServletRequest.continue() call would trigger any required 100-Continue response, and an alternative Coordinator may not call this method if a request body is not required or should not be sent.
The ServletRequest.service() call will trigger the dispatch of a thread to the normal Filter chain and Servlet service methods. An alternative Coordinator may choose not to call service during the call to doRequest. Instead it may register with asynchronous event sources and call service() when an event occurs or after a timeout. This can delay event handling until the required resources are available for that request.
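Such an alternative, Comet-style Coordinator can be sketched as a container-free simulation. Everything below (FakeRequest, CometCoordinator, the parking queue) is an illustrative stand-in for the proposed, not-yet-existing API: doRequest registers interest and returns without dispatching, and a later event triggers the deferred service() call.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Stand-in for a request whose service() dispatch can be deferred (illustrative). */
class FakeRequest {
    final String id;
    boolean serviced;

    FakeRequest(String id) { this.id = id; }

    /** Stands in for dispatching the request to the filter chain and servlet. */
    void service() { serviced = true; }
}

/** Sketch of an alternative Coordinator in the style of the proposal (not a real API). */
class CometCoordinator {
    private final Queue<FakeRequest> waiting = new ArrayDeque<>();

    /** doRequest: do NOT call service(); just register interest and return the thread. */
    void doRequest(FakeRequest request) {
        waiting.add(request);
    }

    /** An asynchronous event arrives: now dispatch every parked request. */
    void onEvent() {
        FakeRequest r;
        while ((r = waiting.poll()) != null)
            r.service();
    }
}

public class CoordinatorDemo {
    public static void main(String[] args) {
        CometCoordinator coordinator = new CometCoordinator();
        FakeRequest req = new FakeRequest("req-1");
        coordinator.doRequest(req);       // request parked; no thread is consumed
        System.out.println(req.serviced); // false
        coordinator.onEvent();            // the event triggers the deferred dispatch
        System.out.println(req.serviced); // true
    }
}
```

The same shape would cover the throttling use-case: onEvent would fire when a pooled resource becomes available rather than when a Comet message arrives.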
The ServletResponse.complete() call will clean up a response and close the response streams (if not already closed). An alternative Coordinator may choose not to call complete during the call to doResponse(), thus leaving the response open for asynchronous events to write more content. A subsequent event or timeout may call complete to close the response and return its connection to be scheduled for new requests.
The coordinator lifecycle would probably be such that an instance would be allocated to each request, so that fields in a derived coordinator can be used to communicate between the doRequest and doResponse methods.
It would also be possible to extend the Coordinator approach to make available events such as arrival
of request content or the possibility of writing more response content. However, I believe that asynchronous
IO is of secondary importance and the approach should be validated for the other use-cases first.
If feedback on this approach is good, I will probably implement a prototype in Jetty 6 soon. -