Jetty’s HttpClient is a fast, scalable, asynchronous implementation of an HTTP client.
But it is much more than that.
Jetty’s HttpClient provides a high level API with HTTP semantics. This means that your applications can perform HTTP requests and receive HTTP responses with a rich API. For example, you can use HttpClient to perform REST requests from the client, or from within your web application to third-party REST services.
Jetty’s HttpClient also provides pluggable transports. This means that the concept of an HTTP request and response is translated by HttpClient to SPDY, FastCGI, HTTP/1.1 or other protocols, and transported over the network in SPDY, FastCGI and HTTP/1.1 formats, in a way that is totally transparent to the application, which only sees a high level HTTP request and response.
Applications will get improved performance when using more performant transports.
The new addition in Jetty 9.3 is an HTTP/2 transport for HttpClient, replacing the SPDY transport.
This means that now HttpClient can talk to a regular HTTP/1.1 server, or to a FastCGI server that serves PHP pages, or to a HTTP/2 server transparently.
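The pluggable-transport idea can be sketched in plain Java. All the names below are invented for illustration and are not Jetty's actual HttpClient API; the point is only that application code targets one high-level API while the wire protocol is an implementation detail of the transport.

```java
// Illustrative sketch of pluggable transports: application code sees only a
// high-level request/response API; the transport decides the wire format.
// All names here are hypothetical, not Jetty's HttpClient API.
interface Transport {
    String send(String method, String uri);
}

class Http1Transport implements Transport {
    public String send(String method, String uri) {
        // A real implementation would frame the request as HTTP/1.1 text.
        return "HTTP/1.1 response for " + method + " " + uri;
    }
}

class Http2Transport implements Transport {
    public String send(String method, String uri) {
        // A real implementation would frame the request as HTTP/2 binary
        // frames over a single multiplexed connection.
        return "HTTP/2 response for " + method + " " + uri;
    }
}

public class SimpleClient {
    private final Transport transport;

    public SimpleClient(Transport transport) {
        this.transport = transport;
    }

    // Application-level code is identical whichever transport is plugged in.
    public String get(String uri) {
        return transport.send("GET", uri);
    }

    public static void main(String[] args) {
        System.out.println(new SimpleClient(new Http1Transport()).get("/index.html"));
        System.out.println(new SimpleClient(new Http2Transport()).get("/index.html"));
    }
}
```

In Jetty the transport is chosen when the HttpClient instance is created, so the application code performing requests is unchanged when switching protocols.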
The HTTP/2 specification is in its final phases, so the HTTP/2 protocol is now stable and well supported: Firefox, Chrome and Internet Explorer 11 already support HTTP/2, and as time passes they will enable it by default (some already have).
And it’s not only browsers and servers such as Google, Twitter, etc.: tools and libraries such as curl and nghttp2, among many others, support it too.
The Jetty project has implemented HTTP/2 since June 2014, and this very website has been served using Jetty’s HTTP/2 implementation for over 6 months now, helping to finalize interoperability among different implementations.
You are probably already reading this blog entry served via HTTP/2, if you are using a recent browser.
Contact us if you are interested in deploying HTTP/2 in your infrastructure and benefiting from the performance improvements that it brings.
-
HTTP/2 Support for HttpClient
-
Phasing out SPDY support
Now that the HTTP/2 specification is in its final phases of approval, major players have announced that they will remove support for SPDY in favor of long term support of HTTP/2 (Chromium blog). We expect others to follow soon.
Based on this trend and on feedback from users, the Jetty Project is announcing that it will drop support for SPDY in Jetty 9.3.x, replacing its functionality with HTTP/2. Milestone builds of Jetty 9.3.0 are available now if you would like to try them out; they can be downloaded through Maven Central. A new milestone will be released shortly, followed by a full release once the specification is finalized.
The SPDY protocol will remain supported in the Jetty 9.2.x series, but no further work will be done on it unless it is sponsored by a client. This will allow us to concentrate fully on a first class quality implementation of HTTP/2.
Along these same lines, Jetty 9.3 will drop support for NPN (the TLS Next Protocol Negotiation Extension), replacing its functionality with ALPN (the TLS Application-Layer Protocol Negotiation Extension, RFC 7301). NPN should remain supported in the Jetty 9.2.x series, and will be updated as new JDK 7 versions are released.
Contact us if you are interested in migrating your existing SPDY solutions to HTTP/2. -
HTTP2 Last Call as Proposed Standard
The HTTP2 protocol has been submitted for the next stage on its way to becoming an internet standard: the last call to the IESG. Some feedback has been highly critical, and has sparked its own lengthy discussion. I previously gave my own critical feedback at the WG last call, but since then the protocol has improved a little and my views have moderated a touch. However, I still have significant reservations, as expressed in my latest feedback, reproduced below. Regardless, the Jetty implementation of this protocol is well advanced in our master branch and will soon be released with Jetty 9.3.0.
IESG Last Call Feedback
The HTTP/2 proposal describes a protocol that has some significant benefits over HTTP/1 and, considering its current deployment status, should progress to an RFC more or less as is. However, the IESG should know that it is a technical compromise that fails to meet some significant aspects of its charter.
I and others have discussed at length in the WG what we see as technical problems. These issues were given fair and repeated consideration by the WG (albeit somewhat hurried, on a schedule whose rationale was never really apparent). The resulting document does represent a very rough consensus of the active participants, which is to the credit of the chair, who had to deal with a deeply divided WG. But the IESG should note that this also means that there are many parts of the protocol that are borderline “can’t live with” for many participants. It is not an exemplar of technical excellence.
I believe that this is the result of a charter that sets up opposing goals with little guidance on how to balance them. Thus the core of my feedback to this LC is not to reiterate past technical discussions, but rather to draw the IESG’s attention to the parts of the charter that are not well met.
The httpbis charter for http2 begins by defining why HTTP/1 should be replaced (my emphasis):
There is emerging implementation experience and interest in a protocol that retains the semantics of HTTP without the legacy of HTTP/1.x message framing and syntax, which have been identified as hampering performance and encouraging misuse of the underlying transport.
Some examples of the protocol misuse include:
- breaking the 2 connection limit by clients in order to reduce request latency and to maximise throughput via the utilisation of multiple flow control windows.
- use of long polling patterns to establish two way communication between client and server.
- use of the upgrade mechanism to replace the HTTP semantic with the websocket semantic, which has been described by some as a misappropriation of ports 80/443 for an alternative semantic.
I believe that the emphasis on performance (specifically browser “end-user perceived latency”, which is called out by the charter) has prevented the draft from significantly addressing the misuse goal of the charter. This emphasis on efficiency over other design aspects was well characterised by the editor’s comment:
“What we’ve actually done here is conflate some of the stream control functions with the application semantics functions in the interests of efficiency” – Martin Thomson 8/May/2014
This conflation of HTTP efficiency concerns into the multiplex framing layer has caused the draft to fail to meet its charter in several ways:
Headers are not multiplexed nor flow controlled.
Because the WG was chartered to “Retain the semantics of HTTP/1.1,” it was the rough consensus that http/2 must support arbitrarily large headers. However, in the interests of supposed efficiency, HTTP header semantics have been conflated with the framing layer and are treated specially so that they are not flow controlled and they cannot be interleaved by other multiplexed streams.
Unconstrained header size can thus result in head-of-line blocking (if, for example, a large header hits TCP/IP flow control), which is a concern that was explicitly called out by the charter. Even without TCP/IP flow control, small headers sent slowly can hold other messages back from initiating, progressing and/or completing.
While large headers are infrequently used today, the lack of flow control and interleaving of headers represents a significant incentive for large data to be moved from the body to the header. History has shown that in the pursuit of performance, protocols will be perverted! So not only will this likely increase the occurrence of head-of-line blocking, it is an encouragement to misuse the protocol, which breaks one of the two primary goals of the charter.
Websocket semantics are not supported.
While the WG was chartered to “coordinate this item with: … * The HYBI Working Group, regarding the possible future extension of HTTP/2.0 to carry WebSockets semantics”, this has not been done to any workable resolution. The conflation of the framing layer with HTTP means that it cannot operate independently of HTTP semantics and there is now uncertainty as to how websocket semantics can be carried over HTTP2.
An initial websocket proposal was based on using the existing DATA frames, but segmentation features needed by websocket were removed as they were not needed to support HTTP semantics.
Another websocket proposal is to define new frame types that can carry the websocket semantic, however this suffers from the issue that intermediaries may not understand these new frame types, so the websocket semantic will not be able to penetrate the real world web. Upgrading intermediaries is a slow and difficult process that will never complete.
Yet another approach has been proposed, and has been mentioned in this thread as a feature: to replace long polling with pushed streams. A HTTP2 request/response stream is kept open without any data transfer so that PUSH_PROMISE frames can be used to initiate new streams that carry websocket style messages server to client in pretend HTTP request/responses. This proposal has the benefit that, by pretending to be HTTP, websocket style semantics can penetrate a web that does not know about the websocket semantic. However, this is also the kind of protocol abuse that the WG was chartered to avoid.
Instead of simply catering for the websocket semantic, the solution has been to come up with an even more tricky and convoluted abuse of the HTTP semantic for two way messaging.
Priority is a client side consideration only
The entire frame priority mechanism is focused not only on HTTP semantics, but on client side priorities. Consideration has only been given to what resources a client wishes to receive first, and little if any consideration has been given to the server’s concerns for maximising throughput and fairness for all clients. The priority mechanism does not have widely working code and is essentially a thought bubble waiting to see if it will pop or float.
My own server (Jetty) currently plans to entirely ignore the priority mechanism, because it is expressed in a style, and at a point, at which it is entirely too late for the server to retrieve any substantial resources already committed to a low client priority resource. E.g. if the server has launched an SQL query and is currently converting the data from a cursor into HTML, it serves no purpose to subsequently tell it that the client thinks it is a low priority data stream and would prefer other resources first. The server has committed threads, buffers and scarce DB connections to the request, and its priority is to complete the response and recover those resources.
In summary
The draft as presented does represent a consensus of the WG, but also a poor technical compromise between conflicting goals set by the charter. While the performance goal appears to have been well met (at least for client side web traffic), the protocol does not remove incentives for, nor avoid the need for, protocol misuse, which may ultimately end up compromising the performance goals. I would suggest that this is the result of insufficient clarity in the charter rather than of a poorly executed WG process.
Further, I believe that by creating a proposal that is so specific to the HTTP semantic, we are missing an opportunity to create a multi-semantic web, where all traffic does not need to pretend to be HTTP and new semantics could be introduced without needing to redeploy, or to trick, the web intermediaries and/or data centre infrastructure.
Unfortunately we are well along the path of deploying http2: many if not most browsers have support for it; server implementations are available and more are on their way; several significant websites are already running the protocol and reporting benefits; intermediaries can mostly be bypassed by the forced use of TLS (many say the misuse of TLS) and the experts are exhausted from continual trench warfare on core technical issues.
I don’t think that http2 is a genie that can easily be put back in the bottle, nor can it be polished much more than it is without a change of charter. Thus on balance I think it should probably be made an RFC. Perhaps not on the standards track, but either way, we should do so knowing that it has benefits, compromises, failures and missed opportunities. We need to work out how to do better next time.
-
JavaOne 2014 Servlet 3.1 Async I/O Session
Greg Wilkins gave the following session at JavaOne 2014 about Servlet 3.1 Async I/O.
It’s a great talk in many ways.
You get to know from an insider of the Servlet Expert Group about the design of the Servlet 3.1 Async I/O APIs.
You get to know, from the person who created and developed Jetty and has implemented Servlet specifications for 19 years, what the gotchas of these new APIs are.
You get many great insights on how to write correct asynchronous code, and believe it – it’s not as straightforward as you think!
You get to know that Jetty 9 supports Servlet 3.1, and that you should start using it today 🙂
Enjoy!
-
CometD RemoteCall APIs
CometD is a library collection that allows developers to write web messaging applications: you can send messages from server to client, from client to server, from client to other clients (via the server), and from server to server (using its clustering solution).
I wrote about these styles of communications in a previous blog post.
CometD 3.0.3 has been released, and it ships a new feature: a simpler API for remote calls.
Most web messaging applications typically just need the concept of message exchanges: they don’t need a request/response paradigm (although they may require some form of acknowledgment that the message has been received by the server).
However, for certain use cases, the application may need a way to send a message that requires a response, effectively using messages to implement remote procedure calls.
CometD has always supported this use case through the use of service channels.
However, applications had to write some code to correlate the request with the response and to handle timeouts and errors. Not much code, but still.
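To give an idea of that bookkeeping, here is a rough, stdlib-only sketch of request/response correlation with a timeout. It is illustrative only, not CometD's code; the names are invented, and it uses Java 9's CompletableFuture.orTimeout for brevity.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the correlation a remote-call layer performs: each outgoing
// call gets an id; the response carries the id back and completes the
// matching pending future, or the call times out.
public class CallCorrelator {
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    // Called when sending a request: allocate an id that travels with the
    // message, and schedule a timeout that fails and forgets the call.
    public long register(CompletableFuture<String> response, long timeoutMillis) {
        long id = ids.incrementAndGet();
        pending.put(id, response);
        response.orTimeout(timeoutMillis, TimeUnit.MILLISECONDS)
                .whenComplete((result, failure) -> pending.remove(id));
        return id;
    }

    // Called when a response message arrives carrying the original id.
    public void onResponse(long id, String result) {
        CompletableFuture<String> response = pending.remove(id);
        if (response != null)
            response.complete(result);
    }

    public static void main(String[] args) {
        CallCorrelator correlator = new CallCorrelator();
        CompletableFuture<String> call = new CompletableFuture<>();
        long id = correlator.register(call, 5000);
        // Simulate the response message coming back from the server.
        correlator.onResponse(id, "[contacts]");
        System.out.println(call.getNow("<no response>"));
    }
}
```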
CometD 3.0.3 introduces a new, simpler API to perform remote calls, which takes care under the hood of the request/response correlation and of handling timeouts and errors.
From a JavaScript client you can now do (full docs):

```javascript
cometd.remoteCall("findContacts", { status: "online" }, 5000, function(response) {
    if (response.successful)
        displayContacts(response.data);
});
```

You specify a target for the remote call ("findContacts"), the argument object ({ status: "online" }), the timeout in milliseconds for the call to complete (5000), and a function that handles the response.
On the server side, you define a method as the target of the remote call using the @RemoteCall annotation (full docs):

```java
@Service
public class RoomService {
    @RemoteCall("findContacts")
    public void findContacts(RemoteCall.Caller caller, Object data) {
        Map<String, Object> filter = (Map<String, Object>)data;
        ServerSession client = caller.getServerSession();
        try {
            List contacts = findContacts(client, filter);
            // Respond to the client with a success.
            caller.result(contacts);
        } catch (Exception x) {
            // Respond to the client with a failure.
            caller.failure(x.getMessage());
        }
    }
}
```

The target method may implement its logic in a different thread (this is recommended if the computation takes a long time), or even forward the request to another node via an OortService.
The target method must respond to the caller, but it may also broadcast other messages or send to the caller additional messages along with the response.
The possibilities are limitless.
Contact us if you have any questions or if you want commercial development and production support for your CometD application. -
Jetty 7 and Jetty 8 – End of Life
Five years ago we migrated the Jetty project from The Codehaus to the Eclipse Foundation. In that time we have pushed out 101 releases of Jetty 7 and Jetty 8, double that if you count the artifacts that had to remain at the Codehaus for the interim.
Four years ago we ceased open source support for Jetty 6.
Two years ago we released the first milestone of Jetty 9 and there have been 34 releases since. Jetty 9 has been very well received and feedback on it has been overwhelmingly positive from both our client community and the broader open source community. We will continue to improve upon Jetty 9 for years to come and we are very excited to see how innovative features like HTTP/2 support play out as these rather fundamental changes take root. Some additional highlights for Jetty 9 are: Java 7+, Servlet 3.1+, JSR 356 WebSocket, and SPDY! You can read more about Jetty 9.2 from the release blog here. Additionally we will have Jetty 9.3 releasing soon which contains support for HTTP/2!
This year will mark the end of our open source support for Jetty 7 and Jetty 8. Earlier this week we pushed out a maintenance release that only had a handful of issues resolved over the last five months, so releases have obviously slowed to a trickle. Barring any significant security related issue, it is unlikely we will see more than a release or two remaining for Jetty 7 and Jetty 8. We recommend users update to Jetty 9 as soon as they are able to work it into their schedule. For most people we work with, the migration has been trivial, certainly nothing on the scale of the migration between foundations.
Important to note is that this is strictly regarding open source support. Webtide is the professional services arm of the Jetty Project, and there will always be active professional developer and production support available for clients on Jetty 7 and Jetty 8. We even have clients with Jetty 6 support who have been unable to migrate for a host of other reasons. The services and support we provide through Webtide are what fund the ongoing development of the Jetty platform. We have no licensed version of Jetty that we try to sell to enterprise users; the open source version of Jetty is the professional version. Feel free to explore the rest of this site to learn more about Webtide, and if you have any questions feel free to comment, post a message to the mailing lists, or fill out the contact form on this site (and I will follow up!).
-
Jetty @ JavaOne 2014
I’ll be attending JavaOne Sept 29 to Oct 1 and will be presenting several talks on Jetty:
- CON2236 Servlet Async IO: I’ll be looking at the servlet 3.1 asynchronous IO API and how to use it for scale and low latency. Also covers a little bit about how we are using it with http2. There is an introduction video but the talk will be a lot more detailed and hopefully interesting.

- BOF2237 Jetty Features: This will be a free form two way discussion about new features in Jetty and its future direction: http2, modules, admin consoles, Docker, etc. are all good topics for discussion.
- CON5100 Java in the Cloud: This is primarily a Google session, but I’ve been invited to present the work we have done improving the integration of Jetty into their cloud offerings.
I’ll be in the Bay area from the 23rd and I’d be really pleased to meet up with Jetty users in the area before or during the conference, for anything from an informal chat/drink/coffee up to the full sales pitch of Intalio|Webtide services (or even both!) – <gregw@intalio.com>
-
HTTP/2 Push with experimental Servlet API
As promised in my last post on HTTP/2, we have implemented and deployed the HTTP/2 Push functionality on this very website, webtide.com. For the other HTTP/2 implementers out there, if you request "/" on webtide.com, you will get "/wp-includes/js/jquery/jquery.js" pushed.
We have already implemented SPDY Push in the past, but this time we wanted to go a step further and implement HTTP/2 Push in the context of an experimental Servlet API that applications can use to decide what resources need to be pushed.
The experimental Servlet API (designed by @gregwilkins) is very simple and would consist of only one additional method in javax.servlet.RequestDispatcher:

```java
public interface RequestDispatcher {
    public void push(ServletRequest request);
    ...
}
```

An application receiving a request for a primary resource, say index.html, would identify what secondary resources it would like to push along with the primary resource. For each secondary resource, the application would obtain a RequestDispatcher, and then call push() on it, passing the primary resource request:

```java
public class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String uri = request.getRequestURI();
        if ("/index.html".equals(uri)) {
            String resourceToPush = "/js/jquery.js";
            RequestDispatcher dispatcher = request.getRequestDispatcher(resourceToPush);
            dispatcher.push(request);
        }
    }
}
```

For applications that use web frameworks it is, in general, difficult to identify a resource to push. For example, if you use a JSF library, your application is not in control of what secondary resources the JSF library may need to push (for example CSS, JavaScript snippets, images, etc. associated with the JSF components being rendered).
Browsers, on the other hand, are in a much better position to identify the secondary resources belonging to a primary resource, as they parse the primary resource. It would be great if browsers could request those resources with a special HTTP header that marks the secondary resource request as associated with the primary resource. Moreover, it would be great if this could be completely automated, so that applications need not worry about primary and secondary resources.
This is exactly what we have done in PushCacheFilter. We have implemented a strategy where the Referer header is used to associate secondary resources with primary resources. With this association information, the filter builds a cache where secondary resources are linked to a primary resource, and every time a primary resource is requested, we also push the associated secondary resources.
PushCacheFilter looks at the resource being requested; if it is not known to the filter, it assumes it is a primary resource and assigns a timestamp to it. It then “opens” a window of – by default – 2000 ms during which other requests may arrive; if those requests have the former request as their referrer, then they are secondary resources associated with the primary resource. The next time the primary resource is requested, the filter knows about it and pushes its secondary resources via the experimental Servlet API discussed above.
We have kept the filter intentionally simple to foster discussion about which strategies could be more useful and which features would be needed, for example:
- Would browsers use a special header (not the Referer header) to mark a resource as associated with another resource?
- How would it be possible to evict entries from the push cache without manual intervention?
- Is there a relationship between the cacheability of the primary resource and that of the secondary resources that we can leverage?
- How can a browser tell the server not to push a resource that is already in the browser’s cache?
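For concreteness, the Referer-based association described above can be modeled with a small stdlib-only sketch. This is a simplification of the idea, not the actual PushCacheFilter code; in particular the 2000 ms window and the timestamps are omitted.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of a Referer-driven push cache: a request whose Referer
// matches a known primary resource is recorded as one of its secondary
// resources; later requests for the primary return those as push candidates.
public class PushCache {
    private final Map<String, Set<String>> secondaries = new ConcurrentHashMap<>();

    // Returns the resources to push for this request (empty the first time).
    public Set<String> onRequest(String uri, String referer) {
        if (referer != null) {
            Set<String> associated = secondaries.get(referer);
            if (associated != null)
                associated.add(uri);  // uri is a secondary resource of referer
        }
        // Track every unseen resource as a potential primary resource.
        return secondaries.computeIfAbsent(uri, k -> ConcurrentHashMap.newKeySet());
    }

    public static void main(String[] args) {
        PushCache cache = new PushCache();
        cache.onRequest("/index.html", null);                      // first visit: nothing to push
        cache.onRequest("/js/jquery.js", "/index.html");           // learned association
        System.out.println(cache.onRequest("/index.html", null));  // now contains /js/jquery.js
    }
}
```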
We encourage anyone that is interested to join the Jetty mailing lists and contribute to the discussion.
If you are interested in making your website faster, look at what HTTP/2 Push could do for your website (see our SPDY Push Demo Video), and contact us.
-
CometD 3: RPC, PubSub, Peer-to-Peer Web Messaging
A couple of months ago the CometD Project released its third major version, CometD 3.0.0 (announcement).
Since then I wanted to write a blog about this major release, but work on HTTP 2 kept me busy.
Today CometD 3.0.1 was released, so it’s time for a new CometD blog entry.
CometD is an open source (Apache 2 licensed) project that started in 2008 under the umbrella of the Dojo Foundation, and already at that time it defined a web application messaging protocol named Bayeux.
Similar efforts have started more recently, see for example WAMP.
The Bayeux protocol is transport independent (it works with HTTP and WebSocket), which allows CometD to easily fall back to HTTP when WebSocket, for any reason, does not work. Of course, it also works with SPDY and HTTP 2.
The Bayeux protocol supports two types of message delivery: the pubsub (broadcast) message delivery, and the peer-to-peer message delivery.
In the pubsub style, a publisher sends a message to the server, which broadcasts it to all subscribed clients.
In the peer-to-peer style, a sender sends a message to the server, which then decides what to do with it: it may broadcast the message to certain clients only, or to only one, or back to the original sender. This latter case allows an implementation to offer RPC (remote procedure call) message delivery.
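The two delivery styles can be sketched with a minimal in-memory broker in plain Java. This is purely illustrative; the class and method names are invented and are not CometD's channel or session APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Minimal broker illustrating the two delivery styles: broadcast to every
// subscriber of a channel (pubsub), or delivery to one specific client
// (peer-to-peer, which is also how RPC-style responses reach the caller).
public class Broker {
    private final Map<String, List<Consumer<String>>> subscriptions = new ConcurrentHashMap<>();
    private final Map<String, Consumer<String>> clients = new ConcurrentHashMap<>();

    public void connect(String clientId, Consumer<String> listener) {
        clients.put(clientId, listener);
    }

    public void subscribe(String channel, String clientId) {
        subscriptions.computeIfAbsent(channel, k -> new ArrayList<>()).add(clients.get(clientId));
    }

    // Pubsub style: the server broadcasts the message to all subscribed clients.
    public void publish(String channel, String message) {
        subscriptions.getOrDefault(channel, List.of()).forEach(listener -> listener.accept(message));
    }

    // Peer-to-peer style: the server delivers the message to exactly one client.
    public void deliver(String clientId, String message) {
        clients.get(clientId).accept(message);
    }

    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.connect("alice", message -> System.out.println("alice got: " + message));
        broker.connect("bob", message -> System.out.println("bob got: " + message));
        broker.subscribe("/chat/room1", "alice");
        broker.subscribe("/chat/room1", "bob");
        broker.publish("/chat/room1", "hello everyone");  // both receive it
        broker.deliver("bob", "just for you");            // only bob receives it
    }
}
```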
Bayeux is also an extensible protocol that allows extensions to be implemented on top of the protocol itself, so that implementations can add features such as time synchronization between client and server, and various grades of reliable message delivery (also called “quality of service” by MQTT).
The CometD Project is an implementation of the Bayeux protocol and, over the years, has implemented not only the major features that the Bayeux protocol enables (pubsub messaging, peer-to-peer messaging, RPC messaging, message acknowledgment, etc.), but also additional features that make CometD the right choice if you have a web messaging use case.
One such feature is a clustering system, called Oort, that enables even more features, such as tracking which node a client is connected to (Seti), and distributed objects and services.
CometD 3.x leverages the standards for its implementation.
In particular, it ships an HTTP transport based on the Servlet 3.1 Async I/O API, which makes CometD very scalable (no threads are blocked by I/O operations).
Furthermore, it ships a WebSocket transport based on the standard Java WebSocket API, also using asynchronous features, that makes CometD even more scalable.
If you have a web messaging use case, be it RPC style, pubsub style or peer-to-peer style, CometD is the one-stop shop for you.
Webtide provides commercial support for CometD, so you are not alone when using CometD. You don’t have to spend countless hours googling around in search of solutions when you can have the CometD committers helping you to design, deploy and scale your project.
Contact us. -
HTTP/2 Interoperability and HTTP/2 Push
Following my previous post, several players tried their HTTP/2 implementation of draft 14 (h2-14) against webtide.com.
A few issues were found and quickly fixed on our side, and this is very good for interoperability.
Having worked many times at implementing specifications, I know that different people interpret the same specification in slightly different ways that may lead to incompatibilities.
@badger and @tatsuhiro_t reported that curl + nghttp2 is working correctly against webtide.com.
On the Firefox side, @todesschaf reported a couple of edge cases that were fixed, so expect a Firefox nightly soon (if not already out?) that supports h2-14.
We are actively working on porting the SPDY Push implementation to HTTP/2, and Firefox should already support HTTP/2 Push, so there will be more interoperability testing to do, which is good.
This work is being done in conjunction with an experimental Servlet API, so that web applications will be able to tell the container what resources should be pushed. This experimental push API is scheduled to be defined by the Servlet 4.0 specification, so once again the Jetty project is leading the way, as it did for async Servlets, SPDY and SPDY Push.
Why should you care about all this?
Because SPDY Push can boost your website performance, and more performance means more money for your business.
Interested? Contact us.