Author: Simone Bordet

  • HTTP/2 draft 14 is live!

    Greg Wilkins (@gregwilkins) and I (@simonebordet) have been working on implementing HTTP/2 draft 14 (h2-14), which is the draft that will probably undergo the “last call” at the IETF.

    We will blog very soon with our opinions about HTTP/2 (stay tuned, it'll be interesting!), but for the time being Jetty proves once again to be a trailblazer when it comes to new web technologies and web protocols.

    Jetty started to innovate with Jetty Continuations, which were standardized (with improvements) into Servlet 3.0.

    Jetty was one of the first Java servers to offer support for asynchronous I/O, back in 2006 with Jetty 6.

    In 2012 we were the first Java server to implement SPDY, and we have written libraries that provide support for NPN in Java (now used by many other Java servers that support SPDY). We were also the first to implement a completely automatic way of leveraging SPDY Push, which can boost your web site's performance.

    Today, to my knowledge, we are again the first Java server to expose an implementation of the HTTP/2 protocol, draft 14, live on our own website.

    Along with HTTP/2 support, which will arrive in Jetty 9.3, we have also implemented a library that provides support for ALPN (the successor of NPN) in Java, allowing every Java application (client or server) to implement HTTP/2 over SSL. This library is already available in the Jetty 9.2.x series. We want other implementers (client and server) to test our HTTP/2 implementation in order to generate feedback about HTTP/2 that can be reported at the IETF.

    As of today, both Mozilla Firefox and Google Chrome only support HTTP/2 draft 13 (h2-13). They are keeping pace with new drafts, though, so expect both browsers to offer draft 14 support in a matter of days (in their nightly/unstable versions). When that happens, you will be able to use those browsers to connect to our HTTP/2 enabled website.

    The Jetty project offers not only a server, but an HTTP/2 client as well. You can take a look at how it's used to connect to an HTTP/2 server here.

    Where is it? https://webtide.com.

    Lastly, contact us for any news or information about what Jetty can do for you in the realms of async I/O, PubSub over the web (via CometD), SPDY and HTTP/2.

  • Jetty 9.1.4 Open Sources FastCGI Proxy

    I wrote in the past about the support that was added to Jetty 9.1 to proxy HTTP requests to a FastCGI server.
    A typical configuration to serve PHP applications such as WordPress or Drupal is to put Apache or Nginx in front and have them proxy the HTTP requests to, typically, php-fpm (a FastCGI server included in the PHP distribution), which in turn runs the PHP scripts that generate HTML.
    Jetty’s support for FastCGI proxying has been kept private until now.
    With the release of Jetty 9.1.4 it is now part of the main Jetty distribution, released under the same license (Apache License or Eclipse Public License) as Jetty.
    Since we like to eat our own dog food, Jetty is currently serving the pages of this blog (which is WordPress) using Jetty 9.1.4 and the newly released FastCGI module.
    And it is doing so via SPDY, rather than HTTP, allowing you to serve Java EE Web Applications and PHP Web Applications from the same Jetty instance and leveraging the benefits that the SPDY protocol brings to the Web.
    For further information and details on how to use this new module, please check the Jetty FastCGI documentation.
    Enjoy!

  • How to install JIRA 6.1 in Jetty 9.1

    Atlassian JIRA is a very good issue tracking system. Many open source projects use it, including our own CometD project and most notably OpenJDK.
    While Atlassian supports JIRA on Tomcat, JIRA runs in Jetty as well, and can benefit from Jetty's support for SPDY.
    Below you can find the instructions on how to set up JIRA 6.1.5 in Jetty 9.1.0 with HTTP and SPDY support on Linux.

    1. Download JIRA’s WAR version

    JIRA can be downloaded in two versions: the distro installer version (the default download from JIRA’s website), and the distro WAR version. You need to download the distro WAR version by clicking on “All JIRA Download Options” on the download page.

    2. Build the JIRA WAR

    Unpack the JIRA distro WAR file. It will create a directory called atlassian-jira-<version>-war referred to later as $JIRA_DISTRO_WAR.

    2.1 Specify the JIRA HOME directory

    $ cd $JIRA_DISTRO_WAR/edit-webapp/WEB-INF/classes/
    $ vi jira-application.properties
    

    The jira-application.properties file contains just one property:

    jira.home =
    

    You need to specify the full path of your JIRA home directory, for example:

    jira.home = /var/jira
    

    2.2 Change the JNDI name for UserTransaction

    The JIRA configuration files come with a non-standard JNDI name for the UserTransaction object.
    This non-standard name works in Tomcat, but it is wrong for any other compliant Servlet container, so it must be changed to the standard name to work in Jetty.

    $ cd $JIRA_DISTRO_WAR/edit-webapp/WEB-INF/classes/
    $ vi entityengine.xml
    

    You need to search in the entityengine.xml file for two lines inside the <transaction-factory> element:

    <transaction-factory class="org.ofbiz.core.entity.transaction.JNDIFactory">
        <user-transaction-jndi jndi-server-name="default" jndi-name="java:comp/env/UserTransaction"/>    <-- First line to change
        <transaction-manager-jndi jndi-server-name="default" jndi-name="java:comp/env/UserTransaction"/> <-- Second line to change
    </transaction-factory>
    

    You need to change the jndi-name attribute from the non-standard name java:comp/env/UserTransaction to the standard name java:comp/UserTransaction, removing the /env part in both lines.

    2.3 Execute the build

    At this point you need to build the JIRA WAR file, starting from the JIRA distro WAR file:

    $ cd $JIRA_DISTRO_WAR
    $ ./build.sh
    

    When the build completes, it generates a file called $JIRA_DISTRO_WAR/dist-generic/atlassian-jira-<version>.war. The build also generates a Tomcat version, but you need to use the generic version of the WAR.

    3. Install Jetty 9.1

    Download Jetty 9.1 and unpack it in the directory of your choice, for example /opt, so that Jetty will be installed in a directory such as /opt/jetty-distribution-9.1.0.v20131115 referred to later as $JETTY_HOME.
    We will not modify any file in this directory, but only refer to it to start Jetty.

    4. Set up the database

    JIRA requires a relational database to work. Follow the instructions on how to set up a database for JIRA.
    When you run JIRA for the first time, it will ask you for the database name, user name and password.

    5. Set up the Jetty Base

    Jetty 9.1 introduced a mechanism to separate the Jetty installation directory ($JETTY_HOME) from the directory where you configure your web applications, referred to as $JETTY_BASE. The documentation offers more information about this mechanism.
    Create the Jetty base directory in the location you prefer, for example /var/www/jira, referred to later as $JETTY_BASE.

    $ mkdir -p /var/www/jira
    

    5.1 Set up the transaction manager

    JIRA requires a transaction manager to work, and Jetty does not provide one out of the box. However, it’s not difficult to provide support for it.
    You will use Atomikos’ TransactionsEssentials, an open source transaction manager released under the Apache 2 License.
    For a transaction manager to work, you need the transaction manager jars (in this case Atomikos’) and you need to instruct Jetty to bind a UserTransaction object in JNDI.

    5.1.1 Create the Jetty module definition

    You can use a Jetty module to define the transaction manager support in Jetty following these instructions:

    $ cd $JETTY_BASE
    $ mkdir modules
    $ vi modules/atomikos.mod
    

    Create a file in the modules directory called atomikos.mod with this content:

    # Atomikos Module
    [depend]
    plus
    resources
    [lib]
    lib/atomikos/*.jar
    [xml]
    etc/jetty-atomikos.xml
    

    This file states that the atomikos module depends on Jetty's built-in plus and resources modules, requires all the jars in the $JETTY_BASE/lib/atomikos directory on the classpath, and is configured by a file called jetty-atomikos.xml in the $JETTY_BASE/etc directory.

    5.1.2 Download the module dependencies

    Create the lib/atomikos directory:

    $ cd $JETTY_BASE
    $ mkdir -p lib/atomikos
    

    and save into it the following files, downloaded from Maven Central from this location (at the time of this writing the latest Atomikos version is 3.9.1):

    • atomikos-util.jar
    • transactions-api.jar
    • transactions.jar
    • transactions-jta.jar
    • transactions-jdbc.jar

    5.1.3 Create the module XML file

    Create the etc directory:

    $ cd $JETTY_BASE
    $ mkdir etc
    $ vi etc/jetty-atomikos.xml
    

    Create a file in the etc directory called jetty-atomikos.xml with this content:

    <?xml version="1.0"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
    <Configure id="Server" class="org.eclipse.jetty.server.Server">
        <New class="org.eclipse.jetty.plus.jndi.Transaction">
            <Arg>
                <New class="com.atomikos.icatch.jta.UserTransactionImp" />
            </Arg>
        </New>
    </Configure>
    

    5.1.4 Create the jta.properties file

    Create the resources directory:

    $ cd $JETTY_BASE
    $ mkdir resources
    $ vi resources/jta.properties
    

    Create a file in the resources directory called jta.properties with this content:

    com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
    

    This file configures the Atomikos transaction manager; you can read about other supported properties in the Atomikos documentation, but the one specified above is sufficient.

    5.2 Set up the JDBC driver

    JIRA is able to autoconfigure the database connectivity during the first run, but it requires the JDBC driver to be available.
    Create the lib/ext directory:

    $ cd $JETTY_BASE
    $ mkdir -p lib/ext
    

    Download the JDBC driver and copy it into the $JETTY_BASE/lib/ext directory.
    For example, if you use MySQL you would copy the JDBC driver as $JETTY_BASE/lib/ext/mysql-connector-java-5.1.27.jar.

    5.3 Deploy the JIRA WAR

    Create the webapps directory:

    $ cd $JETTY_BASE
    $ mkdir webapps
    

    Copy the JIRA generic WAR created at step 2.3 into the webapps directory:

    $ cd $JETTY_BASE
    $ cp $JIRA_DISTRO_WAR/dist-generic/atlassian-jira-<version>.war webapps/
    

    5.4 Create the Jetty start.ini file

    Jetty 9.1 uses the $JETTY_BASE/start.ini file to configure the modules that will be activated when Jetty starts. You need:

    • the atomikos module that you created above
    • Jetty's built-in ext module, to have the JDBC driver on the classpath
    • Jetty’s built-in deploy module, to deploy web applications present in the webapps directory
    • Jetty’s built-in jsp module, for JSP support required by JIRA
    • Jetty’s built-in http module, to have Jetty listen for HTTP requests

    Create the start.ini file:

    $ cd $JETTY_BASE
    $ vi start.ini
    

    with the following content:

    --module=atomikos
    --module=ext
    --module=deploy
    --module=jsp
    --module=http
    jetty.port=8080
    

    The last property instructs Jetty to listen on port 8080 for HTTP requests.

    6. Start Jetty

    At this point you are ready to start Jetty.
    If you are using JDK 7, JIRA requires a larger-than-default permanent generation and heap to work properly.
    Start Jetty in the following way:

    $ cd $JETTY_BASE
    $ java -Xmx1g -XX:MaxPermSize=256m -jar $JETTY_HOME/start.jar
    

    Be patient, JIRA may be slow to start up.

    7. SPDY support

    To enable SPDY support you need to deploy your site over SSL, which requires a valid X509 certificate for your site.
    Follow the SSL documentation for details about how to configure SSL for your site.
    If you want to just try SPDY locally on your computer, you can use a self-signed certificate stored in a keystore.

    7.1 Create the keystore

    Create a keystore with a self-signed certificate:

    $ cd $JETTY_BASE
    $ keytool -genkeypair -keystore etc/keystore -keyalg RSA -dname "cn=localhost" -storepass <password> -keypass <password>
    

    The certificate must be stored in the $JETTY_BASE/etc/keystore file so that it will be automatically picked up by Jetty (this is configurable via the jetty.keystore property if you prefer a different location).

    7.2 Set up the NPN jar

    SPDY also requires the NPN jar, whose version depends on the JDK version you are using.
    Please refer to the NPN versions table to download the right NPN jar for your JDK.
    Create the lib/npn directory:

    $ cd $JETTY_BASE
    $ mkdir -p lib/npn
    

    Download the right version of the NPN boot jar from this location and save it in the $JETTY_BASE/lib/npn directory.

    7.3 Modify the start.ini file

    Modify the $JETTY_BASE/start.ini file to have the following content:

    --module=atomikos
    --module=ext
    --module=deploy
    --module=jsp
    --module=spdy
    spdy.port=8443
    jetty.keystore.password=<password>
    jetty.keymanager.password=<password>
    jetty.truststore.password=<password>
    

    The http module has been replaced by the spdy module, and so has the configuration property for the port to listen on.
    The new properties specify the password you used to create the keystore; they can be obfuscated in various ways so that they do not appear in clear text in configuration files.

    7.4 Start Jetty with SPDY support

    $ cd $JETTY_BASE
    $ java -Xbootclasspath/p:lib/npn/npn-boot-<version>.jar -Xmx1g -XX:MaxPermSize=256m -jar $JETTY_HOME/start.jar
    

    Conclusions

    We have seen how JIRA 6.1 can be deployed to Jetty 9.1 following the steps above.
    The advantages of using Jetty are the support for SPDY, which will improve the performance of the website, Jetty's high scalability, and Jetty's flexibility in defining modules, so that you start only the modules needed by your web applications, and no more.
    One disadvantage compared to Tomcat is the lack of out-of-the-box support for a transaction manager. While the steps to add this support are not complicated, we recognize that it would be great if the transaction manager module were available out of the box.
    We are working on providing such a feature to improve Jetty, so stay tuned!

  • WordPress & Jetty: perfect fit

    I posted a while back about the capability of Jetty 9.1’s HttpClient to speak HTTP over different transports: by default HTTP, but we also provide a SPDY implementation, where the HTTP requests and responses are carried using the SPDY transport rather than the HTTP transport.
    Another transport that is able to carry HTTP requests and responses is FastCGI.
    The neat feature about FastCGI is that it is the default way to deploy PHP applications: fire up a proxy server (usually Apache or Nginx) in front and proxy requests/responses to the FastCGI server (usually the PHP FastCGI Process Manager, or php-fpm).
    In this way you can deploy the most used PHP frameworks like WordPress, Drupal and others.
    And you are not limited to PHP: FastCGI allows you to easily deploy other dynamic web languages and frameworks such as Django (Python-based), Rails (Ruby-based) and others.
    We are happy to announce that Jetty 9.1 can now proxy to FastCGI, enabling deployment of PHP frameworks.
    Why is this good, and how is it different from having, say, Apache or Nginx in front instead of Jetty?
    The first and foremost reason is that Jetty is the only server that supports SPDY Push.
    SPDY Push is the biggest performance improvement you can make to your website, without a single change to the application being served, be it a Java web application or WordPress.
    Watch our video that shows how the SPDY Push feature that Jetty provides makes a big performance difference.
    The second reason is that SPDY version 2 is being deprecated really fast in favor of SPDY version 3 or greater.
    Browsers will not speak SPDY/2 anymore: if your server does not support SPDY 3 or greater, your website basically reverts to plain HTTPS behaviour, losing all the SPDY benefits.
    As of the time of this writing, only servers like Apache and Jetty implement SPDY version 3 or later, while Nginx only implements SPDY version 2.
    At the Jetty Project we like to eat our own dogfood, so the blog site you are reading is WordPress served via Jetty.
    If you’re using Firefox or Chrome, just open the browser network console, and you will see something like this:
    [Screenshot: browser network console showing the jetty-wordpress response headers]
    As you can see from the response headers, the response is served by Jetty (Server: Jetty(9.1.0.v20131115)) from PHP (X-Powered-By: PHP/5.5.3-1ubuntu2).
    Of course, since both Jetty 9.1’s server and HttpClient are fully asynchronous, you have a very scalable solution for your PHP-enabled website: currently the JVM that runs this very website only uses 25 MiB of heap.
    And of course you get all the SPDY performance improvements over HTTP, along with Jetty’s unique SPDY Push features.
    This is good for cloud vendors too, since they can run Jetty and expose PHP applications with a minimal amount of resources, high scalability, and unique features like SPDY Push.
    FastCGI for Jetty is sponsored by Intalio. If you are interested in knowing more about how Jetty can speed up your website or how to setup your PHP web application in Jetty, contact us or send an email to Jesse McConnell.

  • Speaking at Devoxx 2013

    Thomas Becker and I will be speaking at Devoxx, presenting two BOFs: HTTP 2.0/SPDY and Jetty in depth and The Jetty Community BOF.
    The first is a more technical session devoted to the internals of SPDY and HTTP 2.0, while the second is a more interactive session with the audience about Jetty 9.x's new features and improvements (and we have many), how people use Jetty, and what features they like most (or least), so it will be fun.
    As far as I understand, BOF sessions are free and informal: anyone can attend, even without a Devoxx Conference Pass (very interesting if you live in the area).
    If you’re attending Devoxx, please stop by even just to say “Hi!” 🙂
    See you there!

  • Pluggable Transports for Jetty 9.1's HttpClient

    In Jetty 9, the HttpClient was completely rewritten, as we posted a while back.
    In Jetty 9.1, we took a step forward and made Jetty's HttpClient polyglot. This means that applications can use the HTTP API and semantics ("I want to GET the resource at the http://host/myresource URI") but can choose how the request is carried over the network.
    Currently, three transports are implemented: HTTP, SPDY and FastCGI.
    The usage is really simple; the following snippet shows how to set up HttpClient with the default HTTP transport:

    // Default transport uses HTTP
    HttpClient httpClient = new HttpClient();
    httpClient.start();
    

    while the next snippet shows how to set up HttpClient with the SPDY transport:

    // Using the SPDY transport in clear text
    // Create the SPDYClient factory
    SPDYClient.Factory spdyClientFactory = new SPDYClient.Factory();
    spdyClientFactory.start();
    // Create the SPDYClient
    SPDYClient spdyClient = spdyClientFactory.newSPDYClient(SPDY.V3);
    // Create the HttpClient transport
    HttpClientTransport transport = new HttpClientTransportOverSPDY(spdyClient);
    // HTTP over SPDY!
    HttpClient httpSPDYClient = new HttpClient(transport, null);
    httpSPDYClient.start();
    // Send request, receive response
    ContentResponse response = httpSPDYClient.newRequest("http://host/path")
            .method("GET")
            .send();
    

    This last snippet allows the application to still use the HTTP API, but have the request and the response transported via SPDY, rather than HTTP.
    Why is this useful?
    First of all, more and more websites are converting to SPDY because it offers performance improvements (and if you use Jetty as the server behind your website, the performance improvements can be stunning, check out this video).
    This means that with a very simple change in the HttpClient configuration, your client application connecting to servers can benefit from the performance boost that SPDY provides.
    If you are using HttpClient for server-to-server communication, you can use SPDY in clear text (rather than encrypted) to achieve even more efficiency, since no encryption is involved. Jetty is perfectly capable of speaking SPDY in clear text, so this could be a major performance win for your applications.
    Furthermore, you can parallelize HTTP requests thanks to SPDY’s multiplexing rather than opening multiple connections, saving network resources.
    I encourage you to try out these features and report your feedback here in the comments or on the Jetty mailing list.

  • On JDK 7's asynchronous I/O

    I have been working lately with JDK 7's new asynchronous I/O APIs ("AIO" from here on), and I would like to summarize my findings here, for future reference (mostly my own).
    My understanding is that the design of the AIO API aimed at simplifying non-blocking operations, and it does: what in AIO requires 1-5 lines of code requires 50+ lines of code in JDK 1.4's non-blocking APIs ("NIO" from here on), along with a careful threading design.
    The context I work in is that of scalable network servers, so this post is mostly about AIO seen from my point of view and from the point of view of API design.
    Studying AIO served as a great stimulus to review ideas for Jetty and learn something new.

    Introduction

    Synchronous APIs are simple: ServerSocketChannel.accept() blocks until a channel is accepted; SocketChannel.read(ByteBuffer) blocks until some bytes are read, and SocketChannel.write(ByteBuffer) is guaranteed to write everything from the buffer and return only when the write has completed.
    With asynchronous I/O (and therefore both AIO and NIO), the blocking guarantee is gone, and this alone complicates things a lot more, and I mean a lot.
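    These blocking guarantees can be seen in a minimal, self-contained sketch using plain JDK blocking channels (illustrative code; the class and method names are mine):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class BlockingEcho {
    public static String echoOnce() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            InetSocketAddress address = (InetSocketAddress) server.getLocalAddress();

            Thread client = new Thread(() -> {
                try (SocketChannel ch = SocketChannel.open(address)) {
                    // Blocking write: returns only when the whole buffer has been written
                    ch.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
                } catch (Exception x) {
                    throw new RuntimeException(x);
                }
            });
            client.start();

            // Blocking accept: returns only when a client connects
            try (SocketChannel accepted = server.accept()) {
                ByteBuffer buffer = ByteBuffer.allocate(64);
                // Blocking read: returns only when some bytes are available (or -1 on close)
                int read = accepted.read(buffer);
                client.join();
                return new String(buffer.array(), 0, read, StandardCharsets.UTF_8);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce()); // prints "hello"
    }
}
```

    Every call either returns with the work done or blocks the calling thread; no handlers, no callbacks, no shared state to coordinate.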

    AIO Accept

    To accept a connection with AIO, the application needs to call:

    <A> AsynchronousServerSocketChannel.accept(A attachment, CompletionHandler<AsynchronousSocketChannel, ? super A> handler)

    As you can see, the CompletionHandler is parametrized, and the parameters are an AsynchronousSocketChannel (the channel that will be accepted), and a generic attachment (that can be whatever you want).
    This is a typical implementation of the CompletionHandler for accept():

    class AcceptHandler implements CompletionHandler<AsynchronousSocketChannel, Void>
    {
        public void completed(AsynchronousSocketChannel channel, Void attachment)
        {
            // Call accept() again
            AsynchronousServerSocketChannel serverSocket = ???
            serverSocket.accept(attachment, this);
            // Do something with the accepted channel
            ...
        }
        ...
    }

    Note that Void is used as the attachment because, in general, there is not much to attach for the accept handler.
    But nevertheless the attachment feature is a powerful idea.
    It turns out immediately that the code needs the AsynchronousServerSocketChannel reference (see the ??? in above code snippet) because it needs to call AsynchronousServerSocketChannel.accept() again (otherwise no further connections will be accepted).
    Unfortunately the signature of the CompletionHandler does not contain any reference to the AsynchronousServerSocketChannel that the code needs.
    Ok, no big deal, it can be referenced with other means.
    At the end it is the application code that creates both the AsynchronousServerSocketChannel and the CompletionHandler, so the application can certainly pass the AsynchronousServerSocketChannel reference to the CompletionHandler.
    Or the class can be implemented as anonymous inner class, and therefore will have the AsynchronousServerSocketChannel reference in lexical scope.
    It is even possible to use the attachment to pass the AsynchronousServerSocketChannel reference, instead of using Void.
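    To make these workarounds concrete, here is a minimal, self-contained sketch (illustrative code, not Jetty's) that uses an anonymous inner class, so the AsynchronousServerSocketChannel is available from the lexical scope:

```java
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AioAccept {
    public static boolean acceptOne() throws Exception {
        CountDownLatch accepted = new CountDownLatch(1);
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        try {
            // Anonymous inner class: "server" is captured from the lexical
            // scope, so completed() can re-issue the accept without needing
            // the channel as a handler parameter or attachment
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                @Override
                public void completed(AsynchronousSocketChannel channel, Void attachment) {
                    server.accept(null, this); // accept further connections
                    accepted.countDown();
                    // ... hand off the accepted channel ...
                }

                @Override
                public void failed(Throwable failure, Void attachment) {
                    accepted.countDown();
                }
            });

            try (AsynchronousSocketChannel client = AsynchronousSocketChannel.open()) {
                client.connect(server.getLocalAddress()).get(5, TimeUnit.SECONDS);
                return accepted.await(5, TimeUnit.SECONDS);
            }
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acceptOne()); // prints "true"
    }
}
```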
    I do not like this design of recovering needed references with application intervention; my reasoning is as follows: if the API forces me to do something, in this case call AsynchronousServerSocketChannel.accept(), would it not have been better if the AsynchronousServerSocketChannel reference were passed as a parameter of CompletionHandler.completed(...)?
    You will see in the following sections how this omission is the tip of the iceberg.
    Let’s move on for now, and see how you can connect with AIO.

    AIO Connect

    To connect using AIO, the application needs to call:

    <A> AsynchronousSocketChannel.connect(SocketAddress remote, A attachment, CompletionHandler<Void, ? super A> handler);

    The CompletionHandler is parametrized, but this time the first parameter is forced to be Void.
    The first thing to notice is the absence of a timeout parameter.
    AIO solves the connect timeout problem in the following way: if the application wants a timeout for connection attempts, it has to use the blocking version:

    channel.connect(address).get(10, TimeUnit.SECONDS);

    The application can either block, with an optional timeout, by calling get(...), or be non-blocking and hope that the connection succeeds or fails, because there is no means to time it out.
    This is a problem, because it is not uncommon for opening a connection to take a few hundred milliseconds (or even seconds), and if an application wants to open 5-10 connections concurrently, then the right way to do it would be to use a non-blocking API (otherwise it has to open the first, wait, then open the second, wait, etc.).
    Alas, it starts to appear that some facility (a “framework”) is needed on top of AIO, to provide additional useful features like asynchronous connect timeouts.
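    As a sketch of what such a facility could look like (the connect method below is hypothetical, not part of any API), a scheduled task can close the channel if the connect has not completed in time, which makes the pending connect fail:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ConnectTimeout {
    // Hypothetical framework facility: non-blocking connect with a timeout.
    // If the handler has not fired in time, close the channel to abort.
    public static <A> void connect(AsynchronousSocketChannel channel, SocketAddress address,
                                   long timeout, TimeUnit unit, ScheduledExecutorService scheduler,
                                   A attachment, CompletionHandler<Void, ? super A> handler) {
        ScheduledFuture<?> task = scheduler.schedule(() -> {
            try {
                channel.close(); // pending connect fails with AsynchronousCloseException
            } catch (Exception ignored) {
            }
        }, timeout, unit);
        channel.connect(address, attachment, new CompletionHandler<Void, A>() {
            @Override
            public void completed(Void result, A attach) {
                task.cancel(false);
                handler.completed(result, attach);
            }

            @Override
            public void failed(Throwable failure, A attach) {
                task.cancel(false);
                handler.failed(failure, attach);
            }
        });
    }

    public static boolean demo() throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        try (AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
             AsynchronousSocketChannel client = AsynchronousSocketChannel.open()) {
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                @Override
                public void completed(AsynchronousSocketChannel channel, Void attachment) {
                }

                @Override
                public void failed(Throwable failure, Void attachment) {
                }
            });
            CountDownLatch connected = new CountDownLatch(1);
            connect(client, server.getLocalAddress(), 5, TimeUnit.SECONDS, scheduler, null,
                    new CompletionHandler<Void, Void>() {
                        @Override
                        public void completed(Void result, Void attachment) {
                            connected.countDown();
                        }

                        @Override
                        public void failed(Throwable failure, Void attachment) {
                        }
                    });
            return connected.await(5, TimeUnit.SECONDS);
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "true"
    }
}
```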
    This is a typical implementation of the CompletionHandler for connect(...):

    class ConnectHandler implements CompletionHandler<Void, Void>
    {
        public void completed(Void result, Void attachment)
        {
            // Connected, now must read
            ByteBuffer buffer = ByteBuffer.allocate(8192);
            AsynchronousSocketChannel channel = ???
            channel.read(buffer, null, readHandler);
        }
    }

    Like before, Void is used as the attachment (it is not evident what one would need to attach to a connect handler), so the signature of completed() takes two Void parameters. Uhm.
    It turns out that after connecting, most often the application needs to signal its interest in reading from the channel and therefore needs to call AsynchronousSocketChannel.read(...).
    Like before, the AsynchronousSocketChannel reference is not immediately available from the API as parameter (and like before, the solutions for this problem are similar).
    The important thing to note here is that the API forces the application to allocate a ByteBuffer in order to call AsynchronousSocketChannel.read(...).
    This is a problem because it wastes resources: imagine what happens if the application has 20k connections opened, but none is actually reading: it has 20k * 8KiB = 160 MiB of buffers allocated, for nothing.
    Most, if not all, scalable network servers out there use some form of buffer pooling (Jetty certainly does), and can serve 20k connections with a very small amount of allocated buffer memory, leveraging the fact that not all connections are active at exactly the same time.
    This optimization is very similar to what is done with thread pooling: in asynchronous I/O, in general, threads are pooled and there is no need to allocate one thread per connection. You can happily run a busy server with very few threads, and ditto for buffers.
    But in AIO, it is the API that forces the application to allocate a buffer even if there may be nothing (yet) to read, because you have to pass that buffer as a parameter to AsynchronousSocketChannel.read(...) to signal your interest to read.
    All right, 160 MiB is not that much with modern computers (my laptop has 8GiB), but differently from the connect timeout problem, there is not much that a “framework” on top of AIO can do here to reduce memory footprint. Shame.
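    As a concrete illustration of the buffer pooling idea mentioned above (a minimal sketch, not Jetty's actual pool), a pool hands out released buffers instead of allocating new ones:

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BufferPool {
    private final Queue<ByteBuffer> buffers = new ConcurrentLinkedQueue<>();
    private final int capacity;

    public BufferPool(int capacity) {
        this.capacity = capacity;
    }

    // Reuse a pooled buffer if one is available, otherwise allocate a new one
    public ByteBuffer acquire() {
        ByteBuffer buffer = buffers.poll();
        return buffer != null ? buffer : ByteBuffer.allocate(capacity);
    }

    // Return the buffer to the pool for reuse by another connection
    public void release(ByteBuffer buffer) {
        buffer.clear();
        buffers.offer(buffer);
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(8192);
        ByteBuffer b1 = pool.acquire();
        pool.release(b1);
        // The released buffer is reused instead of allocating a new one
        System.out.println(pool.acquire() == b1); // prints "true"
    }
}
```

    With a pool like this, only the connections that are actually reading hold a buffer; the AIO read API, by contrast, forces a buffer to be committed up front for every pending read.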

    AIO Read

    Both accept and connect operations will normally need to read just after having completed their operation.
    To read using AIO, the application needs to call:

    <A> AsynchronousSocketChannel.read(ByteBuffer buffer, A attachment, CompletionHandler<Integer, ? super A> handler)

    This is a typical implementation of the CompletionHandler for read(...):

    class ReadHandler implements CompletionHandler<Integer, ReadContext>
    {
        public void completed(Integer read, ReadContext readContext)
        {
            // Read some bytes, process them, and read more
            if (read < 0)
            {
                // Connection closed by the other peer
                ...
            }
            else
            {
                // Process the bytes read
                ByteBuffer buffer = ???
                ...
                // Read more bytes
                AsynchronousSocketChannel channel = ???
                channel.read(buffer, readContext, this);
            }
        }
    }

    This is where things get really… weird: the application, in the read handler, is supposed to process the bytes just read, but it has no reference to the buffer that is supposed to contain those bytes.
    And, as before, the application will need a reference to the channel in order to call again read(...) (to read more data), but that also is missing.
    Like before, the application has the burden to pack the buffer and the channel into some sort of read context (shown in the code above using the ReadContext class), and pass it as the attachment (or be able to reference those from the lexical scope).
    Again, a “framework” could take care of this step, which is always required, and it is required because of the way the AIO APIs have been designed.
    The reason why the number of bytes read is passed as first parameter of completed(...) is that it can be negative when the connection is closed by the remote peer.
    If it is non-negative this parameter is basically useless, since the buffer must be available in the completion handler and one can figure out how many bytes were read from the buffer itself.
    In my humble opinion, it is a vestige of the past that the application has to read in order to know whether the other end has closed the connection. The I/O subsystem should do this, and notify the application of a remote close event, not of a read event. That would also save the application from always checking whether the number of bytes read is negative.
    I sorely missed this remote close event in NIO, and I am missing it in AIO too.
    As before, a “framework” on top of AIO could take care of this.
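    A sketch of what such a framework helper could look like (all names here are hypothetical): it owns the channel and the buffer (the "read context"), and surfaces a remote close event instead of the -1 return value:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class ReadLoop {
    // Hypothetical helper: packs channel + buffer and turns the -1 return
    // value into a distinct remote-close callback
    public static void start(AsynchronousSocketChannel channel, ByteBuffer buffer,
                             Consumer<ByteBuffer> onData, Runnable onRemoteClose) {
        channel.read(buffer, null, new CompletionHandler<Integer, Void>() {
            @Override
            public void completed(Integer read, Void attachment) {
                if (read < 0) {
                    // The application never sees the -1: it gets a close event
                    onRemoteClose.run();
                    return;
                }
                buffer.flip();
                onData.accept(buffer); // the application sees only actual data
                buffer.clear();
                channel.read(buffer, null, this); // read more
            }

            @Override
            public void failed(Throwable failure, Void attachment) {
                onRemoteClose.run();
            }
        });
    }

    public static String demo() throws Exception {
        StringBuilder received = new StringBuilder();
        CountDownLatch closed = new CountDownLatch(1);
        try (AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
             AsynchronousSocketChannel client = AsynchronousSocketChannel.open()) {
            // Server side: write "hi", then close the connection
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                @Override
                public void completed(AsynchronousSocketChannel channel, Void attachment) {
                    try {
                        channel.write(ByteBuffer.wrap("hi".getBytes(StandardCharsets.UTF_8))).get();
                        channel.close();
                    } catch (Exception ignored) {
                    }
                }

                @Override
                public void failed(Throwable failure, Void attachment) {
                }
            });
            client.connect(server.getLocalAddress()).get(5, TimeUnit.SECONDS);
            start(client, ByteBuffer.allocate(64),
                    buffer -> received.append(StandardCharsets.UTF_8.decode(buffer)),
                    closed::countDown);
            closed.await(5, TimeUnit.SECONDS);
            return received.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "hi"
    }
}
```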
    Differently from the connect operation, asynchronous reads may take a timeout parameter (which makes the absence of this parameter in connect(...) look like an oversight).
    Fortunately, there cannot be concurrent reads for the same connection (unless the application really messes up badly with threads), so the read handler normally stays quite simple, if you can bear the if statement that checks if you read -1 bytes.
    But things get more complicated with writes.

    AIO Write

    To write bytes in AIO, the application needs to call:

    <A> AsynchronousSocketChannel.write(ByteBuffer buffer, A attachment, CompletionHandler<Integer, ? super A> handler)

    This is a naive, non-thread safe, implementation of the CompletionHandler for write(...):

    class WriteHandler implements CompletionHandler<Integer, WriteContext>
    {
        public void completed(Integer written, WriteContext writeContext)
        {
            ByteBuffer buffer = ???
            // Decide whether all bytes have been written
            if (buffer.hasRemaining())
            {
                // Not all bytes have been written, write again
                AsynchronousSocketChannel channel = ???
                channel.write(buffer, writeContext, this);
            }
            else
            {
                // All bytes have been written
                ...
            }
        }
    }

    Like before, the write completion handler is missing the required references to do its work, in particular the write buffer and the AsynchronousSocketChannel to call write(...).
    The completion handler parameters provide the number of bytes written, which may be smaller than the number of bytes that were requested to be written (the bytes remaining in the buffer at the time of the call to AsynchronousSocketChannel.write(...)).
    This leads to partial writes: to fully write a buffer you may need multiple write calls, and the application has the burden to pack some sort of write context (referencing the buffer and the channel) like it had to do for reads.
    But the main problem here is that this write completion handler is not safe for concurrent writes, and applications – in general – may write concurrently.
    What happens if one thread starts a write, but this write cannot be fully completed (and hence only some of the bytes in the buffer are written), and another thread concurrently starts another write ?
    There are two cases. In the first, the second thread starts its write while the first thread's write is still pending: a WritePendingException is thrown to the second thread. In the second, the second write starts after the first thread has completed a partial write but has not yet started writing the remaining bytes: the output will be garbled (a mix of the bytes of the two writes), but no errors will be reported.
    Asynchronous writes are hard, because each write must be fully completed before starting the next one, and unlike reads, writes can be – and often are – concurrent.
    What AIO provides is a guard against concurrent partial writes (by throwing WritePendingException), but not against interleaved partial writes.
    While in principle there is nothing wrong with this scheme (apart from being complex to use), my opinion is that it would have been better for the AIO API to have a “fully written” semantic, such that CompletionHandlers were invoked when the write was fully completed, not for every partial write.
    How can you allow applications to do concurrent asynchronous writes ?
    The typical solution is that the application must buffer concurrent writes by maintaining a queue of buffers to be written and by using the completion handler to dequeue the next buffer when a write is fully completed.
    This is pretty complicated to get right (the enqueuing/dequeuing mechanism must be thread safe, fast and memory-leak free), and it is entirely a burden that the AIO APIs put on the application.
    Furthermore, buffer queuing opens up more issues: deciding whether the queue can have an unbounded size (or, if it is bounded, what to do when the limit is reached); deciding the exact lifecycle of the buffers, which impacts the buffer pooling strategy, if present (since buffers are enqueued, the application cannot assume they have been written and therefore cannot reuse them); deciding whether you can tolerate the extra latency due to the permanence of a buffer in the queue before it is written; and so on.
    Like before, the buffer queuing can be taken care of by a “framework” on top of AIO.
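    As a concrete illustration of the buffer queuing just described, here is a minimal sketch using only JDK classes (all names are mine, and this is not production code – for example, failures are not propagated and the queue is unbounded): writes are serialized through a queue, so that each buffer is fully written before the next one starts, avoiding both WritePendingException and interleaved partial writes.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class AioWriteQueueDemo
{
    static class WriteQueue
    {
        private final Queue<ByteBuffer> queue = new ConcurrentLinkedQueue<>();
        private final AtomicBoolean writing = new AtomicBoolean();
        private final AsynchronousSocketChannel channel;

        WriteQueue(AsynchronousSocketChannel channel)
        {
            this.channel = channel;
        }

        void enqueue(ByteBuffer buffer)
        {
            queue.offer(buffer);
            // Only one thread at a time may start the write loop.
            if (writing.compareAndSet(false, true))
                writeNext();
        }

        private void writeNext()
        {
            ByteBuffer buffer = queue.poll();
            if (buffer == null)
            {
                writing.set(false);
                // Re-check: a buffer may have been enqueued after poll() returned null.
                if (!queue.isEmpty() && writing.compareAndSet(false, true))
                    writeNext();
                return;
            }
            channel.write(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>()
            {
                @Override
                public void completed(Integer written, ByteBuffer b)
                {
                    if (b.hasRemaining())
                        channel.write(b, b, this); // partial write: write the rest
                    else
                        writeNext(); // fully written: start the next buffer
                }

                @Override
                public void failed(Throwable failure, ByteBuffer b)
                {
                    writing.set(false); // real code would notify the application
                }
            });
        }
    }

    // Demo: several threads enqueue buffers concurrently; a loopback server
    // counts the bytes it receives.
    public static int demo() throws Exception
    {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(server.getLocalAddress()).get();
        AsynchronousSocketChannel accepted = server.accept().get();

        WriteQueue writeQueue = new WriteQueue(client);
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; ++i)
        {
            writers[i] = new Thread(() -> writeQueue.enqueue(ByteBuffer.allocate(1024)));
            writers[i].start();
        }
        for (Thread writer : writers)
            writer.join();

        int total = 0;
        ByteBuffer read = ByteBuffer.allocate(512);
        while (total < 4 * 1024)
        {
            total += accepted.read(read).get();
            read.clear();
        }
        client.close();
        accepted.close();
        server.close();
        return total;
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(demo()); // prints 4096
    }
}
```

    Even this simplified sketch shows the subtlety of getting the enqueue/dequeue logic right (note the re-check after setting the writing flag to false), which is exactly the burden the AIO APIs leave on the application.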

    AIO Threading

    AIO performs the actual reads and writes and invokes completion handlers via threads that are part of an AsynchronousChannelGroup.
    If an I/O operation is requested by a thread that does not belong to the group, the operation is scheduled to be executed by a group thread, with the consequent context switch.
    Compare this with NIO, where a single thread runs the selector loop waiting for I/O events. Upon an I/O event, depending on the pattern used, either the selector thread performs the I/O operation and calls the application, or another thread is tasked to perform the I/O operation and invoke the application, freeing the selector thread.
    In the NIO model, it is easy to block the I/O system by using the selector thread to invoke the application, and then having the application perform a blocking call (for example, a JDBC query that lasts minutes): since there is only one thread doing I/O (the selector thread) and this thread is now blocked in the JDBC call, it cannot listen for other I/O events and the system blocks.
    The AIO model “powers up” the NIO model because now there are multiple threads (the ones belonging to the group) that take care of I/O events, perform I/O operations and invoke the application (that is, the completion handlers).
    This model is flexible and allows the configuration of the thread pool for the AsynchronousChannelGroup, so it is really up to the application to decide the size of the thread pool, whether it is bounded or not, etc.
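    For example, a bounded channel group can be configured like this (class, method and thread names are mine; the loopback connection only serves to show that completion handlers are indeed invoked on group threads):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class AioGroupDemo
{
    public static String demo() throws Exception
    {
        // A bounded thread pool whose threads carry a recognizable name.
        ExecutorService pool = Executors.newFixedThreadPool(2, runnable ->
        {
            Thread thread = new Thread(runnable, "aio-pool-thread");
            thread.setDaemon(true);
            return thread;
        });
        AsynchronousChannelGroup group = AsynchronousChannelGroup.withThreadPool(pool);

        // Channels opened with the group perform their I/O on the group's threads.
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open(group)
                .bind(new InetSocketAddress("127.0.0.1", 0));
        AsynchronousSocketChannel client = AsynchronousSocketChannel.open(group);
        client.connect(server.getLocalAddress()).get();

        AsynchronousSocketChannel accepted = server.accept().get();
        accepted.write(ByteBuffer.wrap(new byte[]{42})).get();

        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<String> handlerThread = new AtomicReference<>();
        ByteBuffer buffer = ByteBuffer.allocate(16);
        client.read(buffer, null, new CompletionHandler<Integer, Void>()
        {
            @Override
            public void completed(Integer read, Void attachment)
            {
                // The handler is invoked by one of the group's threads.
                handlerThread.set(Thread.currentThread().getName());
                latch.countDown();
            }

            @Override
            public void failed(Throwable failure, Void attachment)
            {
                latch.countDown();
            }
        });
        latch.await();

        client.close();
        accepted.close();
        server.close();
        group.shutdownNow();
        group.awaitTermination(5, TimeUnit.SECONDS);
        return handlerThread.get();
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(demo());
    }
}
```

    Sizing this pool is the main threading knob AIO gives you; the trade-offs (bounded versus unbounded, queueing behavior) remain the application's responsibility.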

    Conclusions

    JDK 7’s AIO APIs are certainly an improvement over NIO, but my impression is that they are still too low level for the casual user (no remote close event, no asynchronous connect timeout, no fully-written semantic), and they potentially scale less than a good framework built on top of NIO, due to the lack of buffer pooling strategies and the lesser control over threading.
    Applications will probably need to write some sort of framework on top of AIO, which somewhat defeats what I think was one of the main goals of this new API: simplifying the usage of asynchronous I/O.
    For me, the glass is half empty because I had higher expectations.
    But if you want to write a quick small program that does network I/O asynchronously, and you don’t want any library dependency, by all means use AIO and forget about NIO.

  • The new Jetty 9 HTTP client

    Introduction

    One of the big refactorings in Jetty 9 is the complete rewrite of the HTTP client.
    The reasons behind the rewrite are many:

    • We wrote the codebase several years ago; while we have actively maintained it, it was starting to show its age.
    • The HTTP client guarded internal data structures from multithreaded access using the synchronized keyword, rather than using non-blocking data structures.
    • The main concept it exposed was the HTTP exchange which, while correctly representing an HTTP request/response cycle, did not match users’ expectations of a request and a response.
    • The HTTP client did not offer out-of-the-box features such as authentication, redirect and cookie support.
    • Users somehow perceived the Jetty HTTP client as cumbersome to program.

    The rewrite takes into account many community inputs, requires JDK 7 to take advantage of the latest programming features, and is forward-looking because the new API is JDK 8 Lambda-ready (that is, you can use Jetty 9’s HTTP client with JDK 7 without Lambda, but if you use it in JDK 8 you can use lambda expressions to specify callbacks; see examples below).

    Programming with Jetty 9’s HTTP Client

    The main class is named, as in Jetty 7 and Jetty 8, org.eclipse.jetty.client.HttpClient (although it is not backward compatible with the same class in Jetty 7 and Jetty 8).
    You can think of an HttpClient instance as a browser instance.
    Like a browser, it can make requests to different domains, it manages redirects, cookies and authentications, you can configure it with a proxy, and it provides you with the responses to the requests you make.
    You need to configure an HttpClient instance and then start it:

    HttpClient httpClient = new HttpClient();
    // Configure HttpClient here
    httpClient.start();
    

    Simple GET requests require just one line:

    ContentResponse response = httpClient
            .GET("http://domain.com/path?query")
            .get();
    

    Method HttpClient.GET(...) returns a Future<ContentResponse> that you can use to cancel the request or to impose a total timeout for the request/response conversation.
    Class ContentResponse represents a response with content; the content is limited by default to 2 MiB, but you can configure it to be larger.
    Simple POST requests also require just one line:

    ContentResponse response = httpClient
            .POST("http://domain.com/entity/1")
            .param("p", "value")
            .send()
            .get(5, TimeUnit.SECONDS);
    

    Jetty 9’s HttpClient automatically follows redirects, so it automatically handles the typical web pattern POST/Redirect/GET, and the response object contains the content of the response to the GET request. Following redirects is a feature that you can enable/disable on a per-request basis or globally.
    File uploads also require one line, and make use of JDK 7’s java.nio.file classes:

    ContentResponse response = httpClient
            .newRequest("http://domain.com/entity/1")
            .file(Paths.get("file_to_upload.txt"))
            .send()
            .get(5, TimeUnit.SECONDS);
    

    Asynchronous Programming

    So far we have shown how to use HttpClient in a blocking style, that is, the thread that issues the request blocks until the request/response conversation is complete. However, to unleash the full power of Jetty 9’s HttpClient you should look at its non-blocking (asynchronous) features.
    Jetty 9’s HttpClient fully supports the asynchronous programming style. You can write a simple GET request in this way:

    httpClient.newRequest("http://domain.com/path")
            .send(new Response.CompleteListener()
            {
                @Override
                public void onComplete(Result result)
                {
                    // Your logic here
                }
            });
    

    Method send(Response.CompleteListener) returns void and does not block; the Listener provided as a parameter is notified when the request/response conversation is complete, and the Result parameter allows you to access the response object.
    You can write the same code using JDK 8’s lambda expressions:

    httpClient.newRequest("http://domain.com/path")
            .send((result) -> { /* Your logic here */ });
    

    HttpClient uses Listeners extensively to provide hooks for all possible request and response events, and with JDK 8’s lambda expressions they’re even more fun to use:

    httpClient.newRequest("http://domain.com/path")
            // Add request hooks
            .onRequestQueued((request) -> { ... })
            .onRequestBegin((request) -> { ... })
            // More request hooks available
            // Add response hooks
            .onResponseBegin((response) -> { ... })
            .onResponseHeaders((response) -> { ... })
            .onResponseContent((response, buffer) -> { ... })
            // More response hooks available
            .send((result) -> { ... });
    

    This makes Jetty 9’s HttpClient suitable for HTTP load testing because, for example, you can accurately time every step of the request/response conversation (thus knowing where the request/response time is really spent).

    Content Handling

    Jetty 9’s HTTP client provides a number of utility classes off the shelf to handle request content and response content.
    You can provide request content as a String, byte[], ByteBuffer, java.nio.file.Path or InputStream, or you can provide your own implementation of ContentProvider. Here’s an example that provides the request content using an InputStream:

    httpClient.newRequest("http://domain.com/path")
            .content(new InputStreamContentProvider(
                getClass().getResourceAsStream("R.properties")))
            .send((result) -> { ... });
    

    HttpClient can handle Response content in different ways:
    The most common is via blocking calls that return a ContentResponse, as shown above.
    When using non-blocking calls, you can use a BufferingResponseListener in this way:

    httpClient.newRequest("http://domain.com/path")
            // Buffer response content up to 8 MiB
            .send(new BufferingResponseListener(8 * 1024 * 1024)
            {
                @Override
                public void onComplete(Result result)
                {
                    if (!result.isFailed())
                    {
                        byte[] responseContent = getContent();
                        // Your logic here
                    }
                }
            });
    

    To be efficient and avoid copying the response content into a buffer, you can use a Response.ContentListener, or a subclass:

    httpClient.newRequest("http://domain.com/path")
            .send(new Response.Listener.Empty()
            {
                @Override
                public void onContent(Response r, ByteBuffer b)
                {
                    // Your logic here
                }
            });
    

    To stream the response content, you can use InputStreamResponseListener in this way:

    InputStreamResponseListener listener =
            new InputStreamResponseListener();
    httpClient.newRequest("http://domain.com/path")
            .send(listener);
    // Wait for the response headers to arrive
    Response response = listener.get(5, TimeUnit.SECONDS);
    // Look at the response
    if (response.getStatus() == 200)
    {
        InputStream stream = listener.getInputStream();
        // Your logic here
    }
    

    Cookies Support

    HttpClient stores and accesses HTTP cookies through a CookieStore:

    Destination d = httpClient
            .getDestination("http", "domain.com", 80);
    CookieStore c = httpClient.getCookieStore();
    List cookies = c.findCookies(d, "/path");
    

    You can add cookies that you want to send along with your requests (if they match the domain and path and are not expired), and responses containing cookies automatically populate the cookie store, so that you can query it to find the cookies you are expecting with your responses.

    Authentication Support

    HttpClient supports HTTP Basic and Digest authentication, and other mechanisms are pluggable.
    You can configure authentication credentials in the HTTP client instance as follows:

    String uri = "http://domain.com/secure";
    String realm = "MyRealm";
    String u = "username";
    String p = "password";
    // Add authentication credentials
    AuthenticationStore a = httpClient.getAuthenticationStore();
    a.addAuthentication(
        new BasicAuthentication(uri, realm, u, p));
    ContentResponse response = httpClient
            .newRequest(uri)
            .send()
            .get(5, TimeUnit.SECONDS);
    

    HttpClient tests authentication credentials against the challenge(s) the server issues, and if they match it automatically sends the right authentication headers to the server for authentication. If the authentication is successful, it caches the result and reuses it for subsequent requests for the same domain and matching URIs.

    Proxy Support

    You can also configure HttpClient with a proxy:

    httpClient.setProxyConfiguration(
        new ProxyConfiguration("proxyHost", proxyPort));
    ContentResponse response = httpClient
            .newRequest(uri)
            .send()
            .get(5, TimeUnit.SECONDS);
    

    Configured in this way, HttpClient makes requests to the proxy (for plain-text HTTP requests) or establishes a tunnel via HTTP CONNECT (for encrypted HTTPS requests).

    Conclusions

    The new Jetty 9 HTTP client is easier to use, has more features, and is faster and better than Jetty 7’s or Jetty 8’s.
    The Jetty project continues to lead the way when it’s about the Web: years ago with Jetty Continuations, then with Jetty WebSocket, recently with Jetty SPDY, and now with the first complete, ready-to-use, JDK 8 Lambda-ready HTTP client.
    Go get it while it’s hot !
    Maven coordinates:

    
        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-client</artifactId>
            <version>9.0.0.M3</version>
        </dependency>
    
    

    Direct Downloads:
    Main jar: jetty-client.jar
    Dependencies: jetty-http.jar, jetty-io.jar, jetty-util.jar

  • Jetty, SPDY and HAProxy

    The SPDY protocol will be the next web revolution.
    The HTTP-bis working group has been rechartered to use SPDY as the basis for HTTP 2.0, so network and server vendors are starting to update their offerings to include SPDY support.
    Jetty has a long history of staying on the cutting edge of web features and network protocols.

    • Jetty first implemented web continuations (2005) as a portable library, deployed them successfully for years to customers, until web continuations eventually became part of the Servlet 3.0 standard.
    • Jetty first supported the WebSocket protocol within the Servlet model (2009), deployed it successfully for years to customers, and now the WebSocket APIs are in the process of becoming a standard via JSR 356.

    Jetty is the first and today practically the only Java server that offers complete SPDY support, with advanced features that we demonstrated at JavaOne (watch the demo if you’re not convinced).
    If you have not switched to Jetty yet, you are missing the revolutions that are happening on the web, you are probably going to lose technical ground to your competitors, and you will lose money by upgrading too late, when it will cost (or already costs) you a lot more.
    Jetty is open source, released with friendly licenses, and with full commercial support in case you need our expertise about developer advice, training, tuning, configuring and using Jetty.
    While SPDY is now well supported by browsers and its support is increasing in servers, it is still lagging a bit behind in intermediaries such as load balancers, proxies and firewalls.
    To exploit the full power of SPDY, you want not only SPDY in the communication between the browser and the load balancer, but also between the load balancer and the servers.
    We are actively opening discussion channels with the providers of such products, and one of them is HAProxy. With the collaboration of Willy Tarreau, HAProxy mindmaster, we have recently been able to perform a full SPDY communication between a SPDY client (we tested the latest Chrome, the latest Firefox and Jetty’s Java SPDY client) through HAProxy to a Jetty SPDY server.
    This sets a new milestone in the adoption of the SPDY protocol, because now large deployments can leverage the goodness of HAProxy as a load balancer *and* the goodness of SPDY as provided by Jetty SPDY servers.
    The HAProxy SPDY features are available in the latest development snapshots of HAProxy. A few details will probably be subject to change (in particular the HAProxy configuration keywords), but SPDY support in HAProxy is there.
    The Jetty SPDY features are already available in Jetty 7, 8 and 9.
    If you are interested in knowing how you can use SPDY in your deployments, don’t hesitate to contact us. Most likely, you will be contacting us using the SPDY protocol from your browser to our server 🙂

  • Jetty-SPDY blogged

    Jos Dirksen has written a nice blog about Jetty-SPDY, thanks Jos !
    In the upcoming Jetty 7.6.3 and 8.1.3 (due in the next days), the Jetty-SPDY module has been enhanced with support for prioritized streams and for SPDY push (although the latter is only available via the pure SPDY API), and we have fixed a few bugs that we spotted or that were reported by early adopters.
    Also, we are working on making it really easy for Jetty users to enable SPDY, so that the configuration changes needed to enable SPDY in Jetty will be minimal.
    After these releases we will be working on full support for SPDY/3 (currently Jetty-SPDY supports SPDY/2, with some features of SPDY/3).
    Browsers such as Chromium and Firefox are already updating their implementations to also support SPDY/3, so the new version of the SPDY protocol will soon be supported in browsers as well.
    Stay tuned !