Blog

  • i-jetty 3.1 Released

    Release 3.1 of i-jetty for Android is now available from the Android Market and the i-jetty download page.
    This release updates the embedded Jetty to jetty-7.6.0.RC4, although the majority of the changes have been to the Console, which is a webapp that allows you to interact with your Android device from a remote browser.
    Highlights include:

    • pagination of large data sets such as Contacts and Media thumbnails (images, videos)
    • re-implementation of generated content as json & ajax REST
    • ability to cause the device to ring (helpful for finding it around the house!)
    • ability to show the current location of the Android device on Google Maps, or track its location on the map as the device moves.

    Here’s a screenshot showing my phone being tracked as it moves from the Sydney Opera House to Fort Denison on Sydney Harbour:

    Tracking phone via i-jetty console webapp

    Enjoy.

  • WebSocket over SSL in Jetty

    Jetty has always been at the forefront of the implementation of the WebSocket protocol.
    The CometD project leverages the Jetty WebSocket implementation to its maximum, to achieve great scalability and minimal latencies.
    Until now, however, support for WebSocket over SSL was lacking in Jetty.
    In Jetty 7.6.x, a redesign of the connection layer allows for more pluggable SSL encryption/decryption and connection upgrade (from HTTP to WebSocket), and these changes combined made it straightforward to implement WebSocket over SSL.
    These changes are now merged into Jetty’s master branch, and will be shipped with the next version of Jetty.
    Developers will now be able to use the wss:// protocol in web pages in conjunction with Jetty on the server side. Alternatively, they can rely on the CometD framework to forget about transport details entirely and always have the fastest, most reliable, and now also confidential transport available, concentrating on application logic rather than transport logic.
    WebSocket over SSL is of course also available in the Java WebSocket client provided by Jetty.
    Enjoy!

  • CometD, Dojo and XDomainRequest

    The CometD project uses various Comet techniques to implement a web messaging bus.
    You can find an introduction to CometD here.
    Web applications often need to access resources residing on different servers, which makes such requests cross-origin requests and therefore subject to the same-origin policy.
    Fortunately, all modern browsers implement the Cross Origin Resource Sharing (CORS) specification, and with the support of Jetty‘s Cross Origin Filter, it’s a breeze to write applications that allow cross origin resource sharing.
    That is, all modern browsers apart from Internet Explorer 8 and 9.
    Without CORS support, CometD falls back to another Comet technique known as JSONP.
    While JSONP is much less efficient than a CORS request, it guarantees CometD functionality; still, it’s 2011 and JSONP should be a relic of the past.
    Microsoft’s browsers have another JavaScript object that allows making cross-origin requests: XDomainRequest.
    Unfortunately this object is non-standard, and it is not, in general, supported by the JavaScript toolkits on which CometD relies for the actual communication with the server.
    I cannot really blame toolkit authors for this lack of support.
    However, I recently found a way to make XDomainRequest work with CometD 2.4.0 and the Dojo toolkit library.
    The solution (see this blog post for reference) is the following:
    Add this code to your JavaScript application:

    dojo.require("dojox.io.xhrPlugins");
    ...
    dojox.io.xhrPlugins.addCrossSiteXhr("http://<crossOriginHost>:<crossOriginPort>");
    

    What remains is to configure CometD with the crossOriginHost:

    dojox.cometd.configure({
        url: "http://<crossOriginHost>:<crossOriginPort>"
    });
    

    The last glitch is that XDomainRequest does not seem to allow sending the Content-Type HTTP header, so all of the above will only work in CometD 2.4.0.RC1 or greater, where this improvement has been made.
    I do not particularly recommend this hack, but sometimes it’s the only way to support cross origin requests for the obsolete Internet Explorers.
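For reference, when CORS is available (i.e. for browsers other than IE 8 and 9), enabling Jetty's Cross Origin Filter mentioned above is a web.xml fragment along these lines (the URL pattern and allowedOrigins value are just examples):

```xml
<filter>
    <filter-name>cross-origin</filter-name>
    <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
    <init-param>
        <param-name>allowedOrigins</param-name>
        <param-value>*</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>cross-origin</filter-name>
    <url-pattern>/cometd/*</url-pattern>
</filter-mapping>
```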

  • mvn jetty:run-forked

    Being able to run the jetty maven plugin on your webapp – but in a freshly forked jvm – is a feature that has been requested for a loooong time. With the jetty-7.5.2 release, this feature has been implemented, and it even works on your unassembled webapp.

    How to Run


    mvn jetty:run-forked

    That will kick off a Jetty instance in a brand new jvm and deploy your unassembled webapp to it. The forked Jetty will keep on running until either:

    • you execute mvn jetty:stop (in another terminal window)
    • you <ctrl-c> the plugin

    The plugin will keep on executing until either:

    • you stop it with a <ctrl-c>
    • the forked jvm terminates

    NOTE: I’m interested in obtaining feedback about the lifecycles of the plugin and the forked Jetty. Is the lifecycle linkage that I’ve implemented the way you want to use it? Do you want the forked jvm to continue on, even if the plugin exits? Please post your input to the Jetty list at jetty-users@eclipse.org.

    How to Configure

    You need a few different configuration parameters from the usual jetty:run ones. Let’s look at an example:

         <plugin>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty-maven-plugin</artifactId>
            <version>7.5.2.v20111006</version>
            <configuration>
              <stopPort>8087</stopPort>
              <stopKey>foo</stopKey>
              <jettyXml>src/main/config/jetty.xml</jettyXml>
              <contextXml>src/main/config/context.xml</contextXml>
              <contextPath>/foo</contextPath>
              <tmpDirectory>${project.build.directory}/tmp</tmpDirectory>
              <jvmArgs>-verbose:gc -Xmx80m</jvmArgs>
            </configuration>
          </plugin>
    

    You need to specify the stopKey and stopPort so that you can control the forked Jetty using the handy maven goal mvn jetty:stop.
    You can use the jettyXml parameter to specify a comma-separated list of jetty xml configuration files that you can use to configure the container. There’s nothing special about these config files, they’re just normal jetty configuration files. You can also use this parameter with the jetty:run goal.
    The contextXml parameter specifies the location of a webapp context xml configuration file. Again, this is a normal jetty context xml configuration file. You can also use it with the jetty:run goal, either in conjunction with, or instead of, the <webAppConfig> parameter (which configures the webapp right there in the pom). As the jetty:run-forked goal does NOT support the <webAppConfig> element, you MUST use contextXml if you need to configure the webapp.
    The contextPath parameter specifies the context path at which to deploy the webapp. You can use this as a simple shortcut instead of the contextXml parameter if you have no other configuration that you need to do for the webapp. Or, you can specify both this AND the contextXml parameter, in which case the contextPath takes precedence over the context path inside the context xml file.
    tmpDirectory is the location of a temporary working directory for the webapp. You can configure it either here, or in a contextXml file. If it is specified in both places, the plugin’s tmpDirectory parameter takes precedence.
    With the jvmArgs parameter, you can specify an arbitrary list of args that will be passed as-is to the newly forked jvm.
    The same parameters as for the mvn jetty:run goal are also available:

    • skip – if true the execution of the plugin is skipped
    • useTestScope – if true, jars of <scope>test</scope> and the test classes are placed on the webapp’s classpath inside the forked jvm
    • useProvidedScope – if true, jars of <scope>provided</scope> are placed on the container’s classpath inside the forked jvm
    • classesDirectory – the location of the classes for the webapp
    • testClassesDirectory – the location of the test classes
    • webAppSourceDirectory – the location of the static resources for the webapp

    Also, just like the mvn jetty:run case, if you have dependencies that are <type>war</type> , then their resources will be overlaid onto the webapp when it is deployed in the new jvm.

  • CometD and Opera

    The Opera browser is working well with the CometD JavaScript library.
    However, recently a problem was reported by the BlastChat guys: with Opera, long-polling requests were strangely disconnecting and immediately reconnecting. This problem was only happening if the long poll request was held by the CometD server for the whole duration of the long-polling timeout.
    Reducing the long-polling timeout from the default 30 seconds to 20 seconds made the problem disappear.
    This made me think that some other entity had a 30-second timeout, and was killing the request just before the CometD server had the chance to respond to it.
    Such entities may be front-end web servers (such as when Apache Httpd is deployed in front of the CometD server), as well as firewalls or other network components.
    But in this case, all other major browsers were working fine, only Opera was failing.
    So I typed about:config in Opera’s address bar to access Opera’s configuration options, and filtered with the keyword timeout in the “Quick find” text field.
    The second entry is “HTTP Loading Delayed Timeout” and it is set at 30 seconds.
    Increasing that value to 45 seconds made the problem disappear.
    In my opinion, that value is a bit too aggressive, especially these days, when Comet techniques are commonly used and WebSocket is not yet widely deployed.
    The simple workaround is to set the CometD long poll timeout to 20-25 seconds as explained here, but it would be great if Opera’s default were a larger value.
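In web.xml terms, the workaround amounts to lowering the CometD servlet's timeout init-param, which is expressed in milliseconds; a sketch, with the servlet name and value as examples:

```xml
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometdServlet</servlet-class>
    <init-param>
        <param-name>timeout</param-name>
        <param-value>25000</param-value>
    </init-param>
</servlet>
```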

  • CometD 2.4.0 WebSocket Benchmarks

    Slightly more than one year has passed since the last CometD 2 benchmarks, and more than three years since the CometD 1 benchmark. During this year we have done a lot of work on CometD, both by adding features and by continuously improving performance and stability to make it faster and more scalable.
    With the upcoming CometD 2.4.0 release, one of the biggest changes is the implementation of a WebSocket transport for both the Java client and the Java server.
    The WebSocket protocol is being finalized at the IETF, and major browsers all support various draft versions of the protocol (Jetty supports all draft versions), so while WebSocket adoption is slowly picking up, it is interesting to compare how WebSocket behaves with respect to HTTP for the typical scenarios that use CometD.
    We conducted several benchmarks using the CometD load tools on Amazon EC2 instances.

    HTTP Benchmark Results

    Below you can find the benchmark result graph when using the CometD long-polling transport, based on plain HTTP.

    Unlike the previous benchmark, where we reported the average latency, this time we report the median latency, which is a better indicator of the latency seen by clients.
    Comparison with the previous benchmark would be unfair, since the hosts were different (both in number and computing power), and the JVM also was different.
    As you can see from the graph above, the median latency is pretty much the same no matter the number of clients, with the exception of 50k clients at 50k messages/s.
    The median latency stays well under 200 ms even at more than 50k messages/s; it is in the range of 2-4 ms up to 10k messages/s, and around 50 ms at 20k messages/s, even for 50k clients.
    The result for 50k clients and 50k messages/s is a bit strange, since the hosts (both server and clients) had plenty of CPU available and plenty of threads available (which rules out locking contention issues in the code that would have bumped up threads use).
    Could it be possible that at that message rate we hit some limit of the EC2 platform? It might be, and this blog post confirms that there are indeed limits in the virtualization of the network interfaces between host and guest. I have word from other people who have performed benchmarks on EC2 that they also hit limits very close to what the blog post above describes.
    In any case, one server with 20k clients serving 50k messages/s with 150 ms median latency is a very good result.
    For completeness, the 99th percentile latency is around 350 ms for 20k and 50k clients at 20k messages/s, around 1500 ms for 20k clients at 50k messages/s, and much less (quite close to the median latency) for the other results.
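Reporting the median rather than the average matters because a handful of slow responses can drag the average far above what most clients actually see; a small self-contained illustration (with made-up latency samples, not benchmark data):

```java
import java.util.Arrays;

public class LatencyStats {
    // Median: the value below which half the samples fall.
    static double median(long[] samples) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    static double average(long[] samples) {
        long sum = 0;
        for (long s : samples) sum += s;
        return (double) sum / samples.length;
    }

    public static void main(String[] args) {
        // Nine fast responses and one slow outlier (milliseconds).
        long[] latencies = {3, 3, 4, 4, 4, 5, 5, 6, 6, 1500};
        System.out.println("median  = " + median(latencies));   // 4.5 -> what most clients see
        System.out.println("average = " + average(latencies));  // 154.0 -> skewed by the outlier
    }
}
```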

    WebSocket Benchmark Results

    The results for the same benchmarks using the WebSocket transport were quite impressive, and you can see them below.

    Note that this graph uses a totally different scale for latencies and number of clients.
    Whereas for HTTP we had 800 ms as the maximum latency (on the Y axis), for WebSocket we have 6 ms (yes, you read that right); and whereas for HTTP we topped out at around 50k clients per server, here we could go up to 200k.
    We did not merge the two graphs into one, to avoid the WebSocket trend lines being collapsed onto the X axis.
    With HTTP, having more than 50k clients on the server was troublesome at any message rate, but with WebSocket 200k clients were stable up to 20k messages/s. Beyond that, we probably hit EC2 limits again, and the results were unstable: some runs completed successfully, others did not.

    • The median latencies, for almost any number of clients and any message rate, are below 10 ms, which is quite impressive.
    • The 99th percentile latency is around 300 ms for 200k clients at 20k messages/s, and around 200 ms for 50k clients at 50k messages/s.

    We have also conducted some benchmarks varying the payload size from the default of 50 bytes to 500 bytes and 2000 bytes, but the results we obtained with different payload sizes were very similar, so we can say that payload size has very little impact (if any) on latencies in this benchmark configuration.
    We have also monitored memory consumption in “idle” state (that is, with clients connected and sending meta connect requests every 30 seconds, but not sending messages):

    • HTTP: 50k clients occupy around 2.1 GiB
    • WebSocket: 50k clients occupy around 1.2 GiB, and 200k clients occupy 3.2 GiB.
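Dividing those totals out gives a rough per-client footprint (a back-of-the-envelope sketch based on the figures above, not a separate measurement):

```java
public class PerClientMemory {
    // Convert a total heap figure in GiB into a per-client figure in KiB.
    static double kibPerClient(double totalGiB, int clients) {
        // 1 GiB = 1024 * 1024 KiB
        return totalGiB * 1024 * 1024 / clients;
    }

    public static void main(String[] args) {
        System.out.printf("HTTP,       50k clients: ~%.0f KiB each%n", kibPerClient(2.1, 50_000));
        System.out.printf("WebSocket,  50k clients: ~%.0f KiB each%n", kibPerClient(1.2, 50_000));
        System.out.printf("WebSocket, 200k clients: ~%.0f KiB each%n", kibPerClient(3.2, 200_000));
    }
}
```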

    The benefits of WebSocket being a lighter weight protocol with respect to HTTP are clear in all cases.

    Conclusions

    The conclusions are:

    • The work the CometD project has done to improve performance and scalability was worth the effort, and CometD offers a truly scalable solution for server-side event-driven web applications, over both HTTP and WebSocket.
    • As the WebSocket protocol gains adoption, CometD can leverage the new protocol without any change required to applications; they will just perform faster.
    • Server-to-server CometD communication can now be extremely fast by using WebSocket. We have already updated the CometD scalability cluster Oort to take advantage of these enhancements.

    Appendix: Benchmark Details

    The server was one EC2 instance of type “m2.4xlarge” (67 GiB RAM, 8 cores Intel(R) Xeon(R) X5550 @2.67GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
    The clients were 10 EC2 instances of type “c1.xlarge” (7 GiB RAM, 8 cores Intel Xeon E5410 @2.33GHz) running Ubuntu Linux 11.04 (2.6.38-11-virtual #48-Ubuntu SMP 64-bit).
    The JVM used was Oracle’s Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode) version 1.7.0 for both clients and server.
    The server was started with the following options:

    -Xmx32g -Xms32g -Xmn16g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA

    while the clients were started with the following options:

    -Xmx6g -Xms6g -Xmn3g -XX:-UseSplitVerifier -XX:+UseParallelOldGC -XX:-UseAdaptiveSizePolicy -XX:+UseNUMA

    The OS was tuned for allowing a larger number of file descriptors, as described here.

  • CometD 2.4.0.beta1 Released

    CometD 2.4.0.beta1 has been released.
    This is a major release that brings in a few new Java APIs (see this issue) – client-side channels can now be released to save memory – along with an API deprecation (see this issue): client-side publish() should no longer specify the message id.
    On the WebSocket front, the WebSocket transports have been overhauled and brought up-to-date with the latest WebSocket drafts (currently Jetty implements up to draft 13, while browsers are still a bit behind, on draft 7/8 or so), and made more scalable in both threading and memory usage.
    Following these changes, BayeuxClient has been updated to negotiate transports with the server, and Oort has also been updated to use WebSocket by default for server-to-server communication, making server-to-server communication more efficient and with less latency.
    WebSocket is now supported on Firefox 6 through the use of the Firefox-specific MozWebSocket object in the javascript library.
    We have performed some preliminary benchmarks with WebSocket; they look really promising, although they were done before the latest changes to the CometD WebSocket transports.
    We plan to do more accurate benchmarking in the coming days/weeks.
    The other major change is the pluggability of the JSON library to handle JSON generation and parsing (see this issue).
    CometD has long been based on Jetty’s JSON library, but now Jackson can also be used (the default will still be Jetty’s, however, to avoid breaking deployed applications that use the Jetty JSON classes).
    Jackson proved to be faster than Jetty’s library in both parsing and generation, and will likely become the default in a few releases, to allow gradual migration of applications that use the Jetty JSON classes directly.
    Applications should be written independently of the JSON library used.
    Of course Jackson also brings in its powerful configurability and annotation processing so that your custom classes can be de/serialized from/to JSON.
    Here you can find the release notes.
    Download it, use it, and report back; any feedback is important before the final 2.4.0 release.

  • Jetty WebSocket Client API updated

    With the release of Jetty 7.5.0 and the latest draft 13 of the WebSocket protocol, the API for the client has been refactored a little since my last blog on WebSocket: Server, Client and Load Test.

    WebSocketClientFactory

    When creating many instances of the java WebSocketClient, there is much that can be shared between multiple instances: buffer pools, thread pools and NIO selectors.  Thus the client API has been updated to use a factory pattern, where the factory can hold the configuration and instances of the common infrastructure:

    WebSocketClientFactory factory = new WebSocketClientFactory();
    factory.setBufferSize(4096);
    factory.start();

    WebSocketClient

    Once the WebSocketClientFactory is started, WebSocketClient instances can be created and configured:

    WebSocketClient client = factory.newWebSocketClient();
    client.setMaxIdleTime(30000);
    client.setMaxTextMessageSize(1024);
    client.setProtocol("chat");

    The WebSocketClient does not need to be started, and the configuration set on it is copied to the connection instances as they are opened.

    WebSocketClient.open(…)

    A websocket connection can be created from a WebSocketClient by calling open, passing the URI and the websocket instance that will handle the callbacks (e.g. onOpen, onMessage, etc.):

    Future<WebSocket.Connection> future = client.open(uri, mywebsocket);
    WebSocket.Connection connection = future.get(10,TimeUnit.SECONDS);

    The open call returns a Future for the WebSocket.Connection.  Like the NIO.2 API in JDK7, calling get with a timeout imposes a connect timeout on the connection attempt, and the attempt will be aborted if the get times out.  If the connection is successful, the connection returned by the get is the same object passed to the WebSocket.onOpen(Connection) callback, so it may be accessed and used either way.
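The timeout-and-abort behavior of that get call can be sketched with a plain java.util.concurrent Future, independent of Jetty (the helper name here is hypothetical):

```java
import java.util.concurrent.*;

public class FutureTimeoutDemo {
    // Wait up to timeoutMillis for the future; on timeout, cancel (abort) the attempt.
    // Returns true if the result arrived in time, false if the attempt was aborted.
    static boolean awaitOrAbort(Future<?> future, long timeoutMillis) {
        try {
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            future.cancel(true); // abort the pending attempt
            return false;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Stand-in for a connection attempt that takes too long.
        Future<String> slow = executor.submit(() -> {
            Thread.sleep(60_000);
            return "connected";
        });
        System.out.println("completed: " + awaitOrAbort(slow, 100)); // completed: false
        System.out.println("cancelled: " + slow.isCancelled());      // cancelled: true
        executor.shutdownNow();
    }
}
```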

    WebSocket.Connection

    The connection instance accessed via the onOpen callback or Future.get() is used to send messages and also to configure the connection:

    connection.setMaxIdleTime(10000);
    connection.setMaxTextMessageSize(2*1024);
    connection.setMaxBinaryMessageSize(64*1024);

    The maximum message sizes control how large messages may grow as they are aggregated from multiple websocket frames.  Small maximum message sizes protect a server against DoS attacks.
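The role of the limit can be pictured with a toy aggregator (a hypothetical sketch, not Jetty's actual implementation): frames are appended to a buffer, and a message is refused once the configured maximum would be exceeded.

```java
public class MessageAggregator {
    private final StringBuilder buffer = new StringBuilder();
    private final int maxTextMessageSize;

    MessageAggregator(int maxTextMessageSize) {
        this.maxTextMessageSize = maxTextMessageSize;
    }

    // Append one frame; returns false (message refused) if the aggregate would exceed the limit.
    boolean appendFrame(String frame) {
        if (buffer.length() + frame.length() > maxTextMessageSize) {
            buffer.setLength(0); // discard the partial message, bounding memory use
            return false;
        }
        buffer.append(frame);
        return true;
    }

    public static void main(String[] args) {
        MessageAggregator aggregator = new MessageAggregator(10);
        System.out.println(aggregator.appendFrame("hello"));  // true: 5 chars fit under the limit
        System.out.println(aggregator.appendFrame("world!")); // false: 11 chars would exceed 10
    }
}
```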

  • GWT and JNDI

    Many folks want to use some features beyond the bare servlet basics with GWT, such as JNDI lookups. It’s not hard to set up, but there are a couple of steps to it so here’s a detailed guide.
    Since GWT switched to using Jetty for its hosted mode (also known as development mode) back at GWT 1.6, lots of people have been asking how to use features such as JNDI lookups in their webapps.  Several people have posted helpful instructions, perhaps the best of them being from Nicolas Wetzel in this thread on Google Groups, and from Henning on his blog (in German).
    In this blog post, we’ll put all these instructions together in the one place, and give you a couple of projects you can download to get you started faster. You might want to skip down to the downloadable projects.

    Customizing the GWT Launcher

    The first step is to customize the JettyLauncher provided by GWT.  Unfortunately, at the time of writing (GWT 2.3.0), you cannot customize it by extension, due to the use of final inner classes and private constructors. Therefore, you will need to copy and paste the entire class in order to make the necessary (and trivial) modifications to enable JNDI.
    You can find the source of the JettyLauncher.java class inside the gwt-dev.jar in your local installation of the GWT SDK.  Here’s a link to the jar from Maven Central Repository for convenience: gwt-dev-2.3.0.jar.  Unjar it, and copy the com/google/gwt/dev/shell/jetty/JettyLauncher.java class to a new location and name.
    Edit your new class and paste in this declaration:

    public static final String[] DEFAULT_CONFIG_CLASSES =
    {
        "org.mortbay.jetty.webapp.WebInfConfiguration",    //init webapp structure
        "org.mortbay.jetty.plus.webapp.EnvConfiguration",  //process jetty-env
        "org.mortbay.jetty.plus.webapp.Configuration",     //process web.xml
        "org.mortbay.jetty.webapp.JettyWebXmlConfiguration",//process jetty-web.xml
    };

    This declaration tells Jetty to set up JNDI for your web app and process the various xml files concerned.  Nearly done now. All you need to do now is apply these Configuration classes to the WebAppContext that represents your web app. Find the line that creates the WebAppContext:

    WebAppContext wac = createWebAppContext(logger, appRootDir);

    Now, add this line straight afterwards:

    wac.setConfigurationClasses(DEFAULT_CONFIG_CLASSES);

    Build your new class and you’re done. To save you some time, here’s a small project with the class modifications already done for you (variants for Ant and Maven):

    Modifying your Web App

    Step 1

    Add the extra jetty jars that implement JNDI lookups to your web app’s WEB-INF/lib directory. Here are the links to version 6.1.26 of these jars – these have been tested against GWT 2.3.0 and will work, even though GWT uses a much older version of jetty (6.1.11?):

    Step 2

    Now you can create a WEB-INF/jetty-env.xml file to define the resources that you want to link into your web.xml file, and lookup at runtime with JNDI.
    That’s it, you’re good to go with runtime JNDI lookups in GWT hosted mode. Your webapp should also be able to run without modification when deployed into standalone Jetty. If you deploy to a different container (huh?!), then you’ll need to define the JNDI resources appropriately for that container (but you can leave WEB-INF/jetty-env.xml in place and it will be ignored).

    If You’re Not Sure How To Define JNDI Resources For Jetty…

    The Jetty 6 Wiki contains instructions on how to do this, but here’s a short example that defines a MySQL datasource:
    In WEB-INF/jetty-env.xml:

    <New id="DSTest" class="org.mortbay.jetty.plus.naming.Resource">
        <Arg>jdbc/DSTest</Arg>
        <Arg>
         <New class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
           <Set name="Url">jdbc:mysql://localhost:3306/databasename</Set>
           <Set name="User">user</Set>
           <Set name="Password">pass</Set>
         </New>
        </Arg>
    </New>

    Now link this into your web app with a corresponding entry in your WEB-INF/web.xml:

    <resource-ref>
        <description>My DataSource Reference</description>
        <res-ref-name>jdbc/DSTest</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
    </resource-ref>

    Of course, you will also need to copy any jars required by your resources – in this case the MySQL jar – into your WEB-INF/lib.
    You can then look up the JNDI resource inside your servlet, filter, etc.:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    InitialContext ic = new InitialContext();
    DataSource ds = (DataSource)ic.lookup("java:comp/env/jdbc/DSTest");
    

    An Example WebApp

    An example usually helps, so I’ve put together a silly, tiny webapp that does a JNDI lookup. It is based on the standard GWT “Hello World” webapp that is generated by default by the GWT webAppCreator script. This webapp does an RPC call to a servlet to get a message incorporating the name entered by the user. I’ve simply modified the message that is returned to also include an extra sentence obtained by doing a java:comp/env lookup.
    Here’s my WEB-INF/jetty-env.xml:

    <Configure id='wac' class="org.mortbay.jetty.webapp.WebAppContext">
      <!-- An example EnvEntry that acts like it was defined in web.xml as an env-entry -->
      <New class="org.mortbay.jetty.plus.naming.EnvEntry">
        <Arg>msg</Arg>
        <Arg type="java.lang.String">A bird in the hand is worth 2 in the bush </Arg>
        <Arg type="boolean">true</Arg>
      </New>
    </Configure>

    This defines the equivalent of an <env-entry> outside of web.xml. In fact, the boolean argument set to “true” means that it would override the value of an <env-entry> of the same name inside WEB-INF/web.xml. This is actually most useful when used in a Jetty context xml file for the webapp instead of WEB-INF/jetty-env.xml, as it would allow you to define a default value inside WEB-INF/web.xml and then customize for each deployment in the context xml file (which is external to the webapp). For this example, we could have just as well defined the <env-entry> in WEB-INF/web.xml instead, but I wanted to show you a WEB-INF/jetty-env.xml file so you have an example of where to define your resources.
    Here’s the extra code that does the lookup inside of GreetingServletImpl.java:

      private String lookupMessage (String user) {
        try {
            InitialContext ic = new InitialContext();
            String message = (String)ic.lookup("java:comp/env/msg");
            return message +" "+user;
        } catch (Exception e) {
            return e.getMessage();
        }
      }
    

    Running the built project in hosted mode and hitting the url http://127.0.0.1:8888/HelloJNDI.html?gwt.codesvr=127.0.0.1:9997 I see:
    Screen shot of webapp in action.
    Here’s an Ant project for this trivial webapp: HelloJNDI

    1. edit the build.xml file to change the property gwt.sdk to where you have the GWT SDK locally installed.
    2. build and run it in hosted mode with: ant devmode
    3. follow the hosted mode instructions to cut and paste the url into your browser


  • Sifting Logs in Jetty with Logback

    Ever wanted to create log files at the server level that are named based on some sort of arbitrary context? It is possible with Slf4j + Logback + Jetty Webapp Logging in the mix.
    Example projects for this can be found at github
    https://github.com/jetty-project/jetty-and-logback-example
    Modules:

    /jetty-distro-with-logback-basic/
    This configures the jetty distribution with logback enabled at the server level, with an example logback configuration.
    /jetty-distro-with-logback-sifting/
    This configures the jetty distribution with logback, centralized webapp logging, an MDC handler, and a sample logback configuration that performs sifting based on the incoming Host header of the requests.
    /jetty-slf4j-mdc-handler/
    This provides the Slf4j MDC key/value pairs that are needed to perform the sample sifting.
    /jetty-slf4j-test-webapp/
    This is a sample webapp+servlet that accepts arbitrary values on a form POST and logs them via Slf4j, so that we can see the results of this example.

    Basic Logback Configuration for Jetty

    See the /jetty-distro-with-logback-basic/ module for a maven project that builds this configuration. Note: the output directory /jetty-distro-with-logback-basic/target/jetty-distro/ is where this configuration will be built by maven.
    What is being done:

    1. Unpack your Jetty 7.x Distribution Zip of choice
      The example uses the latest stable release
      (7.4.5.v20110725 at the time of writing this)
    2. Install the slf4j and logback jars into ${jetty.home}/lib/logging/
    3. Configure ${jetty.home}/start.ini to add the lib/logging directory into the server classpath
      #===========================================================
      # Start classpath OPTIONS.
      # These control what classes are on the classpath
      # for a full listing do
      #   java -jar start.jar --list-options
      #-----------------------------------------------------------
      OPTIONS=Server,resources,logging,websocket,ext
      #-----------------------------------------------------------
      #===========================================================
      # Configuration files.
      # For a full list of available configuration files do
      #   java -jar start.jar --help
      #-----------------------------------------------------------
      etc/jetty.xml
      # etc/jetty-requestlog.xml
      etc/jetty-deploy.xml
      etc/jetty-webapps.xml
      etc/jetty-contexts.xml
      etc/jetty-testrealm.xml
      #===========================================================
    4. Create a ${jetty.home}/resources/logback.xml file with the configuration you want.
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Example LOGBACK Configuration File
        http://logback.qos.ch/manual/configuration.html
        -->
      <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <!-- encoders are assigned the type
               ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
          </encoder>
        </appender>
        <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
          <file>${jetty.home}/logs/jetty.log</file>
          <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>jetty_%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
          </rollingPolicy>
          <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
          </encoder>
        </appender>
        <root level="info">
          <appender-ref ref="STDOUT" />
          <appender-ref ref="FILE" />
        </root>
      </configuration>
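The TimeBasedRollingPolicy above derives each day’s archive name from the fileNamePattern. As a quick sanity check of what that pattern expands to, here is a stand-alone sketch (logback does this internally; the class and method names below are just for illustration):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class RolloverDemo {
    // What a fileNamePattern of jetty_%d{yyyy-MM-dd}.log resolves to
    // on a given day (simplified sketch of logback's behavior).
    static String rolledName(LocalDate day) {
        return "jetty_" + day.format(DateTimeFormatter.ofPattern("yyyy-MM-dd")) + ".log";
    }

    public static void main(String[] args) {
        System.out.println(rolledName(LocalDate.of(2011, 8, 1)));
        // prints: jetty_2011-08-01.log
    }
}
```

With `<maxHistory>30</maxHistory>`, logback additionally deletes archives older than 30 days when it rolls over.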

    That’s it, now you have (in the following order):

    1. Jetty configured to use slf4j
      (via the existence of slf4j-api.jar in the classpath on Jetty startup)
    2. slf4j configured to use logback
      (via the existence of logback-core.jar in the classpath at Jetty startup)
    3. logback configured to produce output to:
      • ${jetty.home}/logs/jetty.log (with daily rolling)
      • and STDOUT console

    Pretty easy huh?
    Go ahead and start Jetty.

    $ java -jar start.jar

    You’ll notice that the log events produced by Jetty are handled by slf4j, and logback writes those events to both the STDOUT console and the logs/jetty.log file.
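For reference, here is a small stand-alone sketch of what the STDOUT encoder pattern %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n renders for a single event (simplified: real logback also abbreviates long logger names to honor %logger{36}):

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

public class PatternDemo {
    // Render one log event the way the pattern
    // %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
    // would (sketch only; logback's PatternLayoutEncoder does the real work).
    static String format(LocalTime time, String thread, String level,
                         String logger, String msg) {
        return String.format("%s [%s] %-5s %s - %s%n",
                time.format(DateTimeFormatter.ofPattern("HH:mm:ss.SSS")),
                thread, level, logger, msg);
    }

    public static void main(String[] args) {
        System.out.print(format(LocalTime.of(14, 57, 33, 123_000_000),
                "main", "INFO", "org.eclipse.jetty.server.Server",
                "jetty started"));
        // prints: 14:57:33.123 [main] INFO  org.eclipse.jetty.server.Server - jetty started
    }
}
```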
    Now let’s try something a bit more complex.

    Sifting Logs produced by webapps via Hostname using Logback in Jetty

    Let’s say we have several virtual hosts, or a variety of DNS hostnames, for the running Jetty instance, and you want the logging events produced by the webapps captured into log files uniquely named by the hostname that the request came in on.
    This too is possible with logback, albeit with a little help from slf4j and Jetty WebAppContext classloader configuration.
    See the /jetty-distro-with-logback-sifting/ project example from the github project above for a buildable configuration of the following instructions:

    1. Unpack your Jetty 7.x Distribution Zip of choice.
      The example uses the latest stable release.
      (7.4.5.v20110725 at the time of writing this)
    2. Install the slf4j and logback jars into ${jetty.home}/lib/logging/
    3. Configure ${jetty.home}/start.ini to add the lib/logging directory into the server classpath
      #===========================================================
      # Start classpath OPTIONS.
      # These control what classes are on the classpath
      # for a full listing do
      #   java -jar start.jar --list-options
      #-----------------------------------------------------------
      OPTIONS=Server,resources,logging,websocket,ext
      #-----------------------------------------------------------
      #===========================================================
      # Configuration files.
      # For a full list of available configuration files do
      #   java -jar start.jar --help
      #-----------------------------------------------------------
      etc/jetty.xml
      # etc/jetty-requestlog.xml
      etc/jetty-mdc-handler.xml
      etc/jetty-deploy.xml
      etc/jetty-webapps.xml
      etc/jetty-contexts.xml
      etc/jetty-webapp-logging.xml
      etc/jetty-testrealm.xml
      #===========================================================

      The key entries here are the addition of the “logging” OPTION to load the classes in ${jetty.home}/lib/logging into the jetty server classpath, and the 2 new configuration files:

      etc/jetty-mdc-handler.xml
      This wraps the MDCHandler found in jetty-slf4j-mdc-handler around all of the handlers in the Jetty Server.
      etc/jetty-webapp-logging.xml
      This adds a DeploymentManager lifecycle handler that configures each created webapp’s classloader to deny access to any logger implementations contained in the webapp (war) file, in favor of using the ones that exist on the server classpath. This is a concept known as Centralized Webapp Logging.
    4. Create a ${jetty.home}/resources/logback.xml file with the configuration you want.
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Example LOGBACK Configuration File
        http://logback.qos.ch/manual/configuration.html
        -->
      <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <!-- encoders are assigned the type
               ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
          </encoder>
        </appender>
        <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
          <!-- in the absence of the class attribute, it is assumed that the
               desired discriminator type is
               ch.qos.logback.classic.sift.MDCBasedDiscriminator -->
          <discriminator>
            <key>host</key>
            <defaultValue>unknown</defaultValue>
          </discriminator>
          <sift>
            <appender name="FILE-${host}" class="ch.qos.logback.core.rolling.RollingFileAppender">
              <file>${jetty.home}/logs/jetty-${host}.log</file>
              <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <!-- daily rollover -->
                <fileNamePattern>jetty-${host}_%d{yyyy-MM-dd}.log</fileNamePattern>
                <!-- keep 30 days' worth of history -->
                <maxHistory>30</maxHistory>
              </rollingPolicy>
              <encoder>
                <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
              </encoder>
            </appender>
          </sift>
        </appender>
        <root level="INFO">
          <appender-ref ref="STDOUT" />
          <appender-ref ref="SIFT" />
        </root>
      </configuration>
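To picture how the SiftingAppender picks a file, it helps to model what slf4j’s MDC (mapped diagnostic context) does: it keeps a per-thread map of keys that appenders can read. The sketch below is illustrative only; the real work is done by org.slf4j.MDC and the jetty-slf4j-mdc-handler, which puts the request’s hostname under the “host” key before the webapp logs:

```java
import java.util.HashMap;
import java.util.Map;

public class MdcSketch {
    // Simplified model of slf4j's MDC: a per-thread map of diagnostic keys.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CONTEXT.get().put(key, value); }
    static String get(String key, String defaultValue) {
        return CONTEXT.get().getOrDefault(key, defaultValue);
    }
    static void clear() { CONTEXT.get().clear(); }

    public static void main(String[] args) {
        // Before dispatching a request for virtual host "foo.example.org"
        // (a hypothetical hostname), the MDC handler sets the "host" key:
        put("host", "foo.example.org");
        // The sifting appender resolves ${host} when picking its file:
        System.out.println("logs/jetty-" + get("host", "unknown") + ".log");
        clear();
        // Events outside a request fall back to the <defaultValue>:
        System.out.println("logs/jetty-" + get("host", "unknown") + ".log");
    }
}
```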

    That’s it, now you have (in the following order):

    1. Jetty configured to use slf4j
      (via the existence of slf4j-api.jar in the classpath on Jetty startup)
    2. Jetty configured to modify incoming webapps’ classloaders to favor server logging classes over the webapps’ own logging classes.
      (a.k.a. Centralized Webapp Logging)
    3. slf4j configured to use logback
      (via the existence of logback-core.jar in the classpath at Jetty startup)
    4. logback configured to produce output to:
      • ${jetty.home}/logs/jetty-${host}.log (with daily rolling), using “unknown” for log events that don’t originate from a request
      • and STDOUT console
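The Centralized Webapp Logging part of item 2 boils down to parent-first class loading for logging packages. Here is a minimal sketch of the idea; the class name and exact package prefixes are illustrative, and Jetty’s real configuration achieves this through the webapp classloader’s server-class rules rather than a custom classloader:

```java
public class LoggingAwareClassLoader extends ClassLoader {
    // Package prefixes that must always come from the server classpath,
    // even if the war bundles its own copies (prefixes are illustrative).
    static final String[] SERVER_ONLY = { "org.slf4j.", "ch.qos.logback." };

    public LoggingAwareClassLoader(ClassLoader parent) { super(parent); }

    static boolean isServerOnly(String name) {
        for (String prefix : SERVER_ONLY) {
            if (name.startsWith(prefix)) return true;
        }
        return false;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (isServerOnly(name)) {
            // Delegate straight to the server (parent) classloader, so the
            // webapp's bundled logging jars are never consulted.
            return getParent().loadClass(name);
        }
        return super.loadClass(name, resolve);
    }
}
```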

    Not too bad huh?
    Go ahead and start Jetty.

    $ java -jar start.jar


    If you have started the distribution produced by the example configuration, you can use the provided /slf4j-tests/ context to experiment with this.
    Go ahead and use the default URL of http://localhost:8080/slf4j-tests/

    Now try a few more URLs that are for the same Jetty instance.

    Note: “lapetus” is the name of my development machine.
    You should now have a few different log files in your ${jetty.home}/logs/ directory.