Category: Jetty

  • Jetty-SPDY blogged

    Jos Dirksen has written a nice blog about Jetty-SPDY, thanks Jos !
    In the upcoming Jetty 7.6.3 and 8.1.3 (due in the next days), the Jetty-SPDY module has been enhanced with support for prioritized streams and for SPDY push (although the latter is only available via the pure SPDY API), and we have fixed a few bugs that we spotted or that were reported by early adopters.
    Also, we are working on making it really easy for Jetty users to enable SPDY, so that the configuration changes needed to enable SPDY in Jetty will be minimal.
    After these releases we will be working on full support for SPDY/3 (currently Jetty-SPDY supports SPDY/2, with some features of SPDY/3).
    Browsers such as Chromium and Firefox are already updating their implementations to support SPDY/3, so we will soon have support for the new version of the SPDY protocol in the browsers as well.
    Stay tuned !

  • Jetty-SPDY is joining the revolution!

    There is a revolution quietly happening on the web and if you blink you might miss it. The revolution is in the speed and latency with which some browsers can load some web pages, and what used to take 100s of ms is now often reduced to 10s. The revolution is Google’s SPDY protocol, which I predict will soon replace HTTP as the primary protocol of the web, and Jetty-SPDY is joining this revolution.

    SPDY is a fundamental rethink of how HTTP is transported over the internet, based on careful analysis of the interaction between TCP/IP, browsers and web page design. It does not entirely replace HTTP (it still uses HTTP GETs and POSTs), but makes HTTP semantics available over a much more efficient wire protocol. It also opens up the possibility of new semantics that can be used on the web (e.g. server push/hint). Improved latency, throughput and efficiency will improve user experience and facilitate better and cheaper services in environments like the mobile web.

    When is the revolution?

    So when is SPDY going to be available?  It already is!!! The SPDY protocol is deployed in the current Chrome browsers and on the Amazon Kindle, and it is optionally supported by Firefox 11. Thus it is already on 25% of clients and will soon be on over 50%. On the server side, Google supports SPDY on all their primary services and Twitter switched on SPDY support this month. With the web’s most popular browsers and servers talking SPDY, this is a significant shift in the way data is moved on the web. Since Jetty 7.6.2/8.1.2, SPDY is supported in Jetty and you can start using it without any changes to your web application!

    Is it a revolution or a coup?

    By deploying SPDY on its popular browser and web services, Google has used its market share to make a fundamental shift in the web (but not as we know it!), and there are some rumblings that this may be an abuse of Google’s market power. I’ve not been shy in the past about pointing out Google’s failings to engage with the community in good faith, but in this case I think they have done an excellent job. The SPDY protocol has been an open project for over two years and they have published specs and actively solicited feedback and participation. Moreover, they are intending to take the protocol to the IETF for standardisation and have already submitted a draft to the httpbis working group. Openly developing the protocol to the point of wide deployment is a good fit with the IETF’s approach of “rough consensus and working code“.

    Note also that Google are not tying any functionality to SPDY, so it is not as if they are saying that we must use their new protocol or else we can’t access their services. We are free to disable or block SPDY on our own networks and the browsers will happily fall back to normal HTTP. Currently SPDY is a totally transparent upgrade for the user.

    Is there a problem?

    So why would anybody be upset about Google making the web run faster? One of the most significant changes in the SPDY protocol is that all traffic is encrypted with TLS. For most users this can be considered a significant security enhancement, as they will no longer need to consider whether a page/form is secure enough for the transaction they are conducting.

    However, if you are the administrator of a firewall that is enforcing some kind of content filtering policy, then having all traffic be opaque to your filters will make it impossible to check content (which may be great if you are a dissident in a rogue state, but not so great if you are responsible for a primary school network).  Similarly, caching proxies will no longer be able to cache shareable content as it will also be opaque to them, which may reduce some of the latency/throughput benefits of SPDY.

    Mike Belshe, who has led the development of SPDY, points out that SPDY does not prevent proxies, it just prevents implicit (aka transparent) proxies. Since SPDY traffic is encrypted, the browser and any intermediaries must negotiate a session to pass TLS traffic, so the browser will need to give its consent before a proxy can see or modify any content. This is probably workable for the primary school use-case, but not so much for the rogue state.

    Policy or Necessity?

    There is nothing intrinsic about the SPDY protocol that requires TLS, and there are versions of it that operate in the clear. I believe it was a policy rather than a technical decision to require TLS only. There is some technical justification in the argument that it reduces the round trips needed to negotiate a SPDY and/or HTTP connection, but I don’t see that encryption is the only answer to those problems. Thus I suspect that there is also a little bit of an agenda in the decision and it will probably be the most contentious aspect of SPDY going forward. It will be interesting to see if the TLS-only policy survives the IETF process, but then it might be hard to argue for a policy change that benefits rogue states and reduces personal privacy.

    Other than rogue states, another victim of the TLS-only policy is ease of debugging, as highlighted by Mike’s blog, where he is having trouble working out how the Kindle uses SPDY because all the traffic is encrypted. As a developer/debugger of an HTTP server, I cannot overstress how important it is to be able to see a TCP dump of a problematic session. This argument is one of the reasons why the IETF has historically favoured clear text protocols. It remains to be seen whether this argument will continue to prevail, or whether we will have to rely on better tools and browsers/servers coughing up TLS session keys in order to debug.

    In Summary

    Google and the other contributors to the SPDY project have done great work to develop a protocol that promises to take the web a significant step forward and to open up the prospects for many new semantics and developments. While they have done this somewhat unilaterally, it has been done openly and without any evidence of intent other than to improve user experience/privacy and to reduce server costs.

    SPDY is a great development for the web and the Jetty team is pleased to be a part of it.

  • SPDY support in Jetty

    SPDY is Google’s protocol that is intended to improve user experience on the web, by reducing the latency of web pages, sometimes up to a factor of 3. Yes, three times faster.
    How does SPDY accomplish that ?
    SPDY reduces roundtrips with the server, reduces HTTP verbosity by compressing HTTP headers, improves the utilization of the TCP connection, multiplexes requests onto a single TCP connection (instead of using a limited number of connections, each serving only one request), and allows the server to push secondary resources (like CSS, images, scripts, etc.) associated with a primary resource (typically a web page) without incurring additional round-trips.
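To illustrate the header-compression point: SPDY/2 compresses header blocks with zlib (the real protocol also primes the compressor with a shared dictionary, omitted here). This standalone sketch is not the actual SPDY codec, just a demonstration of how well a typical repetitive header block shrinks under zlib:

```java
import java.util.zip.Deflater;

public class HeaderCompression {
    // Compress a byte array with zlib, as SPDY/2 does for header blocks
    // (SPDY/2 additionally uses a protocol-specific shared dictionary).
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buffer = new byte[input.length + 64];
        int length = deflater.deflate(buffer);
        deflater.end();
        byte[] out = new byte[length];
        System.arraycopy(buffer, 0, out, 0, length);
        return out;
    }

    public static void main(String[] args) {
        // Headers like these repeat on every request of a page load,
        // which is why compressing them saves so many bytes.
        String headers = "accept: text/html,application/xhtml+xml\r\n"
                + "accept-encoding: gzip,deflate\r\n"
                + "accept-language: en-US,en;q=0.8\r\n"
                + "host: www.example.com\r\n"
                + "user-agent: Mozilla/5.0 (X11; Linux x86_64) Chrome/18\r\n";
        byte[] raw = headers.getBytes();
        byte[] compressed = compress(raw);
        System.out.println(raw.length + " bytes -> " + compressed.length + " bytes");
    }
}
```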
    Now, the really cool thing is that Jetty has an implementation of SPDY (see the documentation) in the newly released 7.6.2 and 8.1.2 releases.
    Your web applications can immediately and transparently benefit from many of the SPDY improvements without changes, because Jetty does the heavy lifting for you under the covers.
    With Chromium/Chrome already supporting SPDY, and Firefox 11 also supporting it (although it needs to be enabled; see how here), more than 50% of web browsers will be supporting it, so servers need to catch up, and that is where Jetty shines.
    The Jetty project continues to foster innovation by supporting emerging web protocols: first WebSocket and now SPDY.
    A corollary project that came out of the SPDY implementation is a pure Java implementation of the Next Protocol Negotiation (NPN) TLS Extension, also available in Jetty 7.6.2 and 8.1.2.
    To prove that this is no fluke, we have updated Webtide’s website with Jetty’s SPDY implementation, and now the website can be served via SPDY, if the browser supports it.
    We encourage early adopters to try out Jetty’s SPDY and send us feedback at jetty-dev@eclipse.org.
    Enjoy !

  • WebSocket over SSL in Jetty

    Jetty has always been in the front line on the implementation of the WebSocket Protocol.
    The CometD project leverages the Jetty WebSocket implementation to its maximum, to achieve great scalability and minimal latencies.
    Until now, however, support for WebSocket over SSL was lacking in Jetty.
    In Jetty 7.6.x a redesign of the connection layer allows for more pluggability of SSL encryption/decryption and of connection upgrade (from HTTP to WebSocket), and these changes combined allowed us to implement WebSocket over SSL very easily.
    These changes are now merged into Jetty’s master branch, and will be shipped with the next version of Jetty.
    Developers will now be able to use the wss:// protocol in web pages in conjunction with Jetty on the server side, or just rely on the CometD framework to forget about transport details and always have the fastest, most reliable and now also confidential transport available, and concentrate on writing application logic rather than transport logic.
    WebSocket over SSL is of course also available in the Java WebSocket client provided by Jetty.
    Enjoy !

  • mvn jetty:run-forked

    Being able to run the jetty maven plugin on your webapp – but in a freshly forked jvm – is a feature that has been requested for a loooong time. With the jetty-7.5.2 release, this feature has been implemented, and it even works on your unassembled webapp.

    How to Run


    mvn jetty:run-forked

    That will kick off a Jetty instance in a brand new jvm and deploy your unassembled webapp to it. The forked Jetty will keep on running until either:

    • you execute a mvn jetty:stop (in another terminal window)
    • you <ctrl-c> the plugin

    The plugin will keep on executing until either:

    • you stop it with a <ctrl-c>
    • the forked jvm terminates

    NOTE: I’m interested in obtaining feedback about the lifecycles of the plugin and the forked Jetty. Is the lifecycle linkage that I’ve implemented the way you want to use it? Do you want the forked jvm to continue on, even if the plugin exits? Please post your input to the Jetty list at jetty-users@eclipse.org.

    How to Configure

    You need a few different configuration parameters from the usual jetty:run ones. Let’s look at an example:

         <plugin>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty-maven-plugin</artifactId>
            <version>7.5.2.v20111006</version>
            <configuration>
              <stopPort>8087</stopPort>
              <stopKey>foo</stopKey>
              <jettyXml>src/main/config/jetty.xml</jettyXml>
              <contextXml>src/main/config/context.xml</contextXml>
              <contextPath>/foo</contextPath>
              <tmpDirectory>${project.build.directory}/tmp</tmpDirectory>
              <jvmArgs>-verbose:gc -Xmx80m</jvmArgs>
            </configuration>
          </plugin>
    

    You need to specify the stopKey and stopPort so that you can control the forked Jetty using the handy maven goal mvn jetty:stop.
    You can use the jettyXml parameter to specify a comma separated list of jetty xml configuration files that you can use to configure the container. There’s nothing special about these config files, they’re just normal jetty configuration files. You can also use this parameter with the jetty:run goal too.
    The contextXml parameter specifies the location of a webapp context xml configuration file. Again, this is a normal jetty context xml configuration file. You can also use this with the jetty:run goal too, either in conjunction with, or instead of, the <webAppConfig> parameter (which configures the webapp right there in the pom). As the jetty:run-forked goal does NOT support the <webAppConfig> element, you MUST use contextXml if you need to configure the webapp.
    The contextPath parameter specifies the context path at which to deploy the webapp. You can use this as a simple shortcut instead of the contextXml parameter if you have no other configuration that you need to do for the webapp. Or, you can specify both this AND the contextXml parameter, in which case the contextPath takes precedence over the context path inside the context xml file.
    tmpDirectory is the location of a temporary working directory for the webapp. You can configure it either here, or in a contextXml file. If specified in both places, the tmpDirectory takes precedence.
    With the jvmArgs parameter, you can specify an arbitrary list of args that will be passed as-is to the newly forked jvm.
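    For reference, the file that the contextXml parameter points at is just a normal Jetty context xml file; a minimal sketch (the path and values here are illustrative, matching the hypothetical src/main/config/context.xml above) might look like:

```xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
          "http://www.eclipse.org/jetty/configure.dtd">
<!-- Hypothetical src/main/config/context.xml: remember that the
     contextPath and tmpDirectory pom parameters take precedence
     over the values set here -->
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/foo</Set>
</Configure>
```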
    There’s also the same parameters as the mvn jetty:run goal:

    • skip – if true the execution of the plugin is skipped
    • useTestScope – if true, jars of <scope>test</scope> and the test classes are placed on the webapp’s classpath inside the forked jvm
    • useProvidedScope – if true, jars of <scope>provided</scope> are placed on the container’s classpath inside the forked jvm
    • classesDirectory – the location of the classes for the webapp
    • testClassesDirectory – the location of the test classes
    • webAppSourceDirectory – the location of the static resources for the webapp

    Also, just like the mvn jetty:run case, if you have dependencies that are <type>war</type> , then their resources will be overlaid onto the webapp when it is deployed in the new jvm.
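    As a reminder, such an overlay dependency is declared in the pom like any other dependency, but with a war type (groupId/artifactId below are placeholders):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>shared-resources</artifactId>
  <version>1.0</version>
  <type>war</type>
</dependency>
```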

  • CometD 2.4.0.beta1 Released

    CometD 2.4.0.beta1 has been released.
    This is a major release that brings in a few new Java APIs (see this issue) – client-side channels can now be released to save memory – along with an API deprecation (see this issue) – client-side publish() should not specify the message id.
    On the WebSocket front, the WebSocket transports have been overhauled and made up-to-date with the latest WebSocket drafts (currently Jetty implements up to draft 13, while browsers are still a bit back on draft 7/8 or so), and made scalable as well in both threading and memory usage.
    Following these changes, BayeuxClient has been updated to negotiate transports with the server, and Oort has also been updated to use WebSocket by default for server-to-server communication, making server-to-server communication more efficient and with less latency.
    WebSocket is now supported on Firefox 6 through the use of the Firefox-specific MozWebSocket object in the javascript library.
    We have performed some preliminary benchmarks with WebSocket; they look really promising, although they were done before the latest changes to the CometD WebSocket transports.
    We plan to do more accurate benchmarking in the coming days/weeks.
    The other major change is the pluggability of the JSON library to handle JSON generation and parsing (see this issue).
    CometD has long been based on Jetty’s JSON library, but now Jackson can also be used (the default will still be Jetty’s, however, to avoid breaking deployed applications that were using the Jetty JSON classes).
    Jackson proved to be faster than Jetty’s library in both parsing and generation, and will likely become the default in a few releases, to allow gradual migration of applications that made use of the Jetty JSON classes directly.
    Applications should be written independently of the JSON library used.
    Of course Jackson also brings in its powerful configurability and annotation processing so that your custom classes can be de/serialized from/to JSON.
    Here you can find the release notes.
    Download it, use it, and report back; any feedback is important before the final 2.4.0 release.

  • Jetty WebSocket Client API updated

    With the release of Jetty 7.5.0 and the latest draft 13 of the WebSocket protocol, the API for the client has been refactored a little since my last blog on WebSocket: Server, Client and Load Test.

    WebSocketClientFactory

    When creating many instances of the java WebSocketClient, there is much that can be shared between multiple instances: buffer pools, thread pools and NIO selectors.  Thus the client API has been updated to use a factory pattern, where the factory can hold the configuration and instances of the common infrastructure:

    WebSocketClientFactory factory = new WebSocketClientFactory();
    factory.setBufferSize(4096);
    factory.start();

    WebSocketClient

    Once the WebSocketClientFactory is started, WebSocketClient instances can be created and configured:

    WebSocketClient client = factory.newWebSocketClient();
    client.setMaxIdleTime(30000);
    client.setMaxTextMessageSize(1024);
    client.setProtocol("chat");

    The WebSocketClient does not need to be started and the configuration set is copied to the connection instances as they are opened.

    WebSocketClient.open(…)

    A websocket connection can be created from a WebSocketClient by calling open and passing the URI and the websocket instance that will handle the call backs (eg onOpen, onMessage etc.):

    Future<WebSocket.Connection> future = client.open(uri, mywebsocket);
    WebSocket.Connection connection = future.get(10,TimeUnit.SECONDS);

    The open call returns a Future to the WebSocket.Connection. Like the NIO.2 API in JDK7, calling get with a timeout imposes a connect timeout on the connection attempt, and the connection will be aborted if the get times out. If the connection is successful, the connection returned by the get is the same object passed to the WebSocket.onOpen(Connection) callback, so it may be accessed and used either way.

    WebSocket.Connection

    The connection instance accessed via the onOpen callback or Future.get() is used to send messages and also to configure the connection:

    connection.setMaxIdleTime(10000);
    connection.setMaxTextMessageSize(2*1024);
    connection.setMaxBinaryMessageSize(64*1024);

    The maximum message sizes are used to control how large messages can grow when they are being aggregated from multiple websocket frames. Small max message sizes protect a server against DoS attacks.
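To make the aggregation point concrete, here is a standalone sketch (an illustration, not Jetty's actual implementation) of how a receiver might accumulate frame payloads into a message while enforcing a maximum size:

```java
import java.io.ByteArrayOutputStream;

// Illustrative only: accumulate websocket frame payloads into a message,
// rejecting messages that grow beyond a configured maximum, so that a
// misbehaving peer cannot exhaust server memory.
public class MessageAggregator {
    private final int maxMessageSize;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public MessageAggregator(int maxMessageSize) {
        this.maxMessageSize = maxMessageSize;
    }

    // Append one frame's payload; returns the complete message on the
    // final frame, or null if more frames are expected.
    public byte[] onFrame(byte[] payload, boolean isLastFrame) {
        if (buffer.size() + payload.length > maxMessageSize)
            throw new IllegalStateException("Message too large");
        buffer.write(payload, 0, payload.length);
        if (!isLastFrame)
            return null;
        byte[] message = buffer.toByteArray();
        buffer.reset();
        return message;
    }
}
```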

  • GWT and JNDI

    Many folks want to use some features beyond the bare servlet basics with GWT, such as JNDI lookups. It’s not hard to set up, but there are a couple of steps to it so here’s a detailed guide.
    Since GWT switched to using Jetty for its hosted mode (also known as development mode) back at GWT 1.6, lots of people have been asking how to use features such as JNDI lookups in their webapps.  Several people have posted helpful instructions, perhaps the best of them being from Nicolas Wetzel in this thread on Google Groups, and from Henning on his blog (in German).
    In this blog post, we’ll put all these instructions together in the one place, and give you a couple of projects you can download to get you started faster. You might want to skip down to the downloadable projects.

    Customizing the GWT Launcher

    The first step is to customize the JettyLauncher provided by GWT.  Unfortunately, at the time of writing (GWT2.3.0) you cannot customize by extension due to the use of final inner classes and private constructors. Therefore, you will need to copy and paste the entire class in order to make the necessary and trivial modifications to enable JNDI.
    You can find the source of the JettyLauncher.java class inside the gwt-dev.jar in your local installation of the GWT SDK.  Here’s a link to the jar from Maven Central Repository for convenience: gwt-dev-2.3.0.jar.  Unjar it, and copy the com/google/gwt/dev/shell/jetty/JettyLauncher.java class to a new location and name.
    Edit your new class and paste in this declaration:

    public static final String[] DEFAULT_CONFIG_CLASSES =
    {
        "org.mortbay.jetty.webapp.WebInfConfiguration",    //init webapp structure
        "org.mortbay.jetty.plus.webapp.EnvConfiguration",  //process jetty-env
        "org.mortbay.jetty.plus.webapp.Configuration",     //process web.xml
        "org.mortbay.jetty.webapp.JettyWebXmlConfiguration",//process jetty-web.xml
    };

    This declaration tells Jetty to set up JNDI for your web app and process the various xml files concerned.  Nearly done now. All you need to do now is apply these Configuration classes to the WebAppContext that represents your web app. Find the line that creates the WebAppContext:

    WebAppContext wac = createWebAppContext(logger, appRootDir);

    Now, add this line straight afterwards:

    wac.setConfigurationClasses(DEFAULT_CONFIG_CLASSES);

    Build your new class and you’re done. To save you some time, here’s a small project with the class modifications already done for you (variants for Ant and Maven):

    Modifying your Web App

    Step 1

    Add the extra jetty jars that implement JNDI lookups to your web app’s WEB-INF/lib directory. Here are the links to version 6.1.26 of these jars – these have been tested against GWT 2.3.0 and will work, even though GWT is using a much older version of jetty (6.1.11?):

    Step 2

    Now you can create a WEB-INF/jetty-env.xml file to define the resources that you want to link into your web.xml file, and look up at runtime with JNDI.
    That’s it, you’re good to go with runtime JNDI lookups in GWT hosted mode. Your webapp should also be able to run without modification when deployed into standalone Jetty. If you deploy to a different container (huh?!), then you’ll need to define the JNDI resources appropriately for that container (but you can leave WEB-INF/jetty-env.xml in place and it will be ignored).

    If You’re Not Sure How To Define JNDI Resources For Jetty…

    The Jetty 6 Wiki contains instructions on how to do this, but here’s a short example that defines a MySQL datasource:
    In WEB-INF/jetty-env.xml:

    <New id="DSTest" class="org.mortbay.jetty.plus.naming.Resource">
        <Arg>jdbc/DSTest</Arg>
        <Arg>
         <New class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
           <Set name="Url">jdbc:mysql://localhost:3306/databasename</Set>
           <Set name="User">user</Set>
           <Set name="Password">pass</Set>
         </New>
        </Arg>
    </New>

    Now link this into your web app with a corresponding entry in your WEB-INF/web.xml:

    <resource-ref>
        <description>My DataSource Reference</description>
        <res-ref-name>jdbc/DSTest</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
    </resource-ref>

    Of course, you will also need to copy any jars required by your resources – in this case the MySQL jar – into your WEB-INF/lib.
    You can then lookup the JNDI resource inside your servlet, filter etc:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    InitialContext ic = new InitialContext();
    DataSource ds = (DataSource)ic.lookup("java:comp/env/jdbc/DSTest");
    

    An Example WebApp

    An example usually helps, so I’ve put together a silly, tiny webapp that does a JNDI lookup. It is based on the standard GWT “Hello World” webapp that is generated by default by the GWT webAppCreator script. This webapp does an RPC call to a servlet to get a message incorporating the name entered by the user. I’ve simply modified the message that is returned to also include an extra sentence obtained by doing a java:comp/env lookup.
    Here’s my WEB-INF/jetty-env.xml:

    <Configure id='wac' class="org.mortbay.jetty.webapp.WebAppContext">
      <!-- An example EnvEntry that acts like it was defined in web.xml as an env-entry -->
      <New class="org.mortbay.jetty.plus.naming.EnvEntry">
        <Arg>msg</Arg>
        <Arg type="java.lang.String">A bird in the hand is worth 2 in the bush </Arg>
        <Arg type="boolean">true</Arg>
      </New>
    </Configure>

    This defines the equivalent of an <env-entry> outside of web.xml. In fact, the boolean argument set to “true” means that it would override the value of an <env-entry> of the same name inside WEB-INF/web.xml. This is actually most useful when used in a Jetty context xml file for the webapp instead of WEB-INF/jetty-env.xml, as it would allow you to define a default value inside WEB-INF/web.xml and then customize for each deployment in the context xml file (which is external to the webapp). For this example, we could have just as well defined the <env-entry> in WEB-INF/web.xml instead, but I wanted to show you a WEB-INF/jetty-env.xml file so you have an example of where to define your resources.
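    For comparison, the <env-entry> in WEB-INF/web.xml that the boolean override applies to would look something like this (the default value below is made up for illustration):

```xml
<env-entry>
  <env-entry-name>msg</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>A default message</env-entry-value>
</env-entry>
```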
    Here’s the extra code that does the lookup inside of GreetingServletImpl.java:

      private String lookupMessage (String user) {
        try {
            InitialContext ic = new InitialContext();
            String message = (String)ic.lookup("java:comp/env/msg");
            return message +" "+user;
        } catch (Exception e) {
            return e.getMessage();
        }
      }
    

    Running the built project in hosted mode and hitting the url http://127.0.0.1:8888/HelloJNDI.html?gwt.codesvr=127.0.0.1:9997 I see:
    Screen shot of webapp in action.
    Here’s an Ant project for this trivial webapp: HelloJNDI

    1. edit the build.xml file to change the property gwt.sdk to where you have the GWT SDK locally installed.
    2. build and run it in hosted mode with: ant devmode
    3. follow the hosted mode instructions to cut and paste the url into your browser

    Resource Listing

  • Sifting Logs in Jetty with Logback

    Ever wanted to create log files at the server level that are named based on some sort of arbitrary context? It is possible to do with Slf4j + Logback + Jetty Webapp Logging in the mix.
    Example projects for this can be found at github
    https://github.com/jetty-project/jetty-and-logback-example
    Modules:

    /jetty-distro-with-logback-basic/
    This configures the jetty distribution with logback enabled at the server level, with an example logback configuration.
    /jetty-distro-with-logback-sifting/
    This configures the jetty distribution with logback, centralized webapp logging, an MDC handler, and a sample logback configuration that performs sifting based on the incoming Host header on the requests.
    /jetty-slf4j-mdc-handler/
    This provides the Slf4j MDC key/value pairs that are needed to perform the sample sifting.
    /jetty-slf4j-test-webapp/
    This is a sample webapp+servlet that accepts arbitrary values on a form POST and logs them via Slf4j, so that we can see the results of this example.

    Basic Logback Configuration for Jetty

    See the /jetty-distro-with-logback-basic/ module for a maven project that builds this configuration. Note: the output directory /jetty-distro-with-logback-basic/target/jetty-distro/ is where this configuration will be built by maven.
    What is being done:

    1. Unpack your Jetty 7.x Distribution Zip of choice
      The example uses the latest stable release
      (7.4.5.v20110725 at the time of writing this)
    2. Install the slf4j and logback jars into ${jetty.home}/lib/logging/
    3. Configure ${jetty.home}/start.ini to add the lib/logging directory into the server classpath
      #===========================================================
      # Start classpath OPTIONS.
      # These control what classes are on the classpath
      # for a full listing do
      #   java -jar start.jar --list-options
      #-----------------------------------------------------------
      OPTIONS=Server,resources,logging,websocket,ext
      #-----------------------------------------------------------
      #===========================================================
      # Configuration files.
      # For a full list of available configuration files do
      #   java -jar start.jar --help
      #-----------------------------------------------------------
      etc/jetty.xml
      # etc/jetty-requestlog.xml
      etc/jetty-deploy.xml
      etc/jetty-webapps.xml
      etc/jetty-contexts.xml
      etc/jetty-testrealm.xml
      #===========================================================
    4. Create a ${jetty.home}/resources/logback.xml file with the configuration you want.
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Example LOGBACK Configuration File
        http://logback.qos.ch/manual/configuration.html
        -->
      <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <!-- encoders are assigned the type
               ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
          </encoder>
        </appender>
        <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
          <file>${jetty.home}/logs/jetty.log</file>
          <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>jetty_%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
          </rollingPolicy>
          <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
          </encoder>
        </appender>
        <root level="info">
          <appender-ref ref="STDOUT" />
          <appender-ref ref="FILE" />
        </root>
      </configuration>

    That’s it, now you have (in the following order)

    1. Jetty configured to use slf4j
      (via the existence of slf4j-api.jar in the classpath on Jetty startup)
    2. slf4j configured to use logback
      (via the existence of logback-core.jar in the classpath at Jetty startup)
    3. logback configured to produce output to:
      • ${jetty.home}/logs/jetty.log (with daily rolling)
      • and STDOUT console

    Pretty easy huh?
    Go ahead and start Jetty.

    $ java -jar start.jar

    You’ll notice that the log events produced by Jetty are being handled by Slf4j, and Logback is writing those events to the STDOUT console and the logs/jetty.log file.
    Now let’s try something a bit more complex.

    Sifting Logs produced by webapps via Hostname using Logback in Jetty

    Let’s say we have several virtual hosts, or a variety of DNS hostnames, for the Jetty instance that is running, and you want the logging events produced by the webapps captured into uniquely named log files based on the hostname that the request came in on.
    This too is possible with logback, albeit with a little help from slf4j and jetty WebappContextClassloader configuration.
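    On the logback side, the sifting itself is done by a SiftingAppender whose discriminator reads an MDC key; a minimal sketch (assuming the MDC handler stores the hostname under a key named "host") could look like:

```xml
<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- Choose the target log file based on the "host" MDC value
         set per-request by the MDC handler -->
    <discriminator>
      <key>host</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${host}" class="ch.qos.logback.core.FileAppender">
        <file>${jetty.home}/logs/${host}.log</file>
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <root level="info">
    <appender-ref ref="SIFT" />
  </root>
</configuration>
```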
    See the /jetty-distro-with-logback-sifting/ project example from the github project above for a build-able configuration of the following instructions:

    1. Unpack your Jetty 7.x Distribution Zip of choice.
      The example uses the latest stable release.
      (7.4.5.v20110725 at the time of writing this)
    2. Install the slf4j and logback jars into ${jetty.home}/lib/logging/
    3. Configure ${jetty.home}/start.ini to add the lib/logging directory into the server classpath
      #===========================================================
      # Start classpath OPTIONS.
      # These control what classes are on the classpath
      # for a full listing do
      #   java -jar start.jar --list-options
      #-----------------------------------------------------------
      OPTIONS=Server,resources,logging,websocket,ext
      #-----------------------------------------------------------
      #===========================================================
      # Configuration files.
      # For a full list of available configuration files do
      #   java -jar start.jar --help
      #-----------------------------------------------------------
      etc/jetty.xml
      # etc/jetty-requestlog.xml
      etc/jetty-mdc-handler.xml
      etc/jetty-deploy.xml
      etc/jetty-webapps.xml
      etc/jetty-contexts.xml
      etc/jetty-webapp-logging.xml
      etc/jetty-testrealm.xml
      #===========================================================

      The key entries here are the addition of the “logging” OPTION to load the classes in ${jetty.home}/lib/logging into the jetty server classpath, and the 2 new configuration files:

      etc/jetty-mdc-handler.xml
      This wraps the MDCHandler found in jetty-slf4j-mdc-handler around all of the handlers in the Jetty Server.
      etc/jetty-webapp-logging.xml
      This adds a DeploymentManager lifecycle binding that configures the created webapps’ classloaders to deny access to any logger implementations contained in the webapp (war) file, in favor of using the ones that exist on the server classpath. This is a concept known as Centralized Webapp Logging.
    4. Create a ${jetty.home}/resources/logback.xml file with the configuration you want.
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Example LOGBACK Configuration File
        http://logback.qos.ch/manual/configuration.html
        -->
      <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <!-- encoders are assigned the type
               ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
          </encoder>
        </appender>
        <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
          <!-- in the absence of the class attribute, it is assumed that the
               desired discriminator type is
               ch.qos.logback.classic.sift.MDCBasedDiscriminator -->
          <discriminator>
            <key>host</key>
            <defaultValue>unknown</defaultValue>
          </discriminator>
          <sift>
            <appender name="FILE-${host}" class="ch.qos.logback.core.rolling.RollingFileAppender">
              <file>${jetty.home}/logs/jetty-${host}.log</file>
              <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
                <!-- daily rollover -->
                <fileNamePattern>${jetty.home}/logs/jetty-${host}_%d{yyyy-MM-dd}.log</fileNamePattern>
                <!-- keep 30 days' worth of history -->
                <maxHistory>30</maxHistory>
              </rollingPolicy>
              <encoder>
                <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
              </encoder>
            </appender>
          </sift>
        </appender>
        <root level="INFO">
          <appender-ref ref="STDOUT" />
          <appender-ref ref="SIFT" />
        </root>
      </configuration>
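The discriminator above routes each log event by the MDC “host” value, falling back to “unknown”. The file-naming logic it applies can be sketched in plain Java (a simulation only, with a hypothetical `logFileFor` helper; in the real setup the work is done by logback’s MDCBasedDiscriminator and SiftingAppender):

```java
import java.util.HashMap;
import java.util.Map;

public class SiftDemo {
    // Mimics what the SiftingAppender does with the <key>host</key> and
    // <defaultValue>unknown</defaultValue> settings above: derive a per-host
    // log file name from the MDC map attached to the current request.
    static String logFileFor(Map<String, String> mdc) {
        String host = mdc.getOrDefault("host", "unknown");
        return "jetty-" + host + ".log";
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new HashMap<>();
        System.out.println(logFileFor(mdc));   // jetty-unknown.log

        mdc.put("host", "lapetus");            // set per-request by the MDC handler
        System.out.println(logFileFor(mdc));   // jetty-lapetus.log
    }
}
```

Events logged outside of any request have no “host” entry in the MDC, which is why they all land in jetty-unknown.log.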

    That’s it, now you have (in the following order):

    1. Jetty configured to use slf4j
      (via the existence of slf4j-api.jar in the classpath on Jetty startup)
    2. Jetty configured to modify incoming webapps’ classloaders to favor server logging classes over the webapps’ own logging classes.
      (a.k.a. Centralized Webapp Logging)
    3. slf4j configured to use logback
      (via the existence of logback-core.jar in the classpath at Jetty startup)
    4. logback configured to produce output to:
      • ${jetty.home}/logs/jetty-${host}.log (with daily rolling), using “unknown” for log events that don’t originate from a request
      • and STDOUT console

    Not too bad huh?
    Go ahead and start Jetty.

    $ java -jar start.jar


    If you have started the distribution produced by the example configuration, you can use the provided /slf4j-tests/ context to experiment with this.
    Go ahead and use the default URL of http://localhost:8080/slf4j-tests/

    Now try a few more URLs that are for the same Jetty instance.

    Note: “lapetus” is the name of my development machine.
    You should now have a few different log files in your ${jetty.home}/logs/ directory.

  • NoSql Sessions with Jetty7 and Jetty8

    When Jetty 7.5.0 is released we will have officially started to dabble in the area of distributed session handling and storage. To start this out we have created a set of abstract classes around the general concept of NoSQL support, and have prepared an initial implementation using MongoDB. We will also be working on Ehcache and perhaps Cassandra implementations over time to round out the offering, but it is overall a pretty exciting time for these sorts of things.

    NoSQL sessions are a good idea for a number of usage scenarios, but as with NoSQL solutions in general, it is not a one-size-fits-all technology. The Jetty NoSQL session implementation should be good for scenarios that require decentralization, highly parallel work loads, and scalability, while also supporting session migration from one machine to the next for load balancing purposes. While we are initially releasing with just the MongoDB session manager, it is important to make clear that all the different distributed NoSQLish solutions out there have there own positives and negatives that you need to balance when choosing a storage medium. This is an interesting and diverse area of development, and since there is little standardization at the moment it is not a simple matter of exporting data from one system to the next if you want to change back ends.

    Before jumping in and embracing this solution for your session management, ask yourself some questions:

    • Do I require a lot of write behavior on my session objects?

    When you’re dealing with anything that touches the network to perform an action, you have an entirely different set of issues than if you can keep all your logic on one machine. The hash session manager is the fastest solution for this use profile, but the JDBC session manager is not a bad solution if you need to operate over the network. With that in mind, there is an optimization in the NoSQL session managers where tight write loops queue up a bit before an actual write to the back-end MongoDB server occurs. In general, if you have a session profile that involves a lot of writes all the time, you might want to shy away from this approach.

    • Am I bouncing sessions across lots of machines all the time?

    If you are, then you might be better off getting rid of sessions entirely and being more RESTful; a networked session manager is going to be difficult to scale to this approach while remaining consistent. By consistent I mean writing data into your session on one node and having that same data present within the session on another node. If you’re looking at using MongoDB to increase the number of sessions you’re able to support, it is vitally important to remember that the network is not an inexhaustible resource, and keeping sessions localized is good practice, especially if you want consistent behavior. But if you want non-sticky sessions, or mostly sticky sessions that can scale, this sort of NoSQL session manager is certainly an option, especially for lightweight, mostly read sessions.

    • Do I want to scale to crazy amounts of sessions that are relatively small and largely contain write-once read-often data?

    Great! Use this!  You are the people we had in mind when we developed the distributed session handling.

    On the topic of configuring the new session managers, it is much like other traditional ones: add them to the context.xml or set up with the regular jetty.xml route. There are, however, a couple of important options to keep in mind for the session ID manager.

    • scavengeDelay: how often a scavenge operation occurs, looking for sessions to invalidate.
    • scavengePeriod: how long to wait after a scavenge has completed before doing it again.
    • purge (Boolean): whether to purge (delete) invalid sessions from the session store completely.
    • purgeDelay: how often to perform the purge operation.
    • purgeInvalidAge: how old an invalid session must be before it is eligible to be purged.
    • purgeValidAge: how old a valid session must be before it is eligible to be marked invalid and purged, and whether this should occur at all.

    A guide for detailed configuration can be found on our wiki, on the Session Clustering with MongoDB page.
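For illustration, the options above might be wired up in jetty.xml roughly like the following sketch. The class name and the value units here are assumptions on my part; check them against the jetty-nosql module and the wiki page before using them:

```xml
<!-- Sketch only: the class name
     (org.eclipse.jetty.nosql.mongodb.MongoSessionIdManager) and the
     illustrative values/units are assumptions; verify against the
     Session Clustering with MongoDB wiki page. -->
<New id="mongoIdMgr" class="org.eclipse.jetty.nosql.mongodb.MongoSessionIdManager">
  <Arg><Ref id="Server"/></Arg>
  <Set name="scavengeDelay">30</Set>
  <Set name="scavengePeriod">600</Set>
  <Set name="purge">true</Set>
  <Set name="purgeDelay">3600</Set>
</New>
```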

    The new MongoDB session manager and session ID manager are located in the jetty-nosql module. Since we plan to have multiple offerings, we have made the mongodb dependency optional, so if you’re planning to use embedded Jetty, make sure you declare a hard dependency in Maven. You can also download the mongodb jar file and place it into a lib/mongodb directory within the jetty distribution itself; then you must add mongodb to the OPTIONS on the command line or in the start.ini file you’re starting Jetty with.
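For embedded use, the hard dependency mentioned above looks roughly like this in a pom.xml. The artifact coordinates are my best guess and the versions are deliberately left open; verify both against your Jetty release:

```xml
<!-- Assumed coordinates: verify against your Jetty release and Maven Central. -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-nosql</artifactId>
  <version><!-- match your Jetty version --></version>
</dependency>
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version><!-- a driver version contemporary with this release --></version>
</dependency>
```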

    There were a number of different ways to go in implementing session ID management. While we are wholly tolerant of a user request being moved from one server to another, we chose to keep normal session operations localized to the machine where the session originates. If a request bounces from one machine to another, the latest known session is loaded. If it is saved and then bounces back, Jetty notices the change in the version of the session and reloads, but these operations are heavyweight: they require pulling all of a session’s data back across the network, as opposed to a field or two of MongoDB goodness. One side effect of this approach is that the scavenge operation executes only on the known session IDs of a given node. In this scenario, if your happy cluster of Jetty instances has a problem and one of them crashes (not our fault!), there is potential for previously valid session IDs to remain in your MongoDB session store, never to be seen again, but also never cleaned up. That is where purge comes in: the purge process can perform a passive sweep through the MongoDB cluster to delete really old valid sessions. You can also delete invalid sessions that are over a week old, or a month old, or whatever you like. If you have hoarding instincts, you can turn purge off (it’s true by default), and your MongoDB cluster will grow… and grow.

    We have also added some additional JMX support to the MongoDB session manager. When you enable JMX, you can access all the normal session statistics, but you also have the option to force execution of the purge and scavenge operations on a single node, or purge fully, which executes the purge logic for everything in the MongoDB store. In this mode you can disable purge on your nodes and schedule the actions for when you are comfortable they will not cause issues on the network. For tips on configuring JMX support for Jetty, see our tutorial on JMX.

    Lastly, I’ll just mention that MongoDB is really a treat to work with. I love how easy it is to print the data being returned from MongoDB, and it’s in happy JSON. It has a rich query language that allowed us to easily craft queries for the exact information we were looking for, reducing the network footprint that the session work imposes.