Category: Uncategorized

  • Jetty Deployed Around the World

    The nice people at PaperCut were kind enough to talk about their usage of Jetty… and it isn’t minor usage. Tens of thousands of servers in 60 countries. From small user populations to hundreds of thousands…

    See their full posting here!

    So, if someone asks, “Does anyone use Jetty in production?” Respond “who doesn’t?” Jetty is fantastic as an embedded component to add to your application.

  • JSR-315 Needs You II

    Rajiv, the spec lead on JSR-315 has posted his views on the issue of flexible automatic configuration of web applications. 

    Despite my vigorous arguments for flexibility (or perhaps because of them :), I’ve not been able to make the case with those opposed to selective enabling of auto configuration.  

    Unfortunately my arguments keep getting cast as security arguments. While there is an element of that, my main point is that I would like to be able to use modularized configuration without losing total control.  Even if I have to specify everything in web.xml, I would prefer to do so in fragments rather than one monolithic file.  The counter argument appears to be that because there will always exist some scenarios that need a full web.xml, there is no point trying to offer a solution for any scenario other than full automatic discovery and deployment.

    As I’m not making the case, and those that are unconvinced include the lead of JSR-315 and other EE JSRs, I’m going to cease hammering on this issue, as it is probably blocking discussion of Servlet-3.0 features more important than ease of configuration.

    But rest assured, Jetty-7 will support these features, and will make them optional, selective and able to be parameterized. So if you want to have control over what you deploy and you want to use modularized configuration mechanisms, then look no further than Jetty!

    So when you find yourself unzipping a jar file just so you can change something in the reasonable default settings (as conceived by the framework developer), or copying them into your own web.xml… just remember I TOLD YOU SO!


  • Jetty Runner

    If you’re looking for a fast and easy way to run your webapp, without needing to install and administer a Jetty distro, then look no further, the Jetty Runner is here! The idea of the Jetty Runner is extremely simple – run a webapp from the command line using a single jar and as much default configuration as possible:

      java -jar jetty-runner.jar my.war

    Voila! Jetty will start on port 8080 and deploy the my.war webapp. Couldn’t get much simpler, could it?
    You can also deploy multiple webapps – either packed or unpacked wars – from the command line. In this example, the my1.war webapp will be available at http://host:8080/one and the my2 webapp will be available at http://host:8080/two:

      java -jar jetty-runner.jar --path /one my1.war --path /two my2

    Or, for those webapps that need a little more configuration, you can run them via jetty context config files:

      java -jar jetty-runner.jar contexts/my.xml

    You can configure the most common things from the command line, like the port to start on, and whether to generate a request log or not:

     java -jar jetty-runner.jar --port 9090 --log my/request/log/goes/here my.war

    You can even configure a JDBC JNDI Resource entry right on the command line. Here’s an example to define a Derby DB available in JNDI at java:comp/env/jdbc/mydatasource:

     java -jar jetty-runner.jar --lib ~/src/tools/derby/ --lib ~/src/tools/atomikos --jdbc org.apache.derby.jdbc.EmbeddedXADataSource "databaseName=testdb;createDatabase=create" "jdbc/mydatasource" my.war

    The syntax of the --jdbc argument is:

     --jdbc <classname of Driver or XADataSource> <db properties> <jndiname>

    You’ll also have to tell jetty where to find your database driver and Atomikos, which we use to provide a transaction manager and wrap XA and non-XA Resources into a DataSource you can access from your webapp.
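    As a sketch (and not part of jetty-runner itself), code inside the webapp could then look up the bound DataSource via JNDI; the JNDI name here assumes the example command line above:

    ```java
    // Hypothetical webapp code: assumes jetty-runner bound a DataSource
    // at java:comp/env/jdbc/mydatasource via the --jdbc argument above.
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import java.sql.Connection;

    InitialContext ic = new InitialContext();
    DataSource ds = (DataSource)ic.lookup("java:comp/env/jdbc/mydatasource");
    Connection con = ds.getConnection();
    try
    {
        // use the connection; Atomikos supplies the transaction manager
        // and wraps the XA resource behind this DataSource
    }
    finally
    {
        con.close();
    }
    ```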
    You’ll notice the --lib argument, which is one way to tell jetty about extra jars you want to put onto the container’s classpath. We also give you:

     --jar <filename> --classes <dir>

    And as if all that wasn’t enough, you can get full configuration control using a jetty.xml configuration file:

     java -jar jetty-runner.jar --config my/jetty.xml my.war

    You can see all your options with:

     java -jar jetty-runner.jar --help

    How to get the jetty-runner.jar

    The jetty runner is in the jetty-contrib svn repository and as such is no longer distributed as part of the standard jetty release.
    At present, there is no distribution of the modules in the jetty-contrib repo (coming soon), however they are tagged
    with the same release tags as the main jetty release.
    So to obtain the jetty runner:

    1. do an svn checkout from https://svn.codehaus.org/jetty-contrib/tags/jetty-contrib-<tag>, where <tag> is 7.0.0pre1 or a higher release number, eg:
      https://svn.codehaus.org/jetty-contrib/tags/jetty-contrib-7.0.0pre1
    2. mvn clean install
    3. the jar file to use will be in target/jetty-runner-<tag>.jar

    UPDATE 10 November 2008
     
    The jetty-runner jar can be downloaded from the main maven repo at: http://repo2.maven.org/maven2/org/mortbay/jetty/jetty-runner/
     
     

  • Patterns for Servlet 3.0 suspend usage.

    As I have previously blogged, asynchronous coding is hard! The suspend proposal for Servlet 3.0 does take a lot of the pain out of asynchronous programming, but not all.  It has been pointed out that my own async examples make some assumptions that simplify the code. Specifically, they assume that there are no upstream suspenders (eg a filter deployed in front) that have already suspended and resumed, and thus affected the values returned by isInitial, isResumed and isTimeout.
    So these examples need to be a little more complex to deal with all circumstances. One way to deal with such complexity is with patterns, which can help explain the generic cases, provide a template for specific implementations and/or be the basis of frameworks to help developers.   Thus I have captured the key usages of the suspend API in the following 5 patterns:

    Suspend/Complete Servlet

    This is the simplest pattern, where a servlet suspends a request and organizes for the response to be completed by asynchronous threads or call backs. This is not affected by any upstream suspenders

      public void doGet(HttpServletRequest request,
                        HttpServletResponse response)
      {
        request.suspend();
        // arrange for response to be completed
        // by async thread(s) or callback(s)
      }

    Simple Suspend/Resume Servlet

    If a servlet developer knows that the servlet will not be fronted by suspending filters, then it can use a simplified pattern:

      public void doGet(HttpServletRequest request,
                        HttpServletResponse response)
      {
        if (request.isInitial())
        {
          // handle initial dispatch
          request.suspend();
          // arrange async thread/callback
          return;
        }

        if (request.isTimeout())
        {
          // handle timeout
        }

        // generate response
      }

    Note that the suspend call should happen before arranging async thread/callback so that there is not a risk of a resume before the suspend.
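    The race can be illustrated in plain Java (this is a toy model, not the servlet API): if the container treats a resume of a not-yet-suspended request as a no-op, then arranging the callback before suspending could lose the resume. Suspending first avoids that:

    ```java
    // Toy model of suspend/resume ordering: a resume that arrives before
    // the suspend has been recorded is simply dropped.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class SuspendFirst
    {
        static final AtomicBoolean suspended = new AtomicBoolean(false);
        static final AtomicBoolean resumed = new AtomicBoolean(false);

        static void suspend() { suspended.set(true); }

        static void resume()
        {
            // model of container behaviour: resuming a request that is
            // not suspended has no effect
            if (suspended.get())
                resumed.set(true);
        }

        public static void main(String[] args) throws Exception
        {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            suspend();                          // suspend FIRST ...
            pool.submit(SuspendFirst::resume);  // ... then arrange the callback
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
            System.out.println(resumed.get()); // true: the resume was not lost
        }
    }
    ```

    Reversing the first two lines of main would open a window in which the callback's resume could fire before the suspend and be silently ignored.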

    Suspend/Resume Servlet

    If a suspending servlet can be downstream of a filter (or dispatching servlet) that also suspends, then the isInitial(), isTimeout() and isResumed() methods may not be set due to this servlet’s suspend. A request attribute is required to flag that this servlet has performed the suspend. The attribute name needs to be chosen so that it will not clash with other instances. The attribute value may be a simple boolean or a more complex state object to pass information from the initial to the resume/timeout handling.

      public void doGet(HttpServletRequest request,
                        HttpServletResponse response)
      {
        if (request.isInitial()
            || request.getAttribute("com.acme.suspend")==null)
        {
          // handle initial dispatch
          request.setAttribute("com.acme.suspend", Boolean.TRUE);
          request.suspend();
          // arrange callback
          return;
        }

        Boolean suspended = (Boolean)request.getAttribute("com.acme.suspend");
        if (suspended)
        {
          request.setAttribute("com.acme.suspend", Boolean.FALSE);

          if (request.isTimeout())
          {
            // handle timeout
            return;
          }

          // handle resume
        }

        // generate response
      }

    The isInitial() call is still used as an efficiency.  If it is true, then the request is initial for all filters and servlets.  The value of the attribute only needs to be checked if isInitial() returns false.

    Simple Suspend/Resume Filter

    If a filter developer knows that there are no upstream or downstream suspenders, then a simplified pattern similar to the Simple Suspend/Resume Servlet may be used:

      public void doFilter(ServletRequest request,
                           ServletResponse response,
                           FilterChain chain)
      {
        if (request.isInitial())
        {
          // handle initial dispatch
          request.suspend();
          // arrange async callback
          return;
        }

        if (request.isTimeout())
        {
          // handle timeout
          return;
        }

        // handle resume
        chain.doFilter(request, response);
      }

    Suspend/Resume Filter

    If a suspending filter is to be deployed where there may be either/both upstream and/or downstream suspending components, then a request attribute needs to be used to track both the initial handling and to signal that the resume/timeout has been handled. The attribute value may be a simple boolean or a more complex state object to pass information from the initial to the resume/timeout handling.

      public void doFilter(ServletRequest request,
                           ServletResponse response,
                           FilterChain chain)
      {
        if (request.isInitial()
            || request.getAttribute("com.acme.suspend")==null)
        {
          // handle initial dispatch
          request.setAttribute("com.acme.suspend", Boolean.TRUE);
          request.suspend();
          // arrange async callback
          return;
        }

        Boolean suspended = (Boolean)request.getAttribute("com.acme.suspend");
        if (suspended)
        {
          request.setAttribute("com.acme.suspend", Boolean.FALSE);

          if (request.isTimeout())
          {
            // handle timeout
            return;
          }

          // handle resume
        }

        chain.doFilter(request, response);
      }
  • JSR-315 Needs YOU!

    The expert group for JSR 315 (servlet-3.0) has come to a bit of an impasse regarding some new features for auto discovery of servlets and filters.   Some members of the EG have security/flexibility concerns regarding these features, but others do not think the concerns are significant enough to warrant additional complexity in configuration options. 
    In order to resolve this impasse, the EG has decided to solicit more community feedback. So this is my biased blog soliciting that feedback. I say biased, because I am a strong advocate FOR some additional flexibility in these new features. I understand that those AGAINST will also be making their case to the community and I will link to them from here once they become available.  Thus I’m looking for community support of my views, or corrections to my representations of the situation or just people telling me to chill and to not worry so much about such things.

    The Requirement

    It can be difficult, confusing and error prone to configure a web application that is built from many components, frameworks and web tools. The problem is that the current monolithic web.xml must contain all the configuration for all the components, frameworks and tools.   This means that using a web framework is not as simple as just dropping a jar file into WEB-INF/lib.   Currently snippets of web.xml need to be taken from the framework (either from templates or documentation) and merged into the main web.xml for the web application.  Thus the web.xml file becomes a mix of structural declarations, application configuration and framework configuration. 
    The requirement given to JSR-315 was to come up with a way to simplify deployment of frameworks and to allow modular or decomposed configuration.  There are already some features that partially address this in servlet 2.5, specifically:

    • Annotations on servlet classes can be used to add additional configuration to the servlets declared in web.xml: the current support is only for @PostConstruct, @PreDestroy, @RunAs and @Resource annotations, but this is the start of decentralized configuration.
    • Jar files in WEB-INF/lib are scanned for TLD descriptors that can instantiate Listeners
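    As a sketch of this 2.5-style decentralized configuration (the class and resource names are hypothetical), a servlet still declared in web.xml can carry its own injection and lifecycle configuration as annotations:

    ```java
    // Hypothetical servlet, declared in web.xml as usual, but with its
    // resource injection and lifecycle callbacks expressed as annotations.
    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.annotation.Resource;
    import javax.servlet.http.HttpServlet;
    import javax.sql.DataSource;

    public class AccountServlet extends HttpServlet
    {
        // injected by the container from the JNDI environment
        @Resource(name="jdbc/accounts")
        private DataSource _datasource;

        @PostConstruct
        public void setup()    { /* called after injection, before service */ }

        @PreDestroy
        public void shutdown() { /* called before removal from service */ }
    }
    ```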

    3.0 Framework Pluggability

    The early review draft of Servlet-3.0 (out soon) will contain several new features to further meet the requirement for decentralized drop in style configuration: 

    1. Additional annotations such as @Servlet, @Filter and @FilterMapping have been defined with sufficient parameters (eg urlPattern and initParams) so as to be able to configure filters and servlets entirely from annotations of classes contained within WEB-INF/lib or WEB-INF/classes
    2. Support for web.xml fragments to be included in the /META-INF directory of jar files within WEB-INF/lib.  These web fragments are combined with the web.xml with well defined merging rules (already mostly defined when arbitrary element ordering was supported in 2.5 web.xml)
    3. Programmatic configuration of Filters and Servlets via new methods on ServletContext.  These methods are only active when called from a Listener class defined either in a web.xml fragment or discovered TLD file.
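    A rough sketch of how these draft features might look in code; the annotation and method names follow the descriptions above, but the exact signatures are illustrative guesses and may well change before the spec is final:

    ```java
    // 1. A servlet configured entirely by annotation, discovered under
    //    WEB-INF/lib or WEB-INF/classes (draft annotation names):
    @Servlet(urlPattern="/chat/*",
             initParams={@InitParam(name="maxIntervalMs", value="3000")})
    public class ChatServlet extends HttpServlet { ... }

    // 2. Programmatic configuration via the new ServletContext methods,
    //    only active when called from a discovered Listener:
    public class FrameworkListener implements ServletContextListener
    {
        public void contextInitialized(ServletContextEvent sce)
        {
            ServletContext context = sce.getServletContext();
            context.addServlet("chat", ChatServlet.class.getName());
            context.addServletMapping("chat", "/chat/*");
        }

        public void contextDestroyed(ServletContextEvent sce) {}
    }
    ```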

    The intent of these features is that a web framework can have all of its default configuration baked into its jar as either annotated servlets, web.xml fragments or code triggered by a TLD-defined listener.   Thus it should be possible to simply drop a web framework jar into WEB-INF/lib and have that framework available without any editing of the web.xml.  It is proposed that these features are turned on by default when a 3.0 web.xml is present or there is no web.xml at all.   Some (all?) of these features (at least discovery of annotated servlets) can be turned off by using the metadata-complete attribute within a web.xml.

    The Automagic Discovery Problems

    I really like these new features.  I specially like web.xml fragments and programmable creation of servlets and filters.  I can also appreciate why those that like annotations would like the ability to completely configure a servlet in annotations. 
    However I have several significant concerns about the security and flexibility aspects of the automatic discovery mechanism implicit in these proposals:

    1. Accidental Deployment: Web applications can contain many third party jars. I have seen several web applications with over 100 jars that have been pulled in by their frameworks and their dependencies. Other than the performance issue of scanning all these jars at startup, there is a real risk of accidental deployment of features, debugging aids, admin UIs or hacker attacks.  The developers/deployers must be aware of all the web features and facilities in all the jars they use!  Maintainers that update a jar within a webapp will have to perform due diligence to ensure they are not adding new web features unintentionally.  Tools will not be able to greatly assist with this process, as analysis of the programmatic configuration is undecidable in general – so you will need to deploy a jar to see what it defines, and even then you don’t know if it may later decide to define new filters and/or servlets.
    2. All or Nothing: If there is just one servlet defined under WEB-INF that a developer does not want automagically deployed, then there is no mechanism to select included and/or excluded jars.  The only options are:
      • to  modify the jar to remove the unwanted configuration
      • to turn off automagic discovery and  define every other filter, servlet and listener in web.xml
      • let the unwanted servlet deploy and try to block it with security constraints.
    3. Parameterization: The jars with auto configured frameworks will contain a good default configuration, most probably set up for development.  There is currently no mechanism available to parameterize the configuration within a jar, other than by overriding it in the main web.xml. This will lead either to configuration in two places, configuration cut-and-pasted out of the jar, or the all-or-nothing options above.
    4. Ordering: The ordering of auto discovered configuration has yet to be defined. Ordering is important as this can affect the order of filters and which configuration may be overridden. If the order (when it is defined) is not the desired order, then there is no mechanism to change the order and the all-or-nothing options above will have to be used.
    5. Disabling: The metadata-complete attribute will disable automatic scanning for annotations in all jars.  It may also disable checking for web.xml fragments (under discussion).  But there is currently no mechanism in 3.0 to disable the scanning for TLD listeners, with their new capability for deploying arbitrary filters and servlets.  Deployment of closed source jars will become an even greater exercise in trust, as only decompilation will reveal what may be deployed.

    The Proposed Solution

    Joe Walker (of DWR fame) proposed  a simple solution to these problems, which I embellished with some additional ideas.   This proposal has also evolved a little as a result of telephone and email discussions with the EG. 
    The main idea is to allow web.xml to have optional <include> elements to guide the automagic discovery of configuration.  Without a web.xml, or with a 3.0 web.xml that does not list any inclusions, the default would be to search all of WEB-INF for annotated servlets and filters, TLD listeners and web.xml fragments as currently proposed.  If however a web.xml contained <include> elements, then the discovery process would be modified as the following examples illustrate:

    <include src="WEB-INF/lib/dwr.jar"/>
    <include src="WEB-INF/lib/cometd.jar"/>
    <include src="WEB-INF/classes"/>

    These includes would scan only the dwr.jar and cometd.jar for annotations, TLD fragments and web.xml fragments; the WEB-INF/classes directory would be scanned for annotated servlets.  No other jars or classes would be scanned unless listed in their own include elsewhere in the web.xml. The ordering between the includes is well defined, and these elements could be placed in the web.xml with other listener/servlet/filter/include declarations before, between or after them.

    <include src="WEB-INF/lib/dwr.jar!META-INF/web.xml"/>

    This include would use the web.xml fragment within the dwr.jar.  Similar includes could be used to scan for differently named web.xml fragments and TLD descriptors either within jars or as files within WEB-INF.

    <include src="WEB-INF/lib/dwr.jar!org/dwr/ReverseAjaxServlet.class"/>

    Scan the specified class within the DWR jar for servlet or filter annotations.  Note that this clause is effectively the same as just a <servlet> or <filter> element, as that would cause the class to be scanned and any annotations for mappings respected.  In essence this proposal just extends the current ability to nominate a servlet or filter for auto configuration to jar files, TLD files and web.xml fragments.

    <include src="WEB-INF/lib/cometd.jar!dojox/cometd/CometdServlet">
      <init-param>
        <param-name>maxIntervalMs</param-name>
        <param-value>3000</param-value>
      </init-param>
    </include>

    This include element would deploy the annotated CometdServlet from the cometd.jar and would apply the init-param as an override to any default init-params specified in annotations.  Similarly, init parameters could be set on web.xml fragments or even for listeners discovered in TLD files.
    An earlier form of this proposal included wild-card support for the partial URIs passed to the include elements.  While this may be useful, it does increase the complexity and I believe the proposal works well enough for most cases without it.  A web application with 100 jars is still only likely to include a few web toolkits.

    The Case Against?

    Due to my declared bias, I am not the best one to make the case against.  But I will paraphrase it as best I can and will link to the blogs of others when they become available.
    The case against the <include> element is that it is a complexity and confusion that can be done without, because the majority of servlet users are either unconcerned about the possibility of accidental deployment or that they are happy to restrict themselves to business as usual with a single main web.xml.

    Rebuttal

    So I’m debating myself now…  I think this is called a straw man.
    I don’t see this proposal as complex, especially now that I have removed wild carding. While the list of include elements may sometimes be long, it will be far more compact, readable and maintainable than copying all the configuration into a single web.xml.
    I do find many servlet users that are very concerned both about security and ease of configuration, and who would at least like the option to explicitly list which components are auto configured. Please tell me if you are one or not!
     

  • Use-Cases for Async Servlets

    Pre-release 0 of Jetty 7.0.0 is now available and includes a preview of the proposed Servlet 3.0 API for asynchronous servlets. This blog looks at 4 cool things you can do with asynchronous servlets and how they can be implemented using the proposed API.

    The new APIs proposed for servlet 3.0 included in 7.0.0pre0 are:

    ServletContext:
    addServlet, addServletMapping, addFilter, addFilterMapping
    Cookie:
    setHttpOnly, isHttpOnly
    ServletRequest:
    getServletContext, getServletResponse, suspend, resume, complete, isSuspended, isResumed, isTimeout, isInitial
    ServletResponse:
    disable, enable, isDisabled
    ServletRequestListener:
    requestSuspended, requestResumed, requestCompleted
    The key methods for the purposes of this blog are the suspend and resume on ServletRequest. These are inspired by the suspend/resume aspects of Jetty Continuations and allow a servlet to return a request to the container to be handled later.  The much-disliked thrown exception of Continuations is gone, replaced with the ability to disable responses if suspend-unaware code needs to be traversed.

    The following 4 use-cases show some of the diverse ways this API can greatly improve your web-1.0 and web-2.0 applications.

    Ajax Cometd

    OK boring you say! Yes I have been ranting and ranting about this use-case for some time: 20,000 simultaneous users etc. etc.  So I won’t say much more about this, other than that porting the cometd servlet from Continuations to the 3.0 API was trivial and resulted in simpler, more easily readable code that will soon be portable between all servlet-3.0 containers.

    Quality of Service Filter

    I have previously blogged about how slow resources (eg JDBC) can cause thread starvation in synchronous web applications.  The Jetty ThrottleFilter was developed to allow only a fixed number of requests to access a given slow resource, and for excess requests to asynchronously wait to proceed.  This both protects the slow resource from over-use and protects the webapp from thread starvation.

    With the QoSFilter in 7.0.0pre0, this is taken a step further, so that requests may be assigned a priority based on extensible criteria (eg  authenticated user or type of user), and higher priority users get preferential access to the protected resource.

    Requests that enter the filter for the first time try to acquire the _passes semaphore, which limits the number of requests that can simultaneously acquire it:

      if (request.isInitial())
      {
        accepted = _passes.tryAcquire(_waitMs, TimeUnit.MILLISECONDS);
    If a request is accepted by the semaphore, it simply allows the request to continue down the filter chain:
        if (accepted)
          chain.doFilter(request, response);
    If a request is not accepted by the semaphore in the _waitMs time, then the request is suspended using the new 3.0 API and the filter returns the request after queueing it on the appropriate priority queue:
        if (!accepted)
        {
          request.suspend();
          int priority = getPriority(request);
          _queue[priority].add(request);
          return;
        }
    Note that the thread continues to execute after the suspend. It has not suspended the thread, only the request. The suspend is done before adding to the _queue to prevent a resume occurring before the suspend! The priority queues are handled by accepted requests as they exit the filter in a finally clause. The priority queues are searched for the next highest priority request that is suspended, and that request is resumed before the _passes semaphore is released:
      finally
      {
        if (accepted)
        {
          for (int p = _queue.length; p-- > 0;)
          {
            ServletRequest req = _queue[p].poll();
            if (req != null)
            {
              req.resume();
              break;
            }
          }
          _passes.release();
        }
      }
    The requests so resumed are run again and re-enter the filter. This time they do not pass the isInitial() test, and thus forcefully acquire the semaphore and proceed to call the filter chain:
      if (request.isInitial())
      {
        ...
      }
      else if (request.isResumed())
      {
        _passes.acquire();
        accepted = true;
      }
      ...

      if (accepted)
        chain.doFilter(request, response);
    The ability to favour the processing of some requests over others is a significant new ability available in servlet-3.0.

    Asynchronous Web Services

    Jesse McConnell has already blogged about his demo showing the CXF asynchronous web services clients working with Jetty 7.0.0pre0 to allow web service calls in parallel, without threads held waiting for the responses:

    The demo shows the thread time taken to access the ebay webservices interface. With a single call, both synchronous and asynchronous techniques took about 860ms to produce the response. But with the synchronous client, a thread was held for the entire time of that call, and could not be used to service other requests. With the asynchronous API, the thread is only held for 2ms to send the ws request, suspend and then process the response. This thread could be used to handle hundreds of other requests while waiting for the ws response!

    It gets even better if multiple ws calls are required. The synchronous approach must do them in series, so the total time and thread hold time blows out to 2700ms for 3 requests. With the asynchronous API, the three ws requests are sent in parallel and the total time is 890ms, almost the same as for a single request, and the thread is only held for 8ms. Again the thread could be used to service many other requests, rather than waiting idly, wasting resources!
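    The arithmetic can be illustrated with a toy model in plain Java (this is not the CXF client; it just contrasts serial and parallel waiting, with a hypothetical 100ms sleep standing in for the web-service call):

    ```java
    // Toy model: three slow calls done in series take ~3x as long as the
    // same three calls done in parallel on a 3-thread pool.
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelCalls
    {
        // stand-in for a ~100ms web-service call
        static int slowCall()
        {
            try { Thread.sleep(100); } catch (InterruptedException e) {}
            return 1;
        }

        public static void main(String[] args)
        {
            long t0 = System.nanoTime();
            int serial = slowCall() + slowCall() + slowCall();  // ~300ms in series
            long serialMs = (System.nanoTime() - t0) / 1_000_000;

            ExecutorService pool = Executors.newFixedThreadPool(3);
            long t1 = System.nanoTime();
            CompletableFuture<Integer> a = CompletableFuture.supplyAsync(ParallelCalls::slowCall, pool);
            CompletableFuture<Integer> b = CompletableFuture.supplyAsync(ParallelCalls::slowCall, pool);
            CompletableFuture<Integer> c = CompletableFuture.supplyAsync(ParallelCalls::slowCall, pool);
            int parallel = a.join() + b.join() + c.join();      // ~100ms in parallel
            long parallelMs = (System.nanoTime() - t1) / 1_000_000;
            pool.shutdown();

            // same result, but the parallel version finishes much sooner
            System.out.println(serial == parallel && serialMs > parallelMs);
        }
    }
    ```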

    Asynchronous Web Proxy

    The WS demo above uses the CXF asynchronous client. Jetty now includes its own asynchronous HTTP client, and this can be used in a very similar way to proxy HTTP requests to another server. The AsyncProxyServlet demonstrates how a response can be generated for a suspended request without a resume.  Servlet-3.0 has a request.complete method that allows asynchronous callbacks to generate the response and complete a suspended request:

    The request to be proxied is copied into a HttpExchange, with a little bit of processing (elided here) for standard proxy stuff:

      HttpExchange exchange = new HttpExchange()
      {
        ...
      };
      exchange.setMethod(request.getMethod());
      exchange.setURI(uri);
      ...
      Enumeration enm = request.getHeaderNames();
      while (enm.hasMoreElements())
      {
        String hdr = (String)enm.nextElement();
        ...
        Enumeration vals = request.getHeaders(hdr);
        while (vals.hasMoreElements())
        {
          String val = (String)vals.nextElement();
          exchange.setRequestHeader(lhdr, val);
        }
      }
      if (hasContent)
        exchange.setRequestContentSource(in);

    Once the HttpExchange has been constructed and configured, it is asynchronously sent and the current request is suspended while the response is awaited:

    _client.send(exchange);
    request.suspend();

    The processing of the proxy response is all done in call back methods supplied to the construction of the HttpExchange (not shown above):

      HttpExchange exchange = new HttpExchange()
      {
        protected void onResponseStatus(Buffer version,
                                        int status,
                                        Buffer reason)
        {
          if (reason != null && reason.length() > 0)
            response.setStatus(status, reason.toString());
          else
            response.setStatus(status);
        }

        protected void onResponseHeader(Buffer name,
                                        Buffer value)
        {
          ...
          String s = name.toString().toLowerCase();
          if (!_DontProxyHeaders.contains(s))
            response.addHeader(name.toString(), value.toString());
        }

        protected void onResponseContent(Buffer content)
        {
          content.writeTo(out);
        }

        protected void onResponseComplete() throws IOException
        {
          request.complete();
        }
      };

    The callbacks allow the real response status to be set, the headers to be copied over (with some more hidden processing), the content to be written and eventually the original response is completed.
    Because the response has already been given a status, headers and content, there is no need to resume the request in order to generate a response.  

    This approach will allow scalable proxies to be implemented as standard java servlets, which will in turn allow some arbitrary fancy business logic to be incorporated into these proxies.

    Conclusion

    Of these use-cases, only Cometd is a real web-2.0. The other use-cases: JDBC access, web services and proxying are all pretty standard parts of many web-1.0 applications. Thus these examples demonstrate how widely asynchronous servlets may be applied to both existing and new web applications to improve their scalability and quality of service.

    If these topics interest you, come to the Jetty BOF at JavaOne.  Tuesday 7:30 pm 6 May 2008 San Francisco, or visit the Webtide booth.
  • Jetty 7.0.0pre0 released!

    The trunk of jetty has undergone some substantial changes over the last couple of weeks.  In addition to jetty 7 now requiring a minimum version of jdk 1.5 and the default inclusion of the early servlet 3.0 spec, there have been a number of structural changes a bit more near and dear to my heart.  It leverages a bit more maven2!

    The Jetty open source project is really broken up into two separate chunks, the jetty project and the jetty contrib project which are sourced out of separate svn repositories at The Codehaus.  A brief list of the build oriented changes that have gone on are:

    • creation of the jetty-parent artifact which serves as the administrative parent for both projects
    • several non-core artifacts were moved to the contrib project
    • all source artifacts have been moved under the modules directory
    • jetty-contrib is still sourced into the main jetty checkout but is not built by default
    • ‘all-modules’ profile added which builds the jetty-contrib modules and the website
    • ‘codehaus-release’ profile added which allows for staging of jetty release artifacts for review
    • -source and -javadoc classifier artifacts are built and released now (for easily pulling up source on the relevant object in your favorite editor)
    • jetty-assembly artifact which bundles the jetty releases into .zip, .tgz and .bz2 formats and places them into the repository
    • jetty releases make use of the maven-release-plugin now!

    Now I am sure that a lot of people couldn’t care less about many of these things, but anyone who uses maven extensively will [hopefully] recognize that the more a given project leverages the conventions of maven, the less time it takes someone already familiar with maven to get up to speed with that project.  Addressing that problem at apache is one of the fundamental reasons maven was created. 

    What it means here is that it is easier for new contributors to jetty to get going, which is a good thing since one of my goals with much of this restructure was to make it easier for people to contribute.  The jetty-contrib project has a lower barrier of entry for commit privileges than the jetty core, but still allows for extreme flexibility and serves in many ways as a sandbox environment.  When it comes time to release jetty, we can easily pick and choose in the contrib parent which modules will be built and released.  In addition, as part of the ongoing physical (and build) decoupling of jetty and jetty-contrib, we make it easier to release portions of contrib against any given official jetty release (from this point forward).

    Anyway, I wanted to comment a bit on what has been ongoing with the jetty7 trunk as of late and let people know about the jetty-7.0.0pre0 release!

    You can get an assembly of jetty-7.0.0pre0 here!

    Oh! and in case you are totally behind on things, the jetty user and dev lists have moved from sourceforge to codehaus as well!  See the new administrative pom for the subscription locations!

  • Glassfish and OSGi … and Jetty?

    In one of those cosmic coincidences, no sooner do I blog about OSGi and J2EE containers than Glassfish announces that they are moving to OSGi.

    As OSGi gains more attention in the enterprise, the future is looking very interesting for Jetty, as we are hands-down the most popular servlet engine used in OSGi containers.

    We already ported Jetty into Glassfish V2, back before Glassfish really offered any pluggability in the web tier, so we’re looking forward to a smoother ride with Glassfish V3 and OSGi modularity.

    Oh, and just to be cheeky and because I like it, here’s the little gif I worked up last year when we demo’d Jetty inside Glassfish at the CommunityOne event, showing Glassfish "hooked" on Jetty:


    🙂

  • Jetty Improves in Netcraft survey (again)

    As with most open source projects, it’s very hard to get a measure of who/how/where/why Jetty is being used and deployed.  Downloads long ago became meaningless with the advent of many bundling and distribution channels.   The Netcraft Web Survey is one good measure, as it scans the internet and identifies which servers sites run. In the results released April 2008, Jetty is identified on 278,501 public servers, which is 80% of the market share of our closest “competitor” tomcat (identified as coyote in the survey). Jetty is currently 12th in the league table of identified servers of all types, and will be top 10 within 6 months if the current trajectory continues.

    I normally don’t like to paint Jetty as directly competitive to Tomcat, and instead focus on the differences between the containers.  However, in this case there is a direct comparison, as the Netcraft numbers show the servers that are directly connected to the internet and using their own HTTP implementations.  Jetty has been gaining on average 9,000 such servers per month for the last 12 months (or 2,500 per month since Jetty 2 first appeared in the survey in December 1998; Jetty-1 didn’t serve an ID). Obviously our features and flexibility have had good traction over the last decade! 
    Of course, this is not the full picture, as many more instances are deployed on private networks, are among the 3 million servers that don’t advertise their ID, or are proxied behind the 83 million Apache servers.  Nonetheless, the number of directly connected servers is an interesting and important measure.  Moreover, given Jetty’s flexibility and embeddability, we would expect a healthy ratio between public and hidden servers.  Jetty’s inclusion in the Eclipse IDE since 3.3 alone probably gives us millions of installs, and we are used in many more projects and applications, and are an option for deployment in the Geronimo, JBoss, JOnAS and Glassfish EE servers. Of course these arguments mostly also apply to Tomcat, which would have many hidden installations of its own and almost certainly a greater proportion of the servers behind apache, but on this one measurable comparison I’ll have a little gloat at our continued gains.
    The Jetty project continues to innovate and integrate, so we hope and expect to continue attracting new users:

    • Our asynchronous features are likely to be adopted by Servlet 3.0, and we will soon have a pre-release of Jetty-7 to show them.
    • Jetty is available for the Google Android mobile phone, creating the possibility of micro servers in your pocket, complete with web-accessible media repositories, cameras and PDA functions.
    • Our OSGi and Spring integration continues to improve.
    • Our implementation of Cometd/Bayeux continues to improve and offers scalable Ajax Comet push for Web 2.0 applications.
    • The hightide bundle provides a long-term, versioned, supported distribution.

    We will be holding a Jetty Birds-of-a-Feather session at JavaOne this May, so we invite all 278,501 of those users to come along to hear about ongoing Jetty development and to tell others about their Jetty experiences.   Webtide will also have a booth at JavaOne, so please seek us out there if you want to hear about our commercial services and offerings.