Author: admin

  • Servlets 3.0

    I’m just fresh out of a session at JavaOne where Sun have revealed their road map for the servlet 3.0 specification. My initial reaction is that it contains both some good and bad items as well as quite a few concerns in between.
    The 10 words or less version is: Annotations, JSF, Ajax, Comet, REST, scripts, security and Misc.

    Java Community Process

    My first concern with the road map is that I, as a member of the JCP servlet expert group, had to go to a JavaOne session to see it for the first time. I hope that this does not signal a return to the bad old days when the JCP was used only as a post-process rubber stamp for Sun’s internal design process. The talk did mention consultation with the expert group, but I would have been less concerned if we had been involved in setting the agenda as well as helping find a solution for it.

    Annotations

    I’m not a huge fan of annotations, but given that they are already in 2.5 the additional annotations shown all looked reasonable. The aim is to make web.xml either redundant or at least only used for deployment overrides of reasonable defaults.

    Java Server Faces

    There was a lot about improved integration with JSF. This may be good or bad, but I don’t really care either way. I continue to not understand why the servlet specification should be closely tied to any one web framework. The servlet spec should be web framework neutral and having a “favored son” will just lead to more special cases – like the need for all webapps to be scanned for JSP and JSF descriptors even if they do not use JSP or JSF!

    Ajax Comet

    The good news is that support for Ajax comet (Ajax Push) is on the agenda. The bad news is that the talk appeared to describe the only use case for asynchronous servlets to be Comet Ajax, and then only the forever response flavor of that. As I have described in my blog, Ajax Comet is only one of many use cases for asynchronous servlets. Any servlet that might block for a JDBC connection pool, a remote web service or any other slow resource could benefit from threadless waiting support. So I hope the agenda revealed was only a subset of the use-cases to be addressed and not the limit of the vision.

    REST

    The integration of REST support will be welcomed. Well designed REST support will allow developers to focus on content rather than HTTP protocol. The annotations shown looked like a good step in the right direction, with my only concern being the use of streams to define content could restrict servers from efficient non-blocking handling of content. Hopefully other ways to provide content can be included.

    Scripts

    An exciting development is the proposal to allow languages other than Java to run on the JVM and produce content efficiently through servlets. One size does not fit all and one language does not suit all. I’m looking forward to this bit.

    Security and Misc

    Web security has always been lumped in with Misc in the servlet spec and thus is half baked. In the road map it got its own heading as it deserves and hopefully this blog will be the last time you see security as Misc.
    Most of the other sore points in the servlet spec (e.g. welcome files) were listed as needing attention in 3.0.

    Summary

    It is important that we keep servlet spec relevant to the way developers produce content. If servlets do not grow to support most of the issues mentioned above, then developers will continue to find ways to bypass standard servlets. Servlets are a key foundation to the whole JEE stack and if they are bypassed, so much of the work done to achieve interoperability and standards will be lost as different foundations are used by different web innovations.
    So while progress on a 3.0 servlet spec is well overdue, I welcome Sun’s statement of intent and look forward to the months ahead and hope that some good solutions may quickly be agreed and specified.

  • Marc Fleury leaves JBoss/Redhat

    So Marc Fleury has left JBoss/Redhat and thus brings to an end a chapter of the story of professional open source.

    One would have to say it has been a very successful chapter and has shown that OS can produce high quality enterprise infrastructure. As Marc leaves the project (or the professional part of it at least), we should note that the project owes much of its great success to the energy, enthusiasm, advocacy and leadership of Marc. Without these, JBoss would not have penetrated the enterprise as it has and nor would there have been the commercial activity around the project that sustains many developers and spin off projects.

    We should applaud Marc that he at least partially delivered on his promise to distribute the rewards of success to the contributors to the project.

    But the problem with this OS fairy tale is that the distribution of reward was simply not fair or balanced. While Marc deserved to be well rewarded for his part in the success of the project and the business it sustained, I simply cannot see why his reward was several orders of magnitude larger than that given to the people that actually wrote the software and implemented the business he inspired. Nor can I see why members of his family who contributed only moderately also received payouts far far far in excess of key developers who committed years of effort.

    As well as building a project and a business, Marc built a personality cult based on ego worship. While this in itself was a great marketing machine (any publicity etc.), it was also used as a mechanism to control dissent within the project and distract from the fact that the egalitarian rhetoric did not match up to the reality of firm control over commercial interests and trademarcs. In the formative years of JBoss group, few dared to question the insanity of “we’ll pay you to fix any bugs and keep all of the support revenue if there are none – please be available 24×7”. Too many let the “honour” of commit status, access to the “inner circle” and payment by the hour, substitute for real equity or profit share.

    Those that did question were soon made the target of Marc’s bile and bad-mouthing (more Hani than Hani) and received zero or paltry allocations of the developer “frequent flyer” points that would one day turn into Red Hat stock.

    Continued dissent would result in ejection or departure from the JBoss group cult, followed by excommunication from the project, deletion from the contributors lists and being moderated out of the forums and mailing lists. Marc was happy for many to tend the commons, but there was only one shepherd whose flock was allowed to graze.

    Open source is about standing on the shoulders of giants so that greater heights can be reached. But it is also about showing a bit of respect and fair treatment to those shoulders. Marc Fleury didn’t just stand on the shoulders, he trampled and kicked heads as well.

    The OS story of JBoss is about how the many created something for the benefit of all. Unfortunately the business story of JBoss is about how the many were exploited for the benefit of one. It didn’t have to be that way.

  • Jetty 6.1.0 Release

    The Jetty 6.1.0 release is now available via http://jetty.mortbay.org. It represents both a stabilization of the features already released in 6.0.x, plus a raft of new features. For a full description of
    Jetty 6.1, see http://docs.codehaus.org/display/JETTY/.

    Stabilization

    The core protocol engine of Jetty 6 has been further optimized and refined in 6.1, plus it has been extended to support asynchronous SSL and a future HTTP client API. Compliance testing and exposure on live deployments has allowed many minor and several major issues to be exposed and fixed.

    Deployers

    Web Application Static Deployer

    The WebApp Deployer is for static deployment of standard WAR files and webapps with little or no Jetty-specific customization.

    See

    http://docs.codehaus.org/display/JETTY/WebAppDeployer

    Context Hot Deployer

    The ContextDeployer may be used to (hot)deploy an arbitrary Context or Web Application with Jetty specific configuration. The ContextDeployer will scan the configuration directory for xml descriptors that define contexts. Any context descriptors found are used to configure and deploy a context and then the descriptor is monitored for changes to redeploy or undeploy the context.
    Arbitrary contexts may be configured and deployed, including simple static contexts and standard web applications. The web.xml of a war file may be extended or replaced and additional configuration applied, thus allowing configuration to be customized during deployment without modification to the WAR file itself.
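    For illustration, a context descriptor for the ContextDeployer might look something like the sketch below. The context path and war location are invented for the example; the Configure/Set elements are Jetty's standard XML configuration syntax.

```
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
    "http://jetty.mortbay.org/configure.dtd">

<!-- Deploys /webapps/mywebapp.war at /mywebapp; changes to this descriptor
     are detected by the ContextDeployer and trigger a redeploy. -->
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <Set name="contextPath">/mywebapp</Set>
  <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps/mywebapp.war</Set>
</Configure>
```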

    See:
    http://docs.codehaus.org/display/JETTY/ContextDeployer

    Cometd

    Jetty 6.1 includes a servlet implementation of the Bayeux protocol of http://cometd.com from the Dojo Foundation.

    Cometd is a message bus for Ajax web applications that allows multi channel messaging between client and server – and more importantly – between server and client. The paradigm is publish/subscribe to named channels.

    The Jetty implementation uses continuations and will thus scale to many connected clients. It also supports declarative filters for security and validation of channel content.
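    The publish/subscribe paradigm described above can be sketched in a few lines of Java. This is a toy illustration of the channel model only, not the Bayeux wire protocol: clients subscribe to named channels, and any message published to a channel is delivered to all its subscribers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy publish/subscribe bus: named channels, every subscriber sees every message.
public class ChannelBus
{
    private final Map<String, List<List<String>>> channels =
        new HashMap<String, List<List<String>>>();

    /** Subscribe to a named channel; the returned list is the subscriber's inbox. */
    public synchronized List<String> subscribe(String channel)
    {
        List<String> inbox = new ArrayList<String>();
        List<List<String>> subs = channels.get(channel);
        if (subs == null)
        {
            subs = new ArrayList<List<String>>();
            channels.put(channel, subs);
        }
        subs.add(inbox);
        return inbox;
    }

    /** Publish delivers the message to every subscriber of the channel. */
    public synchronized void publish(String channel, String message)
    {
        List<List<String>> subs = channels.get(channel);
        if (subs != null)
            for (List<String> inbox : subs)
                inbox.add(message);
    }
}
```

    In the real Cometd implementation the interesting part is server-to-client delivery, where a continuation is suspended until a publish arrives on a subscribed channel.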

    See
    http://docs.codehaus.org/display/JETTY/Cometd

    Servlet Tester

    The Jetty infrastructure can be used to test servlets with an embedded server using the ServletTester and HttpTester classes.
    The embedded server uses a special local connector, and thus avoids the need to open sockets in order to test the servlet, making it ideal for unit testing:

    ServletTester tester = new ServletTester();
    tester.setContextPath("/context");
    tester.addServlet(HelloServlet.class, "/hello/*");
    tester.start();

    HttpTester request = new HttpTester();
    HttpTester response = new HttpTester();
    request.setMethod("GET");
    request.setHeader("Host", "tester");
    request.setURI("/context/hello/info");
    request.setVersion("HTTP/1.0");
    response.parse(tester.getResponses(request.generate()));
    assertEquals(200, response.getStatus());
    assertEquals("<h1>Hello Servlet</h1>", response.getContent());
    

    See http://docs.codehaus.org/display/JETTY/ServletTester.

    New Connectors

    SSL Engine connector

    A new SSL connector has been added to Jetty 6.1 that is based on the javax.net.ssl.SSLEngine class. This allows the asynchronous mechanisms of Jetty, specifically Continuations, to be applied to SSL connections.

    See http://docs.codehaus.org/display/JETTY/Ssl+Connector+Guide

    AJP13 connector

    Jetty 6.1 now includes an implementation of the Apache AJP13 protocol for forwarding requests from an Apache httpd to Jetty. This connector is a significant rewrite of the Jetty 5 connector so that more advanced IO features can be used.

    See http://docs.codehaus.org/display/JETTY/Configuring+AJP13+Using+mod_jk

    Grizzly connector

    A Grizzly HTTP connector has been added which uses the Grizzly NIO library from Sun’s Glassfish JEE server. This connector provides an integration path into Glassfish server and an alternative HTTP connector.

    See http://docs.codehaus.org/display/JETTY/Grizzly+Connector

    Integrations

    Apache Geronimo

    Jetty 6.1 is integrated with Apache Geronimo. The flexibility of Jetty allows for a very close integration with the core geronimo infrastructure.

    See http://docs.codehaus.org/display/JETTY/Geronimo

    WADI Clustering

    WADI is a clustering system that is integrated with both Jetty 6 standalone and as Jetty 6 within Geronimo.

    See http://wadi.codehaus.org/

    JBoss

    Jetty 6 has been integrated with JBoss 4 and is an alternative webtier that will provide Servlet 2.5 and asynchronous servlets within this popular J2EE server.

    See http://docs.codehaus.org/display/JETTY/JBoss

    GWT

    The Google Web Toolkit needs some small extensions in order to be used with Jetty Continuations. This has been done to great effect on the aplayr.com game sites, with the gpokr.com and kdice.com games both being run from a single Jetty server capable of handling many more simultaneous users than the previous dual apache+tomcat configuration.

    See http://docs.codehaus.org/display/JETTY/GWT

    Windows service

    A Windows service wrapper module is included in Jetty 6.1.

    See http://docs.codehaus.org/display/JETTY/Win32Wrapper

    Works in Progress

    Annotations, Resource Injection and LifeCycle Callbacks

    The release of Jetty 6.1 includes support for web.xml based resource injections and lifecycle callbacks as defined in the 2.5 Servlet Specification.

    Resource injection removes the need for certain servlet-related classes to perform explicit JNDI lookups to obtain resources. Instead, a resource from JNDI is automatically injected into designated fields and/or methods of the class instance at runtime by Jetty.

    Lifecycle callbacks allow certain servlet-related class instances
    to nominate a method to be called just before the instance goes
    into service and another to be called just before the instance
    goes out of service.

    This release supports these features when defined in the web.xml
    descriptor. For example:

    <resource-ref>
      <res-ref-name>jdbc/mydatasource</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
      <injection-target>
        <injection-target-class>com.acme.JNDITest</injection-target-class>
        <injection-target-name>myDatasource</injection-target-name>
      </injection-target>
    </resource-ref>

    <post-construct>
      <lifecycle-callback-class>com.acme.JNDITest</lifecycle-callback-class>
      <lifecycle-callback-method>postConstruct</lifecycle-callback-method>
    </post-construct>

    <pre-destroy>
      <lifecycle-callback-class>com.acme.JNDITest</lifecycle-callback-class>
      <lifecycle-callback-method>preDestroy</lifecycle-callback-method>
    </pre-destroy>
    


    Resource injection is supported for env-entry, resource-ref and resource-env-ref elements. Both post-construct and pre-destroy callbacks are also supported. The 2.5 Servlet Specification also allows for these features to be defined directly in the source code of servlet-related classes via Java annotations. A forthcoming release of Jetty 6.1.x will implement the @Resource, @Resources, @PostConstruct, @PreDestroy, @DeclareRoles and @RunAs annotations.
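    For reference, a sketch of what the com.acme.JNDITest class targeted by the descriptor above might look like. The field and method names are dictated by the injection-target and lifecycle-callback elements; the class body itself (the inService flag) is invented for illustration.

```java
import javax.sql.DataSource;

// The descriptor tells the container to inject jdbc/mydatasource into the
// myDatasource field, then call postConstruct() before the instance is used.
public class JNDITest
{
    private DataSource myDatasource; // injected by the container, no JNDI lookup needed

    boolean inService; // illustrative flag, not required by the spec

    public void postConstruct()
    {
        // runs after injection, just before the instance goes into service
        inService = true;
    }

    public void preDestroy()
    {
        // runs just before the instance goes out of service
        inService = false;
    }
}
```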

    Client API

    The protocol engines of Jetty have been refactored so that as well as request parsing and response generation, they now support request generation and response parsing. This will allow an HTTP client API to be developed within Jetty to efficiently implement non-blocking proxies.

    Jetty Hightide

    The extensive features of Jetty 6.1 are further enhanced and extended in the Jetty Hightide release from Webtide.
    Hightide is a versioned and supported distribution of Jetty providing:

    • Jetty 6.1 scalable HTTP server, servlet container and Ajax support, patched with performance enhancements available with a targeted distribution.
    • JettyPlus JNDI framework and JEE services.
    • Access to Webtide’s premium support services.
    • Atomikos XA transaction manager.
    • AMQ with AJAX support configured and integrated into the XA transaction manager.

    See http://www.webtide.com/products.jsp

  • Unit Test Servlets with Jetty

    This rainy weekend, I was inspired by a question from Hani about testing servlets. As a result I’ve added a module to Jetty to simply test servlets with an embedded server configured by the ServletTester class. The HTTP requests and responses for testing can be generated and parsed with the HttpTester class.
    An example of a test harness that uses these classes is ServletTest.

    Setting up the tester

    The ServletTester can configure a single context. Servlets and Filters may be added to the context by class name or class instance. Context attributes, a resource base or a classloader may optionally be set. eg.

    ServletTester tester=new ServletTester();
    tester.setContextPath("/context");
    tester.addFilter("com.acme.TestFilter", "/*", Handler.DEFAULT);
    tester.addServlet(TestServlet.class, "/servlet/*");
    tester.addServlet(HelloServlet.class, "/hello");
    tester.addServlet("org.mortbay.jetty.servlet.DefaultServlet", "/");
    tester.start();


    Raw HTTP requests and responses

    The ServletTester takes a string containing requests and
    returns a string containing the corresponding responses (eventually byte arrays will be supported for testing character encoding and binary content). More than one request can be pipelined and multiple responses will be returned if persistent connection conditions are met. eg.

    String requests =
        "GET /context/servlet/info?query=foo HTTP/1.1\r\n"+
        "Host: tester\r\n"+
        "\r\n"+
        "GET /context/hello HTTP/1.1\r\n"+
        "Host: tester\r\n"+
        "\r\n";
    String responses = tester.getResponses(requests);
    String expected =
        "HTTP/1.1 200 OK\r\n"+
        "Content-Type: text/html; charset=iso-8859-1\r\n"+
        "Content-Length: 21\r\n"+
        "\r\n"+
        "<h1>Test Servlet</h1>" +
        "HTTP/1.1 200 OK\r\n"+
        "Content-Type: text/html; charset=iso-8859-1\r\n"+
        "Content-Length: 22\r\n"+
        "\r\n"+
        "<h1>Hello Servlet</h1>";
    assertEquals(expected, responses);


    Generated Requests, Parsed Responses

    Dealing with raw HTTP can be a bit verbose and makes it difficult to test non-protocol aspects. The HttpTester class allows for simple generation of requests and parsing of responses (it can also parse requests and generate responses). eg.

    HttpTester request = new HttpTester();
    HttpTester response = new HttpTester();
    request.setMethod("GET");
    request.setHeader("Host","tester");
    request.setURI("/context/hello/info");
    request.setVersion("HTTP/1.0");
    response.parse(tester.getResponses(request.generate()));
    assertTrue(response.getMethod()==null);
    assertEquals(200,response.getStatus());
    assertEquals("<h1>Hello Servlet</h1>",response.getContent());



    Once setup, the HttpTester instances may be reused and only the parts that change need to be
    set for subsequent requests. eg.

    request.setURI("/context");
    response.parse(tester.getResponses(request.generate()));
    assertEquals(302,response.getStatus());
    assertEquals("http://tester/context/",response.getHeader("location"));


    Requirements

    This will be in the 6.1.0 release and is currently in 6.1-SNAPSHOT. To use these classes you need the jars for servlet-api, jetty-util, jetty and jetty-servlet-tester.

  • Gaming with GWT and Jetty continuations

    The Google Web Toolkit allows Ajax applications to be developed in Java code using the traditional UI widget paradigm. The toolkit includes support for RPC, but not for Comet-style Ajax push.
    Online Gaming is an excellent use-case for Ajax and I’ve been working with Ryan Dewsbury of www.aplayr.com to convert his GWT powered games of risk and poker to use Jetty continuations.
    First… play a lot of poker!
    Then the main challenge is to convert the getEvents() RPC call that the aplayr games make for their Comet Ajax push aspects. The getEvents() call is a long poll, in that it waits until there is an event to be returned before generating a response.
    This had been implemented with a wait/notify, which works fine but has scalability issues, as each player waiting for an event has an outstanding getEvents() request that is consuming a thread and associated resources. It was not possible to have more simultaneous players than there were threads in the thread pool. This is exactly the problem that Jetty continuations have been designed to solve.
    Unfortunately GWT has not made it easy to use continuations within their RPC mechanism. Firstly they catch Throwable, so the Jetty RetryRequest exception is caught. Secondly they have made most of the methods on the GWT servlet final, so you cannot fix this by extension.
    Luckily GWT is open source under the Apache 2.0 license, so it was possible to do a cut/paste/edit job to fix this. The OpenRemoteServiceServlet recently added to Jetty is a version of GWT’s RemoteServiceServlet without the final methods and with a protected method for extending exception handling. We are lobbying Google to make this part of the next release.
    Once the GWT remote service servlet has been opened up, it is trivial to extend it to support Continuations, which has been done in AsyncRemoteServiceServlet.
    Because GWT RPC uses POSTs, the body of the request is consumed when the request is first handled and is not available when the request is retried. To handle this, the parsed contents of the POST are stored as a request attribute so they are available to retried requests without reparsing:

    protected String readPayloadAsUtf8(HttpServletRequest request)
        throws IOException, ServletException
    {
        String payload = (String)request.getAttribute(PAYLOAD);
        if (payload == null)
        {
            payload = super.readPayloadAsUtf8(request);
            request.setAttribute(PAYLOAD, payload);
        }
        return payload;
    }

     
    The exception handling is also extended to allow the continuation RetryRequest exception to propagate to the container. This has been done without any hard dependencies on Jetty code:

    protected void handleException(String responsePayload, Throwable caught)
    {
        throwIfRetryRequest(caught);
        super.handleException(responsePayload, caught);
    }

    protected void throwIfRetryRequest(Throwable caught)
    {
        if (caught instanceof RuntimeException &&
            "org.mortbay.jetty.RetryRequest"
                .equals(caught.getClass().getName()))
        {
            throw (RuntimeException)caught;
        }
    }

     
    With these extensions, the AsyncRemoteServiceServlet allows any GWT RPC method to use continuations to suspend/resume processing. For example, below is the Table class used by gpokr, where a continuation is used to wait for an event to be available for a player.

    class Table
    {
        Map players = new HashMap();   // playerId -> Player
        Set waiters = new HashSet();   // continuations waiting for events

        public Events getEvents(Context c)
        {
            Player p = getPlayer(c);

            // if the player has no events, suspend waiting for events;
            // suspend throws RetryRequest and the request is retried
            // when resumed or after the 30s timeout
            if (p.events.size() == 0)
            {
                synchronized (this)
                {
                    Continuation continuation =
                        ContinuationSupport.getContinuation(c.getRequest(), this);
                    waiters.add(continuation);
                    continuation.suspend(30000);
                }
            }
            return p.events;
        }

        protected void addEvent(Event e)
        {
            // give the event to all players
            Iterator it = players.values().iterator();
            while (it.hasNext())
            {
                Player p = (Player)it.next();
                p.events.add(e);
            }

            // resume continuations waiting for events
            synchronized (this)
            {
                Iterator iter = waiters.iterator();
                while (iter.hasNext())
                    ((Continuation)iter.next()).resume();
                waiters.clear();
            }
        }
    }
    With the same style of event loop, the kdice game has been able to run over 200 players and many spectators with only 100 threads!
    And most importantly, you must remember never to go in with just pocket jacks and a full table. What was I thinking?

  • XmlHttpRequest BAD – Messaging GOOD

    Coach Wei’s blog talks about the need for a web-2.0 messaging layer. See also the follow-on discussion on TheServerSide.
    It is a key point that Coach raises and I am working with Coach within the Open Ajax Alliance to explore the possibility of a more standard messaging base for web-2.0.
    It will be a difficult problem, as we have been taught that the path to Ajax is XmlHttpRequest and most Ajax communication frameworks are based on that API.
    But XmlHttpRequest allocates a scarce resource (an HTTP connection) for the duration of a request. Browsers only allow two connections to a given host. The moment you have multiple windows, tabs, frameworks or components using XmlHttpRequest, there is going to be contention for connections, resulting in blocked and inefficient communications.
    The pity is that 2 persistent HTTP connections are sufficient for efficient two-way communication, but only if the multiple windows, tabs, frameworks and components can share connections and multiplex and/or batch messaging over them.
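    To make the sharing concrete, here is a hypothetical sketch (all names invented) of the kind of batching layer such an API implies: components enqueue messages tagged with a channel, and whichever transport owns one of the two connections drains the whole queue into a single request.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: many components share one outbound HTTP connection by
// queuing messages; the transport drains the queue into one batched request.
public class MessageBatcher
{
    private final List<String> queue = new ArrayList<String>();

    /** Any component may enqueue a message for any channel at any time. */
    public synchronized void send(String channel, String message)
    {
        queue.add(channel + ":" + message);
    }

    /** Called by the transport when a connection is free:
        one request carries all pending messages. */
    public synchronized List<String> drainBatch()
    {
        List<String> batch = new ArrayList<String>(queue);
        queue.clear();
        return batch;
    }
}
```

    Two components each making their own XmlHttpRequest would consume both connections; with batching, their messages travel together and one connection stays free for server-to-client delivery.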
    Increasing the limit is not an option as web-2.0 comms is already stressing a server infrastructure designed for web-1.0 and more round trips to the server is not what we need to improve interactivity.
    I hope the Open Ajax Alliance hub will be able to provide some APIs to facilitate sharing connections between frameworks and components, but browser support will be needed to share between windows and tabs.
    For the actual protocol to be used, there are many, many candidates: REST-ful, JMS, Cometd from Dojo, BEEP, SIP, Jabber; the list is endless. I have developed or contributed to implementations of several of these (BEEP, ActiveMQ, DWR, and Cometd) and all have their benefits and all can be well transported over HTTP.
    However, I do not think the solution is in picking a winner among protocols. I think the solution is recognizing that the paradigm for Ajax communication is messaging and not raw HTTP requests. A common messaging API in the browser would allow multiple protocols to be tried and tested. New protocols can be developed and eventual browser/infrastructure support may even take us away from HTTP. The Open Ajax Alliance can certainly provide this common API and then we can all innovate above or below that API.
    In the meantime, I strongly suggest looking towards ActiveMQ Ajax or Dojo’s Cometd for your Ajax-2.0 communications. Messaging is where it is at!

  • Jetty Continuations for Quality of Service

    Jetty Continuations can be used to prevent resource starvation
    caused by JDBC Connection pools or similar slow and/or restricted resources.
    Jetty Continuations have mostly been discussed in the context of web 2.0, Ajax Push and Comet. However there are many other use cases and the ThrottlingFilter,
    written by Jetty committer Tim Vernum, is an excellent example of how Continuations can be used to provide a consistent quality of service within a web application.
    JDBC Connection pools are frequently used to limit the
    number of simultaneous requests to a database. The theory
    being that databases will give better overall throughput
    with fewer simultaneous requests, even if requests have
    to wait for a connection from the pool.
    This common approach can make your web application very unstable. Threads queuing on the JDBC pool can quickly exhaust the entire thread pool and deny service to the whole webapp, even if only a few requests actually need the database.
    Consider a webapp that receives 1000 requests per second, 95% of which are served without a database in 20ms. The other 5% require a database and take 100ms each. Crunching the numbers gives:

    25 simultaneous requests
    5 simultaneous JDBC connections.
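    The arithmetic follows Little’s law (average concurrency = arrival rate × service time). A quick check of the figures above:

```java
// Little's law: average concurrency = arrival rate (req/s) x service time (s).
public class LoadEstimate
{
    static double concurrent(double requestsPerSecond, double serviceTimeSeconds)
    {
        return requestsPerSecond * serviceTimeSeconds;
    }

    public static void main(String[] args)
    {
        long nonDb = Math.round(concurrent(0.95 * 1000, 0.020)); // 19 requests served without the DB
        long db    = Math.round(concurrent(0.05 * 1000, 0.100)); // 5 requests holding a JDBC connection
        System.out.println((nonDb + db) + " simultaneous requests, " + db + " JDBC connections");
    }
}
```

    That gives 19 + 5 = 24 simultaneous requests, which rounds up to the ~25 quoted above, and 5 JDBC connections.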

    But as we are conservative, we will allocate 200 threads to the servlet container and 20 connections to the JDBC pool. The webapp runs well and typically consumes only 25% of the resources allocated.
    But what happens if the database runs slow for a while (because the administrator is running a report, or an attacker has submitted an overly complex query, etc.)?
    If the DB requests take 500ms instead of 100ms, then 10 requests a second will accumulate behind those already blocked on the JDBC connection pool. If the condition lasts for 20s, then the entire thread pool will be exhausted. No requests will be served for any resource; even those that do not use the database will be starved of threads.
    If the database actually fails (or the DBA’s report takes a table lock), then it will only take 4 seconds before the entire thread pool is blocked on the JDBC pool.
    But starvation can occur even without any failures, for example if the URLs that use the database become more popular than normal (due to marketing, slashdot, biorhythms, etc). If 25% of requests hit the database instead of 5%, then 50 requests per second will be added to the queue for JDBC connections and starvation will occur in only 4 seconds of increased load.
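    The starvation timings above are easy to reproduce: the queue for the pool grows at (arrival rate − pool throughput), and the thread pool drains at that rate. A quick check, assuming the 200 threads and 20 connections from the example:

```java
// Time until the thread pool is exhausted by requests queued for the JDBC pool.
public class StarvationEstimate
{
    // arrivals: DB requests/sec; poolSize: JDBC connections; serviceTime: seconds per DB request
    static double secondsToExhaust(int threads, double arrivals, int poolSize, double serviceTime)
    {
        double throughput = poolSize / serviceTime;  // max DB requests/sec the pool can serve
        double growth = arrivals - throughput;       // threads newly blocked per second
        return growth <= 0 ? Double.POSITIVE_INFINITY : threads / growth;
    }

    public static void main(String[] args)
    {
        // DB slows to 500ms: 50 req/s arrive, the pool serves 40/s, the queue grows at 10/s -> 20s
        System.out.println(StarvationEstimate.secondsToExhaust(200, 50, 20, 0.5));
        // DB popularity jumps to 25%: 250 req/s arrive, the pool serves 200/s -> 4s
        System.out.println(StarvationEstimate.secondsToExhaust(200, 250, 20, 0.1));
    }
}
```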
    The solution is to use the thread-less waiting capability of continuations to allow requests to wait for JDBC connections without consuming a thread from the thread pool.
    Thus an under performing DB will only slow the parts of the
    webapp that use that database. The rest of the webapp will function normally even if the DB fails.
    The ThrottlingFilter can be mapped to arbitrary URLs in web.xml and will limit the number of threads past the filter as well as the size and timeout of the queue. If you know there are only 20 connections in the JDBC pool, then the ThrottlingFilter can be configured to only allow 20 requests past. Any requests in excess are suspended without a thread allocated and resumed when a connection becomes available as a previous request exits the filter.
    The filter may also be used as a base class to provide more sophisticated policy than a simple count.
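    The bookkeeping involved can be pictured with a plain java.util.concurrent.Semaphore. Note this sketch (a ThrottleGate class invented for illustration) blocks the waiting thread, which is precisely the cost that the continuation-based ThrottlingFilter avoids; the permit counting and timeout policy are the same idea.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Limit concurrent entry to a resource with a bounded wait, as the
// ThrottlingFilter does for requests that need a JDBC connection.
public class ThrottleGate
{
    private final Semaphore permits;
    private final long timeoutMs;

    public ThrottleGate(int maxConcurrent, long timeoutMs)
    {
        this.permits = new Semaphore(maxConcurrent, true); // fair: waiters queue FIFO
        this.timeoutMs = timeoutMs;
    }

    /** Try to enter; false means the wait timed out and the request should be rejected. */
    public boolean enter()
    {
        try
        {
            return permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    /** Release the permit as the request exits the guarded section. */
    public void exit()
    {
        permits.release();
    }
}
```

    Configured with 20 permits to match a 20-connection JDBC pool, at most 20 requests are ever inside the guarded section; the continuation version suspends the excess requests instead of parking their threads.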

  • InfoQ on Jetty 6

    InfoQ writes about the Jetty 6 release:

    The Jetty team released version 6 a couple of weeks ago and has also just released version 6.0.1 of the open-source web container. The Jetty 6 code base is a complete rewrite adding such features as Continuations, NIO support, and 2.5 Servlet spec compliance. InfoQ caught up with Jetty lead Greg Wilkins to find out more details on the version 6 product.

    Read the article….

  • Jetty Release 6.0.1

    Revision 6.0.1 of Jetty is now available at http://jetty.mortbay.org and from maven repositories.

    This minor version update includes:

    • fixed isUserInRole checking for JAASUserRealm
    • fixed ClassCastException in JAASUserRealm.setRoleClassNames(String[])
    • Improved charset handling in URLs
    • Factored ErrorPageErrorHandler out of WebAppContext
    • Refactored ErrorHandler to avoid statics
    • JETTY-112 ContextHandler checks if started
    • JETTY-114 removed utf8 characters from code
    • JETTY-115 Fixed addHeader
    • JETTY-121 init not called on externally constructed servlets
    • minor optimization of bytes to UTF8 strings
    • JETTY-113 support optional query char encoding on requests
    • JETTY-124 always initialize filter caches
    • JETTY-120 SelectChannelConnector closes all connections on stop