Category: General

  • Introducing Webtide

    Mort Bay has been a successful small open source services company for over 10 years. But the business of open source is changing and there is more demand for support and services from larger organizations. Thus Mort Bay has partnered with
    Exist to form Webtide and joined a family of open source companies based around Simula Labs that includes LogicBlaze (ActiveMQ/ServiceMix) and Mergere (Maven).

    We will continue to provide the same training, development and support services, but with Webtide we will be able to scale our offerings to a higher level of professionalism.

    If you are at OSCon in Portland, then please come to our briefing and get-together at the Red Lion this Friday evening.

    Webtide is a new services company that provides training, development and support for web 2.0 applications, with particular focus on the server side of scalable Ajax Comet solutions. Webtide can train or mentor your engineers in these emerging technologies, provide open outsourced development and give you the assurance of 24×7 support.

    Webtide is a joint venture between Mort Bay Consulting, the creator of the highly regarded Jetty open source Java web container, and Exist, a premier software development company supporting open source technology. Webtide will be formally launched during the O’Reilly Open Source Convention in Portland, Oregon on July 24-28, 2006.

    To learn more about Webtide and its products please visit the Webtide booth (Booth 723) at the OSCon Exhibit Area. There will also be a Webtide Product Briefing on July 27, 2006, 6:15-7:15pm at the Broadway Room, 6th Floor of the Red Lion Hotel, 1021 NE Grand Ave., Portland, OR. The first 30 registered participants will get a chance to win an HP iPAQ running Jetty. For more info, please email training@webtide.com or visit http://www.webtide.com.

  • Jetty for AJAX Released During Webtide's Launch

    Greg Wilkins and Jan Bartel, the lead developers of Jetty, recently launched Webtide at the O’Reilly Open Source Convention 2006 held at the Oregon Convention Center. Webtide, a company specializing in Web 2.0 and AJAX technologies, is a partnership between Mortbay – the creator of Jetty, the highly regarded open source Java web container – and Exist – a premier software development company supporting open source technology.

    The launch was followed by an introduction to Webtide’s latest product called Hightide, a versioned distribution of Jetty that is optimized for Ajax. The event was held at the Red Lion Hotel in Portland, Oregon, where thumb drives containing Hightide were distributed to guests. One lucky guest won an HP iPAQ running Jetty.

  • Webtide Gears Up for OSCon

    Webtide, a global expert in implementing Ajax Technology, is gearing up for the O’Reilly Open Source Convention (OSCon) in Portland, Oregon from July 24 to 28, 2006.

    At OSCon, Webtide will present its latest product, Hightide. Hightide is a versioned distribution of Jetty, which provides a comprehensive toolset for the development of highly scalable, state-of-the-art web applications. It is optimized for Ajax, and ships with DWR and ActiveMQ Ajax libraries to help you get started quickly. Implementations of J2EE services, such as JNDI, JTA, JMS, JDBC, and web services are pre-integrated.

    To learn more about Webtide and its products, please visit the Webtide booth (Booth 723) at the OSCon Exhibit Area. A Hightide Product Briefing is scheduled on July 27, 2006, from 6:15 to 7:15pm at the Broadway Room, 6th Floor, Red Lion Hotel, 1021 NE Grand Ave., Portland, Oregon. The first 30 registered participants will get a chance to win an HP iPAQ.

    Webtide will also conduct an Ajax Training at its LA Training Room from August 2 to 4, 2006. Greg Wilkins, the Lead Developer of Jetty and the CEO of Webtide, will lead the training team. For more information, please email training@webtide.com or visit www.webtide.com .

    Webtide is a joint venture between Exist and Mortbay Consulting. Exist is a premier software development company supporting open source technology, while Mortbay is the creator of Jetty, which is the highly regarded open source Java web container.

  • An Asynchronous Servlet API?

    Now that the 2.5 servlet specification is final, we must start thinking
    about the next revision and what is needed. I believe that the most
    important change needed is that the Servlet API must be evolved to
    support an asynchronous model.
    I see 5 main use-cases for asynchronous servlets:

    1. Non-blocking input – The ability to receive data from a
      client without blocking if the data is slow arriving. This
      is actually not a significant driver for an asynchronous API, as
      most requests arrive in a single packet, or handling can be delayed
      until the arrival of the first content packet. Moreover, I would
      like to see the servlet API evolve so that applications do
      not have to do any IO.
    2. Non-blocking output – The ability to send data to a
      client without blocking if the client or network is slow.
      While the need for asynchronous output is much greater than
      asynchronous input, I also believe this is not a
      significant driver. Large buffers can allow the
      container to flush most responses asynchronously
      and for larger responses it would still be better to
      avoid the application code handling IO.
    3. Delay request handling – The comet style of
      Ajax web application can require that a request handling
      is delayed until either a timeout or an event has occurred.
      Delaying request handling is also useful if a remote/slow
      resource must be obtained before servicing the request
      or if access to a specific resource needs to be throttled to prevent
      too many simultaneous accesses. Currently the only
      compliant option to support this is to wait within
      the servlet, consuming a thread and other resources.
    4. Delay response close – The comet style of
      Ajax web application can require that a response is held
      open to allow additional data to be sent when asynchronous
      events occur. Currently the only compliant option to support
      this is to wait within the servlet, consuming a thread and
      other resources.
    5. 100 Continue Handling – A client may request
      a handshake from the server before sending a request body.
      If this is sent automatically by the container, it prevents
      this mechanism being meaningfully used. If the application
      is able to decide if a 100-Continue is to be sent, then
      an asynchronous API would prevent a thread being consumed
      during the round trip to the client.

    All these use cases can be summarized as “sometimes you just
    have to wait for something”, with the observation that the
    Servlet.service method is an expensive place to park a request
    while it waits:

    • A Thread must be allocated.
    • If IO has begun, then buffers must be allocated.
    • If Readers/Writers are obtained, then character converters are allocated.
    • The session cannot be passivated.
    • Anything else allocated by the filter chain is held.

    These are all resources that are frequently pooled or passivated
    when a request is idle. Because comet style Ajax applications require
    a waiting request for every user, this invalidates the use of
    pools for these resources and requires maximal resource usage.
    To avoid this resource crisis, the servlet spec needs to provide some
    low-cost, short-term parking for requests.
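    The cost of today's only compliant option can be seen in a stdlib-only sketch. All names here are invented for illustration; this is the essence of "wait inside service()" with the servlet plumbing stripped away:

    ```java
    // A minimal sketch of the blocking approach: park the request-handling
    // thread in a wait. One thread (plus its buffers, converters and session
    // pins) is consumed per waiting client for the whole duration.
    public class BlockingEventWait {
        private final Object lock = new Object();
        private String event;

        // Called from Servlet.service(): the calling thread is held here.
        public String awaitEvent(long timeoutMs) throws InterruptedException {
            synchronized (lock) {
                if (event == null)
                    lock.wait(timeoutMs); // thread parked per waiting client
                return event;             // may still be null after a timeout
            }
        }

        // Called by whatever thread produces the asynchronous event.
        public void deliver(String e) {
            synchronized (lock) {
                event = e;
                lock.notifyAll();
            }
        }
    }
    ```

    With thousands of comet clients, thousands of threads sit inside awaitEvent; the asynchronous API argued for in this post aims to make exactly this wait threadless.
    
    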

    The Current Solutions

    Given the need for a solution, the servlet container implementations have
    started providing this with an assortment of non-compliant extensions:

    • Jetty has Continuations,
      which are targeted at Comet applications
    • BEA has a future-response mechanism, also targeted at Comet applications
    • Glassfish has an extensible NIO layer for async IO below the servlet model
    • The Tomcat developers have just started developing Comet support in Tomcat 6

    It is ironic that just as the 2.5 specification resolves most of the
    outstanding portability issues, new portability issues are being created.
    A standard solution is needed if web applications are to remain portable and
    if Ajax framework developers are not going to be forced to support
    multiple servers as well as multiple browsers.

    A Proposed Standard Solution?

    I am still not exactly sure how a standard solution should look, but I’m already
    pretty sure how it should NOT look:

    • It should not be an API on a specific servlet. By the time a container has
      identified a specific servlet, much of the work has been done. Moreover, as filters
      and dispatchers give the ability to redirect a request, any asynchronous API on
      a servlet would have to follow the same path.
    • It probably will not be based on Continuations. While Continuations are
      a useful abstraction (and will continue to be so), a lower level solution can
      offer greater efficiencies and solve additional use-cases.
    • It should not expose Channels or other NIO mechanisms to the servlet programmer.
      These are details that the container should implement and hide, and NIO may not be
      the actual mechanism used.

    An approach that I’m currently considering is based around a Coordinator
    entity that can be defined and mapped to URL patterns just like filters
    and servlets. A Coordinator would be called by the container in response
    to an asynchronous event and would coordinate the call of the synchronous
    service method.
    The default coordinator would provide the normal servlet style of
    scheduling and could look like:

    class DefaultCoordinator implements ServletCoordinator
    {
        void doRequest(ServletRequest request)
        {
            request.continue();
            request.service();
        }

        void doResponse(Response response)
        {
            response.complete();
        }
    }

    The ServletRequest.continue() call would trigger any required 100-Continue
    response; an alternative Coordinator might not call this method if a request
    body is not required or should not be sent.

    The ServletRequest.service() call would trigger the dispatch of a thread to
    the normal Filter chain and Servlet service methods. An alternative
    Coordinator may choose not to call service() during the call to doRequest().
    Instead it may register with asynchronous event sources and call service()
    when an event occurs or after a timeout. This can delay request handling
    until the required resources are available for that request.

    The ServletResponse.complete() call would clean up a response and close the
    response streams (if not already closed). An alternative Coordinator may
    choose not to call complete() during the call to doResponse(), thus leaving
    the response open for asynchronous events to write more content. A subsequent
    event or timeout may then call complete() to close the response and return
    its connection to the scheduler for new requests.

    The Coordinator lifecycle would probably be such that an instance is
    allocated to each request, so that fields in a derived Coordinator can be
    used to communicate between the doRequest and doResponse methods.

    It would also be possible to extend the Coordinator approach to expose events
    such as the arrival of request content or the possibility of writing more
    response content. However, I believe that asynchronous IO is of secondary
    importance and the approach should be validated against the other use-cases first.

    If feedback on this approach is good, I will probably implement a prototype in Jetty 6 soon.
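    To make the scheduling idea concrete, here is a stdlib-only sketch of how a comet-style Coordinator might behave. Every type and method name below is invented (no such API exists in any spec); only the event-or-timeout dispatch pattern is the point:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Invented stand-ins for the proposed API -- purely illustrative.
    interface AsyncRequest { void service(); }
    interface EventSource { void onEvent(Runnable callback); }

    class CometCoordinator {
        // Daemon threads so this sketch never pins a JVM.
        private static final ThreadFactory DAEMON = r -> {
            Thread t = new Thread(r); t.setDaemon(true); return t;
        };
        private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(DAEMON);
        private final ExecutorService pool = Executors.newCachedThreadPool(DAEMON);

        // doRequest returns immediately: no thread is parked for this request.
        // service() is dispatched later by whichever fires first -- an event
        // or the timeout -- and exactly once.
        void doRequest(final AsyncRequest request, EventSource source, long timeoutMs) {
            final AtomicBoolean dispatched = new AtomicBoolean(false);
            Runnable dispatch = () -> {
                if (dispatched.compareAndSet(false, true))
                    pool.execute(request::service);
            };
            source.onEvent(dispatch);                                   // event path
            timer.schedule(dispatch, timeoutMs, TimeUnit.MILLISECONDS); // timeout path
        }
    }
    ```

    The key property is that between doRequest returning and service() being dispatched, the request holds no thread at all.
    
    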

  • Global firm extends free open source training to Filipinos

    EXIST Engineering, a global company founded by a Filipino, is providing open source technology training in Manila this month, INQ7.net learned.

    In an e-mailed advisory, the company founded by Winston Damarillo, a former Intel Capital venture capital executive, said it is offering free training on Ajax, Comet and Jetty technologies on May 6, 2006.

    The company has invited Greg Wilkins and Jan Bartel as speakers and trainers.

    Exist said Wilkins is a founding partner of the Core Developers Network and the founder of Mortbay Consulting. He is the developer of the Jetty http server and servlet container.

    Bartel, on the other hand, has contributed to the Jetty open source project in a number of capacities, Exist said. She is the author of both the Jetty website and online tutorial, a collaborator on the integration of Jetty with JBoss, and a key developer for J2EE-related service enhancements to Jetty.

    The company said that those interested to be part of this offer can send their confirmation to info-ajaxtraining@exist.com on or before May 3, 2006.

    The training will accommodate 150 people.

    The training will run from 8 a.m. to 12 noon at the Assembly Room of the Meralco Foundation Bldg., Ortigas Ave. Extension (just beside the New Medical City), Pasig City, Metro Manila.



  • Former Intel Capital VC exec seeds tech firms in RP

    WINSTON Damarillo, a former venture capitalist working at Intel Capital, is set to open more companies in the Philippines that would be involved in open source software development.

    Damarillo is currently running a company called Exist, which is engaged in open source software development. He has been its chairman since its inception in 2001, and the company has reported profitable operations, growing at over 100 percent annually. It currently has two operations in the Philippines: one in Ortigas in Pasig City and another in Cebu City. The company also runs an office in Los Angeles.

    Last year, Damarillo became a “celebrity” in the US open source community after he sold Gluecode, a company he co-founded, to IBM for less than 100 million dollars. Gluecode develops open source application servers. He eventually used the money to set up an incubator firm for open source projects called Simula Labs. “Simula” is a Filipino word for beginning, and is used in this context to refer to startups.

    A believer in Filipino software engineering talent, Damarillo revealed that Filipino software engineers in the Philippines account for much of the open source development work done in his company.

    “We have been building our software in the Philippines,” Damarillo, who was in Manila for a visit, told INQ7.net.

    Hoping to reverse the ongoing brain drain in the software industry by sourcing open source software development jobs to the Philippines, Damarillo said that he is also bringing in open source software experts to train Filipino software engineers in the country.

    “I

  • Extending the Maven Plugin Classpath at Runtime

    This may not be too revolutionary, but I’ve spent enough time googling and asking questions on the Maven lists to believe that there isn’t a lot of information out there about this topic, so I thought I’d document it in a blog. Apologies to the Maven guys if this isn’t the best way of going about this (I believe Jason mentioned something about improvements to the Maven Embedder), but I needed a solution right now.
    The situation was that I wanted to be able to decide at runtime which jars to put onto the execution classpath for the Jetty6 Maven Plugin. The decision is based on the runtime environment: if the user is running with a pre-1.5 JVM, then I need to be able to download and use the JSP 2.0 jars. If, however, the user is running with a 1.5 JVM, then the JSP 2.1 jars should be used (as these mandate JDK 1.5).
    Rather than having to hard-code into my plugin a list of jars and their transitive dependencies for each version of JSP, I created one submodule for each JSP variant and listed all dependencies in the module’s pom.xml.
    This reduced the problem to downloading and transitively resolving a pom on-the-fly, then getting all of the resolved artifacts on the plugin’s execution classpath.

    Runtime Downloading and Transitive Resolution of a pom

    I used the Maven tools for manipulating artifacts. You need to put some configuration parameter declarations into your plugin to gather the necessary factories etc from the runtime environment to drive the tools. The ones I used were:

    /**
     * @component
     */
    private ArtifactResolver artifactResolver;

    /**
     * @component
     */
    private ArtifactFactory artifactFactory;

    /**
     * @component
     */
    private ArtifactMetadataSource metadataSource;

    /**
     * @parameter expression="${localRepository}"
     */
    private ArtifactRepository localRepository;

    /**
     * @parameter expression="${project.remoteArtifactRepositories}"
     */
    private List remoteRepositories;

    Then, it is a matter of downloading the pom, getting its dependencies and transitively resolving them. Here’s a snippet of the code I used to do the job generically:

    public Set transitivelyResolvePomDependencies (MavenProjectBuilder projectBuilder,
                                                   String groupId, String artifactId,
                                                   String versionId, boolean resolveProjectArtifact)
        throws MalformedURLException, ProjectBuildingException,
               InvalidDependencyVersionException, ArtifactResolutionException, ArtifactNotFoundException
    {
        // get the pom as an Artifact
        Artifact pomArtifact = getPomArtifact(groupId, artifactId, versionId);

        // load the pom as a MavenProject
        MavenProject project = loadPomAsProject(projectBuilder, pomArtifact);

        // get all of the dependencies for the project
        List dependencies = project.getDependencies();

        // make Artifacts of all the dependencies
        Set dependencyArtifacts = MavenMetadataSource.createArtifacts(artifactFactory, dependencies, null, null, null);

        // not forgetting the Artifact of the project itself
        dependencyArtifacts.add(project.getArtifact());

        List listeners = Collections.EMPTY_LIST;
        if (PluginLog.getLog().isDebugEnabled())
        {
            listeners = new ArrayList();
            listeners.add(new RuntimeResolutionListener());
        }

        // resolve all dependencies transitively to obtain a comprehensive list of jars
        ArtifactResolutionResult result =
            artifactResolver.resolveTransitively(dependencyArtifacts, pomArtifact,
                                                 Collections.EMPTY_MAP,
                                                 localRepository, remoteRepositories,
                                                 metadataSource, null, listeners);
        return result.getArtifacts();
    }

    Now we can make some environment-based decisions on which pom to use to extract the artifacts we want:

    // if we're running in a < 1.5 jvm
    Set artifacts =
        resolver.transitivelyResolvePomDependencies(projectBuilder,
            "org.mortbay.jetty", "jsp-2.0", "6.0-SNAPSHOT", true);

    // else
    Set artifacts =
        resolver.transitivelyResolvePomDependencies(projectBuilder,
            "org.mortbay.jetty", "jsp-2.1", "6.0-SNAPSHOT", true);

    Having got the artifacts, now we need to place them on the execution classpath.

    Runtime Maven Classpath Manipulation

    This is the bit I found really hair-raising. I’m not convinced it’s a bullet-proof solution, but all testing to date seems to indicate it’s working fine.
    Taking the Artifacts we got from the on-the-fly downloaded pom above, we need to put these into a Classloader and also arrange for the existing ContextClassLoader to be its parent (so we can resolve classes that are already on the plugin’s classpath).
    The first solution that springs to mind is to put the urls of the downloaded jars into a URLClassLoader and make the current ContextClassLoader its parent, like this:

    URL[] urls = new URL[artifacts.size()];
    Iterator itor = artifacts.iterator();
    int i = 0;
    while (itor.hasNext())
        urls[i++] = ((Artifact)itor.next()).getFile().toURL();

    URLClassLoader cl = new URLClassLoader(urls, Thread.currentThread().getContextClassLoader());
    Thread.currentThread().setContextClassLoader(cl);

    However, after a lot of experimentation, it seems that this simply does not work: the parent class loader does not seem able to correctly resolve classes and resources when delegated to from the URLClassLoader. The parent class loader is an instance of a ClassWorlds ClassLoader set up by the plugin execution environment.
    Experimenting further, I discovered it is possible to create a new ClassWorlds classloading hierarchy, injecting the jars that we downloaded earlier, and linking the existing (ClassWorlds) classloader as the parent of the new hierarchy. It looks like this:

    // create a new classloading space
    ClassWorld world = new ClassWorld();

    // use the existing ContextClassLoader in a realm of the classloading space
    ClassRealm realm = world.newRealm("plugin.jetty.container", Thread.currentThread().getContextClassLoader());

    // create another realm for just the jars we have downloaded on-the-fly and make
    // sure it is in a child-parent relationship with the current ContextClassLoader
    ClassRealm jspRealm = realm.createChildRealm("jsp");

    // add all the jars we just downloaded to the new child realm
    Iterator itor = artifacts.iterator();
    while (itor.hasNext())
        jspRealm.addConstituent(((Artifact)itor.next()).getFile().toURL());

    // make the child realm the ContextClassLoader
    Thread.currentThread().setContextClassLoader(jspRealm.getClassLoader());

    When used this way, the parent ClassWorlds classloader is able to correctly resolve classes and resources. The Jetty6 Maven Plugin is therefore able to automatically provide the correct JSP version at runtime without necessitating any user configuration.
     

  • Scaling Connections for AJAX with Jetty 6

    With most web applications today, the number of simultaneous users can greatly exceed the number of connections to the server.
    This is because connections can be closed during the frequent pauses in the conversation while the user reads the
    content or completes a form. Thousands of users can be served with hundreds of connections.

    But AJAX based web applications have very different
    traffic profiles to traditional webapps.
    While a user is filling out a form, AJAX requests to the server will be asking
    for entry validation and completion support. While a user is reading content, AJAX requests may
    be issued to asynchronously obtain new or updated content. Thus
    an AJAX application needs a connection to the server almost continuously and it is no
    longer the case that the number of simultaneous users can greatly exceed the number of
    simultaneous TCP/IP connections.

    If you want thousands of users you need thousands of connections and if you want tens of thousands
    of users, then you need tens of thousands of simultaneous connections. It is a challenge for java
    web containers to deal with significant numbers of connections, and you must look at your entire system,
    from your operating system, to your JVM, as well as your container implementation.

    Operating Systems & Connections

    A few years ago, many operating systems could not cope with more than a few hundred TCP/IP connections.
    JVMs could not handle the thread requirements of blocking models and the poll system call used for asynchronous
    handling could not efficiently work with more than a few hundred connections.

    Solaris 7 introduced the /dev/poll
    mechanism for efficiently handling thousands of connections and Sun have
    continued their development so that now Solaris 10 has a
    new optimized TCP/IP stack that is reported
    to support over 100 thousand simultaneous TCP/IP connections. Linux has also made great advances in this area and
    comes close to S10’s performance. If you want a scalable AJAX application server, you must start with such an
    operating system, configured correctly, and with a JVM that uses these facilities.

    Connection Buffers

    In my previous blog entry I described how
    Jetty 6 uses Continuations and javax.nio to limit the number of threads required to service AJAX traffic. But threads are
    not the only resources that scale with connections: you must also consider buffers. Significant memory can be consumed if
    a buffer is allocated per connection, yet memory cannot simply be saved by shrinking the buffer size, as there are
    good reasons to have significantly large buffers:

    • Below 8KB, TCP/IP can have efficiency problems with its sliding window protocol.
    • When a buffer overflows, the application needs to be blocked. This holds a thread and associated resources
      and increases switching overheads.
    • If the servlet can complete without needing to flush the response, then the container can flush the buffer
      outside of the blocking application context of a servlet, potentially using non-blocking IO.
    • If the entire response is held in the buffer, then the container can set the content length header and can avoid
      using chunking and extra complexity.

    Jetty 6 contains a number of features designed to allow larger buffers to be used
    in a scalable AJAX server.

    Jetty 6 Split Buffers

    Jetty 6 uses a split buffer architecture and dynamic buffer allocation. An idle connection will have no buffer allocated to it,
    but once a request arrives a small header buffer is allocated. Most requests have no content, so often this is the only
    buffer required for the request. If the request has a little content, then the header buffer is used for that content as
    well. Only if the received header indicates that the request
    content is too large for the header buffer is an additional, larger receive buffer allocated.

    For responses, a similar approach is used with a large content buffer being allocated once response data starts to be generated.
    If the content might need to be chunked, space is reserved at the start and the end of the content buffer to allow the data to
    be wrapped as a chunk without additional data copying.
    Only when the response is committed is a smaller header buffer allocated.

    These strategies mean that Jetty 6 allocates buffers only when they are required and that these buffers are of
    a size suitable for the specific usage. Response content buffers of 64KB or more can easily be used without
    blowing out total memory usage.
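    As a rough illustration, the buffer sizes are tunable on the connector. The jetty.xml fragment below is a sketch from memory of the Jetty 6 configuration style, so treat the property names as assumptions to verify against your release:

    ```xml
    <!-- Sketch of tuning split buffers on a Jetty 6 NIO connector.
         Property names are assumptions; check them against your Jetty version. -->
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <Set name="headerBufferSize">8192</Set>      <!-- small, per active request -->
      <Set name="responseBufferSize">65536</Set>   <!-- large content buffer, lazily allocated -->
    </New>
    ```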

    Gather writes

    Because the response header and response content are held in different buffers, gather writes
    are used to combine the header and response into a single write to the operating system. As efficient direct buffers are used, no
    additional data copying is needed to combine header and response into a single packet.
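    The same gathering-write facility is available directly from java.nio. A stdlib-only sketch, writing to a file channel rather than a socket for simplicity:

    ```java
    import java.io.File;
    import java.io.FileOutputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class GatherWriteDemo {
        public static void main(String[] args) throws Exception {
            // Header and content live in separate buffers...
            ByteBuffer header  = ByteBuffer.wrap(
                "HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\n".getBytes("ISO-8859-1"));
            ByteBuffer content = ByteBuffer.wrap("hello".getBytes("ISO-8859-1"));

            File out = File.createTempFile("gather", ".http");
            out.deleteOnExit();
            try (FileOutputStream fos = new FileOutputStream(out)) {
                FileChannel channel = fos.getChannel();
                // ...but reach the OS in a single gathering write, with no copying.
                channel.write(new ByteBuffer[] { header, content });
            }
            System.out.println(out.length()); // header + content written in one call
        }
    }
    ```

    A socket channel implements the same GatheringByteChannel interface, which is what lets the container combine the header and response buffers into a single packet.
    
    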

    Direct File Buffers

    Of course there will always be content larger than the buffers allocated, but if the content is large then it
    is highly desirable to completely avoid copying the data to a buffer. For very large static content,
    Jetty 6 supports the use of mapped file buffers,
    which can be directly passed to the gather write with the header buffer for the ultimate in java io speed.

    For intermediate sized static content, the Jetty 6 resource cache stores direct byte buffers which also can be written
    directly to the channel without additional buffering.

    For small static content, the Jetty 6 resource cache stores byte buffers which are copied into the
    header buffer to be written in a single normal write.

    Conclusion

    Jetty 6 employs a number of innovative strategies to ensure that only the resources that are actually
    required are assigned to a connection, and only for as long as they are needed. This careful
    resource management gives Jetty an architecture designed to scale to meet the needs of AJAX
    applications.

  • Jetty 6.0 Continuations – AJAX Ready!

    The 6.0.0alpha3 release of Jetty is now available
    and provides a 2.4 servlet server in a 400k jar, with only 140k of dependencies (2.6M more if you want JSP!!!).
    But as well as being small, fast, clean and sexy, Jetty 6 supports a new feature
    called Continuations that will allow scalable AJAX applications to be built, with
    threadless waiting for asynchronous events.

    Thread per connection

    One of the main challenges in building a scalable servlet server is how to
    handle Threads and Connections. The traditional IO model of java associates a thread
    with every TCP/IP connection. If you have a few very active threads, this model can
    scale to a very high number of requests per second.
    However, the traffic profile typical of many web applications is many persistent HTTP
    connections that are mostly idle while users read pages or search for the next link
    to click. With such profiles, the thread-per-connection model can have problems scaling
    to the thousands of threads required to support thousands of users on large scale deployments.

    Thread per request

    The NIO libraries can help, as they allow asynchronous IO to be used and threads can be
    allocated to connections only when requests are being processed. When the connection is
    idle between requests, then the thread can be returned to a thread pool and the
    connection can be added to an NIO select set to detect new requests. This thread-per-request
    model allows much greater scaling of connections (users) at the expense of a
    reduced maximum requests per second for the server as a whole (in Jetty 6 this expense
    has been significantly reduced).

    AJAX polling problem

    But there is a new problem. The advent of AJAX as a
    web application model is significantly changing the traffic profile seen on the server side. Because
    web servers cannot deliver
    asynchronous events to the client, the AJAX client
    must poll for events on the server. To avoid a busy polling
    loop, AJAX servers will often hold onto a poll request
    until either there is an event or a timeout occurs.
    Thus an idle AJAX application will
    have an outstanding request waiting on the server which can be used to send a response to the
    client the instant an asynchronous event occurs.
    This is a great technique, but it breaks the thread-per-request model, because
    now every client will have a request outstanding in the server. Thus the server again
    needs to have one or more threads for every client and again there are problems scaling
    to thousands of simultaneous users.

    Jetty 6 Continuations

    The solution is Continuations, a new feature introduced in Jetty 6. A java Filter or
    Servlet that is handling an AJAX request, may now request a Continuation object
    that can be used to effectively suspend the request and free the current
    thread. The request is resumed after a timeout or immediately if the resume method
    is called on the Continuation object. In the Jetty 6 chat room demo, the following
    code handles the AJAX poll for events:

    private void doGetEvents(HttpServletRequest request, AjaxResponse response)
    {
        Member member = (Member)chatroom.get(request.getSession(true).getId());

        // Get an existing Continuation or create a new one if there are no events.
        boolean create = !member.hasEvents();
        Continuation continuation = ContinuationSupport.getContinuation(request, create);
        if (continuation != null)
        {
            if (continuation.isNew())
                // register it with the chatroom to receive async events.
                member.setContinuation(continuation);

            // Get the event. The request may be suspended here.
            Object event = continuation.getEvent(timeoutMS);
        }

        // send any events that have arrived
        member.sendEvents(response);

        // Signal for a new poll
        response.objectResponse("poll", "");
    }

    When another user says something in the chat room, the event is delivered to
    each member by another thread calling the method:

    class Member
    {
        public synchronized void addEvent(Event event)
        {
            _events.add(event);
            if (_continuation != null)
                // Resume requests suspended in doGetEvents.
                _continuation.resume(event);
        }
        ...
    }

    How it works

    Behind the scenes, Jetty has to be a bit sneaky to work around Java and the Servlet
    specification, as Java provides no mechanism to suspend a thread and then resume it later.
    The first time the request handler calls continuation.getEvent(timeoutMS), a
    RetryRequest runtime exception is thrown. This exception propagates out of all the request
    handling code and is caught by Jetty and handled specially.
    Instead of producing an error response, Jetty places the request on a timeout queue and returns the
    thread to the thread pool.
    When the timeout expires, or if another thread calls continuation.resume(event),
    the request is retried. This time, when continuation.getEvent(timeoutMS)
    is called, either the event is returned, or null is returned to indicate a timeout.
    The request handler then produces a response as it normally would.

    Thus this mechanism uses the stateless nature of HTTP request handling to simulate a
    suspend and resume. The runtime exception allows the thread to legally exit the
    request handler, any upstream filters/servlets, and any associated security context.
    The retry of the request re-enters the filter/servlet chain and any security context
    and continues normal handling at the point of continuation.

    Furthermore, the Continuation API is portable. If it is run on a non-Jetty6 server
    it will simply use wait/notify to block the request in getEvent. If Continuations prove
    to work as well as I hope, I plan to propose them as part of the 3.0 Servlet JSR.
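    The throw-and-retry trick can be reduced to a few lines of plain Java.
    RetryRequest and SimpleContinuation below are illustrative names, not
    Jetty's actual classes; the sketch only shows the control flow, not the
    container's timeout queue or thread pool.

```java
// Unchecked exception used to unwind the request-handling stack.
class RetryRequest extends RuntimeException {}

// Minimal two-phase continuation: the first getEvent() call throws
// so the container can park the request and release the thread; the
// retried call returns the delivered event (or null on timeout).
class SimpleContinuation {
    private boolean retried = false;
    private Object event;

    public Object getEvent(long timeoutMs) {
        if (!retried) {
            retried = true;
            throw new RetryRequest(); // caught by the container, not the app
        }
        return event; // null here would indicate a timeout
    }

    public void resume(Object event) {
        this.event = event;
        // A real container would also reschedule the parked request here.
    }
}
```

    The application code calls getEvent() the same way both times; only the
    container sees the RetryRequest in between, which is what keeps the API
    usable from ordinary servlets and filters.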

  • Cinémathèque

    This case study looks at Cinémathèque from PowerSource Software Pty Ltd. It is a digital interactive entertainment system that embeds Jetty as the backend server for the set top box browser.

    PowerSource Software is a boutique software developer based in
    Sydney, Australia. The company has particular expertise in several
    interesting real-time areas: conventional wagering and gaming systems
    (totalisator systems for on and off-track betting, lotteries and
    wide-area keno systems), community gaming systems (trade promotions,
    competitions and opinion polls via mobile devices using SMS), and IPTV
    and video on demand (VOD).

    PowerSource became active in IPTV and VOD because the company was
    looking for new ways to capitalise on its core expertise – high speed
    transaction processing. As luck would have it, the majority of the
    company’s betting systems ran on real-time platforms supplied by
    Concurrent Computer Corporation, and Concurrent had started to utilise
    their hardware and real-time operating systems in the development of
    their MediaHawk video servers. At roughly the same time, the first
    deployable IP-based set top boxes also appeared. However,
    commercialisation of these IPTV-related technologies was being stymied
    by the absence of an affordable way of gluing these sophisticated
    components together into a customer-facing, money-making enterprise. So
    PowerSource developed Cinémathèque.

    Cinémathèque is a comprehensive solution for service providers offering
    interactive digital entertainment including IPTV and VOD, for wide area
    residential, residential multi-dwelling, and hospitality environments.
    It provides a tightly integrated suite of monitoring, control and
    support facilities that maximise the features and facilities available
    to subscribers while minimising the operational burden of the service
    provider.

    In an IPTV-VOD system, a high speed two-way network connects the video
    servers and management system at the head-end to set top boxes in
    subscribers’ premises. Perhaps the biggest difference between IPTV
    systems and traditional hybrid-fibre-coax (HFC) pay TV deployments is
    the speed of the network, and especially the speed of the back channel.

    When a viewer selects an on-demand program the set top box sends in a
    play-out request which needs to be authorised before the video server
    will start streaming the content. At any time the viewer can stop,
    pause, rewind or fast-forward the video. It is important to note that
    the content is streamed across the network and played out in real-time.
    It is not stored or buffered in the set top box for later play-out –
    there are no disks in IP set top boxes. All subscriber interactions
    with the set top box – including play-out control – are transmitted
    across the network in real-time to the servers at the head-end.

    Of course, people will only pay to watch if there is something
    worthwhile to watch and it’s available to them at a convenient time. In
    this regard, digital video on demand differentiates itself from older
    hotel movie systems which provide a very narrow range of content, and
    from traditional subscription TV which offers only “near” video on
    demand services in which programs start at pre-designated times. The
    versatility of a digital video on demand system allows a hotel, or a
    residential IPTV service provider, to offer not only the latest
    Hollywood movies, but also classics, cult films, documentaries and the
    crème de la crème of TV. In other words, subscribers can watch what
    they want, when they want.

    It happens that content owners, and the Hollywood studios in particular,
    go to great lengths to ensure that their valuable property is presented
    in an appropriate manner. For this reason, most content is delivered to
    the set top box as a 4 Mbit per second MPEG2 Transport Stream. This
    bit-rate provides the viewer with a near-DVD quality viewing experience
    on a standard television. While this generally guarantees that a movie
    will be seen in the best light, it also makes simultaneously delivering
    a large number of streams quite challenging. Clearly, there is a big
    difference between streaming numerous film clips at 64 or 128 Kbps over
    the net compared to pumping hundreds, if not thousands, of 4 Mbps
    streams simultaneously. This is especially true considering how easily
    human eyes and ears can detect jitter in the video and audio resulting
    from lost frames or uneven play-out. This is the realm of “big-iron” video servers like Concurrent’s MediaHawks.

    An IP set top box has three principal software components: an operating
    system, a highly customised web browser and a media player. Many of the
    better IP set top boxes run Linux – which is either booted out of
    non-volatile memory or over the network – together with a small
    footprint version of the Mozilla browser. The browser is heavily
    customised to cater for the aspect ratio, resolution and colour palette
    of a standard television. It is also adapted to make it easy to use in
    the “lean back” environment in which people watch television.

    Experience shows that in the lean back environment of the TV room, less
    hand-eye coordination is required to successfully operate the remote
    control if “compass” keys are used for navigation instead of a
    track-ball or other mouse-like device that uses a floating cursor. The
    compass keys on the remote control let the subscriber navigate and
    select a hyperlink; the set top box then sends an HTTP request to
    Cinémathèque which returns the appropriate page in response.

    Cinémathèque plays a vital role in a digital entertainment service
    network because it is responsible for handling all subscriber
    interactions. Each time a subscriber follows a hyperlink, and each time
    they request video play-out or select some other supplementary service,
    Cinémathèque must accept and validate the request, secure a transaction
    to disk, update the subscriber’s account and other persistent data
    structures, and format and return a suitable response. This workload
    represents a unique mix of web content requests and complex customer
    transactions.

    Since Cinémathèque essentially provides the virtual shop window for the
    digital entertainment service provider, it must respond quickly even
    under considerable load, and even when the content is being generated
    dynamically. The content itself also has to be thoughtfully designed to
    facilitate effortless navigation to the items of most interest to a
    subscriber. The experience has to be more like watching television than
    surfing the web.

    Although it is widely recognised that the architecture and performance
    of the video servers is vital to satisfy service level expectations, the
    performance characteristics of the management system – the so-called
    middleware layer – are often overlooked. But all subscriber activity
    starts out as an HTTP request to Cinémathèque – only when a response
    from Cinémathèque includes authorisation to commence video play-out does
    a set top box actually communicate with a media server. This is why
    Cinémathèque’s heritage is so important: it relies heavily on
    PowerSource’s experience building high performance transaction
    processing systems.

    Cinémathèque comprises three primary functional modules:

    • Jetty servlet engine
    • Javelin transaction processor
    • Cinémathèque application core

    The Jetty servlet engine provides Cinémathèque with the flexibility of a
    conventional web server but without the bloat, without the inevitable
    performance problems, and without the implicit security worries. Jetty
    is embedded within Cinémathèque and acts as a servlet container and
    dispatcher. In this role Jetty is reliable, secure and fast. Jetty
    invokes specialised Cinémathèque servlets in response to requests from
    set top boxes; these servlets interact with the Cinémathèque application
    core to provide the necessary services to subscribers.

    Javelin is PowerSource’s secure, non-stop transaction processing engine
    – it is written in Java and is the component on which all of
    Cinémathèque’s other application features and facilities are based.
    Javelin secures all transactions to duplicated disk files – it handles
    all data mirroring itself rather than delegating this to the operating
    system, and it provides Cinémathèque with a robust and persistent data
    store. On a mid-range Linux server, Javelin is capable of recording in
    excess of 500 transactions per second while maintaining an average
    response time of less than 100 milliseconds. Javelin also performs an
    automatic restart and recovery to ensure that no data is lost as a
    consequence of a system outage.

    The Cinémathèque application, too, is written entirely in Java to
    maximise portability, reliability and flexibility. Cinémathèque
    supports true video on demand, as well as near video on demand via its
    multicast scheduler. Since every subscriber interaction is handled by
    Cinémathèque the number of subscribers watching on-demand programs can
    be monitored in real-time. Similarly, Cinémathèque also tracks, in
    real-time, the number of subscribers tuned to each reticulated
    free-to-air, pay, or multicast TV channel. This permits a service
    provider to perform very accurate capacity planning as well as knowing
    what content sells and what doesn’t.

    Very little of the HTML content returned to the set top box by
    Cinémathèque is static. Instead, Cinémathèque creates portions of many
    pages dynamically according to the attributes of the viewer’s
    subscription package, the titles and packages that they’ve previously
    purchased, titles that are currently book-marked, and the rating level
    of the content that the current user is permitted to see (to safeguard
    children from accessing inappropriate content).

    All transactions, including billing transactions, are processed by
    Cinémathèque in real-time. Cinémathèque gathers operational,
    statistical and performance data continuously, and records this to its
    transaction files; this data is available for on-demand display on
    system administration workstations.

    Cinémathèque is set top box independent and supports any number of
    different types of set top boxes simultaneously within a single
    deployment. Similarly, it does not rely on any set top box specific
    features and doesn’t require any specialised application software or
    middleware to be present in the set top box. Adding support for other
    set top boxes is straightforward and entails adapting several
    JavaScript functions which are embedded in HTML pages returned to the
    set top box; this JavaScript accommodates the inevitable differences
    between the ways that set top box vendors invoke their media players.
    These are important features in Cinémathèque because they help service
    providers avoid set top box vendor lock-in.

    A virtue of IP-based set top boxes is their uniformity – almost without
    exception they provide a consistent “application environment” by way of
    their standards-compliant HTTP, HTML and JavaScript implementations.
    Indeed, all set top box functions, including invoking, controlling and
    monitoring the embedded media player are achieved with JavaScript.
    Although they have the capability to run a Java Virtual Machine, most IP
    boxes don’t for two reasons: firstly, it substantially increases the
    memory footprint (something to be avoided in a cost sensitive consumer
    device), and secondly, most boxes don’t have sufficient CPU resources to
    spare (a box with a 400 MHz clock CPU is considered fast).

    Cinémathèque returns JavaScript objects to the set top box’s browser in
    response to each HTTP request; the data embedded in these objects is
    then rendered using JavaScript. This mechanism allows the look-and-feel
    designer to expose as little or as much of the service or
    program-related “metadata” to subscribers as they like without the
    requirement to change any server-side software.

    For optimum performance, Cinémathèque comes bundled with a Concurrent
    Computer Corporation iHawk application server. iHawks run Concurrent’s
    RedHawk Linux operating system which is a POSIX-compliant, real-time
    version of the open source Linux operating system. RedHawk is based on
    a standard Red Hat distribution but substitutes the usual kernel with a
    real-time enhanced one; it provides enhancements that maximise
    Cinémathèque’s performance.

    Cinémathèque uses Java’s extensive internationalisation support to make
    locale and language customisation straightforward. Each word and phrase
    that appears in the Cinémathèque administration client is maintained in
    a resource bundle – adding support for a new language simply requires
    adding the appropriate translations to the bundle. Cinémathèque
    currently supports English, Japanese, Korean, Simplified Chinese and
    Traditional Chinese and any combination of these languages can be used
    simultaneously within a single system on both set top boxes and the
    system’s administration workstations.
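    The resource-bundle approach described above can be sketched with Java's
    standard ListResourceBundle. The class names and the "welcome" key are
    invented for illustration; Cinémathèque's actual bundles are not public.

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

// Base bundle: the default (English) translations.
class Messages extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] { { "welcome", "Welcome" } };
    }
}

// Japanese bundle: adding a language means adding one class (or
// properties file) like this with the translated strings.
class Messages_ja extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] { { "welcome", "ようこそ" } };
    }
}
```

    In a deployed system the bundle would normally be looked up with
    ResourceBundle.getBundle("Messages", locale), which selects Messages_ja
    for a Japanese locale and falls back to the base bundle otherwise, so
    several languages can coexist in one running system.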

    In residential mode, a subscriber can only access IPTV and other
    chargeable services after first logging in with their unique client id
    and password. Cinémathèque lets a subscriber assign a different
    password to each content rating classification to prevent children from
    accessing inappropriate material, thereby imposing parental control. In
    hospitality mode, access to services is controlled by Cinémathèque which
    receives guest check-in and check-out notifications from the hotel’s
    property management system.

    A subscriber can have an unlimited number of simultaneously active
    rentals and can switch between active rentals and initiate additional
    rentals at any time. Whenever video play-out is suspended, Cinémathèque
    automatically sets a bookmark for that rental – the subscriber can
    resume play-out either from the start of the program or from the
    bookmark. The subscriber can view their active rental list and review
    their complete rental history via their set top box at any time. The
    active rentals and rental history displays are filtered according to the
    rating level of the current login. Again, this prevents children from
    seeing references to inappropriate content.

    Cinémathèque also has an integral customer loyalty program that works in
    conjunction with its customer profiling capabilities. The loyalty
    program provides for standard and VIP customers and reward points can be
    allocated based on spending behaviour. Accumulated reward points can be
    redeemed for specially created package deals and service upgrades.

    Jetty was selected after PowerSource’s engineers had evaluated several
    servlet engines.

    So why did PowerSource choose Jetty?

    Firstly, Jetty offered superior performance. Secondly, it was easy to
    embed within a larger application. In this regard, PowerSource was
    looking for a servlet engine that didn’t “get in the way” of the rest of
    the larger application. Thirdly, it was particularly important that the
    servlet engine wasn’t a resource hog. And fourthly, Cinémathèque
    systems are installed at customer sites and are expected to run
    unattended in a lights out environment – PowerSource was looking for a
    servlet engine that the engineers could “set and forget”.

    Jetty’s reliability and performance counted highly in its favour because
    Cinémathèque essentially controls the delivery of premium subscription
    television services that customers are buying with their discretionary
    expenditure. In this situation, paying customers don’t tolerate service
    unavailability because they’ve become accustomed to TV not being
    interrupted. If the responsiveness of the IPTV service is poor, or if
    it is unreliable, then customers will buy their entertainment elsewhere.

    Finally, as the company’s software engineers were making their minds up
    about Jetty, it became obvious that there was another significant aspect
    related to performance: namely the super-responsiveness of the team at
    Mortbay and the enthusiasm of the Jetty users active on the mailing lists.

    PowerSource has several new products under development; the company's
    positive experience with Jetty, and its ability to rely on it, mean that
    Jetty will remain one of the key components in PowerSource's systems.

    Screen shots and diagrams

    Related links

    Cinematheque: http://www.powersource.com.au/cine

    RedHawk Linux: http://www.ccur.com/isd_solutions_redhawklinux.asp

    MediaHawk video servers: http://www.ccur.com/vod_default.asp

    Kreatel set top boxes: http://www.kreatel.se