Category: Status

  • End of Life: Changes to Eclipse Jetty and CometD

    Webtide (https://webtide.com) is the company behind the open-source Jetty and CometD projects. Since 2006, Webtide has fully funded the Jetty and CometD projects through services and support, including migration assistance, production support, developer assistance, and CVE resolution. 

    First, the change.

    Starting January 1, 2026, Webtide will no longer publish releases for Jetty 9, Jetty 10, and Jetty 11, or for CometD 5, 6, and 7, to Maven Central or other public repositories. 

    Take a look at the primary announcement if you’re interested. 

    So, the motivation.

    How we arrived at this situation harks back to the beginnings of Webtide. Briefly, Greg Wilkins founded the Jetty project in 1995 as part of a contest created by Sun Microsystems for a new language called Java. For a decade, he and Jan Bartel carefully stewarded the project as part of their consulting company Mort Bay Consulting. Around the Jetty 6 timeframe, in 2006, Webtide was founded as an LLC to evolve the project further commercially. Still, at its core, the goal was to support the incredible community that had developed over the years. When I joined in 2007, we began working to join the Eclipse Foundation. We took steps to formalize our development processes, aiming to add more commercial predictability to the open-source project. Joining the Eclipse Foundation also meant adhering to their rigorous IP policy for both the Jetty codebase and its dependencies, an essential step in improving corporate uptake.

    This was also the time for the project to handle the end-of-life process for Jetty 6, while establishing Jetty 7 and Jetty 8. It was the opportunity Webtide needed to support the project’s development: offering commercial services and support for the EOL Jetty 6, while focusing on supporting and funding the future of Jetty 7 and Jetty 8. 

    Here was the crux: after careful consideration, we decided that all commercial support releases would be open-source for the benefit of all. While not a traditional business decision, it aligned with our values and dedication to the community, and it was rewarded as the community continued to grow its usage of Jetty.

    This worked wonderfully for almost 20 years.

    Something shifted…

    We started to notice a shift in the community a few years ago. For almost 20 years, the companies we spoke with valued how our support could help them become more successful, with many ultimately becoming customers who truly understood the benefits of supporting open-source. Every single one of them saw the value in releasing EOL releases freely. When I became CEO a decade ago and Webtide became 100% developer-owned and operated, we were able to continue operating in this commercial environment with ease, to such an extent that the future of Webtide and the Jetty project is assured for many years to come.

    So what changed? The tone of many companies we spoke to. Increasingly, while explaining the model that served Webtide well for so many years, where I used to hear “That makes so much sense, this works great!”, I now hear “So it’s just free? Great, I need to check a box.” Followed by the galling question, “Could you put this policy of yours in writing on your company letterhead?”

    And today?

    Twenty years ago, things were different; Maven 2 dominance was emerging, and Maven Central was gaining ubiquity. Managing transitive dependencies was novel in many circles. Managing CVEs in a corporate setting was in its infancy, particularly with Java developer software stacks. 

    Now, build tooling is diverse, Maven Central is a global central repository system, and corporations have their own caching repository servers (or they really should!). Even JavaEE was rebranded as Jakarta at the Eclipse Foundation. So much change, but the one I’ll highlight is the emergence of business units focused on corporate software policies, complete with BOM files containing ever more metadata and checkboxes to click, managing CVE risks associated with software developed internally. Developers, the primary people Webtide has interacted with over the years, are increasingly far removed from software maintenance activities. 

    Now our approach to endlessly updating EOL releases seems remarkably outdated. Look at Jetty 9, which we have been releasing since 2013. It turns out our approach of making things as easy as possible for the community, for software that should have officially gone EOL years ago, was a benefit to many, but also enabled far more to grow complacent. Instead of scheduling migrations and updating to more recent versions, we inadvertently provided an environment that allowed companies to deploy onto software well over a decade old, when newer, more performant options were readily available. Then, when security postures started changing and businesses began looking deeper into their dependencies, they realized they were using outdated software, three or more major versions behind. Then, to our shock, many were perfectly fine with that, so long as it was free and someone told them it was OK. 

    If we have learned one thing within this time, it is that the EOL policy needs to be so much clearer, using established industry terminology. Looking back, we have been guilty of inventing terminology and inadvertently exacerbating the situation.

    What is heartening is seeing other organizations work to address EOL as well; notably, MITRE has been developing changes to the CVE system to support EOL concepts fully. If you have ever seen the text “Unsupported When Assigned” in a CVE, then you have encountered the early efforts for EOL in a CVE.

    You have to applaud the efforts of businesses to prioritize security and sane open-source policies.

    However, this is also a call to open-source projects like Jetty, as we are operating in a different world. Everyone understands that ‘End-of-Life’ does not mean ‘End-of-Use’. Clearly, the system for many companies has changed from a Developer Support perspective to a Security Support perspective. EOL software support is purchased differently now. There are companies, like Tidelift (now part of Sonar), that exist to manage security metadata about open-source software, enabling companies to manage their software risk more effectively. 

    EOL Jetty and CometD by Webtide

    To address this industry evolution, Webtide has launched a partnership program that enables businesses relying on EOL Jetty and CometD versions to obtain CVE resolutions officially and predictably. 

    Webtide continues to resolve CVEs and issues for EOL Jetty and CometD in support of our commercial customers. However, the resulting binaries are now distributed directly to our commercial support customers and through our partnership network. No longer will we call software EOL while still deploying it to Maven Central with a nod and a wink.

    Our partners are established leaders in the open-source EOL landscape, creating products that directly address the problems the security and business industries are facing. 

    This synergy works perfectly with Webtide, as we are the company that offers services and support on Jetty and CometD. Migrations, developer assistance, production support, and performance are the things that directly influence the ongoing development of the open-source projects we steward. We can continue to focus on our strengths, and our partners can focus on theirs.

    At last, the partners!

    We are pleased to announce two partnerships. With these partners, you will be able to build a secure EOL solution for your software stack, not just for your usage of Jetty or CometD. Best yet, if you are interested in Webtide’s Lifecycle Support, you can use these partner versions in conjunction with our support!


    TuxCare secures the open-source software the world builds on. Today, we protect over 1.2 million workloads – keeping them secure, compliant, and unstoppable at scale. From operating systems to development libraries and production applications, we power your open-source stack with enterprise-grade security and support, including endless lifecycle extensions for out-of-support software versions, rebootless patching for every major Linux distribution, enterprise-optimized support for community Linux, and our Linux-first vulnerability scanner that cuts through the noise.


    HeroDevs provides secure, long-term maintenance for open-source frameworks that have reached End-of-Life. Through our Never-Ending Support (NES) initiative, we deliver continuous CVE remediation and compliance-grade updates, allowing your team to migrate at your own pace. Our engineers monitor upstream changes, backport verified fixes, and publish fully tested binaries for seamless drop-in replacement. With NES for Jetty and NES for CometD, you can stay secure, stable, and compliant—without refactoring or rushing a migration.

    If your business is interested in our partner program, please direct inquiries to partnership@webtide.com.

    Wrapping it up.

    One important thing to note is that Webtide will continue to support the Jetty Project with a standard open-source release process, ensuring that older versions are released to provide the community with ample time to update to newer versions through a transition period. When Jetty 13 is released, Jetty 12.1 will continue to receive updates for a period, just as Jetty 12.0 does currently. Whether that is six months or a year remains to be seen. Once we finalize this release strategy with timelines, we will make sure the community is well-informed.

    Fundamentally, the change coming is that End of Life for Jetty and CometD versions will no longer mean an empty EOL notice and quiet deployments to Maven Central. EOL will mean EOL, and established industry solutions will be available for those who need additional support.

  • Introducing Jetty-12

    For the last 18 months, Webtide engineers have been working on the most extensive overhaul of the Eclipse Jetty HTTP server and Servlet container since its inception in 1995. The headline for the release of Jetty 12.0.0 could be “Support for the Servlet 6.0 API from Jakarta EE 10”, but the full story is of a root and branch overhaul and modernization of the project to set it up for yet more decades of service.

    This blog is an introduction to the features of Jetty 12, many of which will be the subject of further deep-dive blogs.

    Servlet API independent

    In order to support the Servlet 6.0 API, we took the somewhat counterintuitive approach of making Jetty Servlet API independent. Specifically, we have removed any dependency on the Servlet API from the core Jetty HTTP server and handler architecture. This takes Jetty back to its roots, as it was Servlet API independent for the first decade of the project.

    The Servlet API independent approach has the following benefits:

    • There is now a set of jetty-core modules that provide a high-performance, scalable HTTP server. The jetty-core modules are usable directly when there is no need for the Servlet API and the overhead introduced by its features and legacy.
    • For projects like Jetty, support must be maintained for multiple versions of the Servlet API. We currently support branches for Servlet 3.1 in Jetty 9.4.x, Servlet 4.0 in Jetty 10.0.x, and Servlet 5.0 in Jetty 11.0.x. Adding a fourth branch to maintain would have been intolerable. With Jetty 12, our ongoing support for Servlet 4.0, 5.0, and 6.0 is based on the same core HTTP server in a single branch. 
    • The Servlet APIs have many deprecated features that are no longer best practice. With Servlet 6.0, some of these were finally removed from the specification (e.g. Object Wrapper Identity). Removing these features from the Jetty core modules allows for better performance and cleaner implementations of the current APIs.
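    To make the first point concrete, here is a minimal sketch of a jetty-core server, assuming Jetty 12 is on the classpath; the class and method names reflect the Jetty 12 core API as we understand it, so treat the details as illustrative rather than definitive:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Response;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.Callback;

public class CoreServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        server.setHandler(new Handler.Abstract() {
            @Override
            public boolean handle(Request request, Response response, Callback callback) {
                response.setStatus(200);
                ByteBuffer body = StandardCharsets.UTF_8.encode("Hello from jetty-core\n");
                response.write(true, body, callback); // true == this is the last write
                return true; // this handler has taken responsibility for the exchange
            }
        });
        server.start();
        server.join();
    }
}
```

    Note that there is no Servlet API anywhere in sight: the handler works directly with the core Request, Response and Callback types.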

    Multiple EE Environments

    To support the Servlet APIs (and related Jakarta EE APIs) on top of the jetty-core, Jetty 12 uses an Environment abstraction that introduces another tier of class loading and configuration. Each Environment holds the applicable Jakarta EE APIs needed to provide Servlet support (but not the full suite of EE APIs).

    Multiple environments can be run simultaneously on the same server and Jetty-12 supports:

    • EE8 (Servlet 4.0) in the javax.* namespace,
    • EE9 (Servlet 5.0) in the jakarta.* namespace with deprecated features,
    • EE10 (Servlet 6.0) in the jakarta.* namespace without deprecated features,
    • Core environments with no Servlet support or overhead.

    The implementations of the EE8 and EE9 environments derive substantially from the current Jetty 10 and Jetty 11 releases, so applications that depend on those can be deployed on Jetty 12 with minimal risk of behavioural changes (i.e. they are somewhat “bug for bug compatible”). Even if there is no need to run different environments simultaneously, upgrading applications to current and future releases of the Jakarta EE specifications will be simpler, as it is decoupled from a major release of the server itself. For example, it is planned that EE 11 support (probably with Servlet 6.1) will be made available in a Jetty 12.1.0 release rather than in a major upgrade to a 13.0.0 release.
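    As an illustration, an EE10 web application and a plain core context can be deployed side by side on one server. This is a hedged sketch: the package and constructor names follow the Jetty 12 ee10 modules as we understand them, and the commented servlet registration is hypothetical:

```java
import org.eclipse.jetty.ee10.servlet.ServletContextHandler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.ContextHandler;
import org.eclipse.jetty.server.handler.ContextHandlerCollection;

public class MixedEnvironments {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // An EE10 (Servlet 6.0) environment: jakarta.* APIs, no deprecated features.
        ServletContextHandler ee10App = new ServletContextHandler("/ee10");
        // ee10App.addServlet(MyServlet.class, "/*"); // a jakarta.servlet.http.HttpServlet (hypothetical)

        // A core environment: no Servlet support or overhead.
        ContextHandler coreApp = new ContextHandler("/core");

        server.setHandler(new ContextHandlerCollection(ee10App, coreApp));
        server.start();
        server.join();
    }
}
```

    Each environment gets its own class loading and configuration tier, so the two contexts above do not share Servlet API classes.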

    Core Environment

    As mentioned above, the jetty-core modules are now available for direct support of HTTP without the overhead and legacy of the Servlet API. As part of this effort, many APIs have been updated and refined:
    • The core Sessions are now directly usable.
    • A core Security model has been developed that is used to implement the Servlet security model, but avoids some of its bizarre behaviours (I’m talking about you, exposed methods!).
    • The Jetty WebSocket API has been updated and can be used on top of the core WebSocket APIs.
    • The Jetty HttpClient APIs have been updated.

    Performance

    Jetty 12 has achieved significant performance improvements. Our continuous performance tracking indicates that we have equal or better CPU utilisation for a given load, with lower latency and no long tail in quality of service. 

    Our tests currently generate 240,000 requests per second and then measure quality of service by latency (99th percentile and maximum). Below is the plot of latency for Jetty 11: 

    This shows that the orange 99th percentile latency is almost too small to see in the plot (at 24.1 µs average), and all you really see is the yellow plot of the maximal latency (max 1400 µs). Whilst these peaks look large, the scale is in microseconds, so the longest maximal delay is just over 1.4 milliseconds, and 99% of requests are handled in 0.024 ms!

    Below is the same plot of latency for Jetty 12 handling 240,000 requests per second:

    The 99th percentile latency is now only 20.2 µs, and the peaks are less frequent and rarely over 1 ms, with a maximum of 1100 µs.

    You can see the latest continuous performance testing of jetty-12 here.

    New Asynchronous IO abstraction

    In the jetty-core is a new asynchronous abstraction that is a significant evolution of the asynchronous approaches developed in Jetty over many previous releases.

    But “Loom”, I hear some say. Why be asynchronous if “Loom” will solve all your problems? Firstly, Loom is not a silver bullet, and we have seen no performance benefits from adopting Loom in the core of Jetty. If we were to adopt Loom in the core, we’d lose the significant benefits of our advanced execution strategy (which ensures that tasks have a good chance of being executed on a CPU core with a hot cache filled with the relevant data).

    However, there are definitely applications that will benefit from the simple scaling offered by Loom’s virtual threads, so Jetty has taken the approach of staying asynchronous in the core, while offering optional support for Loom in our execution strategy. Virtual threads may be used by the execution strategy, rather than submitting blocking jobs to a thread pool. This is a best-of-both-worlds approach, as it lets us deal with the highly complex but efficient/scalable asynchronous core, whilst letting applications be written in a blocking style and still scale.
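    For example, the optional Loom support can be enabled on the thread pool. This is a sketch: `setVirtualThreadsExecutor` is the Jetty 12 configuration point as we understand it, and it requires a JVM with virtual threads:

```java
import java.util.concurrent.Executors;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class LoomServer {
    public static void main(String[] args) throws Exception {
        QueuedThreadPool threadPool = new QueuedThreadPool();
        // If a virtual-thread executor is set, the execution strategy may run
        // blocking application code on virtual threads instead of submitting
        // blocking jobs to the pool.
        threadPool.setVirtualThreadsExecutor(Executors.newVirtualThreadPerTaskExecutor());

        Server server = new Server(threadPool);
        // ... add connectors and handlers as usual ...
        server.start();
        server.join();
    }
}
```

    The core stays asynchronous either way; the executor only changes where blocking application tasks run.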

      But I hear others say: “Why yet another async abstraction when there are already so many: reactive, Flow, NIO, Servlet, etc.?” Adopting a simple but powerful core async abstraction allows us to easily adapt to support many other abstractions: specifically, Servlet asynchronous IO, Flow, and blocking InputStream/OutputStream are trivial to implement. Other features of the abstraction are:

      • Input side can be used iteratively, avoiding deep stacks and needless dispatches. Borrowed from Servlet API.
      • Demand API simplified from Flow/Reactive
      • Retainable ByteBuffers for zero copy handling
      • Content abstraction to simply handle errors and trailers inline.

      The asynchronous APIs are available to be used directly in jetty-core; applications may wrap them in alternative asynchronous or blocking APIs, or simply use Servlets and never see them (but still benefit from them). 

      Below is an example of using the new APIs to asynchronously read content from a Content.Source into a string:

      public static class FutureString extends CompletableFuture<String> {
          private final CharsetStringBuilder text;
          private final Content.Source source;

          public FutureString(Content.Source source, Charset charset) {
              this.source = source;
              this.text = CharsetStringBuilder.forCharset(charset);
              source.demand(this::onContentAvailable);
          }

          private void onContentAvailable() {
              while (true) {
                  Content.Chunk chunk = source.read();
                  if (chunk == null) {
                      // No chunk available yet: demand a callback when one is.
                      source.demand(this::onContentAvailable);
                      return;
                  }

                  try {
                      if (Content.Chunk.isFailure(chunk))
                          throw chunk.getFailure();

                      if (chunk.hasRemaining())
                          text.append(chunk.getByteBuffer());

                      if (chunk.isLast() && complete(text.build()))
                          return;
                  } catch (Throwable e) {
                      completeExceptionally(e);
                  } finally {
                      chunk.release();
                  }
              }
          }
      }

      The asynchronous abstraction will be explained in detail in a later blog, but a few things are worth noting about the code above:

      • There are no data copies into buffers (as is often needed with read(byte[] buffer) style APIs). The chunk may be a slice of a buffer read directly from the network, and retain() and release() allow references to be kept if need be.
      • All data and metadata flow via pull-style calls to the Content.Source.read() method, including bytes of content, failures, and the EOF indication. Even HTTP trailers are sent as Chunks. This avoids the mutual exclusion that can be needed if there are onData and onError style callbacks. 
      • The read style is iterative, so there is less need to break the code down into multiple callback methods. 
      • The only callback is to the onContentAvailable method that is passed to Content.Source#demand(Runnable), and it is called back when demand is met (i.e. read can be called with a non-null return).
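    For applications that prefer a blocking style, the same Content.Source can be consumed through convenience wrappers. The sketch below uses method names we believe exist in the Jetty 12 API (treat them as assumptions to verify):

```java
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.eclipse.jetty.io.Content;

public class BlockingRead {
    // 'source' would typically be the core Request, which is itself a Content.Source.
    static String readAll(Content.Source source) throws Exception {
        // Blocking view: a classic InputStream over the asynchronous source.
        try (InputStream in = Content.Source.asInputStream(source)) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

    The wrapper simply blocks inside read() until the underlying demand is met, so the asynchronous core is still doing the work.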

      Handler, Request & Response design

      The core building blocks of a Jetty server are the Handler, Request and Response interfaces. These have been significantly revised in Jetty 12:

      • They fully embrace and support the asynchronous abstraction. The previous Handler design predated asynchronous request handling and thus was no longer entirely fit for purpose.
      • The Request is now immutable, which solves many issues (see “Mutable Request” in Less is More Servlet API) and allows for efficiencies and simpler asynchronous implementations.
      • Duplication has been removed from the APIs, so that wrapping requests and responses is now simpler and less error-prone (e.g. there is no longer a need to wrap both a sendError and a setStatus method to capture the response status).

      Here is an example Handler that asynchronously echoes all of a request’s content back to the response, including any trailers:

      public boolean handle(Request request, Response response, Callback callback) {
          response.setStatus(200);
          long contentLength = -1;
          for (HttpField field : request.getHeaders()) {
              if (field.getHeader() != null) {
                  switch (field.getHeader()) {
                      case CONTENT_LENGTH -> {
                          response.getHeaders().add(field);
                          contentLength = field.getLongValue();
                      }
                      case CONTENT_TYPE -> response.getHeaders().add(field);
                      case TRAILER -> response.setTrailersSupplier(HttpFields.build());
                      case TRANSFER_ENCODING -> contentLength = Long.MAX_VALUE;
                  }
              }
          }
          if (contentLength > 0)
              Content.copy(request, response, Response.newTrailersChunkProcessor(response), callback);
          else
              callback.succeeded();
          return true;
      }

      Security

      With sponsorship from the Eclipse Foundation and the Open Source Technology Improvement Fund, Webtide was able to engage Trail of Bits for a significant security collaboration. The collaboration discovered 25 issues of various severities, including several that resulted in CVEs against previous Jetty releases. The Jetty project has a good security record, and this collaboration is proving a valuable way to continue it.

      Big update & cleanup

      Jetty is a 28-year-old project. A bit of cruft and legacy has accumulated over that time, not to mention that many RFCs have been obsoleted (several times over) in that period. 

      The new architecture of Jetty 12, together with the namespace break of jakarta.* and the removal of deprecated features in Servlet 6.0, has allowed for a big clean-out of legacy implementations and updates to the latest RFCs.

      Legacy support is still provided where possible, either by compliance modes selecting older implementations or just by using the EE8/EE9 Environments.

      Conclusion

      The Webtide team is really excited to bring Jetty 12 to market. It is so much more than just a Servlet 6.0 container, offering a fabulous basis for web development for decades to come.

    • The Jetty Performance Effort

      One can only improve what can be reliably measured. To assert that Jetty’s performance is as good as it can be and doesn’t degrade over time, and to facilitate future optimization work, we need to be able to measure its performance reliably.

      The primary goal

      The Jetty project wanted an automated performance test suite. Every now and then, some performance measurements were done with ad-hoc tools and a lot of manual steps. Over the past few months, an effort has been made to come up with an automated performance test suite that could help with these goals and more, such as making it easier to visualize the performance characteristics of the tested scenarios.

      The primary goal was to write a reliable, fully automated test that can be used to measure, understand, and compare performance over time.

      A basic load-testing scenario

      A test must be stable over time, and the same is true for performance tests: they ought to report stable performance over time to be considered repeatable. Since this is already a challenge in itself, we decided to start with the simplest possible scenario: one that is limited in realism but easy to grasp, and still useful for getting a quick overview of the server’s overall performance.

      The basis of that scenario is a simple HTTPS (i.e.: HTTP/1.1 over TLS) GET on a single resource that returns a few bytes of in-memory hard-coded data. To avoid a lot of complexity, the test is going to run on dedicated physical machines that are hosted in an environment entirely under our control. This way, it is easy to assert what kind of performance they’re capable of, that the performance is repeatable, that those machines are not doing anything else, that the network between them is capable enough and not overloaded, and so on.

      Load, don’t strangle

      As recommended in the Jetty Load Generator documentation, to get meaningful measurements we want one machine running Jetty (the server), one generating a fixed 100 requests/s load (the probe), and four machines each generating a fixed 60K requests/s load (the loaders). This setup loads Jetty with around 240K (4 loaders doing 60K each) requests per second, which is a good figure given the hardware we have: it was chosen because it is enough traffic to make the server machine burn around 50% of its total CPU time, i.e. loading but not strangling it. We found this figure simply by trial and error.

      Choosing a load that will not push the server to a constant 100% CPU is important: while running a test with the heaviest possible load does have its uses, such a test is not a load test but a limit test. A limit test is good for figuring out how software behaves under a load too heavy for the hardware it runs on, for instance to make sure that it degrades gracefully instead of crashing and burning into flames when a certain limit is reached. But such a test is of very limited use for figuring out how fast your software responds under a manageable (i.e. normal) load, which is what we are most commonly interested in.

      Planning the scenario

      The server’s code is pretty simple, since it’s just a matter of setting up Jetty: configuring the connector, the SSL context, and the test handler is basically all it takes. For the loaders, the Jetty Load Generator is meant for exactly that task, so it’s again fairly easy to write this code using that library. The same is true for the probe, as the Jetty Load Generator can be used for it too, and it can be configured to record each request’s latency. We want to do that for three minutes to get a somewhat realistic idea of how the server behaves under a flat load.
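      A loader can be sketched with the Load Generator’s builder. The method names below follow its documentation as we recall it, and the host name and figures are illustrative, not prescriptive:

```java
import java.util.concurrent.TimeUnit;

import org.mortbay.jetty.load.generator.LoadGenerator;
import org.mortbay.jetty.load.generator.Resource;

public class Loader {
    public static void main(String[] args) {
        LoadGenerator generator = LoadGenerator.builder()
            .scheme("https")
            .host("server-machine")         // hypothetical server host name
            .port(8443)
            .resource(new Resource("/"))    // the single tested resource
            .resourceRate(60_000)           // 60K requests/s for a loader (100 for the probe)
            .runFor(4, TimeUnit.MINUTES)    // 1 min warmup + 3 min measured
            .build();
        generator.begin().join();           // begin() returns a CompletableFuture
    }
}
```

      The probe is the same code with a 100 requests/s rate and per-request latency recording enabled.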

      Deploying and running a test over multiple machines can be a daunting task, which is why we wrote the Jetty Cluster Orchestrator, whose job is to make it easy to write some Java code and then distribute, execute, and control it on a set of machines, using only the SSH protocol. Thanks to this tool, getting code to run on the six necessary machines can be done simply while writing a plain, standard JUnit test.

      So we basically have these three methods that we run over the six machines:

      void startServer() { ... }
      
      void runProbeGenerator() { ... }
      
      void runLoadGenerator() { ... }

      We also need a warmup phase during which the test runs but no recording is made. The Jetty Load Generator is configured with a duration, so the original three-minute duration has to grow by the warmup duration. We decided to go with one minute of warmup, so the total load generation duration is now four minutes, and both runProbeGenerator() and runLoadGenerator() run for four minutes each. After the first minute, a flag is flipped to indicate the end of the warmup phase and to start the recording. Once runProbeGenerator() and runLoadGenerator() return, the test is over; the server is stopped, then the recordings are collected and analyzed.
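      The warmup gating described above can be sketched in plain Java. The names here are hypothetical, and the real test records latencies into histograms rather than merely counting samples:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: recording is disabled while the warmup flag is down; a scheduler
// flips the flag once the warmup duration has elapsed.
class WarmupRecorder {
    private final AtomicBoolean recording = new AtomicBoolean(false);
    private final AtomicLong recordedSamples = new AtomicLong();

    void startWarmupTimer(ScheduledExecutorService scheduler, long warmupMillis) {
        scheduler.schedule(() -> recording.set(true), warmupMillis, TimeUnit.MILLISECONDS);
    }

    void onSample(long latencyNanos) {
        if (recording.get())
            recordedSamples.incrementAndGet(); // the real test would record latencyNanos
    }

    long recordedSamples() {
        return recordedSamples.get();
    }
}
```

      The same flag can gate both the probe’s latency recording and the loaders’ throughput statistics, so all six machines switch to recording at the same moment.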

      Summarizing the test

      Here’s a summary of the procedure the test is implementing:

      1. Start the Jetty server on one server machine: call startServer().
      2. Start the Jetty Load Generator with a 100/s throughput on one probe machine: call runProbeGenerator().
      3. Start the Jetty Load Generator with a 60K/s throughput on four load machines: call runLoadGenerator().
      4. Wait one minute for the warmup to be done.
      5. Start recording statistics on all six machines.
      6. Wait three minutes for the run to be done.
      7. Stop the Jetty server.
      8. Collect and process the recorded statistics.
      9. (Optional) Perform assertions based on the recorded statistics.

      Results

      It took some iterations to get to the above scenario, and to get it to run repeatably. Once we got confident the test’s reported performance figures could be trusted, we started seriously analyzing our latest release (Jetty 10.0.2 at that time) with it.

      We quickly found a performance problem with a stack trace generated on the fast path, thanks to the Async Profiler flame graph that is generated on each run for each machine. Issue #6157 was opened to track this problem, which was solved and made it into Jetty 10.0.4.

      After spending more time looking at the reported performance, we noticed that the ByteBuffer pool we use by default is heavily contended and reported as a major time consumer by the generated flame graphs. Issue #6379 was opened to track this issue. A quick investigation of that code proved that minor modifications could provide an appreciable performance boost that made it to Jetty 10.0.6.

      While working on our backlog of general cleanups and improvements, issue #6322 made it to the top of the pile. Investigating it, it became apparent that we could improve the ByteBuffer pool a step further by adopting the RetainableByteBuffer interface everywhere in the input path and slightly modifying its contract, in a way that enabled us to write a much more performant ByteBuffer pool. This work was released as part of Jetty 10.0.7.

      Current status of Jetty’s performance

      Here are a few figures to give you some idea of what Jetty can achieve: while our test server (powered by a 16-core Intel Core i9-7960X) is under a 240,000 HTTPS requests per second load, the probe measured that, most of the time, 99% of its own HTTPS requests were served in less than 1 millisecond, as can be seen on this graph.

      Thanks to the collected measurements, we could add performance-related assertions to the test and made it run regularly against 10.0.x and 11.0.x to make sure performance won’t unknowingly degrade over time for those branches. We are now also running the same test over HTTP/1.1 clear text and TLS as well as HTTP/2.0 clear text and TLS too.

      The test also works against the 9.4.x branch, but we do not yet have assertions for that branch: it has a different performance profile, so a different load profile is needed and different performance figures are to be expected. This has yet to happen, but it is on our to-do list.

      More test scenarios are going to be added to the test suite over time as we see fit: for instance, to measure certain load scenarios we deem important, to cover certain aspects or features, or for any other reason we’d want to measure performance and ensure its stability over time.

      In the end, making Jetty as performant as possible and continuously optimizing it has always been on Webtide’s mind and that trend will continue in the future!

    • Community Projects & Contributors Take on Jakarta EE 9

      With the recent release of JakartaEE9, the future for Java has never been brighter. In addition to headline projects moving forward into the new jakarta.* namespace, there has been a tremendous amount of work done throughout the community to stay at the forefront of the changing landscape. These efforts are the summation of hundreds of hours by just as many developers and highlight the vibrant ecosystem in the Jakarta workspace.
      The Jakarta EE contributors and committers came together to shape the 9 release. They chose to create a reality that benefits the entire Jakarta EE ecosystem. Sometimes, we tend to underestimate our influence and the power of our actions. Now that open source is the path of Jakarta EE, you, me, all of us can control the outcome of this technology. 
      Examples worthy of emulation include the following efforts. In their own words:

      Eclipse Jetty – The Jetty project recently released Jetty 11, which has worked towards full compatibility with Jakarta EE 9 (Servlet, JSP, and WebSocket). We are driven by a mission statement of “By Developers, For Developers”, and the Jetty team has worked since the announcement of the so-called “Big Bang” approach to move Jetty entirely into the jakarta.* namespace. Not only did this position Jetty as a platform for other developers to push their products into the future, but it also allowed the project to quickly adapt to innovations that are sure to come.

      [Michael Redlich] The Road to Jakarta EE 9, an InfoQ news piece, was published this past October to highlight the efforts by Kevin Sutter, Jakarta EE 9 Release Lead at IBM, and to describe the progress made this past year in making this new release a reality. The Java community should be proud of their contributions to Jakarta EE 9, especially implementing the “big bang,” and discussions have already started for Jakarta EE 9.1 and Jakarta EE 10. The Q&A with Kevin Sutter in the news piece covers the certification and voting process for all the Jakarta EE specifications, plans for upcoming releases of Jakarta EE, and how Java developers can get involved in contributing to Jakarta EE. Personally, I am happy to have been involved in Jakarta EE, having authored 14 Jakarta EE-related InfoQ news items over the past three years, and I look forward to taking my Jakarta EE contributions to the next level. I have committed to contributing to the Jakarta NoSQL specification, which is currently under development. The Garden State Java User Group (in which I serve as one of its directors) has also adopted Jakarta NoSQL. I challenge anyone who still thinks that the Java programming language is dead: these past few years have been an exciting time to be part of this amazing Java community!

      WildFly 22 Beta1 contains a tech preview EE 9 variant called WildFly Preview that you can download from the WildFly download page. The WildFly team is still working on passing the needed (Jakarta EE 9) TCKs (watch for updates via the wildfly.org site). WildFly Preview includes a mix of native EE 9 APIs and implementations (i.e. ones that use the jakarta.* namespace) along with many APIs and implementations from EE 8 (i.e. ones that use the javax.* namespace). This mix of namespaces is made possible by using the Eclipse community’s excellent Eclipse Transformer project to bytecode-transform legacy EE 8 artifacts to EE 9 when the server is provisioned. Applications that are written for EE 8 can also run on WildFly Preview, as a similar transformation is performed on any deployments managed by the server.
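
      To give a flavor of the namespace change (a toy, string-level sketch only; the real Eclipse Transformer rewrites bytecode, constant pools, descriptors, and resources against a curated mapping of EE packages):

```java
public class NamespaceRename {
    // Tiny illustrative subset of the javax.* -> jakarta.* package mapping.
    private static final String[] PREFIXES = { "javax.servlet", "javax.el", "javax.websocket" };

    /** Rename a fully qualified class name from the EE 8 to the EE 9 namespace. */
    public static String toJakarta(String className) {
        for (String prefix : PREFIXES) {
            if (className.startsWith(prefix + "."))
                return "jakarta" + className.substring("javax".length());
        }
        return className; // not a transformed EE package: leave untouched
    }
}
```

      Note that plain JDK packages such as java.util.* are untouched; only the Jakarta EE namespaces move.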

      Apache TomEE is a Jakarta EE application server based on Apache Tomcat. The project’s main focus has been the Web Profile, up until Jakarta EE 8. However, with Jakarta EE 9 and some parts being optional or pruned, the project is considering the full platform for the future. TomEE is so far only a couple of tests short (99% coverage) of compatibility with Jakarta EE 8 (see Introducing TCK Work and how it helps the community jump into the effort). For Jakarta EE 9, the community decided to take a slightly different path than other implementations: we have already produced a couple of Apache TomEE 9 milestones for Jakarta EE 9 based on a customised version of the Eclipse Transformer, and it fully supports the new jakarta.* namespace. Not to forget, the project also implements MicroProfile.

      Open Liberty is in the process of completing a Compatible Implementation for Jakarta EE 9. For several months, the Jakarta EE 9 implementation has been rolling out via the “monthly” betas. Both the Platform and Web Profile TCK testing efforts are progressing very well, with 99% success rates. The expectation is to declare one (or more) of the early Open Liberty 2021 betas a Jakarta EE 9 Compatible Implementation. Due to Open Liberty’s flexible architecture and “zero migration” goal, customers can be assured that their current Java EE 7, Java EE 8, and Jakarta EE 8 applications will continue to execute without any changes to application code or server configuration. But with a simple change to their server configuration, customers can easily start experimenting with the new “jakarta” namespace in Jakarta EE 9.

      Jelastic PaaS is the first cloud platform to have made the Jakarta EE 9 release available to customers across a wide network of distributed hosting service providers. For the last several months the Jelastic team has been actively integrating Jakarta EE 9 into the cloud platform, and in December made an official release. The certified container images with the following software stacks are already updated and available to customers across over 100 data centers: Tomcat, TomEE, GlassFish, WildFly, and Jetty. Jelastic PaaS provides an easy way to create environments with the new Jakarta EE 9 application servers for deep testing, compatibility checks, and running live production environments. It is also now possible to redeploy existing containers from old versions to the newest ones, reducing the necessary migration effort and expediting adoption of cutting-edge cloud-native tools and products.


      [Amelia Eiras] Pull Request 923, the Jakarta EE 9 Contributors Card, is a formidable example of eleven Jakartees coming together to create, innovate, and collaborate on an integration feature ensuring that no contributor who helped on the Jakarta EE 9 release is forgotten on the new landing page for the EE 9 release. Who chose those contributors? No one. That is the sole point of PR 923’s existence. I chose to lead the work on the PR and worked openly: the day Tomitribe submitted the PR, we delivered a message to the Jakarta EE Working Group forum inviting other Jakartees to provide input on the creation of the new feature. With Triber Andrii, who wrote the code, and the feedback of those involved, the feature is active and used in the EE 9 contributor cards. YOU ROCK, WALL!
      The integration feature will be used in future releases. We hope that it is also adopted by any project, community, or individual inside or outside the Eclipse Foundation to say ThankYOU, with actions, to those who help power and maintain any community.

      • PR logistics: 11 Jakartees came together and produced 116 exchanges that helped merge the code. Thank you, Chris (Eclipse Webmaster), for helping check the INFRA side. The PR’s exchanges led us to source the activity from two GitHub locations: 1) https://github.com/jakartaee/specifications/pulls (all merged pulls) and 2) https://github.com/eclipse-ee4j (all repositories).
      • PR timeframe: the contributors’ work accomplished from October 31st, 2019 to November 20th, 2020 was boxed and is frozen. The result is that the contributor cards highlight 6 different Jakartees at a time, rotating every 15 seconds. A total of 171 Jakartee contributors (committers and contributors, leveled) belong to the amazing people behind the EE 9 code. While working on that PR, other necessary improvements became obvious. A good example is the visual tweaks PR #952 we submitted, which improved the landing page’s formatting, the cards’ visuals, etc.

      Via actions, we chose not to “wait & see”, saving the project budget, but also enabling the openness to tackle work that could otherwise have been dropped as “nonsense”.

       
      In open source, our actions project a temporary part of ourselves, with no exceptions. Those actions affect any ecosystem, positively or negatively. Thank you for taking the time to read this #SharingIsCaring blog.
       

    • Introduction to HTTP2 in Jetty

      Jetty 9.3 supports HTTP/2 as defined by RFC 7540, and it is extremely simple to enable and get started using this new protocol, which is available in most current browsers.

      Getting started with Jetty 9.3

      Before we can run HTTP/2, we need to set up Jetty for HTTP/1.1 (strictly speaking this is not required, but it makes for an easy narrative):

      $ cd /tmp
      $ wget http://repo1.maven.org/maven2/org/eclipse/jetty/jetty-distribution/9.3.0.RC1/jetty-distribution-9.3.0.RC1.tar.gz
      $ tar xfz jetty-distribution-9.3.0.RC1.tar.gz
      $ export JETTY_HOME=/tmp/jetty-distribution-9.3.0.RC1
      $ mkdir demo
      $ cd demo
      $ java -jar $JETTY_HOME/start.jar --add-to-startd=http,https,deploy
      $ cp $JETTY_HOME/demo-base/webapps/async-rest.war webapps/ROOT.war
      $ java -jar $JETTY_HOME/start.jar

      The result of these commands is to:

      • Download the RC1 release of Jetty 9.3 and unpack it to the /tmp directory
      • Create a demo directory and set it up as a jetty base.
      • Enable the HTTP and HTTPS connectors
      • Deploy a demo web application
      • Start the server!

      Now you are running Jetty, and you can see the demo application deployed by pointing your browser at http://localhost:8080 or https://localhost:8443 (you may have to accept the self-signed SSL certificate)!

      In the console output, I’ll draw your attention to the following two INFO lines that should have been logged:

      Started ServerConnector@490ab905{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
      Started ServerConnector@69955f9a{SSL,[ssl, http/1.1]}{0.0.0.0:8443}

      These lines indicate that the server is listening on ports 8080 and 8443, and they list the default and optional protocols supported on each of those connectors. So you can see that port 8080 supports HTTP/1.1 (which, by specification, also supports HTTP/1.0), and port 8443 supports SSL plus HTTP/1.1 (which is HTTPS!).

      Enabling HTTP/2

      Now stop the Jetty server by hitting CTRL+C in the terminal; the following commands are all that is needed to enable HTTP/2 on both of these ports and restart the server:

      $ java -jar $JETTY_HOME/start.jar --add-to-startd=http2,http2c
      $ java -jar $JETTY_HOME/start.jar

      This does not create or enable new connectors or ports; it adds the HTTP/2 protocol to the supported protocols of the existing connectors on ports 8080 and 8443.

      To access the demo web application with HTTP/2, you will need to point a recent browser to https://localhost:8443/. You can verify whether your browser supports HTTP/2 here, or add extensions to your browser to display an icon in the address bar (see this extension for Firefox). Firefox also sets a fake response header: X-Firefox-Spdy: h2.

      How does it work?

      If you now look at the console logs, you will see that additional protocols have been added to both existing connectors on ports 8080 and 8443:

      Started ServerConnector@4bec1f0c{HTTP/1.1,[http/1.1, h2c, h2c-17, h2c-14]}{0.0.0.0:8080}
      Started ServerConnector@5bc63d63{SSL,[ssl, alpn, h2, h2-17, h2-14, http/1.1]}{0.0.0.0:8443}

      The name ‘h2’ is the official abbreviation for HTTP/2 over TLS, and ‘h2c’ is the abbreviation for unencrypted HTTP/2 (they really wanted to save every byte in the protocol!). So you can see that port 8080 still listens by default for HTTP/1.1, but can also talk h2c (and draft versions of it). Port 8443 now by default talks SSL, then uses ALPN to negotiate a protocol from ‘h2’, ‘h2-17’, ‘h2-14’, or ‘http/1.1’, in that priority order.

      When you point your browser at https://localhost:8443/ it will establish a TLS connection and then use the ALPN extension to negotiate the next protocol.  If both the client and server speak the same version of HTTP/2, then it will be selected, otherwise the connection falls back to HTTP/1.1.
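
      The negotiation itself amounts to picking the first entry in the server’s priority list that the client also offered. A minimal sketch of that selection logic (illustrative only, not Jetty’s actual ALPN code):

```java
import java.util.List;

public class AlpnSelection {
    /**
     * Pick the first protocol in the server's priority order that the
     * client offered via ALPN; otherwise fall back to HTTP/1.1.
     */
    public static String negotiate(List<String> serverPriority, List<String> clientOffers) {
        for (String protocol : serverPriority) {
            if (clientOffers.contains(protocol))
                return protocol;
        }
        return "http/1.1"; // fallback when nothing matches
    }
}
```

      With the connector shown above, a client offering both http/1.1 and h2 would be negotiated onto h2, since h2 comes first in the server’s priority order.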

      For port 8080, the use of ‘h2c’ is a little more complex. First, there is the problem of finding a client that speaks clear-text HTTP/2, as none of the common browsers will use the protocol on clear-text connections. The curl utility does support h2c, as does the Jetty HTTP/2 client.

      The default protocol on port 8080 is still HTTP/1.1, so the initial connection is expected to speak that protocol. To use HTTP/2, a client may send an HTTP/1.1 request that carries an Upgrade header, which the server may accept, upgrading to any of the other protocols listed against the connector (e.g. ‘h2c’, ‘h2c-17’, etc.) by sending a 101 Switching Protocols response! If the server does not wish to accept the upgrade, it can respond to the HTTP/1.1 request and continue normally.
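
      Sketched in code (a hypothetical helper, not Jetty’s API), the server’s Upgrade decision boils down to:

```java
import java.util.Set;

public class UpgradeDecision {
    /**
     * Decide the status line for an HTTP/1.1 request that may carry an
     * Upgrade header: switch protocols if the connector supports the
     * requested one, otherwise just answer the request normally.
     */
    public static String statusLine(String upgradeHeader, Set<String> connectorProtocols) {
        if (upgradeHeader != null && connectorProtocols.contains(upgradeHeader))
            return "HTTP/1.1 101 Switching Protocols";
        return "HTTP/1.1 200 OK"; // ignore the upgrade, respond as HTTP/1.1
    }
}
```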

      However, clients are also allowed to assume that a known server speaks HTTP/2, and can connect to port 8080 and immediately start talking HTTP/2. Luckily, the protocol has been designed with a preamble that looks a bit like an HTTP/1.1 request:

      PRI * HTTP/2.0
      SM

      Jetty’s HTTP/1.1 implementation is able to detect this preamble, and if the connector also supports ‘h2c’, the connection is upgraded without the need for a 101 Switching Protocols response!
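
      Detecting the preamble is a plain byte comparison against the connection preface defined in RFC 7540, section 3.5. A sketch of the check (illustrative only, not Jetty’s actual code):

```java
import java.nio.charset.StandardCharsets;

public class H2cPrefaceCheck {
    // The full HTTP/2 connection preface from RFC 7540, section 3.5.
    private static final byte[] PREFACE =
            "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n".getBytes(StandardCharsets.US_ASCII);

    /** True if the first bytes read on a connection are the h2c preface. */
    public static boolean isPreface(byte[] firstBytes) {
        if (firstBytes.length < PREFACE.length)
            return false;
        for (int i = 0; i < PREFACE.length; i++) {
            if (firstBytes[i] != PREFACE[i])
                return false;
        }
        return true;
    }
}
```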

      HTTP/2 Configuration

      Configuration of HTTP/2 can be considered in the following parts:

      Properties (start.d)   Configuration file ($JETTY_HOME/etc)   Purpose
      ssl.ini                jetty-ssl.xml                          Connector configuration (e.g. port) common to HTTPS and HTTP/2
      ssl.ini                jetty-ssl-context.xml                  Keystore configuration common to HTTPS and HTTP/2
      https.ini              jetty-https.xml                        HTTPS protocol configuration
      http2.ini              jetty-http2.xml                        HTTP/2 protocol configuration
    • Jetty-9.3 Features!

      Jetty 9.3.0 is almost ready, and Release Candidate 1 is available for download and testing! So this is just a quick blog to introduce what is new and encourage you to try it out!

      HTTP2

      The headline feature in Jetty-9.3 is HTTP/2 support. This protocol is now a proposed standard from the IETF, described in RFC 7540. The Jetty team has been closely involved with the development of this standard, and while we have some concerns about the result, we believe there are significant quality-of-service gains to be had by deploying HTTP/2. The protocol has features that can greatly reduce the time to render a web page, which is good for clients; plus it has some good economies in using fewer connections, which is good for servers.

      Jetty has comprehensive support for HTTP/2: client and server, with negotiated, upgraded, and direct connections; the protocol is already supported by the majority of current browsers. Since HTTP/2 is substantially based on the SPDY protocol, we have dropped SPDY support from Jetty-9.3.

      Deploying HTTP/2 on the server is just the same as configuring an https connector: java -jar $JETTY_HOME/start.jar --add-to-startd=http2 will get you going (more blogs and doco coming)!

      Webtide is actively seeking users interested in deploying HTTP2 and collaborating on analysis of load, latency, configuration and optimisations.

      ALPN

      To support standards-based negotiation of protocols on new connections (e.g. to select HTTP/2 or HTTPS), Jetty-9.3 supports the Application Layer Protocol Negotiation (ALPN) mechanism, which replaces our previous support for NPN.

      ALPN is automatically enabled when HTTP/2 is enabled with start.jar. This downloads a non-Eclipse jar containing our own extension to OpenJDK, which is not covered by the Eclipse licenses.

      SNI

      Jetty-9.3 also supports Server Name Indication (SNI) during TLS/SSL negotiation. This allows the keystore to contain multiple server certificates that have specific or wildcard domain(s) encoded in their distinguished name or in the Subject Alternative Name X.509 extension. A server with many virtual hosts/contexts can thus pick the appropriate TLS/SSL certificate for a connection.

      Enabling SNI support is as simple as adding the multiple certificates to your keystore file!

      Java 8

      Jetty-9.3 is built for and targeted at Java 8. This change was prompted by the SNI extension’s reliance on a Java 8 API and by the HTTP/2 specification’s need for TLS ciphers that are only available in Java 8. It is possible to build Jetty-9.3 for Java 7, and we considered releasing it as such, with a few configuration tricks to enable the few classes that require Java 8; however, since Java 7 is end-of-life, we decided it was not worth the complication to support it directly in the release. If you really need Java 7, please speak to Webtide about a build of 9.3 for Java 7.

      Eat What You Kill

      It is impossible to change the protocols a server speaks without dramatic changes to how it is optimized to scale to high loads and low latencies. The support of HTTP/2 requires some fundamental changes to the core scheduling strategies, specifically with regard to the challenge of handling multiplexed requests from a single connection. Jetty 9.3 contains a new scheduling strategy, nicknamed Eat What You Kill, that makes 9.3 faster out of the box and gives us the opportunity to continue improving throughput and latency as we tune the algorithm.
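
      A much-simplified sketch of the idea (illustrative only, not Jetty’s actual implementation): the thread that produces a task hands the producer role off to another thread and then consumes the task itself, keeping the hot task on the hot thread:

```java
import java.util.Queue;
import java.util.concurrent.Executor;

public class EatWhatYouKillSketch {
    private final Queue<Runnable> tasks;
    private final Executor pool;

    public EatWhatYouKillSketch(Queue<Runnable> tasks, Executor pool) {
        this.tasks = tasks;
        this.pool = pool;
    }

    /** Produce one task, delegate further production, then run the task here. */
    public void produceAndConsume() {
        Runnable task = tasks.poll();                // produce a task
        if (task == null)
            return;                                  // nothing to do
        if (!tasks.isEmpty())
            pool.execute(this::produceAndConsume);   // another thread keeps producing
        task.run();                                  // eat what you killed: consume it
    }
}
```

      The benefit is that the task runs on a thread whose caches are already warm with the task’s data, rather than being queued for a cold thread.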

      Reactive Asynchronous IO Flows?

      Jetty 9.2 already supports the Servlet asynchronous IO API and asynchronous servlets. However, in Jetty 9.3 that support has been made even more fundamental: all IO in Jetty is now asynchronous from the connector down to the servlet streams, and robust under arbitrary access from non-container-managed threads.

      So Jetty-9.3 is a good basis on which to develop with the Servlet asynchronous APIs. However, as we have some concerns with the complexity of those APIs, we are actively experimenting with better APIs based on reactive programming, and specifically on the Flow abstraction developed by Doug Lea as a candidate class for Java 9. We have a working prototype that runs on Jetty-9.3, which we hope to release soon. Please contact us if you are interested in participating in this development, as real use-cases are required to test these abstractions!
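
      For reference, the Flow abstraction did eventually ship in Java 9 as java.util.concurrent.Flow. A minimal sketch of the publisher/subscriber contract (not our prototype’s API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;

// A subscriber that requests everything and collects items into a list.
public class CollectingSubscriber implements Flow.Subscriber<String> {
    public final List<String> items = new ArrayList<>();
    public final CountDownLatch done = new CountDownLatch(1);

    @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
    @Override public void onNext(String item) { items.add(item); }
    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete() { done.countDown(); }
}
```

      With a SubmissionPublisher<String>, submitting items and then closing the publisher drives onNext and onComplete on the subscriber asynchronously, with backpressure handled via the subscription’s request() calls.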

    • Jetty 9 – it's coming!

      Development on Jetty-9 has been chugging along for quite some time now, and it looks like we’ll start releasing milestones around the end of September. This is exciting because we have a lot of cool improvements and features coming, which I’ll leave to others to blog about in detail over the next couple of months as things come closer to release.
      What I want to highlight in this blog post are the plans moving forward, Jetty version by version, with a bit of context where appropriate.

      • Jetty-9 will require java 1.7

      While Oracle has relented a couple of times now on when Java 1.6’s EOL will be, it looks like it will be over within the next few months. Since native support for SPDY (more below) is one of the really big deals about Jetty-9, and SPDY requires Java 1.7, that is going to be the requirement.

      • Jetty-9 will be servlet-api 3.0

      We had planned on Jetty-9 being servlet-api 3.1, but since that API release doesn’t appear to be coming anytime soon, the current plan is to make Jetty-9 support Servlet 3.0; once servlet-api 3.1 is released, we’ll make a minor release update of Jetty-9 to support it. Most of the work for supporting servlet-api 3.1 already exists in the current versions of Jetty anyway, so it shouldn’t be a huge deal.

      • Jetty-7 and Jetty-8 will still be supported as ‘mature’ production releases

      Jetty-9 has some extremely important changes in the IO layers that make supporting it moving forward far easier than Jetty 7 and 8. For much of the life of Java 1.6 and Java 1.7 there have been annoying ‘issues’ in the JVM NIO implementation onto which we (well, Greg, to be honest) have piled workaround after workaround, until some of the workarounds would start to act up once the underlying JVM issues were resolved. Most of this has been addressed in the jetty-7.6.x and jetty-8.1.x releases, assuming the latest JVMs are being used (basically, make sure you avoid anything in the 1.6u20-29 range). Anyway, Jetty-9 contains a heavily refactored IO layer which should make it easier to respond to these situations in the future, should they arise, in a more…well…deterministic fashion. 🙂

      • Jetty-9 IO is a major overhaul

      This deserves its own blog entry, which it will get eventually, I am sure; however, it can’t be overstated how much the inner workings of Jetty have evolved with Jetty-9. Since its inception, Jetty has always been a very modular, component-oriented HTTP server. The key word being ‘HTTP’ server, and with Jetty-9 that is changing. Jetty-9 has been re-architected from the IO layer up to directly support the separation of wire protocol from semantics, so it is now possible to support HTTP over TCP, HTTP over SPDY, WebSocket over SPDY, multiplexing, etc., with all protocols being first-class citizens and no need to mock out inappropriate interfaces. While these are mostly internal changes, they ripple out to give many benefits to users in the form of better performance, smaller software, and simpler, more appropriate configuration. For example, instead of having multiple different connector types, each with unique SSL and/or SPDY variants, there is now a single connector into which various connection factories are configured to support SSL, HTTP, SPDY, WebSocket, etc. This means that moving forward Jetty will be able to adapt easily and quickly to new protocols as they come onto the scene.
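
      The single-connector design can be pictured roughly like this (hypothetical names, not Jetty’s actual API): one connector holds an ordered set of pluggable connection factories, one per protocol, with the first registered factory acting as the default.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectorSketch {
    /** A factory that knows how to wrap a raw connection for one protocol. */
    public interface ConnectionFactorySketch {
        String protocol();
    }

    // Insertion order matters: the first factory is the default protocol.
    private final Map<String, ConnectionFactorySketch> factories = new LinkedHashMap<>();

    public void addConnectionFactory(ConnectionFactorySketch factory) {
        factories.put(factory.protocol(), factory);
    }

    /** The first registered factory handles new connections by default. */
    public String defaultProtocol() {
        return factories.keySet().iterator().next();
    }

    public ConnectionFactorySketch getConnectionFactory(String protocol) {
        return factories.get(protocol);
    }
}
```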

      • Jetty-6…for the love of god, please update

      Jetty-5 used to hold the title of ‘venerable’, but that title is really shifting to Jetty-6 at this point. I am constantly amazed by folks in places like Stack Overflow starting a project using Jetty-6. The Linux distributions really need to update, so if you work on those and need help, please ping us. Many other projects that embed Jetty really need to update as well; looking at you, Google App Engine and GWT! If you are a company and would like help updating your Jetty version, or are interested in taking advantage of the newer protocols, feel free to contact Webtide and we can help you make it easier. If you’re an open-source project, reach out to us on the mailing lists and we can assist there as much as time allows. But please…add migrating to 7, 8 or 9 to your TODO list!

      • No more split production versions

      One of our more confusing situations has been releasing both Jetty 7 and Jetty 8 as stable production versions. The reasons for doing this were many and varied, but with Servlet 3.0 having been ‘live’ for a while now, we are going to shift back to a single supported production version moving forward. The Servlet API is backwards compatible anyway, so we’ll hopefully be reducing some of the confusion over which version of Jetty to use.

      • Documentation

      Finally, our goal starting with Jetty-9 is to release versioned documentation (generated with DocBook) to a common URL under the eclipse.org domain, as well as bundling the HTML and PDF to fit in the new plugin architecture we are working with. So the days of floundering around for documentation on Jetty should be coming to an end soon.
      Lots of exciting things are coming in Jetty-9 that you’ll hear about in the coming weeks! Feel free to follow @jmcconnell on Twitter for release updates!

    • jetty @ eclipse updates

      As avid users of jetty are bound to have caught wind of, jetty7 is in the process of migrating to the Eclipse Foundation as a component of the Eclipse Runtime. We even already have our own happy little newsgroup, which we need to get into the habit of checking and using more. The milestone we have in sight right now is the next Creation Review, dated March 11, 2009. We are ironing out the requirements for a successful review, and all indications are positive on that note. Oh, and one of the requirements is that we have two mentors who are on the Architecture Council: the first being Jeff McAffer of EclipseSource fame, and the most recent being Bjorn Freeman-Benson of the Eclipse Foundation itself. We couldn’t be happier having them on board for this process; it makes life ever so much more straightforward.
      Regardless, once we have the source in its new home, we’ll have to adjust to some new processes and work out some sort of arrangement for managing jetty7 and jetty6 across two svn repositories and two bug-tracking systems. It shouldn’t be terrible, as we are planning on slowing feature additions to jetty6 and getting it onto a more methodical maintenance release schedule. Of particular interest is how we are going to manage snapshot development for jetty7, as eclipse doesn’t have the infrastructure for that maven setup, at least as far as I have seen. It could be that we make use of Nexus at oss.sonatype.com for that, as it has been quite useful for dojo and cometd of late.
      A big thing that I think is important to note over and over again is that Jetty will be dual licensed once it goes into the Eclipse Foundation, taking on both the EPL and the ASL as options for use. So no bait and switch tactics or anything like that going on here, it is only adding an additional option to the mix.
      It is probably also good to note that we have started the IP review process for the lion’s share of jetty that will be going to eclipse. In preparation, we recently shifted the basic structure of jetty in svn trunk to a new, flatter structure that is more typical of maven projects. We also started factoring out some of the larger jetty artifacts, creating jetty-io and jetty-http artifacts that allow us to clean up the dependency tree a little, since we added the asynchronous jetty-client a while back.
      Not all jetty artifacts of old will be going to the Eclipse Foundation. A lot of the third-party integrations and the like will remain at The Codehaus and will be packaged into another form of jetty distribution…likely something similar to the existing Hightide release, which rolls jetty integrations together into an easy download bundle. A lot of those details will be worked out shortly after we get jetty into eclipse svn and the jetty7 development environment stabilizes some. We are still targeting our first jetty7 release to go out along with the servlet-api 3.0 release, sometime around JavaOne. Obviously we have a lot of details to iron out with jetty7’s new home and how it integrates with the packages remaining at The Codehaus. One thing that will not change is jetty releases being in the central maven repository. Jetty7 will be available in the central repository, simply under different coordinates.
      Among the other issues we have to work out is JSP. While it is not critical for the initial import of jetty into eclipse, eventually we are going to need to offer JSP support from jetty in the OSGi bundles we are going to produce. Sadly, there is currently no ‘perfect’ solution for this that I know of. Currently in jetty we check out the glassfish JSP API and implementation, apply some patches to fix it up to work with the jetty logging mechanics, and then release those artifacts under the org.mortbay.jetty:jsp-api-2.1-glassfish coordinates. To do that under the eclipse umbrella, we would have to get the glassfish JSP source through the eclipse IP verification process, which is not a task I particularly relish at the moment. Perhaps in a month or two I won’t be quite as leery of the idea, but for now that is an issue we are firmly passing on until we have the host of other process-related issues resolved.
      Anyway, once any or all of these details clear up, we’ll make sure people know via the mailing lists, blogs and, of course, our handy-dandy shiny eclipse.jetty newsgroup.