Blog


  • End of Life: Changes to Eclipse Jetty and CometD

    Webtide (https://webtide.com) is the company behind the open-source Jetty and CometD projects. Since 2006, Webtide has fully funded the Jetty and CometD projects through services and support, including migration assistance, production support, developer assistance, and CVE resolution. 

    First, the change.

    Starting January 1, 2026, Webtide will no longer publish releases for Jetty 9, Jetty 10, and Jetty 11, or for CometD 5, 6, and 7, to Maven Central or other public repositories.

    Take a look at the primary announcement if you’re interested. 

    So, the motivation.

    How we arrived at this situation harks back to the beginnings of Webtide. Briefly, Greg Wilkins founded the Jetty project in 1995 as part of a contest created by Sun Microsystems for a new language called Java. For a decade, he and Jan Bartel carefully stewarded the project as part of their consulting company, Mort Bay Consulting. Around the Jetty 6 timeframe, in 2006, Webtide was founded as an LLC to evolve the project further commercially. Still, at its core, the goal was to support the incredible community that had developed over the years. When I joined in 2007, we began working to join the Eclipse Foundation. We took steps to formalize our development processes, aiming to add more commercial predictability to the open-source project. Joining the Eclipse Foundation also meant adhering to their rigorous IP policy for both the Jetty codebase and its dependencies, an essential step in improving corporate uptake.

    This was also the time for the project to handle the end-of-life process for Jetty 6 while establishing Jetty 7 and Jetty 8. It was the opportunity Webtide needed to fund the project’s development: offering commercial services and support for the EOL Jetty 6, while supporting and funding the future of Jetty 7 and Jetty 8.

    This was the crux: after careful consideration, we decided that all commercial support releases would be open-source for the benefit of all. While not a traditional business decision, it aligned with our values and our dedication to the community, and it was rewarded as the community continued to grow its usage of Jetty.

    This worked wonderfully for almost 20 years.

    Something shifted…

    We started to notice a shift in the community a few years ago. For almost 20 years, the companies we spoke with valued how our support could help them become more successful, with many ultimately becoming customers who truly understood the benefits of supporting open-source. Every single one of them saw the value in releasing EOL releases freely. When I became CEO a decade ago and Webtide became 100% developer-owned and operated, we were able to continue operating in this commercial environment with ease, to such an extent that the future of Webtide and the Jetty project is assured for many years to come.

    So what changed? The tone of many of the companies we spoke to. Increasingly, while explaining the model that served Webtide well for so many years, where I used to hear “That makes so much sense, this works great!”, I now hear “So it’s just free? Great, I need to check a box.”, followed up with the galling question, “Could you put this policy of yours in writing on your company letterhead?”

    And today?

    Twenty years ago, things were different; Maven 2 dominance was emerging, and Maven Central was gaining ubiquity. Managing transitive dependencies was novel in many circles. Managing CVEs in a corporate setting was in its infancy, particularly with Java developer software stacks. 

    Now, build tooling is diverse, Maven Central is a global central repository system, and corporations run their own caching repository servers (or they really should!). Even Java EE was rebranded as Jakarta EE at the Eclipse Foundation. So much has changed, but the change I’ll highlight is the emergence of business units focused on corporate software policies, complete with BOM files containing ever more metadata and checkboxes to click, managing the CVE risks associated with software developed internally. Developers, the primary people Webtide has interacted with over the years, are increasingly far removed from software maintenance activities.

    Now, our approach of endlessly updating EOL releases seems remarkably outdated. Look at Jetty 9, which we have been releasing since 2013. It turns out our approach of making things as easy as possible for the community, for software that should have officially gone EOL years ago, was a benefit to many, but it also enabled far more to grow complacent. Instead of scheduling migrations and updating to more recent versions, we inadvertently provided an environment that allowed companies to deploy onto software well over a decade old, when newer, more performant options were readily available. Then, when security postures started changing and businesses began looking deeper into their dependencies, they realized they were using outdated software, three or more major versions behind. And, to our shock, many were perfectly fine with that, so long as it was free and someone told them it was OK.

    If we have learned one thing within this time, it is that the EOL policy needs to be so much clearer, using established industry terminology. Looking back, we have been guilty of inventing terminology and inadvertently exacerbating the situation.

    What is heartening is seeing other organizations work to address EOL as well; notably, MITRE has been developing changes to the CVE system to support EOL concepts fully. If you have ever seen the text “Unsupported When Assigned” in a CVE, then you have encountered the early efforts for EOL in a CVE.

    You have to applaud the efforts of businesses to prioritize security and sane open-source policies.

    However, this is also a call to open-source projects like Jetty, as we are operating in a different world. Everyone understands that ‘End-of-Life’ does not mean ‘End-of-Use’. Clearly, the system for many companies has changed from a Developer Support perspective to a Security Support perspective. EOL Software support is purchased differently now. There are companies, like Sonar (formerly Tidelift), that exist to manage security metadata about open-source software, enabling companies to manage their software risk more effectively. 

    EOL Jetty and CometD by Webtide

    To address this industry evolution, Webtide has launched a partnership program that enables businesses relying on EOL Jetty and CometD versions to obtain CVE resolutions officially and predictably. 

    Webtide continues to resolve CVEs and issues for EOL Jetty and CometD in support of our commercial customers. However, the resulting binaries are now distributed directly to our commercial support customers and through our partnership network. No longer are we calling software EOL but deploying to Maven Central with a nod and a wink.

    Our partners are established leaders in the open-source EOL landscape, creating products that directly address the problems the security and business industries are facing. 

    This synergy works perfectly with Webtide, as we are the company that offers services and support on Jetty and CometD. Migrations, developer assistance, production support, and performance are the things that directly influence the ongoing development of the open-source projects we steward. We can continue to focus on our strengths, and our partners can focus on theirs.

    At last, the partners!

    We are pleased to announce two partnerships. With these partners, you will be able to build a secure EOL solution for your software stack, not just for your usage of Jetty or CometD. Best yet, if you are interested in Webtide’s Lifecycle Support, you can use these partner versions in conjunction with our support!


    TuxCare secures the open-source software the world builds on. Today, we protect over 1.2 million workloads – keeping them secure, compliant, and unstoppable at scale. From operating systems to development libraries and production applications, we power your open-source stack with enterprise-grade security and support, including endless lifecycle extensions for out-of-support software versions, rebootless patching for every major Linux distribution, enterprise-optimized support for community Linux, and our Linux-first vulnerability scanner that cuts through the noise.


    HeroDevs provides secure, long-term maintenance for open-source frameworks that have reached End-of-Life. Through our Never-Ending Support (NES) initiative, we deliver continuous CVE remediation and compliance-grade updates, allowing your team to migrate at your own pace. Our engineers monitor upstream changes, backport verified fixes, and publish fully tested binaries for seamless drop-in replacement. With NES for Jetty and NES for CometD, you can stay secure, stable, and compliant—without refactoring or rushing a migration.

    If your business is interested in our partner program, please direct inquiries to partnership@webtide.com.

    Wrapping it up.

    One important thing to note is that Webtide will continue to support the Jetty project with a standard open-source release process, ensuring that older versions keep receiving releases through a transition period so the community has ample time to update to newer versions. When Jetty 13 is released, Jetty 12.1 will continue to receive updates for a period, just as Jetty 12.0 does currently. Whether that period is six months or a year remains to be seen. Once we finalize this release strategy with timelines, we will make sure the community is well-informed.

    Fundamentally, the coming change is that End of Life for Jetty and CometD will no longer mean an empty EOL notice and quiet deployments to Maven Central. EOL will mean EOL, with established industry solutions available for those who need additional support.

  • Google App Engine Performance Improvements

    Over the past few years, Webtide has been working closely with Google to improve the usage of Jetty in the App Engine Java Standard Runtime. We have updated the GAE Java21 Runtime to use Jetty 12 with support for both EE8 and EE10 environments. In addition, a new HttpConnector mode has been added to increase the performance of all Java Runtimes; this is expected to result in significant cost savings from lower memory and CPU usage.

    Bypassing RPC Layer with HttpConnector Mode

    Recently, we implemented a new mode for the Java Runtimes that bypasses the legacy gRPC layer previously needed to support the GEN1 runtimes. This legacy code path allowed the GEN1 and GEN2 Runtimes to be supported simultaneously, but it had significant overhead: it used two separate Jetty servers, one for parsing HTTP requests and converting them to RPC, and another using a custom Jetty Connector to allow RPC requests to be processed by Jetty. It also required the full request and response content to be buffered, which further increased memory usage.

    The new HttpConnector mode completely bypasses this RPC layer, thereby avoiding the overhead of buffering full request and response contents. Additionally, it removes the necessity of starting a separate Jetty Server, further reducing overheads and streamlining the request-handling process.

    Benchmarks

    Benchmarks conducted on the new HttpConnector mode have demonstrated significant performance improvements. Detailed results and documentation of these benchmarks can be found here.

    Usage

    To take advantage of the new HttpConnector mode, developers can set the appengine.use.HttpConnector system property in their appengine-web.xml file.

    <system-properties>
        <property name="appengine.use.HttpConnector" value="true"/>
    </system-properties>

    By adopting this configuration, developers can leverage the enhanced performance and efficiency offered by the new HttpConnector mode. This is available for all Java Runtimes from Java8 to Java21.

    This mode is currently an optional configuration, but the plan is to make it the default for all applications in the future.

  • Back to the Future with Cross-Context Dispatch

    Cross-Context Dispatch reintroduced to Jetty-12

    With the release of Jetty 12.0.8, we’re excited to announce the (re)implementation of a somewhat maligned and deprecated feature: Cross-Context Dispatch. This feature, while having been part of the Servlet specification for many years, has seen varied levels of use and support. Its re-introduction in Jetty 12.0.8, however, marks a significant step forward in our commitment to supporting the diverse needs of our users, especially those with complex legacy and modern web applications.

    Understanding Cross-Context Dispatch

    Cross-Context Dispatch allows a web application to forward requests to or include responses from another web application within the same Jetty server. Although it has been available as part of the Servlet specification for an extended period, it was deemed optional with Servlet 6.0 of EE10, reflecting its status as a somewhat niche feature.
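
    To illustrate, here is a minimal sketch using the standard Servlet API (the context path /other and target path /target are illustrative assumptions, and the target context must permit cross-context dispatch):

    import java.io.IOException;
    import jakarta.servlet.RequestDispatcher;
    import jakarta.servlet.ServletContext;
    import jakarta.servlet.ServletException;
    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    // Illustrative servlet that forwards a request to a sibling web
    // application deployed on the same server.
    public class BridgeServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Look up the other application's context by its context path.
            ServletContext other = getServletContext().getContext("/other");
            if (other == null) {
                response.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            // Forward to (or include a response from) the other application.
            RequestDispatcher dispatcher = other.getRequestDispatcher("/target");
            dispatcher.forward(request, response);
        }
    }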

    Initially, Jetty 12 moved away from supporting Cross-Context Dispatch, driven by a desire to simplify the server architecture amidst substantial changes, including support for multiple environments (EE8, EE9, and EE10). These updates mean Jetty can now deploy web applications using either the javax namespace (EE8) or the jakarta namespace (EE9 and EE10), all running on the latest optimized Jetty core implementations of HTTP/1.1, HTTP/2, and HTTP/3.

    Reintroducing Cross-Context Dispatch

    The decision to reintegrate Cross-Context Dispatch in Jetty 12.0.8 was influenced significantly by the needs of our commercial clients, some of whom still leverage this feature in their legacy applications. Our commitment to supporting our clients’ requirements, including the need to maintain and extend legacy systems, remains a top priority.

    One of the standout features of the newly implemented Cross-Context Dispatch is its ability to bridge applications across different environments. This means a web application based on the javax namespace (EE8) can now dispatch requests to, or include responses from, a web application based on the jakarta namespace (EE9 or EE10). This functionality opens up new pathways for integrating legacy applications with newer, modern systems.

    Looking Ahead

    The reintroduction of Cross-Context Dispatch in Jetty 12.0.8 is more than just a nod to legacy systems; it can be used as a bridge to the future of Java web development. By allowing for seamless interactions between applications across different Servlet environments, Jetty-12 opens the possibility of incremental migration away from legacy web applications.

  • If Virtual Threads are the solution, what is the problem?

    Java’s Virtual Threads (aka Project Loom or JEP 444) have arrived as a full platform feature in Java 21, which has generated considerable interest, and many projects (including Eclipse Jetty) are adding support.

    I have previously been somewhat skeptical about how significant any advantages of Virtual Threads over Platform Threads (aka Native Threads) actually are. I’ve also pointed out that cheap Threads can do expensive things, so using Virtual Threads may not be a universal panacea for concurrent programming.

    However, even with those doubts, it is clear that Virtual Threads do have advantages in memory utilization and speed of startup. In this blog we look at what kinds of applications may benefit from those advantages.

    In short, we investigate which scalability problems Virtual Threads are the solution for.

    Axioms

    Firstly, let’s agree on what is accepted about Virtual Thread usage:

    • Writing asynchronous code is extraordinarily difficult. “Yeah, I know” you say… yeah but no, it is harder than that! Avoiding the need to write application logic in asynchronous style is key to improving the quality and stability of an application. This blog is not generally advocating that you write your applications in an asynchronous style.
    • Virtual Threads are very cheap to create. From a performance perspective there is no reason to pool already-started Virtual Threads, and such pools are considered an anti-pattern. If a Virtual Thread is needed, then just create a new one.
    • Virtual Threads use less memory. This is accepted, but with some significant caveats. Specifically, the memory saving is achieved because Virtual Threads only allocate stack memory as needed, whilst Platform Threads provision stack size based on worst-case maximal usage. So this is not exactly an apples-to-apples comparison.

    If some are good, are more even better?

    Consider a blocking-style application running on a traditional application server that is not scaling sufficiently. On inspection, you see that all the Threads in the pool (default size 200) are allocated and that there are no Threads available to do more work!

    Would making more Threads available be the solution to this scalability problem? Perhaps 2000 Platform Threads will help? Still slow? Let’s try 10,000 Platform Threads! Running out of memory? Then perhaps unlimited Virtual Threads will solve the scalability problems?

    What if, on further inspection, it is found that the pool Threads are mostly blocked waiting for a JDBC database connection from the JDBC Connection Pool (default size 8), and that as a result the Thread pool is exhausted?

    If every request needs the database, then any additional Threads will all just block on the same JDBC pool, so more Threads will not make the solution more scalable.

    Alternatively, if only some requests need to use the database, then having more Threads would allow requests that do not need the database to proceed to completion. However, a fraction of requests would still end up blocked on the JDBC pool, so any limited Platform Thread pool could still become exhausted.

    With unlimited Virtual Threads there is no effective limit on the number of Threads, so non-database requests could always continue, but the queue of Threads waiting on JDBC would also be unlimited, as would the total of any resources held by those Threads whilst waiting. Thus the application would only scale for some types of request, whilst giving JDBC-dependent requests the same poor Quality of Service as before.

    Finite Resources

    If an application’s scalability is constrained by access to a finite resource, then it is unlikely that “more Threads” is the solution to any scalability problems. Just like you can’t solve traffic by adding cars to a congested road, adding Threads to an already busy server may make things worse.

    Some common examples of finite resources that applications can encounter are:

    • CPU: If the server CPU is near 100% utilization, then the existing number of Threads are sufficient to keep it fully loaded. More and/or faster CPUs are needed before any increase in Threads could be beneficial.
    • Database: Many database technologies cannot handle many concurrent requests, so parallelism is restricted. If the bottleneck is the database, then it needs to be re-engineered rather than laid siege to by more concurrent Threads.
    • Local Network: An application may block reading or writing data because it has reached the limit of the local network. In such cases, more Threads will not increase throughput, but they might improve latency if some Threads can progress reading new requests and have responses ready to write once the network becomes less congested. However, there is a cost in waiting (see below).
    • Locks: Parallel applications often use some form of lock or mutual exclusion to serialize access to common data structures. Contention on those locks can limit parallelism and require redesign rather than just more Threads.
    • Caches: CPU, memory, file system, and object caches are key tools in speeding up execution. However, if too many different tasks are executed concurrently, the capacity of these caches to hold relevant data may be exceeded, and execution with a cold cache can be very slow. Sometimes it is better to do fewer things concurrently and serialize the excess so that caches can be more effective, rather than trying to do everything at once.

    If an application’s lack of scalability is due to Threads waiting for finite resources, then any additional Threads (Platform or Virtual) are unlikely to help and may make your application less stable. At best, careful redesign is needed before Thread counts can be increased in any advantageous way.

    Infinite (OK Scalable) Resources

    Not all resources are finite and some can be considered infinite, at least for some purposes. But let’s call them “Scalable” rather than infinite. Examples of scalable resources that an application may block on include:

    • Database: Not all databases are created equal and some types of database have scalability in excess of the request rates experienced by a server. However, such scalability often comes at a latency cost as the database may be remote and/or distributed, thus applications may block waiting for the database, even if it has capacity to handle more requests in parallel.
    • Micro services: A scalable database is really just a specific example of a micro service that may be provided by a remote and/or distributed system that has effectively infinite capacity at the cost of some latency. Applications can often find themselves waiting on one or more such services.
    • Remote Networks: Local data center networks are often very VERY fast, and in many situations they can outstrip even the combined capacity of many client systems. An application sending/receiving larger content may block writing/reading it due to a slow client, but still have enough local network capacity to communicate with many other clients in parallel.
    • Local Filesystems: Typically file systems are faster than networks, but slower than CPU. They also may have significant latency vs throughput tradeoffs (less so now that drives seldom need to spin physical disks). Thus Threads may block on local IO even though there is additional capacity available.

    Applications that lack scalability due to Threads waiting for such scalable resources may benefit from more Threads. Whilst some Threads are waiting for a database, micro service, slow client network, or file system, it is likely that other Threads can progress, even if they need to access the same types of resources.

    Platform Thread pools can easily be increased to many thousands of threads before typical servers will have memory issues. If scalability is needed beyond that, then Virtual Threads can offer practically unlimited additional Threads, but see the caveats below.

    Furthermore, fast-starting Virtual Threads can be of significant benefit in situations where small jobs with long latency can be carried out in parallel. Consider an application that processes requests using data from several micro services, each with some access latency. If these are done serially, then the total request latency is the sum of them all. Sometimes asynchronous code is used to execute micro service requests in parallel, but spinning up a couple of Virtual Threads in this situation is simpler, less error prone, and applicable to more APIs.
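
    For example, a minimal sketch of that fan-out using virtual threads (the service URLs and the fetch helper are illustrative assumptions):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FanOut {
        // Stand-in for any blocking call to a remote micro service.
        static String fetch(String url) {
            return "response from " + url;
        }

        public static void main(String[] args) throws Exception {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                // Each blocking call runs on its own cheap virtual thread...
                Future<String> user = executor.submit(() -> fetch("https://users.example/me"));
                Future<String> orders = executor.submit(() -> fetch("https://orders.example/me"));
                // ...so the total latency is roughly the maximum of the two,
                // not their sum, with no asynchronous-style code required.
                System.out.println(user.get() + " / " + orders.get());
            }
        }
    }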

    Too Much of a Good Thing?

    There is also some concern with low-latency scalable resources that seldom block Virtual Threads. Since Virtual Threads are not preempted, there can be starvation and/or fairness problems if they are not blocked by slow resources. This is probably a good problem to have, but it will need some management at extreme scales for some applications.

    The Cost of Waiting

    We have identified that there are indeed scalable resources on which an application may wait with many Threads. However, there is no such thing as a free lunch and waiting Threads may have a significant cost, even if they are Virtual. Specifically how/where an application waits can greatly affect resource usage.

    Consider a traditional application server with a limited Thread pool that is running near capacity, but with additional demand. While the 200-odd Threads are busy handling 200 concurrent requests, there are additional requests waiting to be handled. However, in an asynchronous server like Jetty, those additional requests can be cheaply parked and may be represented by just a single set bit in a selector, or perhaps a tiny entry in a queue that holds only a reference to a connection that is ready to be read.

    Now consider if requests were serviced by Virtual Threads instead of waiting for a pooled Platform Thread to become available. Pending requests would be allowed to proceed to some blocking point in the application. Waiting like this within the application can have additional expenses including:

    • An input buffer will be allocated to read the request and any content it has.
    • A read is performed into the input buffer, thus removing network back pressure, so a client is able to send more requests/data even if the server is unable to handle them.
    • An object representation of the request will be built, containing at least the metadata, and frequently some application data if there is an XML or JSON payload.
    • Sessions may be activated and brought into memory from caches or passivation stores.
    • The allocated Thread runs deep inside the application code, potentially reaching near maximal stack depth.
    • Application objects created on the heap are held in memory with references from the stack.
    • An output buffer may be allocated, along with additional character conversion resources.

    When request handling blocks within the application, all these additional resources may be allocated and held during that wait. Worse still, because of the lack of back pressure, a client may send more requests/data, resulting in more Threads and associated resources being allocated, and also being held, whilst the application waits for some resource.

    Provisioning for the Worst Case

    We have seen that there are indeed applications that may benefit from having additional Threads available to service requests. But we have also seen that such additional Threads may incur additional costs beyond just the stack size. Waiting/Blocking within an application will typically be done with a deep stack and other resources allocated. Whilst Virtual Threads might be effectively infinite, it is unlikely that these other required resources are equally scalable.

    When an application experiences a worst case peak in load, then ultimately some resource will run out. To provide good Quality of Service, it is vital that such resource exhaustion is handled gracefully, allowing some request handling to continue rather than suffering catastrophic failure.

    With traditional Platform Thread based pools, stack memory is already provisioned for worst-case stacks for all Threads, and the thread pool’s size limit is also an indirect limit on the number of concurrent resources used. Threads have sufficient resources available to complete their handling, whilst any excess requests suffer latency whilst waiting cheaply for an available Thread. Furthermore, the back pressure resulting from not reading all offered requests can prevent additional load from being sent by the clients. Thread limits are imperfect resource limits, but at least they are some kind of limit that can provide some graceful degradation under load.

    Alternatively, an application using Virtual Threads that has no explicit resource management is likely to exhaust some of the resources used by those Threads. This can result in an OutOfMemoryError or similar, as the unlimited Virtual Threads each allocate deep stacks and other resources needed for request handling. The cost of the average-case memory savings may be insufficient provisioning for the worst case, resulting in catastrophic failure rather than graceful degradation. An analogy is that building more roads can actually make traffic worse if the added cars overwhelm other infrastructure.

    Many applications are written without explicit resource limitations/management. Instead they rely on the imperfect Thread pool for at least some minimal protection. If that is removed, then some form of explicit resource limitation/management is likely to be needed in its place. Stable servers need to be provisioned for the worst case, not the average one.
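
    As a minimal sketch of what such explicit management might look like (the limit of 200 is an assumed stand-in for whatever resource actually constrains the application):

    import java.util.concurrent.Semaphore;

    public class BoundedHandler {
        // Assumed limit: size this to the real constraint (JDBC pool, heap, ...).
        private final Semaphore permits = new Semaphore(200);

        // Called on a new virtual thread per request.
        public void handle(Runnable request) throws InterruptedException {
            permits.acquire(); // excess threads wait cheaply here, before buffers,
            try {              // sessions, and deep stacks are allocated
                request.run();
            } finally {
                permits.release();
            }
        }
    }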

    Conclusion

    There are applications that can scale better if more Threads are available, but it is not all applications (at least not without significant redesign). Consideration needs to be given to what will limit the worst case load for a server/application if it is not to be Threads. Specifically, the costs of waiting within the application may be such that scalability is likely to have a limit that will not be enforced by practically infinite Virtual Threads.

    It may be that resources have limitations well within the capacity of large but limited Platform Thread pools, which are perfectly capable of scaling to many thousands of threads. So experiments with scaling a Platform Thread pool should first be used to see what limits do apply to an application.

    If no upper limit is found before Platform Threads exhaust kernel memory, then Virtual Threads will allow scaling beyond that limit until some other limit is found. Thus the ultimate resource limit will need to be explicitly managed if catastrophic failure is to be avoided (but, to be fair, applications using Thread pools should also do some explicit resource limit management rather than relying just on the coarse limits of a Thread pool).

    Recommendation

    If Virtual Threads are not the general solution to scalability, then what is? There is no one-size-fits-all solution, but I believe many applications that are limited by blocking on the network would benefit from being deployed in a server like Eclipse Jetty, which can do much of the handling for them asynchronously. Let Jetty read your requests asynchronously and prepare the content as parsed JSON, XML, or form data. Only then allocate a Thread (Virtual or Platform) with a large output buffer so the application can be written in blocking style, but will not block on either reading the request or writing the response. Finally, once the response is prepared, let Jetty flush it to the network asynchronously. Jetty has always somewhat supported this model (e.g. by delaying dispatch to a Servlet until the first packet of data arrives), but with Jetty-12 we are adding more mechanisms to asynchronously prepare requests and flush responses, whilst leaving the application written in blocking style. More to come on this in future blogs!

  • Security Audit with Trail of Bits

    Several months ago, the Eclipse Foundation approached the Eclipse Jetty project with the offer of a security audit. The effort was being supported through a collaboration with the Open Source Technology Improvement Fund (OSTIF), with the actual funding coming from the Alpha-Omega Project.

    Upon reflection, this collaboration could not have come at a better time for the Jetty open-source project. Completing this security audit before the first release of Jetty 12 was serendipitous. While the results of the collaboration with Trail of Bits are only now being published, the work was largely completed a couple of months ago.

    When we started this audit effort, Jetty 12 was quickly shaping up to be one of the most exciting releases we have ever worked on in the history of Jetty. Support for protocols like HTTP/3 further refined the internals of Jetty into a modern, scalable network component. Coupled with a refactoring of the internals to remove the strict dependency on Servlet Request and Response objects, the core of Jetty became a more general server component, allowing scalable and performant applications to be developed directly on Jetty without the strict requirement for Servlets. This ultimately allowed us to add a new Environment concept that supports multiple versions of the Servlet API on the same Jetty server simultaneously; mixing javax.servlet and jakarta.servlet on the same server allows for many exciting options for our users.

    However, these changes and exciting new features mean that quite a bit has changed, and when many moving parts are evolving, there is always a risk of unwanted behaviors.

    Our committers had low expectations of what this engagement would lead to, as our previous experience with various code analysis tooling often resulted in too many false positives. Part of the prep work to start this review required us to draw an appropriately sized box around the Jetty project code where we felt a review was most warranted. It should come as no surprise that much of this code is some of the more nuanced and complex within Jetty. So, throwing caution to the wind, we prepared and submitted the paperwork.

    We could not have been more pleased with how the engagement proceeded from here. Trail of Bits was chosen as the company to perform the review, and it met and exceeded our expectations by far. Sitting down with their engineers, it was apparent they were excited to be working on an open-source project of Jetty’s maturity, and when their work was completed, they demonstrated a much more complete understanding of the reviewed code than the Jetty team expected.

    Ultimately, we could not have been happier with how this effort was executed. The Eclipse Jetty project members are very thankful to the Eclipse Foundation, OSTIF, and Trail of Bits for making this collaboration a resounding success!


  • New Jetty 12 Maven Coordinates

    Now that Jetty 12.0.1 is released to Maven Central, we’ve started to get a few questions about where some artifacts are, or when we intend to release them (as folks cannot find them).

    Things have changed with Jetty, starting with the 12.0.0 release.

    First, our historical versioning of <servlet_support>.<major>.<minor> is no longer being used.

    With Jetty 12, we are now using a more traditional <major>.<minor>.<patch> versioning scheme for the first time.

    Also new in Jetty 12 is that the Servlet layer has been separated away from the Jetty Core layer.

    The Servlet layer has been moved to the new Environments concept introduced with Jetty 12.

    Environment | Jakarta EE | Servlet | Jakarta Namespace | Jetty GroupID
    ee8         | EE8        | 4       | javax.servlet     | org.eclipse.jetty.ee8
    ee9         | EE9        | 5       | jakarta.servlet   | org.eclipse.jetty.ee9
    ee10        | EE10       | 6       | jakarta.servlet   | org.eclipse.jetty.ee10

    Jetty Environments

    This means the old Servlet-specific artifacts have been moved to environment-specific locations, in terms of both their Java namespace and their Maven coordinates.

    Example:

    Jetty 11 – Using Servlet 5
    Maven Coord: org.eclipse.jetty:jetty-servlet
    Java Class: org.eclipse.jetty.servlet.ServletContextHandler

    Jetty 12 – Using Servlet 6
    Maven Coord: org.eclipse.jetty.ee10:jetty-ee10-servlet
    Java Class: org.eclipse.jetty.ee10.servlet.ServletContextHandler
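
    For example, in a pom.xml the Jetty 12 dependency looks like this (the version shown is the 12.0.1 release mentioned above):

    <dependency>
      <groupId>org.eclipse.jetty.ee10</groupId>
      <artifactId>jetty-ee10-servlet</artifactId>
      <version>12.0.1</version>
    </dependency>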

    We have a migration document which lists all of the migrated locations from Jetty 11 to Jetty 12.

    The new versioning scheme and the environment features built into Jetty mean that new major versions of Jetty will not be as common as they have been in the past.




  • Jetty 12 performance figures

    TL;DR

    This is a quick blog to share the performance figures of Jetty 12 and to compare them to Jetty 11, updated for the release of 12.0.2. The outcome of our benchmarks is that Jetty 12 with EE10 Servlet 6.0 is 5% faster than Jetty 11 with EE9 Servlet 5.0. The Jetty 12 Core API is 22% faster.

    These percentages are calculated from the P99 integrals, i.e., the total latency excluding the 1% slowest requests from each of the 180 sample periods. The max latency of the 1% slowest requests in each sample shows a modest improvement for Jetty 12 servlets and a significant improvement for core.

    Introduction

    We use one server (powered by a 16-core Intel Core i9-7960X) connected by a 10GbE link to a dedicated switch, to which four clients are connected with a single 1GbE link each. The server runs a single instance of the Jetty server with the default 200 threads and a 32 GB heap managed by ZGC, and the clients each run a single instance of the Jetty Load Generator with an 8 GB heap, also managed by ZGC. This is all automated, and the source of the benchmark is public too.

    Each client applies a load of 60,000 requests per second for a total of 240,000 requests per second that the server handles alone. We make sure there is no saturation anywhere: CPU consumption, network links, RAM, JVM heap, HTTP response status are monitored to ensure this is the case. We also make sure no errors are reported in HTTP response statuses or on the network interface.

    We collect all processing times of the Jetty server (i.e., the time from reading the first byte of the request from the socket to writing the last byte of the response to the socket) in histograms, from which we then graph the minimum, P99, and maximum latencies, and calculate integrals (i.e., the sum of all measurements) for the latter two.

    Jetty 12

    In the following three benchmarks, the CPU consumption of the server machine stayed around 40%.

    Below are plots of the maximum (graph in yellow), minimum to P99 (graph in red) processing times for Jetty 12:

    ee10

    The yellow graph shows a single early peak at 4500 µs, then most peaks at less than 1000 µs, with an average sample peak of 434 µs.

    The red graph shows a minimum processing time of 7 µs and a P99 processing time of 34.8 µs.

    core

    The yellow graph shows a few peaks at 2000 µs, with the average sample peak being 335 µs.

    The red graph shows a minimum processing time of 6 µs and a P99 processing time of 30.0 µs.

    Comparing to Jetty 11

    Comparing apples to apples required us to slightly modify our benchmark for Jetty 11.

    In the following benchmark, the CPU consumption of the server machine stayed around 40%, like with Jetty 12.

    Below are the same plots of the maximum (graph in yellow), minimum to P99 (graph in red) processing times for Jetty 11:

    The yellow graph shows a few peaks at 4000 µs, with an average sample peak of 487 µs.

    The red graph shows a minimum processing time of 8 µs and a P99 processing time of 36.6 µs.

  • Jetty Project and TCK

    The Jetty project has a long history of participating in the standardization of Jakarta EE (previously Java EE) specifications such as Servlet and WebSocket.

    Jakarta renaming

    After the donation of the TCK source code by Oracle to the Eclipse Foundation, the EE group decided to change the historical Java package names from javax.servlet, javax.websocket, etc. to jakarta.servlet, jakarta.websocket, etc. As we wanted to be able to validate our EE9 implementation, we participated in the big change by contributing two Pull Requests doing the migration (Servlet/WebSocket TCK https://github.com/jakartaee/platform-tck/pull/257, and JSP, JSF, JAX* https://github.com/jakartaee/platform-tck/pull/276). Those Pull Requests took a lot of work and testing, as they modified 1746 and 857 files of old/legacy code, respectively.

    Refactoring of Servlet TCK

    The original TCK (still the current main branch as of the time of writing) is based on a technology called JavaTest Harness (more details here). The problem with this technology is the slowness of test execution and the difficulty of debugging what is happening.

    After some discussion, a decision was made to rewrite all TCKs using JUnit 5 (or TestNG) and to use Arquillian.

    Starting with those requirements, it has been a long journey to refactor the Servlet TCK.

    Originally, the Servlet TCK was a cascade of Ant builds packaging 210 war files to be deployed into the target Servlet container and validated using the JavaTest Harness framework. This means a JVM is restarted 210 times to run all the tests, and the TCK has 1691 tests.

    On the Jetty CI, the daily build testing against the legacy TCK takes around 1h10min (Jenkins Job) and it’s not easy to set up such a run locally.

    After the refactoring to use Arquillian and an embedded Jetty instance, the equivalent Jenkins Job takes around 17 minutes (including a full rebuild of the Arquillian Jetty container, the Servlet TCK, and Jetty 12).

    Maven Build

    As we were starting a big refactor with Jetty 12, having those 1691 tests was convenient for testing all the changes, but definitely too slow to run locally. So thus began the journey of rewriting the Servlet TCK.

    The first job was to have a Maven build for the whole TCK. A branch called tckrefactor was started to refactor the single source directory into multiple Maven modules with separation of concerns. As you can imagine, it took some time to move the Java files to the correct Maven modules without breaking some (circular?) dependencies that a single source directory was allowing.

    JUnit

    Then we had to find the pattern (i.e., what exactly do we have to do?). As explained, the TCK is/was a collection of wars and classes run using the JavaTest Harness.

    The tests to be executed were discovered using Javadoc-style tags. The most complicated example is a mix of inherited and local tests declared with these tags, such as:

    public class Client extends secformClient {
    ......
      /*
       * @testName: test1
       *
       * @assertion_ids: Servlet:SPEC:142; JavaEE:SPEC:21
       *
       * @test_Strategy: .....
       *
       */
      /*
       * @testName: test1_anno
       *
       * @assertion_ids: Servlet:SPEC:142; JavaEE:SPEC:21; Servlet:SPEC:290;
       * Servlet:SPEC:291; Servlet:SPEC:293; Servlet:SPEC:296; Servlet:SPEC:297;
       * Servlet:SPEC:298;
       *
       * @test_Strategy: ......
       *
       */
      public void test1_anno() throws Fault {
        // save off pageSec so that we can reuse it
        String tempPageSec = pageSec;
        pageSec = "/servlet_sec_secform_web/ServletSecAnnoTest";
        try {
          super.test1();
        } catch (Fault e) {
          throw e;
        } finally {
          // reset pageSec to orig value
          pageSec = tempPageSec;
        }
      }
    }

    When executed by the JavaTest Harness framework, this test class will run both the super.test1 and test1_anno methods. Moving to JUnit 5, we need to add the @Test annotation to all methods which need to run.

    This means in this case we need to execute the test1 method in the superclass and the method test1_anno in the current class. The solution could be to mark both test1_anno and the method test1 in the parent class with the @Test annotation, but we cannot touch the parent class: otherwise, every class inheriting from it would run this test method, which is not expected and was not the behavior of the JavaTest Harness framework.

    The solution is to mark the method test1_anno with the @Test annotation and modify the current source code to add a simple annotated method such as:

    @Test
    public void test1() {
        super.test1();
    }

    With more than 800 classes to touch, this was not possible by hand, so we developed an Apache Maven plugin to achieve it: https://github.com/jetty-project/tck-extract-tests-maven-plugin

    Arquillian

    As mentioned above, the original TCK was based on Apache Ant, building 212 war files to deploy. This means 212 Java files had to be modified to include the generation of the Arquillian WebArchive. Sadly, there was nothing magical here, as every build.xml was different and there was no way to find a pattern to automate this.

    The solution was: Roll Up your sleeves™

    Given a build.xml such as:

      <property name="app.name"  value="servlet_spec_errorpage" />
      <property name="content.file" value="HTMLErrorPage.html"/>
      <property name="web.war.classes"
                value="com/sun/ts/tests/servlet/common/servlets/HttpTCKServlet.class,
                       com/sun/ts/tests/servlet/common/util/Data.class"/>
      <target name="compile">
        <ts.javac includes="${pkg.dir}/**/*.java,
                            com/sun/ts/tests/common/webclient/validation/**/*.java,
                            com/sun/ts/tests/servlet/common/servlets/**/*.java,
                            com/sun/ts/tests/servlet/common/util/**/*.java"/>
      </target>

    was rewritten using Arquillian WebArchive generation in Java code (some manual checks had to be done, as some **/**/** includes were too “generous”) to:

    public static WebArchive getTestArchive() throws Exception {
        return ShrinkWrap.create(WebArchive.class, "servlet_spec_errorpage_web.war")
                .addAsWebResource("spec/errorpage/HTMLErrorPage.html","HTMLErrorPage.html")
                .addAsLibraries(CommonServlets.getCommonServletsArchive())
                .addClasses(SecondServletErrorPage.class, ServletErrorPage.class, TestException.class, TestServlet.class,
                        WrappedException.class)
                .setWebXML(URLClient.class.getResource("servlet_spec_errorpage_web.xml"));
      }
    

    This work took a lot of time and resources, and generated quite a big Pull Request: https://github.com/jakartaee/platform-tck/pull/912

    With all of these changes, as long as your container has Arquillian support, it is very easy to run the Servlet TCK using Apache Maven Surefire with a configuration that picks up all the tests (a sample for Jetty is https://github.com/jetty-project/servlet-tck-run/blob/jetty-12-ee10/pom.xml).

    Last but not least, as the Jetty Arquillian container is embedded, it is now possible to debug everything (TCK code and Jetty code) using an IDE.

    You can run even more TCKs by adding them to the Apache Maven Surefire configuration:

            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <version>${surefire.version}</version>
              <configuration>
                ....
                <includes>
                  <include>**/URLClient*</include>
                  <include>**/Client*</include>
                </includes>
                <excludes>
                  <exclude>**/BaseUrlClient</exclude>
                </excludes>
                <dependenciesToScan>
                  <dependency>jakartatck:servlet</dependency>
                </dependenciesToScan>
              </configuration>
            </plugin>

    The next steps are to convert the other TCKs to JUnit/Arquillian. The WebSocket, JSP, and EL conversions are works in progress.

  • Jetty 12 – Virtual Threads Support

    Executive Summary

    Virtual Threads, introduced in Java 19, are supported in Jetty 12, as they have been in Jetty 10 and Jetty 11 since 10.0.12 and 11.0.12, respectively.

    When virtual threads are supported by the JVM and enabled in Jetty (see embedded usage and standalone usage), applications are invoked using a virtual thread, which allows them to use simple blocking APIs, but with the scalability benefits of virtual threads.
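
    For embedded usage, enabling virtual threads looks roughly like the following sketch (based on the QueuedThreadPool API described in the Jetty documentation; verify against your Jetty version):

    import java.util.concurrent.Executors;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    public class VirtualThreadsServer {
        public static void main(String[] args) throws Exception {
            QueuedThreadPool threadPool = new QueuedThreadPool();
            // Blocking tasks will be offered to this executor, so each one
            // runs on a newly spawned virtual thread (requires Java 21+).
            threadPool.setVirtualThreadsExecutor(Executors.newVirtualThreadPerTaskExecutor());
            Server server = new Server(threadPool);
            server.start();
            server.join();
        }
    }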

    Introduction

    Virtual threads were introduced as a preview feature in Java 19 via JEP 425 and in Java 20 via JEP 436, and finally integrated as an official feature in Java 21 via JEP 444.

    Historically, the APIs provided to application developers, especially web application developers, were blocking APIs based on InputStream and OutputStream, or based on JDBC.
    These APIs are very simple to use, so applications are simple to develop, understand and troubleshoot.

    However, these APIs come with a cost: when a thread blocks, typically waiting for I/O or a contended lock, all the resources associated with that thread are retained, waiting for the thread to unblock and continue processing: the native thread and its native memory are retained, as well as network buffers, lock structures, etc.

    This means that blocking APIs are less scalable because they retain the resources they use.
    For example, if you have configured your server thread pool with 256 threads, and all are blocked, your server cannot process other requests until one of the blocked threads unblocks, limiting the server’s scalability.

    Furthermore, if you increase the server thread pool capacity, you will use more memory and likely require bigger hardware.

    Asynchronous/Reactive

    For these reasons, non-blocking asynchronous and reactive APIs have been introduced. The primary examples are the asynchronous I/O APIs introduced in Servlet 3.1 and reactive APIs provided by libraries such as RxJava and Spring’s Project Reactor, based on Reactive Streams.
    Unfortunately, REST APIs such as JAX-RS or Jakarta RESTful Web Services have not been (fully) updated with non-blocking APIs, so web applications that use REST are stuck with blocking APIs and scalability problems.

    Essential to note is that asynchronous and reactive APIs are more difficult to use, understand, and troubleshoot than blocking APIs, but they are more scalable and typically achieve similar performance at a fraction of the resources. We have seen web applications that, when switched from blocking APIs to non-blocking APIs, reduced their thread usage from 1000+ to 10+.

    Virtual threads aim to be the best of both worlds: simple-to-use blocking APIs for developers, with the scalability of non-blocking APIs provided by the JVM.

    Jetty 12 Architecture

    The Jetty 12 architecture, at its core, is completely non-blocking and uses an AdaptiveExecutionStrategy (formerly known as “eat what you kill” which was covered in previous blogs here and here) to determine how to consume tasks.

    The key feature of AdaptiveExecutionStrategy is that it has a strong preference for consuming tasks in the same thread that produces them, so they are executed with a hot CPU cache, without parallel slowdown or context-switch latency, yet it avoids the risk of the server exhausting its thread pool.

    Simplifying a bit, each task is marked either as blocking or non-blocking; AdaptiveExecutionStrategy looks at the task and at how many threads are available to decide how to consume the task.

    If the task is non-blocking, the current thread runs it immediately.
    Otherwise, if no other threads are available to continue producing tasks, the current thread takes over the production of tasks and gives the tasks to an Executor, where they are likely queued and executed later by different threads.
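
    As a conceptual sketch only (simplified pseudocode in Java form; the names and structure here are assumptions, not Jetty’s actual code):

    import java.util.concurrent.Executor;

    // Simplified decision, similar in spirit to AdaptiveExecutionStrategy.
    class Strategy {
        interface Task extends Runnable { boolean isBlocking(); }

        private final Executor executor; // native or virtual thread executor

        Strategy(Executor executor) { this.executor = executor; }

        void consume(Task task) {
            if (!task.isBlocking()) {
                task.run();                // run on the hot producing thread
            } else if (tryTakeOverProduction()) {
                task.run();                // another thread produces; we consume
            } else {
                executor.execute(task);    // hand off; likely queued for later
            }
        }

        private boolean tryTakeOverProduction() {
            return false;                  // placeholder for the real heuristic
        }
    }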

    Virtual Threads Integration

    This architecture made it easy to integrate virtual threads in Jetty: when virtual threads are supported by the JVM and Jetty’s virtual threads support is enabled (see embedded usage and standalone usage), AdaptiveExecutionStrategy consumes a blocking task by offering the task to the virtual thread Executor rather than the native thread Executor, so that a newly spawned virtual thread runs the blocking task.

    That’s it.

    As a Servlet Container implementation, Jetty calls Servlets assuming they will use blocking APIs, so the task that invokes the Servlet is a blocking task.
    When virtual threads are supported and enabled, the thread that calls Servlet Filters and eventually the HttpServlet.service(...) method is a virtual thread.

    For non-blocking tasks, it is more efficient to have them run by the same native thread that created them; it is only for blocking tasks that you may want to use virtual threads.

    Conclusions

    Jetty’s AdaptiveExecutionStrategy allows the best of all worlds.
    Jetty provides a fast scalable asynchronous implementation, which avoids any possible limitations of virtual threads, whilst giving applications the full benefits of virtual threads. 

    Jetty deals with complex asynchronous concerns, so you don’t have to!