Blog

  • Do Loom’s Claims Stack Up? Part 2: Thread Pools?

    “Project Loom aims to drastically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications that make the best use of available hardware. … The problem is that the thread, the software unit of concurrency, cannot match the scale of the application domain’s natural units of concurrency — a session, an HTTP request, or a single database operation. …  Whereas the OS can support up to a few thousand active threads, the Java runtime can support millions of virtual threads. Every unit of concurrency in the application domain can be represented by its own thread, making programming concurrent applications easier. Forget about thread-pools, just spawn a new thread, one per task.” – Ron Pressler, State of Loom, May 2020

    In this series of blogs, we are examining the new Loom virtual thread features now available in OpenJDK 16 early access releases. In part 1 we saw that Loom’s claim of 1,000,000 virtual threads was true, but perhaps a little misleading, as it only applies to threads with near-empty stacks.  If threads actually have deep stacks, then the achievable number of virtual threads is bound by memory and is back to the same order of magnitude as kernel threads.  In this part, we will further examine the claims and ramifications of Project Loom, specifically whether we can now forget about thread pools. Spoiler: cheap threads can do expensive things!
    All the code from this blog is available in our loom-trial project and has been run on my dev machine (Intel® Core™ i7-6820HK CPU @ 2.70GHz × 8, 32GB memory,  Ubuntu 20.04.1 LTS 64-bit, OpenJDK Runtime Environment (build 16-loom+9-316)) with no specific tuning and default settings unless noted otherwise. 

    Matching the scale?

    Project Loom makes the claim that kernel threads “cannot match the scale of the application domain’s natural units of concurrency”!
    Really???  We’ve seen that without tuning, we can achieve 32k of either type of thread on my laptop.  We think it would be fair to assume that with careful tuning, that could be stretched to beyond 100k for either technology.  Is this really below the natural scale of most applications?  How many applications have a natural scale of more than 32k simultaneous parallel tasks?  Don’t get me wrong, there are many apps that do exceed those scales and Jetty has users that put an extra 0 on that, but they are the minority and in reality very few applications are ever going to see that demand for concurrency.
    So if the vast majority of applications would be covered by blocking code with a concurrency of 32k, then what’s the big deal? Why do those apps need Loom? Or, by the same argument, why would they need to be written in asynchronous style?
    The answer is that you rarely see any application deployed with 10,000s of threads; instead, threads are limited by a thread pool, typically to 100s or 1000s of threads.  The default thread pool size in Jetty is 200, which we sometimes see increased to 1000s, but we have never seen a 32k thread pool even though my un-tuned laptop could supposedly support it!
    So what’s going on? Why are thread pools typically so limited and what about the claim that Loom means we can “Forget about thread pools”?

    Why Thread Pools?

    One reason we are told that thread pools are used is that kernel threads are slow to start, so having a bunch of them pre-started and waiting for tasks in a pool improves latency.  Loom claims its virtual threads are much faster to start, so let’s test that with StartThreads, which reports:

    kStart(ns) ave:137,903 from:1,000 min:47,466 max:6,048,540
    vStart(ns) ave: 10,881 from:1,000 min: 4,648 max:  486,078
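    For reference, here is a minimal sketch of how such a start-time measurement might be done; the actual StartThreads code lives in the loom-trial project, so the details below are assumptions:

    import java.util.concurrent.CountDownLatch;

    class StartTimeSketch
    {
        // Time from calling start() until the thread's task actually runs.
        static long virtualStartNanos() throws InterruptedException
        {
            CountDownLatch started = new CountDownLatch(1);
            long begin = System.nanoTime();
            // Early-access Loom builder API, as used elsewhere in this post.
            Thread.builder().virtual().task(started::countDown).start();
            started.await();
            return System.nanoTime() - begin;
        }

        static long kernelStartNanos() throws InterruptedException
        {
            CountDownLatch started = new CountDownLatch(1);
            long begin = System.nanoTime();
            new Thread(started::countDown).start();
            started.await();
            return System.nanoTime() - begin;
        }
    }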

    So that claim checks out. Virtual threads start an order of magnitude faster than kernel threads.  If start time was the only reason for thread pools, then Loom’s claim of forgetting about thread pools would hold.
    But start time only explains why we have thread pools; it doesn’t explain why thread pools are frequently sized far below the system’s capacity for threads: 100s instead of 10,000s.  What is the reason that thread pools are sized as they are?

    Why Small Thread Pools?

    Giving a thread a task to do is a resource commitment. It is saying that a flow of control may proceed to consume the CPU, memory and other resources needed to run to completion, or at least until a blocking point where it can wait for those resources.  Most of those resources are not on the stack, so limiting the number of available threads is a way to limit a wide range of resource consumption and to give quality of service:

    • If your back-end services can only handle 100s of simultaneous requests, then a thread pool with 100s of threads will avoid swamping them with too much load. If your JDBC driver only has 100 pooled connections, then 1,000,000 threads hammering on those connections or other locks are going to have a lot of contention.
    • For many applications a late response is a wrong response, thus it may well be better to handle 1000 tasks in a timely way with the 1001st task delayed, rather than to try to run all 1001 tasks together and have them all risk being late.
    • Graceful degradation under excess load.  Processing a task needs heap memory, and if too much memory is demanded, an OutOfMemoryError is fatal for any Java application.  Limiting the number of threads is a coarse-grained way of limiting a class of heap usage.  Indeed, in part 1 we saw that it was heap memory that limited the number of virtual threads.

    Having a limited thread pool allows an application to be tested to that limit so that it can be proved that an application has the memory and other resources necessary to service all of those threads.  Traditional thinking has been that if the configured number of threads is insufficient for the load presented, then either the excess load must wait, or the application should start using asynchronous techniques to more efficiently use those threads (rather than increase the number of threads beyond the resource capacity of the machine).
    A limited thread pool is thus a coarse-grained limit on all resources, not only threads: limiting the number of threads puts a limit on concurrent lock contention, memory consumption and CPU usage.
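    For illustration, here is a minimal sketch (not from the post) of the two kinds of coarse-grained bound discussed above: a fixed kernel-thread pool bounds concurrency implicitly, whereas freely spawned virtual threads need an explicit bound such as a Semaphore. The names and the bound of 200 are assumptions.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    class BoundedConcurrencySketch
    {
        // At most 200 tasks run (and hold memory, locks, connections) at once.
        static final ExecutorService POOL = Executors.newFixedThreadPool(200);

        // An explicit bound that unlimited thread spawning would otherwise lack.
        static final Semaphore PERMITS = new Semaphore(200);

        static void submitBounded(Runnable task)
        {
            POOL.execute(task); // the pool size is the concurrency limit
        }

        static void spawnBounded(Runnable task) throws InterruptedException
        {
            PERMITS.acquire(); // blocks the caller when 200 tasks are already running
            // Early-access Loom builder API, as used elsewhere in this post.
            Thread.builder().virtual().task(() ->
            {
                try
                {
                    task.run();
                }
                finally
                {
                    PERMITS.release();
                }
            }).start();
        }
    }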

    Virtual Threads vs Thread Pool

    Having established that there might be some good reasons to use thread pools, let’s see if Loom gives us any good reasons not to use them.  We have created a FakeDataBase class that simulates a JDBC connection pool of 100 connections with a semaphore, and in ManyTasks we run 100,000 tasks that each do 2 selects and 1 insert against that database, with a small amount of CPU consumed both with and without the semaphore held.  The core of the thread pool test is:

     for (int i = 0; i < tasks; i++)
       pool.execute(newTask(latch));

    and this is compared against the Loom virtual thread code of:

     for (int i = 0 ; i < tasks; i++)
       Thread.builder().virtual().task(newTask(latch)).start();
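    For context, here is a minimal sketch of the simulated database and a task; the real FakeDataBase and ManyTasks classes are in the loom-trial project, so the names, the CPU burn and the exact structure below are assumptions.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.ThreadLocalRandom;

    class FakeDataBaseSketch
    {
        private final Semaphore connections = new Semaphore(100); // simulated JDBC connection pool

        // One simulated database operation: a little CPU, then more CPU with a "connection" held.
        void operation() throws InterruptedException
        {
            burnCpu();
            connections.acquire();
            try
            {
                burnCpu();
            }
            finally
            {
                connections.release();
            }
        }

        // Each task does 2 selects and 1 insert, then counts down the completion latch.
        Runnable newTask(CountDownLatch latch)
        {
            return () ->
            {
                try
                {
                    operation(); // select
                    operation(); // select
                    operation(); // insert
                }
                catch (InterruptedException x)
                {
                    Thread.currentThread().interrupt();
                }
                finally
                {
                    latch.countDown();
                }
            };
        }

        private void burnCpu()
        {
            long sum = 0;
            for (int i = 0; i < 1_000; i++)
                sum += ThreadLocalRandom.current().nextLong();
            if (sum == 42)
                System.out.println(sum); // keep the loop from being optimized away
        }
    }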

    And the results are…. drum roll… pretty much the same for both types of thread:

    Pooled  K Threads 33,729ms
    Spawned V Threads 34,482ms

    The pooled kernel threads do appear to be consistently a little better, but this test is not that rigorous, so let’s call it the same, which is kind of expected as the total duration is primarily constrained by the concurrency of the database.
    So was there any difference at all?  Here is the system monitor graph during both runs: kernel threads with a pool are the left-hand first period (60s-30s), followed by virtual threads after a change-over peak (30s-0s):

    Kernel threads with thread pool do not stress the CPU at all, but virtual threads alone use almost twice as much CPU! There is also a hint of more memory being used.
    The thread pool approach has 100k tasks in the thread pool queue and 100 kernel threads taking tasks, 100 at a time, with each task taking one of the 100 semaphore permits 3 times, with little or no contention.
    The Loom approach has 100k independent virtual threads that each contend 3 times for the 100 semaphore permits, with up to 99,900 threads needing to be added and then removed 3 times from the semaphore’s wake-up queue.  The extra queuing for virtual threads could easily explain the excess CPU needed, but more investigation is needed to be definitive.
    However, tasks limited by a resource like JDBC are not really the highly concurrent tasks that Loom is targeted at.  To truly test Loom (and async), we need to look at a type of task that just won’t scale with blocking threads dispatched from a thread pool.

    Virtual Threads vs Async APIs

    One highly concurrent workload that we often see on Jetty is chat-room style interaction (or games) written on CometD and/or WebSocket.  Such applications often have many 10,000s or even 100,000s of connections to the server that are mostly idle, waiting for a message to receive or an event to send. Currently we achieve these scales only by asynchronous, threadless waiting, with all the ramifications of complex async APIs and callbacks pushed into the application.  Luckily, CometD was originally written when there were only async servlets and no async IO, so it still has the option to be deployed using blocking I/O reads and writes.  This gives it good potential for a like-for-like comparison between async pooled kernel threads and blocking virtual threads.
    However, we still have a concern that this style of application/load will not be suitable for Loom, because each message to a chat room will fan out to the 10s, 100s or even 1000s of other users waiting in that room.  Thus a single read could result in many blocking write operations, which are typically done with deep stacks (parsing, framework, handling, marshalling, then writing) and other resources (buffers, locks etc.). You can see in the following flame graph, from a CometD load test using Loom virtual threads, that even with a fast client the biggest block of time is spent in the blue peak on the left, which is writing with deep stacks. It is this part of the graph that needs to scale if we have more and/or slower clients:

    Jetty with CometD chat on Loom

    To fairly test Loom, it is not sufficient to just replace the limited pool of kernel threads with unlimited virtual threads.  Jetty goes to a lot of effort with its eat-what-you-kill scheduling, using reserved threads to ensure that whenever a selector thread runs a potentially blocking task, another thread stands ready to take over selection.  We can’t just put Loom virtual threads on top of this, else we would be paying the cost and complexity of core Jetty plus the overheads of Loom.  Moreover, we have also learnt the risk of thread starvation that can arise in highly concurrent applications if you defer important tasks (e.g. HTTP/2 flow control).  Since virtual threads can be postponed (potentially indefinitely) by CPU-bound applications or by the use of non-Loom-aware locks (such as the synchronized keyword), they are not suitable for all tasks within Jetty.
    Thus we think a better approach is to keep the core of Jetty running on kernel threads, but to spawn a virtual thread to do the actual work of reading, parsing, calling the application and writing the response.  If we flag the initial tasks with InvocationType.NON_BLOCKING, then they will be called directly by the selector thread, with no executor overhead; those tasks can then spawn a new virtual thread to proceed with the reading, parsing, handling, marshalling, writing and blocking.  Thus we have created the jetty-10.0.x-loom branch to use this approach and hopefully give a good basis for fair comparisons.
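    A rough sketch of that idea follows (an illustration, not the actual jetty-10.0.x-loom code): the task handed to the selector declares itself NON_BLOCKING, so the selector thread can run it directly, and all it does is spawn a virtual thread for the potentially blocking work.

    import org.eclipse.jetty.util.thread.Invocable;

    class SpawningTask implements Runnable, Invocable
    {
        private final Runnable blockingWork; // read, parse, handle, marshal, write, block

        SpawningTask(Runnable blockingWork)
        {
            this.blockingWork = blockingWork;
        }

        @Override
        public InvocationType getInvocationType()
        {
            // Tells Jetty this task is safe to run directly on the selector thread.
            return InvocationType.NON_BLOCKING;
        }

        @Override
        public void run()
        {
            // Early-access Loom builder API, as used elsewhere in this post.
            Thread.builder().virtual().task(blockingWork).start();
        }
    }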
    Our initial runs of our CometD benchmark with just 20 clients resulted in long GCs followed by out-of-memory failures! This was due to the use of ThreadLocal for gathering latency statistics: each virtual thread was creating a latency capture data structure, only to use it once and then throw it away!  While this problem is solvable by changing the CometD benchmark code, it reaffirms that threads use resources other than stack, and that Loom virtual threads are not a drop-in replacement for kernel threads.
    We are aware that the handling of ThreadLocal is a well-known problem in Loom, but until it is solved it may be a surprisingly hard problem to cope with, since you don’t typically know whether a library your application depends on uses ThreadLocal or not.
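    To make the issue concrete, here is a minimal sketch (names assumed, not the actual CometD benchmark code) of the pattern that hurts: a per-thread recorder is cheap when 200 pooled threads each reuse theirs many times, but with a fresh virtual thread per task each recorder is allocated, used once and then thrown away.

    import java.util.ArrayList;
    import java.util.List;

    class LatencyRecorderSketch
    {
        // One list per thread: fine for pooled threads, wasteful for one-shot virtual threads.
        static final ThreadLocal<List<Long>> SAMPLES = ThreadLocal.withInitial(ArrayList::new);

        static void record(long latencyNanos)
        {
            SAMPLES.get().add(latencyNanos);
        }
    }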
    With the CometD benchmark modified to not use ThreadLocal, we can now take Loom/Jetty/CometD to a moderate number of clients (1000 which generated the flame graph above) with the following results:

    CLIENT: Async Jetty/CometD server
    ========================================
    Testing 1000 clients in 100 rooms, 10 rooms/client
    Sending 1000 batches of 10x50 bytes messages every 10000 µs
    Elapsed = 10015 ms
    - - - - - - - - - - - - - - - - - - - -
    Outgoing: Rate = 990 messages/s - 99 batches/s - 12.014 MiB/s
    Incoming: Rate = 99829 messages/s - 35833 batches/s(35.89%) - 26.352 MiB/s
                    @     _  3,898 µs (112993, 11.30%)
                       @  _  7,797 µs (141274, 14.13%)
                       @  _  11,696 µs (136440, 13.65%)
                       @  _  15,595 µs (139590, 13.96%) ^50%
                       @  _  19,493 µs (142883, 14.29%)
                      @   _  23,392 µs (130493, 13.05%)
                    @     _  27,291 µs (112283, 11.23%) ^85%
            @             _  31,190 µs (59810, 5.98%) ^95%
      @                   _  35,088 µs (12968, 1.30%)
     @                    _  38,987 µs (4266, 0.43%) ^99%
    @                     _  42,886 µs (2150, 0.22%)
    @                     _  46,785 µs (1259, 0.13%)
    @                     _  50,683 µs (910, 0.09%)
    @                     _  54,582 µs (752, 0.08%)
    @                     _  58,481 µs (567, 0.06%)
    @                     _  62,380 µs (460, 0.05%) ^99.9%
    @                     _  66,278 µs (365, 0.04%)
    @                     _  70,177 µs (232, 0.02%)
    @                     _  74,076 µs (82, 0.01%)
    @                     _  77,975 µs (13, 0.00%)
    @                     _  81,873 µs (2, 0.00%)
    Messages - Latency: 999792 samples
    Messages - min/avg/50th%/99th%/max = 209/15,095/14,778/35,815/78,184 µs
    Messages - Network Latency Min/Ave/Max = 0/14/78 ms
    SERVER: Async Jetty/CometD server
    ========================================
    Operative System: Linux 5.8.0-33-generic amd64
    JVM: Oracle Corporation OpenJDK 64-Bit Server VM 16-ea+25-1633 16-ea+25-1633
    Processors: 12
    System Memory: 89.26419% used of 31.164349 GiB
    Used Heap Size: 73.283676 MiB
    Max Heap Size: 2048.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Elapsed Time: 10568 ms
       Time in Young GC: 5 ms (2 collections)
       Time in Old GC: 0 ms (0 collections)
    Garbage Generated in Eden Space: 3330.0 MiB
    Garbage Generated in Survivor Space: 4.227936 MiB
    Average CPU Load: 397.78314/1200
    ========================================
    Jetty Thread Pool:
        threads:                174
        tasks:                  302146
        max concurrent threads: 34
        max queue size:         152
        queue latency avg/max:  0/11 ms
        task time avg/max:      1/3316 ms
    

     

    CLIENT: Loom Jetty/CometD server
    ========================================
    Testing 1000 clients in 100 rooms, 10 rooms/client
    Sending 1000 batches of 10x50 bytes messages every 10000 µs
    Elapsed = 10009 ms
    - - - - - - - - - - - - - - - - - - - -
    Outgoing: Rate = 990 messages/s - 99 batches/s - 13.774 MiB/s
    Incoming: Rate = 99832 messages/s - 41201 batches/s(41.27%) - 27.462 MiB/s
                     @    _  2,718 µs (99690, 9.98%)
                       @  _  5,436 µs (116281, 11.64%)
                       @  _  8,155 µs (115202, 11.53%)
                       @  _  10,873 µs (108572, 10.87%)
                      @   _  13,591 µs (106951, 10.70%) ^50%
                       @  _  16,310 µs (117139, 11.72%)
                       @  _  19,028 µs (114531, 11.46%)
                    @     _  21,746 µs (94080, 9.42%) ^85%
                @         _  24,465 µs (71479, 7.15%)
          @               _  27,183 µs (34358, 3.44%) ^95%
      @                   _  29,901 µs (11526, 1.15%) ^99%
     @                    _  32,620 µs (4513, 0.45%)
    @                     _  35,338 µs (2123, 0.21%)
    @                     _  38,056 µs (988, 0.10%)
    @                     _  40,775 µs (562, 0.06%)
    @                     _  43,493 µs (578, 0.06%) ^99.9%
    @                     _  46,211 µs (435, 0.04%)
    @                     _  48,930 µs (187, 0.02%)
    @                     _  51,648 µs (31, 0.00%)
    @                     _  54,366 µs (27, 0.00%)
    @                     _  57,085 µs (1, 0.00%)
    Messages - Latency: 999254 samples
    Messages - min/avg/50th%/99th%/max = 192/12,630/12,476/29,704/54,558 µs
    Messages - Network Latency Min/Ave/Max = 0/12/54 ms
    SERVER: Loom Jetty/CometD server
    ========================================
    Operative System: Linux 5.8.0-33-generic amd64
    JVM: Oracle Corporation OpenJDK 64-Bit Server VM 16-loom+9-316 16-loom+9-316
    Processors: 12
    System Memory: 88.79622% used of 31.164349 GiB
    Used Heap Size: 61.733116 MiB
    Max Heap Size: 2048.0 MiB
    - - - - - - - - - - - - - - - - - - - -
    Elapsed Time: 10560 ms
       Time in Young GC: 23 ms (8 collections)
       Time in Old GC: 0 ms (0 collections)
    Garbage Generated in Eden Space: 8068.0 MiB
    Garbage Generated in Survivor Space: 3.6905975 MiB
    Average CPU Load: 413.33084/1200
    ========================================
    Jetty Thread Pool:
        threads:                14
        tasks:                  0
        max concurrent threads: 0
        max queue size:         0
        queue latency avg/max:  0/0 ms
        task time avg/max:      0/0 ms
    

    The results here are a bit mixed, but there are some positives for Loom:

    • Both approaches easily achieved the 1000 msg/s sent to the server and 99.8k msg/s received from the server (messages have an average fan-out factor of 100).
    • The Loom version broke those messages up into 41k responses/s whilst the async version used bigger batches at 35k responses/s, with each response carrying more messages. We need to investigate why, but we think Loom is faster at starting to run the task (no time in the thread pool queue, no time to “wake up” an idle thread).
    • Loom had better latency, both median (~12.5 ms vs ~14.8 ms) and max (~54.6 ms vs ~78.2 ms).
    • Loom used more CPU: 413/1200 vs 398/1200 (4% more)
    • Loom generated more garbage: ~8068.0 MiB vs ~3330.0 MiB, and fewer objects made it to survivor space.

    This is an interesting but inconclusive result.  It is at a low scale on a fast loopback network with a client unlikely to cause blocking, so not really testing either approach.  We now need to scale this test to many 10,000s of clients on a real network, which will require multiple load generation machines and careful measurement.  This will be the subject of part 3 (probably some weeks away).

    Conclusion (part 2) – Cheap threads can do expensive things

    It is good that Project Loom adds inexpensive and fast spawning/blocking virtual threads to the JVM.  But cheap threads can do expensive things!
    Having 1,000,000 concurrent application entities is going to take memory, CPU and other resources, no matter whether they block or use async callbacks. It may be that entirely different programming styles are needed for Loom, as is suggested by Loom Structured Concurrency; however, we have not yet seen anything that limits the resources that can be consumed by unlimited spawning of virtual threads. There are also indications that Loom’s flexible stack management comes with a CPU cost.  However, it has been moderately simple to update Jetty to experiment with using Loom to call a blocking application, and we’d very much encourage others to load test their applications on the jetty-10.0.x-loom branch.
    Many of Loom’s claims have stacked up: blocking code is much easier to write, and virtual threads are very fast to start and cheap to block. However, other key claims either do not hold up or have yet to be substantiated: we do not think virtual threads give natural scaling, as threads themselves are not the limiting factor; rather it is the resources they use that determine the scaling.  The suggestion to “Forget about thread-pools, just spawn a new thread…” feels like an invitation to create unstable applications unless other substantive resource management strategies are put in place.
    Given that Duke’s “new clothes” woven by Loom are not one-size-fits-all, it would be a mistake to stop developing asynchronous APIs for things such as DNS and JDBC on the unsubstantiated suggestion that Loom virtual threads will make them unnecessary.

  • Do Loom’s Claims Stack Up? Part 1: Millions of Threads?

    “Project Loom aims to drastically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications that make the best use of available hardware. … The problem is that the thread, the software unit of concurrency, cannot match the scale of the application domain’s natural units of concurrency — a session, an HTTP request, or a single database operation. …  Whereas the OS can support up to a few thousand active threads, the Java runtime can support millions of virtual threads. Every unit of concurrency in the application domain can be represented by its own thread, making programming concurrent applications easier. Forget about thread-pools, just spawn a new thread, one per task.” – Ron Pressler, State of Loom, May 2020

    Project Loom brings virtual threads (back) to the JVM in an attempt to reduce the effort of writing high-throughput concurrent applications. Loom has generated a fair bit of interest with claims that asynchronous APIs may no longer be necessary for things like Futures, JDBC, DNS, Reactive, etc. So since Loom is now available in OpenJDK 16 early access builds, we thought it was a good time to test out some of the amazing claims that have been made for Duke’s new opaque clothing that has been woven by Loom!  Spoiler – Duke might not be naked, but its attire could be a tad see-through!
    All the code from this blog is available in our loom-trial project and has been run on my dev machine (Intel® Core™ i7-6820HK CPU @ 2.70GHz × 8, 32GB memory,  Ubuntu 20.04.1 LTS 64-bit, OpenJDK Runtime Environment (build 16-loom+9-316)) with no specific tuning and default settings unless noted otherwise. 

    Some History

    We started writing what would become Eclipse Jetty in 1995 on Java 0.9.  For its first decade, Jetty was a blocking server using a thread per request and then a thread per connection, and large thread pools (sometimes many thousands) were sufficient to handle almost all the loads offered.
    However, there were a few deployments that wanted more parallelism, plus the advent of virtual hosting meant that servers were often sharing physical machines with other server instances, all trying to pre-allocate max resources in their idle thread pools to handle potential load spikes.
    Thus there was some demand for async and so Jetty-6 in 2006 introduced some asynchronous I/O. Yet it was not until Jetty-9 in 2012 that we could say that Jetty was fully asynchronous through the container and to the application and we still fight with the complexity of it today.
    Through this time, Java threads were initially implemented as Green Threads, and there were lots of problems with livelock, priority inversion, etc. It was a huge relief when native threads were introduced to the JVM, and thus we were a little surprised at the enthusiasm expressed for Loom, which appears to be a revisit of late-stage MxN Green Threads and suffers from at least some similar limitations (e.g. the CPUBound test demonstrates that the lack of preemption makes virtual threads unsuitable for CPU-bound tasks). This paper from 2002 on Multithreading in Solaris gives an excellent background on this subject and describes the switch from MxN threading to 1:1 native threads with terms like “better scalability”, “simplicity”, “improved quality” and that MxN had “not quite delivered the anticipated benefits”. Thus we are really interested to find out what is so different this time around.
    The Jetty team has a near-unique perspective on the history of both Java threading and the development of highly concurrent large throughput Java applications, which we can use to evaluate Loom. It’s almost like we were frozen in time for decades to bring back our evil selves from the past 🙂

    One Million Threads!

    That’s a lot of threads and it is a claim that is really easy to test!  Here is an extract from MaxVThreads:

    // 'threads' is a collection of the spawned Thread objects, declared earlier in MaxVThreads (not shown in this extract).
    CountDownLatch hold = new CountDownLatch(1);
    while (threads.size() < 1_000_000)
    {
        CountDownLatch started = new CountDownLatch(1);
        Thread thread = Thread.builder().virtual().task(() ->
        {
            try
            {
                started.countDown(); // signal that this virtual thread has started
                hold.await();        // then park until the end of the test
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }).start();
        threads.add(thread);
        started.await();
        System.err.printf("%s: %,d%n", thread, threads.size());
    }

    Which we ran and got:

    ...
    VirtualThread[@244165d6,...]:   999,998
    VirtualThread[@6f40da3b,...]:   999,999
    VirtualThread[@1cfca01c,...]: 1,000,000

    Async is Dead!!!
    Long live Loom!!!
    Lunch is Free!!!
    Bullets are Silver!!!

  • CometD 5.0.3, 6.0.0 and 7.0.0

    Following the releases of Eclipse Jetty 10.0.0 and 11.0.0, the CometD project has released versions 5.0.3, 6.0.0 and 7.0.0.

    CometD 5.0.x Series

    CometD 5.0.x, of which the latest is the newly released 5.0.3, requires at least Java 8 and is based on Jetty 9.4.x.
    This version will be maintained as long as Jetty 9.4.x is maintained, likely many more years, to allow migration away from Java 8.

    CometD 6.0.x Series

    CometD 6.0.x, with the newly released 6.0.0, requires at least Java 11 and is based on Jetty 10.0.x.
    In turn, Jetty 10.0.x is based on Java EE 8 / Jakarta EE 8, which provides Servlet and WebSocket APIs under the javax.* packages.
    This version of CometD provides a smooth transition from CometD 5 (or earlier) to Java 11 and Jetty 10.
    In this way, you can leverage new Java 11 language features in your application source code, as well as the few new APIs made available in the Servlet 4.0 Specification without having to change much of your code, since you will still be using javax.* Servlet and WebSocket APIs (see the next section on CometD 7.0.x for the migration to the jakarta.* APIs).
    However, server-side CometD applications should not depend much on Servlet or WebSocket APIs, so the migration from CometD 5 (or earlier) should be pretty straightforward.
    CometD 6.0.x, like Jetty 10.0.x, is a transition release to the new Jakarta EE 9 APIs in the jakarta.* packages.
    CometD 6.0.x will be maintained as long as Jetty 10.0.x is maintained.
    Since CometD applications do not depend much on Servlet or WebSocket APIs, consider moving directly to CometD 7.0.x if you don’t depend on other libraries (for example, Spring) that require javax.* classes.

    CometD 7.0.x Series

    CometD 7.0.x, with the newly released 7.0.0, requires at least Java 11 and is based on Jetty 11.0.x.
    In turn, Jetty 11.0.x is based on Jakarta EE 9, which provides Servlet and WebSocket APIs under the jakarta.* packages.
    Migrating from CometD 6.0.x (or earlier) to CometD 7.0.x should be very easy if your applications depend mostly on the CometD APIs (few or no changes there) and depend very little on the Servlet or WebSocket APIs.
    Dependencies on the javax.* APIs should be migrated to the jakarta.* APIs.
    If your applications depend on third-party libraries, you need to make sure to update the third-party library to a version that supports — if necessary — jakarta.* APIs.
    Migrating to CometD 7.0.x allows you to leverage Java 11 language features and the benefit that you can base your applications on the new Jakarta EE Specifications, therefore abandoning the now-dead Java EE Specifications.
    This migration keeps your applications up-to-date with the current state-of-the-art of EE Specifications, reducing the technical debt to a minimum.
    CometD 7.0.x will be maintained as long as Jetty 11.0.x is maintained.
    The evolution of the Jakarta EE Specifications will be implemented by future Jetty versions, and future CometD versions will keep the pace.

    Which CometD Series Do I Use?

    If your applications depend on third-party libraries that use or depend on Jetty 9.4.x (such as Spring / Spring Boot), use CometD 5.0.x and Jetty 9.4.x until the third party libraries update to newer Jetty versions.
    If your applications depend on third-party libraries that depend on javax.* APIs and that have not been updated to Jakarta EE 9 yet (and therefore to jakarta.* APIs), use CometD 6.0.x and Jetty 10.0.x until the third party libraries update to Jakarta EE 9.
    If your applications depend on third-party libraries that have already been updated to use Jakarta EE 9 APIs, for example, Jakarta Restful Web Services (previously known as JAX-RS) using Eclipse Jersey 3.0, use CometD 7.0.x and Jetty 11.0.x.
    If you are migrating from earlier CometD versions, skip CometD 6.0.x entirely: for example, move from CometD 5.0.x and Jetty 9.4.x directly to CometD 7.0.x and Jetty 11.0.x.

  • Reactive HttpClient 1.1.5, 2.0.0 and 3.0.0

    Following the releases of Eclipse Jetty 10.0.0 and 11.0.0, the Reactive HttpClient project — introduced back in 2017 — has released versions 1.1.5, 2.0.0 and 3.0.0.

    Reactive HttpClient 1.1.x Series

    Reactive HttpClient 1.1.x, of which the latest is the newly released 1.1.5, requires at least Java 8 and is based on Jetty 9.4.x.
    This version will be maintained as long as Jetty 9.4.x is maintained, likely many more years, to allow migration away from Java 8.

    Reactive HttpClient 2.0.x Series

    Reactive HttpClient 2.0.x, with the newly released 2.0.0, requires at least Java 11 and is based on Jetty 10.0.x.
    The Reactive HttpClient 2.0.x series is incompatible with the 1.1.x series, since the Jetty HttpClient APIs changed between Jetty 9.4.x and Jetty 10.0.x.
    This means that projects such as Spring WebFlux, at the time of this writing, are not compatible with the 2.0.x series of Reactive HttpClient.

    Reactive HttpClient 3.0.x Series

    Reactive HttpClient 3.0.x, with the newly released 3.0.0, requires at least Java 11 and is based on Jetty 11.0.x.
    In turn, Jetty 11.0.x is based on the Jakarta EE 9 Specifications, which means jakarta.servlet and not javax.servlet.
    The Reactive HttpClient 3.0.x series is fundamentally identical to the 2.0.x series, apart from the Jetty dependency.
    While the HttpClient APIs do not change between Jetty 10 and Jetty 11, if you are using Jakarta EE 9 it will be more convenient to use the Reactive HttpClient 3.0.x series.
    For example when using Reactive HttpClient to call a third party service from within a REST service, it will be natural to use Reactive HttpClient 2.0.x if you use javax.ws.rs, and Reactive HttpClient 3.0.x if you use jakarta.ws.rs.
    Enjoy the new releases and tell us which series you use by adding a comment here!
    For further information, refer to the project page on GitHub.

  • Jetty 10 and 11 Have Arrived!

    The Eclipse Jetty team is proud to announce the release of Jetty 10 and Jetty 11! Let’s first get into the details of Jetty 10, which includes a huge amount of enhancements and upgrades. A summary of the changes follows.

    Minimum Java Version

    The minimum required Java version for Jetty 10 is now Java 11.
    As new versions of the OpenJDK are released, Jetty is tested for compatibility and is at this point compatible with versions up to Java 15.

    WebSocket Refactor

    • Major refactor of Jetty WebSocket code. The Jetty WebSocket core API now provides the RFC6455 protocol implementation. The Jetty and Javax WebSocket APIs are separate layers which use the new WebSocket core implementation. 
    • Support for RFC8441 WebSocket over HTTP/2 on both WebSocket client and WebSocket server. This allows an HTTP/2 stream to be upgraded to a WebSocket connection, allowing the single TCP connection to be shared by both the HTTP/2 and WebSocket protocols.
    • Various improvements and cleanups to the Jetty WebSocket API.

    Self Ordering Webapp Configurations.

    • The new WebAppContext Configuration API, useful for embedded Jetty users, has been made simpler to use.
    • The available Configurations are discovered with the ServiceLoader, so that often all that is required is to add a feature jar onto the server classpath for that feature to be applied to all web applications.
    • Configurations are self ordering using before/after dependencies defined in the Configurations themselves. This removes the need to explicitly order Configurations or to use add-after style semantics.
    • A Configuration now includes the ability to condition the Webapp classloader so that classes/packages on the server class loader can be explicitly hidden, protected and/or exposed to the Webapp classloader.

    HTTP Session Refactor

    The HTTP session storage integrations were refactored to be consistent. Previously, the various integrations discovered expired sessions in different ways and with different thresholds for expiry. The refactoring applies the same uniform criteria for expiry no matter the persistence technology used. Some of the configuration properties have changed as a result of the refactoring.

    New XML DTD

    The Jetty XML files now validate against the new configure_10_0.dtd.  We encourage moving to the new DTD.

    Jetty Distribution Change

    There is no longer a jetty-distribution artifact, use the equivalent jetty-home artifact.
    The $JETTY_BASE mechanism has not changed.  Upgrades should be mostly transparent; only custom XML in your $JETTY_BASE and logging require migration.

    Jetty-Quickstart Just Got Easier to Use

    In Jetty 10 it is easier to use the jetty-quickstart mechanism to improve the start-up time of webapps. Previously it was necessary to enable the quickstart module and also create a special context xml file for your webapp and explicitly apply some configuration. With Jetty 10 you no longer need the special context xml file: you simply enable the quickstart module, and then edit the generated start.d/quickstart.ini file to either force generation or use a pre-generated quickstart file.

    Expanded JPMS Support

    Jetty 10 is now fully JPMS compliant with proper JPMS modules and module-info.class files, improving over Jetty 9.4.x where only the Automatic-Module-Name was defined for each artifact.
    This allows other libraries to leverage Jetty JPMS modules properly, and use jlink to create custom images that embed Jetty.

    HTTP Client

    HttpClient has undergone a number of changes, in particular:

    • An improved API for setting request and response headers, based on the new HttpFields and HttpField.Mutable APIs.
    • An improved API for setting request content, based on a reactive model where the demand of content and buffer recycling can now be performed separately. This allows applications to provide request content implementations in a much simpler way, without the need to know HttpClient implementation details.
    • The introduction of ClientConnector, a reusable component that manages the connections opened by HttpClient that can now be shared with HTTP2Client, and that centralizes the configuration of other components such as the Executor, the Scheduler, the SslContextFactory, and the TCP properties.
    • The introduction of a dynamic HttpClientTransport that allows switching, for example, from HTTP/1.1 to HTTP/2 after negotiation with the server. This allows us to write web spiders or proxies that can prefer HTTP/2 but fall back to HTTP/1.1 if the server does not support HTTP/2 (see the sketch after this list).
    • Support for tunneling WebSocket over HTTP/2 (this feature is primarily used by the WebSocketClient – for the application it should be transparent)
    • Support for the PROXY protocol so that proxy applications that use HttpClient can use the PROXY protocol and forward client TCP data (and metadata when using PROXY v2) to the server.
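    For example, a client using the dynamic transport might be wired up like this (a sketch based on our reading of the Jetty 10 APIs; the exact class and package names should be checked against the documentation):

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.client.dynamic.HttpClientTransportDynamic;
    import org.eclipse.jetty.client.http.HttpClientConnectionFactory;
    import org.eclipse.jetty.http2.client.HTTP2Client;
    import org.eclipse.jetty.http2.client.http.ClientConnectionFactoryOverHTTP2;
    import org.eclipse.jetty.io.ClientConnector;

    public class DynamicClientSketch
    {
        public static HttpClient newClient() throws Exception
        {
            // The ClientConnector is shared between the HTTP/1.1 and HTTP/2 transports.
            ClientConnector connector = new ClientConnector();
            HTTP2Client http2Client = new HTTP2Client(connector);
            ClientConnectionFactoryOverHTTP2.HTTP2 http2 = new ClientConnectionFactoryOverHTTP2.HTTP2(http2Client);
            HttpClientTransportDynamic transport =
                new HttpClientTransportDynamic(connector, HttpClientConnectionFactory.HTTP11, http2);
            HttpClient client = new HttpClient(transport);
            client.start();
            return client; // prefers HTTP/2 when negotiated, falls back to HTTP/1.1
        }
    }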

    Logging

    Jetty 9 and earlier had its own Logging Facade.
    Starting with Jetty 10, that custom facade has been completely replaced by SLF4J 2.0 API usage. You can use any logging implementation you want with Jetty, as long as there is an SLF4J 2.x implementation available for it.
    The entire org.eclipse.jetty.util.log package has seen large changes in Jetty 10.

    • org.eclipse.jetty.util.log.Log is deprecated and all methods just delegate to SLF4J.
    • org.eclipse.jetty.util.log.Logger is deprecated and is not used.
    • No other classes from Jetty 9 exist in this package.

    In Jetty 10, there is also a new optional SLF4J implementation provided by Jetty (jetty-slf4j-impl), which provides the same functionality as the old StdErrLog from Jetty 9 and earlier, with support for configuring logging levels via the jetty-logging.properties classpath resource.
    The format of the jetty-logging.properties file is almost identical to what it was before.
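    For example, a minimal jetty-logging.properties might look like this (the logger names are placeholders; the appender properties are the new ones described below):

    # Set the level for a logger tree, exactly as before.
    org.eclipse.jetty.LEVEL=INFO
    com.example.app.LEVEL=DEBUG
    # New appender properties (see the list below).
    org.eclipse.jetty.logging.appender.NAME_CONDENSE=true
    org.eclipse.jetty.logging.appender.ZONE_ID=UTC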
    The following properties have been removed (they cause no changes in the configuration, and no warnings or errors, in Jetty 10 onward):

    • org.eclipse.jetty.util.log.class=<classname>
      • This has no property replacement; using the correct SLF4J jar file is how you configure this.
    • org.eclipse.jetty.util.log.IGNORED=<boolean>
      • There is no replacement for the “IGNORED” logging level.
    • org.eclipse.jetty.util.log.announce=<boolean>
      • Logging announcement is not a feature of Jetty; it may be a feature of the logging implementation you choose, and would be configured there.
    • org.eclipse.jetty.util.log.SOURCE=<boolean>
    • org.eclipse.jetty.util.log.stderr.SOURCE=<boolean>
      • Source logging is a possible feature of the specific SLF4J implementation you choose; check its documentation.

    There are new properties as well:

    • org.eclipse.jetty.logging.appender.NAME_CONDENSE=<boolean>
      • This was the old org.eclipse.jetty.util.log.stderr.LONG, which controls the output of the logger name (true means package names are condensed in the output, false means the fully qualified class name is used). Defaults to true.
    • org.eclipse.jetty.logging.appender.THREAD_PADDING=<int>
      • This was the old org.eclipse.jetty.util.log.StdErrLog.TAG_PAD, for aligning all messages to a common offset from the start of the log line. Defaults to -1.
    • org.eclipse.jetty.logging.appender.MESSAGE_ESCAPE=<boolean>
      • This was the old org.eclipse.jetty.util.log.stderr.ESCAPE, for turning on/off the escaping of control characters found in the message text. Defaults to true.
    • org.eclipse.jetty.logging.appender.ZONE_ID=<timezone-id>
      • This defaults to java.util.TimeZone.getDefault() and allows setting the output timestamp in the desired timezone.

    Jetty 11

    When Oracle donated the JavaEE project to the Eclipse Foundation, they maintained the copyright to the javax.* namespace. Consequently, future versions of JavaEE, now called JakartaEE, would be required to use a new namespace for those packages. Instead of an incremental process, the Eclipse Foundation opted for a big-bang approach, and packages formerly under the javax.* namespace have been refactored under the new jakarta.* namespace.
    Enter Jetty 11. Jetty 11 is identical to Jetty 10 except that the javax.* packages now conform to the new jakarta.* namespace. Jetty 10 and 11 will remain in lockstep with each other for releases, meaning all new features or bug fixes in one version will be available in the other.
    It is important to note that Jetty 10.x will be the last major Jetty version to support the javax.* namespace. If this is alarming, be assured that Jetty 10 will be supported for a number of years to come. We have no plans to release it only to drop support for it in 12-18 months.

  • Object Pooling, Benchmarks, and Another Way

    Context

    The Jetty HTTP client internally uses a connection pool to recycle HTTP connections, as they are expensive to create and dispose of. This is a well-known pattern that has proved to work well.
    While this pattern brings great benefits, it’s not without its shortcomings. If the Jetty client is used by concurrent threads (like it’s meant to be), then one can start seeing some contention building up on the pool, as it is a central point that all threads have to go to to get a connection.
    Here’s a breakdown of what the client contains:

    HttpClient
      ConcurrentMap<Origin, HttpDestination>
    Origin
      scheme
      host
      port
    HttpDestination
      ConnectionPool
    

    The client contains a Map of Origin -> HttpDestination. The origin basically describes where the remote service is and the destination is a wrapper for the connection pool with bells and whistles.
    So, when you ask the client to do a GET on https://www.google.com/abc the client will use the connection pool contained in the destination keyed by “https|www.google.com|443”. Asking the client to GET https://www.google.com/xyz would make it use the exact same connection pool as the origins of both URLs are the same. Asking the same client to also do a GET on https://www.facebook.com/ would make it use a different connection pool as the origin this time around is “https|www.facebook.com|443”.
    It is quite common for a client to communicate with a handful of destinations, or sometimes even a single one, so it is expected that if multiple threads are using the client in parallel, they will all ask the same connection pool for a connection.
    There are three important implementations of the ConnectionPool interface:
    DuplexConnectionPool is the default. It guarantees that a connection will never be shared among threads, i.e.: when a connection is acquired by one thread, no other thread will be able to acquire it for as long as it hasn’t been released. It also tries to maximize re-use of the connections so that only a minimal number of connections have to be kept open.
    MultiplexConnectionPool allows up to N threads (N being configurable) to acquire the exact same connection. While this is of no use for HTTP/1.1 connections, as HTTP/1.1 does not allow running concurrent requests on a connection, HTTP/2 does: when the Jetty client connects to an HTTP/2 server, a single connection can be used to send multiple requests in parallel. Just like DuplexConnectionPool, it also tries to maximize the re-use of connections.
    RoundRobinConnectionPool is similar to MultiplexConnectionPool in that it allows multiple threads to share the connections, but unlike it, it does not try to maximize the re-use of connections. Instead, it tries to spread the load evenly across all of them. It can be configured not to multiplex the connections so that it can also be used with HTTP/1.1 connections.

    The Problem on the Surface

    Some users are heavily using the Jetty client with many concurrent threads, and they noticed that their threads were bottlenecked in the code that tries to get a connection from the connection pool. A quick investigation revealed that the problem comes from the fact that the connection pool implementations all use a java.util.Deque protected by a java.util.concurrent.locks.ReentrantLock. A single lock plus multiple threads makes it rather obvious why this bottlenecks: all threads are contending on the lock, which is easily provable with a simple microbenchmark run under a profiler.
    Here is the benchmarked code:

    @Benchmark
    public void testPool()
    {
      Connection connection = pool.acquire(true);
      Blackhole.consumeCPU(ThreadLocalRandom.current().nextInt(10, 20));
      pool.release(connection);
    }
    

    And here is the report of the benchmark, running on a 12-core CPU with 12 parallel threads and 12 connections in the pool:

    Benchmark                                 (POOL_TYPE)   Mode  Cnt        Score         Error  Units   CPU
    ConnectionPoolsBenchmark.testPool              duplex  thrpt    3  3168609.140 ± 1703378.453  ops/s   15%
    ConnectionPoolsBenchmark.testPool           multiplex  thrpt    3  2284937.900 ±  191568.815  ops/s   15%
    ConnectionPoolsBenchmark.testPool         round-robin  thrpt    3  1403693.845 ±  219405.841  ops/s   25%
    

    It was quite apparent that while the benchmark was running, the reported CPU consumption never reached anything close to 100% no matter how many threads were configured to use the connection pool in parallel.

    The Problem in Detail

    There are two fundamental problems. The most obvious one is the lock that protects all access to the connection pool. The second is more subtle, it’s the fact that a queue is an ill-suited data structure for writing performant concurrent algorithms.
    To be more precise: there are excellent concurrent queue algorithms out there that do a terrific job, and sometimes you have no choice but to use a queue to implement your logic correctly. But when you can write your concurrent code with a different data structure than a queue, you’re almost always going to win. And potentially win big.
    Here is an over-simplified explanation of why:

    Queue
    head           tail
    [c1]-[c2]-[c3]-[c4]
    c1-4 are the connection objects.
    

    All threads dequeue from the head and enqueue at the tail, so that creates natural contention on these two spots. No matter how the queue is implemented, there is a natural set of data describing the head that must be read and modified concurrently by many threads, so some form of mutual exclusion or compare-and-set retry loop has to be used to touch the head. Said differently: reading from a queue requires modifying the shape of the queue, as you’re removing an entry from it. The same is true when writing to the queue.
    Both DuplexConnectionPool and MultiplexConnectionPool exacerbate the problem because they use the queue as a stack: the re-queuing is performed at the head of the queue, as these two implementations want to maximize usage of the connections. RoundRobinConnectionPool mitigates that a bit by dequeuing from the head and re-queuing at the tail to maximize load spreading over all connections.

    The Solution

    The idea was to come up with a data structure that does not need to modify its shape when a connection is acquired or released. So instead of directly storing the connections in the data structure, let’s wrap each one in a metadata holder object (let’s call its class Entry) that can tell whether the connection is in use or not. Let’s pick a base data structure that is quick and easy to iterate, like an array.
    All threads iterate the array and try to reserve the visited connections until one can be successfully reserved. The reservation is done by trying to flip a flag atomically; a failed attempt is not retried, as a failure simply means you need to try the next connection. Because of multiplexing, the free/in-use flag actually has to be a counter that can go from 0 to the configured max multiplex.

    Array
    [e1|e2|e3|e4]
     a1 a2 a3 a4
     c1 c2 c3 c4
    e1-4 are the Entry objects.
    a1-4 are the "acquired" counters.
    c1-4 are the connection objects.
    

    Both DuplexConnectionPool and MultiplexConnectionPool still have some form of contention as they always start iterating at the array’s index 0. This is unavoidable if you want to maximize the usage of connections.
    RoundRobinConnectionPool uses an atomic counter to figure out where to start iterating. This relieves the contention on the first elements in the array at the expense of contending on the atomic counter itself.
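    A minimal sketch of the acquire/release idea follows (the real implementation is org.eclipse.jetty.util.Pool; this is only an illustration, and the startIndex parameter stands in for the strategies just described):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.atomic.AtomicInteger;

    class PoolSketch<T>
    {
        static class Entry<T>
        {
            final T pooled;
            final AtomicInteger acquired = new AtomicInteger(); // 0..maxMultiplex

            Entry(T pooled)
            {
                this.pooled = pooled;
            }
        }

        final List<Entry<T>> entries = new CopyOnWriteArrayList<>();
        private final int maxMultiplex;

        PoolSketch(int maxMultiplex)
        {
            this.maxMultiplex = maxMultiplex;
        }

        Entry<T> acquire(int startIndex)
        {
            int size = entries.size();
            for (int i = 0; i < size; i++)
            {
                Entry<T> entry = entries.get((startIndex + i) % size);
                int count = entry.acquired.get();
                // Try to reserve atomically; on failure just move on to the next entry
                // rather than retrying, as described above.
                if (count < maxMultiplex && entry.acquired.compareAndSet(count, count + 1))
                    return entry;
            }
            return null; // no entry available
        }

        void release(Entry<T> entry)
        {
            entry.acquired.decrementAndGet();
        }
    }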
    The results of this algorithm are speaking for themselves:

    Benchmark                                 (POOL_TYPE)   Mode  Cnt         Score         Error  Units   CPU
    ConnectionPoolsBenchmark.testPool              duplex  thrpt    3  15661516.143 ±  104590.027  ops/s  100%
    ConnectionPoolsBenchmark.testPool           multiplex  thrpt    3   6172145.313 ±  459510.509  ops/s  100%
    ConnectionPoolsBenchmark.testPool         round-robin  thrpt    3  15446647.061 ± 3544105.965  ops/s  100%
    

    Clearly, this time we’re using all the CPU cycles we can. And we get 3 times more throughput for the multiplex pool, 5 times for the duplex pool, and 10 times for the round-robin pool.
    This is quite an improvement in itself, but we can still do better.

    Closing the Loop

    The above explanation describes the core algorithm that the new connection pools were built upon. But this is a bit of an over-simplification as reality is a bit more complex than that.

    • Entries count can change dynamically. So instead of an array, a CopyOnWriteArrayList is used so that entries can be added or removed at any time while the pool is being used.
    • Since the Jetty client creates connections asynchronously and the pool enforces a maximum size, the mechanism to add new connections works in two steps: reserve and enable, plus a counter to track the pending connections, i.e.: those that are reserved but not yet enabled. Since it is not expected that adding or removing connections is going to happen very often, the operations that modify the CopyOnWriteArrayList happen under a lock to simplify the implementation.
    • The pool provides a feature with which you can set how many times a connection can be used by the Jetty client before it has to be closed and a new one opened. This usage counter needs to be updated atomically with the multiplexing counter which makes the acquire/release finite state machine more complex.

    Extra #1: Caching

    There is one extra step we can take that can improve the performance of the duplex and multiplex connection pools even further. In an ideal case, there are as many connections in the pool as there are threads using them. What if we could make each thread stick to the same connection every time? This can be done by storing a reference to the successfully acquired connection in a thread-local variable, which can then be reused the next time(s) a connection is acquired. This way, we bypass both the iteration of the array and the contention on the “acquired” counters. Of course, life isn’t always ideal, so we still need to keep the iteration and the counters as a fallback in case we drift away from the ideal case. But the closer the usage of the pool is to the ideal case, the more this thread-local cache avoids iterations and contention on the counters.
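    A minimal sketch of that cache, building on the PoolSketch above (illustration only, and handling only the non-multiplexed case for brevity):

    class CachingAcquireSketch<T>
    {
        private final ThreadLocal<PoolSketch.Entry<T>> lastAcquired = new ThreadLocal<>();
        private final PoolSketch<T> pool;

        CachingAcquireSketch(PoolSketch<T> pool)
        {
            this.pool = pool;
        }

        PoolSketch.Entry<T> acquire()
        {
            // First try the entry this thread used last time, bypassing the iteration...
            PoolSketch.Entry<T> cached = lastAcquired.get();
            if (cached != null && cached.acquired.compareAndSet(0, 1))
                return cached;
            // ...otherwise fall back to iterating the pool, and remember the result.
            PoolSketch.Entry<T> entry = pool.acquire(0);
            if (entry != null)
                lastAcquired.set(entry);
            return entry;
        }
    }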
    Here are the results you get with this extra addition:

    Benchmark                                 (POOL_TYPE)   Mode  Cnt         Score           Error  Units   CPU
    ConnectionPoolsBenchmark.testPool       cached/duplex  thrpt    3  65338641.629 ±  42469231.542  ops/s  100%
    ConnectionPoolsBenchmark.testPool    cached/multiplex  thrpt    3  64245269.575 ± 142808539.722  ops/s  100%
    

    The duplex pool is over four times faster again, and the multiplex pool over 10 times faster. Again, the benchmark reflects the ideal case, so this is really the best one can get, but these improvements certainly make it worth considering some tuning of the pool to try to get as much of those multipliers as possible.

    Extra #2: Iteration strategy

    All this logic has been moved to the org.eclipse.jetty.util.Pool class so that it can be re-used across all connection pool implementations, as well as potentially for pooling other objects. A good example is the server’s XmlConfiguration parser that uses expensive-to-create XML parsers. Another example is the Inflater and Deflater classes that are expensive to instantiate but are heavily used for compressing/decompressing requests and responses. Both are great candidates for pooling and already are pooled, each using its own custom pooling code.
    So we could replace those two pools with the new pool created for the Jetty client’s HTTP connection pool. Except that neither of those needs to maximize re-use of the pooled objects, and they don’t need to rotate through them either, so neither of the two iteration strategies discussed above does the proper job, and both come with some overhead.
    Hence, StrategyType was introduced to tell the pool from where the iteration should start:

    public enum StrategyType
    {
     /**
      * A strategy that looks for an entry always starting from the first entry.
      * It will favour the early entries in the pool, but may contend on them more.
      */
      FIRST,
     /**
      * A strategy that looks for an entry by iterating from a random starting
      * index. No entries are favoured and contention is reduced.
      */
      RANDOM,
     /**
      * A strategy that uses the {@link Thread#getId()} of the current thread
      * to select a starting point for an entry search. Whilst not as performant as
      * using the {@link ThreadLocal} cache, it may be suitable when the pool is substantially smaller
      * than the number of available threads.
      * No entries are favoured and contention is reduced.
      */
      THREAD_ID,
     /**
      * A strategy that looks for an entry by iterating from a starting point
      * that is incremented on every search. This gives similar results to the
      * random strategy but with more predictable behaviour.
      * No entries are favoured and contention is reduced.
      */
      ROUND_ROBIN,
    }
    

    FIRST is the one used by the duplex and multiplex connection pools, and ROUND_ROBIN the one used, well, by the round-robin connection pool. The two new ones, RANDOM and THREAD_ID, are about spreading the load across all entries like ROUND_ROBIN, but unlike it, they use a cheap way to find that start index that does not require any form of contention.
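    As an illustration of the difference, here is a rough sketch (not the actual Pool code) of how each strategy might pick the index at which the entry search starts:

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicInteger;

    class StartIndexSketch
    {
        private final AtomicInteger counter = new AtomicInteger();

        int first(int size)
        {
            return 0; // always favour the early entries, at the cost of contending on them
        }

        int random(int size)
        {
            return ThreadLocalRandom.current().nextInt(size); // cheap, no shared state
        }

        int threadId(int size)
        {
            return (int)(Thread.currentThread().getId() % size); // stable per thread
        }

        int roundRobin(int size)
        {
            return Math.floorMod(counter.getAndIncrement(), size); // contends on the counter
        }
    }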

  • Jetty, ALPN & Java 8u252

    Introduction

    The Jetty Project provided to the Java community support for NPN first (the precursor of ALPN) in Java 7, and then support for ALPN in Java 8.
    The ALPN support was implemented by modifying sun.security.ssl classes, and this required that the modified classes were prepended to the bootclasspath, so the command line would look like this:

    java -Xbootclasspath/p:/path/to/alpn-boot-8.1.13.v20181017.jar ...

    However, every different OpenJDK version could require a different version of the alpn-boot jar. ALPN support is there, but cumbersome to use.
    Things improved a bit with the Jetty ALPN agent. The agent could detect the Java version and redefine on-the-fly the sun.security.ssl classes (using the alpn-boot jars). However, a new OpenJDK version could generate a new alpn-boot jar, which in turn generates a new agent version, and therefore for every new OpenJDK version applications needed to verify whether the agent version needed to be updated. A bit less cumbersome, but still another hoop to jump through.

    OpenJDK ALPN APIs in Java 9

    The Jetty Project worked with the OpenJDK project to introduce proper ALPN APIs, and this was introduced in Java 9 with JEP 244.

    ALPN APIs backported to Java 8u252

    On April 14th 2020, OpenJDK 8u252 was released (the Oracle version is named 8u251, and it may be equivalent to 8u252, but there is no source code available to confirm this; we will be referring only to the OpenJDK version in the following sections).
    OpenJDK 8u252 contains the ALPN APIs backported from Java 9.
    This means that for OpenJDK 8u252 the alpn-boot jar is no longer necessary, because it’s no longer necessary to modify sun.security.ssl classes as now the official OpenJDK ALPN APIs can be used instead of the Jetty ALPN APIs.
    The Jetty ALPN agent version 2.0.10 does not perform class redefinition if it detects that the OpenJDK version is 8u252 or later. It can still be left in the command line (although we recommend not to), but will do nothing.

    Changes for libraries/applications directly using Jetty ALPN APIs

    Libraries or applications that are directly using the Jetty ALPN APIs (the org.eclipse.jetty.alpn.ALPN and related classes) should now be modified to detect whether the backported OpenJDK ALPN APIs are available. If the backported OpenJDK ALPN APIs are available, libraries or applications must use them; otherwise they must fall back to the Jetty ALPN APIs (as they did in the past).
    For example, the Jetty project uses the Jetty ALPN APIs in the jetty-alpn-openjdk8-[client|server] artifacts. The classes in those Jetty artifacts have been changed as described above, starting from Jetty version 9.4.28.v20200408.
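    As a rough illustration of that detection (a sketch only, not the code actually used by the Jetty artifacts), the backported OpenJDK ALPN APIs can be probed reflectively, since the methods below only exist in Java 9+ and in OpenJDK 8u252 or later:

    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLParameters;

    final class AlpnSupport
    {
        // Returns true if the OpenJDK ALPN APIs are available; otherwise the
        // Jetty ALPN APIs must be used as a fallback.
        static boolean hasOpenJdkAlpnApis()
        {
            try
            {
                SSLParameters.class.getMethod("setApplicationProtocols", String[].class);
                SSLEngine.class.getMethod("getApplicationProtocol");
                return true;
            }
            catch (NoSuchMethodException x)
            {
                return false;
            }
        }
    }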

    Changes for applications indirectly using ALPN

    Applications that are using Java 8 and that depend on libraries that provide ALPN support (such as the jetty-alpn-openjdk8-[client|server] artifact described above) must modify the way they are started.
    For an application that is still using an OpenJDK version prior to 8u252, the typical command line requires the alpn-boot jar in the bootclasspath and a library that uses the Jetty ALPN APIs (here as an example, jetty-alpn-openjdk8-server) in the classpath:

    /opt/openjdk-8u242/bin/java -Xbootclasspath/p:/path/to/alpn-boot-8.1.13.v20181017.jar -classpath jetty-alpn-openjdk8-server-9.4.27.v20200227:...

    For the same application that wants to use OpenJDK 8u252 or later, the command line becomes:

    /opt/openjdk-8u252/bin/java -classpath jetty-alpn-openjdk8-server-9.4.28.v20200408:...

    That is, the -Xbootclasspath option must be removed and the library must be upgraded to a version that supports the backported OpenJDK ALPN APIs.
    Alternatively, applications can switch to use the Jetty ALPN agent as described below.

    Frequently Asked Questions

    Was the ALPN APIs backport a good idea?

    Yes, because Java vendors are planning to support Java 8 until (at least) 2030, and this would have required providing alpn-boot jars for another 10 years.
    Now instead, applications and libraries will be able to use the official OpenJDK ALPN APIs provided by Java for as long as Java 8 is supported.

    Can I use OpenJDK 8u252 with an old version of Jetty?

    Yes, if your application does not need ALPN (in particular, it does not need artifacts jetty-alpn-openjdk8-[client|server]).
    No, if your application needs ALPN (in particular, it needs artifacts jetty-alpn-openjdk8-[client|server]). You must upgrade to Jetty 9.4.28.v20200408 or later.

    Can I use OpenJDK 8u252 with library X, which depends on the Jetty ALPN APIs?

    You must verify that library X has been updated to support OpenJDK 8u252.
    In practice, library X must detect whether the backported OpenJDK ALPN APIs are available; if so, it must use the backported OpenJDK ALPN APIs; otherwise, it must use the Jetty ALPN APIs.

    I don’t know what OpenJDK version my application will be run with!

    You need to make a version of your application that works with both OpenJDK 8u252 and earlier versions, see the previous question.
    For example, if you depend on Jetty, you need to depend on jetty-alpn-openjdk8-[client|server]-9.4.28.v20200408 or later.
    Then you must add the Jetty ALPN agent to the command line, since it supports all Java 8 versions.

    /opt/openjdk-8u242/bin/java -javaagent:/path/to/jetty-alpn-agent-2.0.10.jar -classpath jetty-alpn-openjdk8-server-9.4.28.v20200408:...
    /opt/openjdk-8u252/bin/java -javaagent:/path/to/jetty-alpn-agent-2.0.10.jar -classpath jetty-alpn-openjdk8-server-9.4.28.v20200408:...

    The command lines are identical apart from the Java version.
    In this way, if your application is run with OpenJDK 8u242 or earlier, the Jetty ALPN agent will apply the sun.security.ssl class redefinitions, and jetty-alpn-openjdk8-[client|server]-9.4.28.v20200408 will use the Jetty ALPN APIs.
    Otherwise, if your application is run with OpenJDK 8u252 or later, the Jetty ALPN agent will do nothing (no class redefinition is necessary), and jetty-alpn-openjdk8-[client|server]-9.4.28.v20200408 will use the OpenJDK ALPN APIs.

  • Renaming Jetty from javax.* to jakarta.*

    The Issue

    The Eclipse Jakarta EE project has not obtained the rights from Oracle to extend the Java EE APIs living in the javax.* package. As such, the Java community is faced with a choice between continuing to use the frozen javax.* APIs or transitioning to a new jakarta.* namespace where these specifications could continue to be evolved and/or significantly changed.
    This name change is now a “fait accompli” (a.k.a. “done deal”), so it will happen no matter what. However, how the Eclipse Jakarta EE specification process handles the transition and how vendors like Webtide respond are yet to be determined.   
    This blog discusses some of the options for how Webtide can evolve the Jetty project to deal with this renaming.

    Jakarta EE Roadmap

    Jakarta EE 8

    A release of Jakarta EE 8 will happen first, and that will include:

    • An update to Licensing
    • An update to Naming (to limit the use of the trademarked term “Java”)
    • An update to the Maven GroupID Coordinate Space in Maven Central
    • No change to the Java packaging of the old Java EE 8 classes; they still retain their javax.* namespace.

    For example, the current Maven coordinates for the Servlet jar are (groupId:artifactId:version):
    javax.servlet:javax.servlet-api:4.0.1
    With Jakarta EE 8, these coordinates will change to:
    jakarta.servlet:jakarta.servlet-api:4.0.2
    The content of these 2 jars will be identical: both will contain javax.servlet.Servlet.

    Jakarta EE 9

    The Eclipse Jakarta EE project is currently having a debate about what the future of Jakarta looks like. Options include:

    Jakarta EE “Big Bang” Rename Option

    Jakarta EE 9 would rename every API and implementation to use the jakarta.* namespace API.
    This means that javax.servlet.Servlet will become jakarta.servlet.Servlet.
    No other significant changes would be made (including not removing any deprecated methods or behaviours). This requires applications to update their package imports from javax.* to jakarta.* en masse, update any dependencies to versions that use the jakarta.* APIs (e.g. REST frameworks), and recompile the application.
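    In application source code, the rename would look like this (using the Servlet API for illustration; only the import changes):

    // Before the rename: Java EE 8 / Jakarta EE 8 (javax.* namespace)
    import javax.servlet.http.HttpServlet;

    // After a "Big Bang" rename to Jakarta EE 9 (jakarta.* namespace)
    import jakarta.servlet.http.HttpServlet;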
    Alternatively, backward compatibility would be required (provided by Containers) in some form so that legacy javax.* code could be deployed (at least initially) with static and/or dynamic translation and/or adaption.
    For example, Container vendors could provide tools that pre-process your javax.* application, transforming it into a jakarta.* application. Exact details of how these tools will work (or even exist) are still uncertain.

    Jakarta EE Incremental Change

    Jakarta EE 9 would maintain any javax.* API that did not require any changes. Only those APIs that have been modified/updated/evolved/replaced would be in the jakarta.* namespace.
    For example, if the Servlet 4.0 Specification is not updated, it will remain in the javax.* package.
    However, if the Servlet Specification leaders decide to produce a Servlet 5.0 Specification, it will be in the jakarta.* package.
     
     

    The Current State of Servlet Development

    So far, there appears to be very little user demand for iterations on the Servlet API.
    The current stable Jetty release series, 9.4.x, is on Servlet 3.1, which has not changed since 2013.
    Even though Servlet 4.0 was released in 2017, we have yet to finalize our Jetty 10 release using it, in part waiting for new Eclipse Jakarta artifacts for the various EE specs, e.g. servlet-api, jsp-api, el-api, websocket-api, and mail-api.  Despite being 2 years late with Servlet 4.0, we have not received a single user request asking for Servlet 4.0 features or a release date.
    On the other hand, there has been interest from our users in more significant changes, some of which we have already started supporting in Jetty specific APIs:

    • JPMS integration/support
    • Asynchronously filter input/output streams without request/response wrapping
    • Reactive style asynchronous I/O
    • Minimal startup times (avoiding discovery mechanisms) to facilitate fast spin-up of new cloud instances
    • Micro deployments of web functions
    • Asynchronous/background consumption of complex request content: XML, JSON, form parameters, multipart uploads etc.

    Jetty Roadmap

    Jetty 10

    Jetty 10, implementing the Servlet 4.0 Specification, will be released once the frozen Jakarta EE 8 artifacts are available. These artifacts will have a Maven groupId of jakarta.*, but will contain classes in the javax.* packages.
    It is envisioned that Jetty 10 will soon after become our primary stable release of Jetty and will be enhanced for several years and maintained for many more. In the mid-term, it is not Webtide’s intention to force any user to migrate to the new jakarta.* APIs purely due to the lack of availability of a javax.* implementation.  Any innovations or developments done in Jetty 10 will have to be non-standard extensions.
    However, we are unable to commit to long term support for the external dependencies bundled with a Jetty release that use the javax.* package (eg. JSP, JSTL, JNDI, etc.) unless we receive such commitments from their developers.

    Jetty 11

    Jetty 11 would be our first release that uses the Jakarta EE 9 APIs. These artifacts will have a Maven groupId of jakarta.* and contain classes also in the jakarta.* packages.
    We are currently evaluating if we will simply do a rename (aka Big Bang) or instead take the opportunity to evolve the server to be able to run independently of the EE APIs and thus be able to support both javax.* and jakarta.*

    Jetty Options with Jakarta EE “Big Bang”

    If the Eclipse foundation determines that the existing javax.* APIs are to be renamed to jakarta.* APIs at once and evolved, then the following options are available for the development of future Jetty versions.
    Note that current discussions around this approach will not change anything about the functionality present in the existing APIs (e.g. no removal of deprecated methods or functions, no cleanup or removal of legacy behaviors, no new functionality, etc.).

    Option 0. Do Nothing

    We keep developing Jetty against the javax.* APIs which become frozen in time.
    We would continue to add new features but via Jetty specific APIs. The resources that would have otherwise been used supporting the rename to jakarta.* will be used to improve and enhance Jetty instead.
    For users wishing to stay with javax.* this is what we plan to do with Jetty 10 for many years, so “Do Nothing” will be an option for some time. That said, if Jetty is to continue to be a standards-based container, then we do need to explore additional options.

    Option 1. “Big Bang” Static Rename

    Jetty 11 would be created by branching Jetty 10 and renaming all javax.* code to use jakarta.* APIs from Jakarta EE9. Any users wishing to run javax.* code would need to do so on a separate instance of Jetty 10 as Jetty 11 would only run the new APIs.
    New features implemented in future releases of Jakarta EE APIs would only be available in Jetty 11 or beyond. Users that wish to use the latest Jetty would have to do the same rename in their entire code base and in all their dependencies.
    The transition to the new namespace would be disruptive but is largely a one-time effort. However, significant new features in the APIs would be delayed by the transition and then constrained by the need to start from the existing APIs.

    Option 2. “Big Bang” Static Rename with Binary Compatibility

    This option is essentially the same as Option 1, except that Jetty 11 would include tools/features to dynamically rename (or adapt – the technical details to be determined) javax.* code usage, either as it is deployed or via pre-deployment tooling. This would allow existing code to run on the new Jetty 11 releases. However, as the Jakarta APIs evolve, there is no guarantee that this could continue to be done, at least not without some runtime cost. The Jakarta EE project is currently discussing backward compatibility modes and how (or if) these will be specified.
    This approach minimizes the initial impact of transitioning, as it allows legacy code to continue to be deployed. The downside, however, is that these impacts may be experienced for many years until that legacy code is refactored. These effects will include the lost opportunities for new features to be developed as development resources are consumed implementing and maintaining the binary compatibility feature.

    Option 3. Core Jetty

    We could take the opportunity forced on us by the renaming to make the core of Jetty 11 independent of any Jakarta EE APIs.
    A new modern lightweight core Jetty server would be developed, based on the best parts of Jetty 10, but taking the opportunity to update, remove legacy concerns, implement current best practices and support new features.
    Users could develop to this core API as they do now for embedded Jetty usage. Current embedded Jetty code would need to be ported to the new API (as it would with any javax.* rename to jakarta.* ).
    Standard-based deployments would be supported by adaption layers providing the servlet container and implementations would be provided for both javax.* and jakarta.*. A single server would be able to run both old and new APIs, but within a single context, the APIs would not be able to be mixed.
    This is a significant undertaking for the Jetty Project but potentially has the greatest reward as we will obtain new features, not just a rename.

    Jetty Options with Jakarta EE Incremental Change

    These are options if the Eclipse foundation determines that the jakarta.*  package will be used only for new or enhanced APIs.

    Option 4. Jakarta Jetty

    This option is conceptually similar to Option 3, except that the new Jetty core API will be standardized by Jakarta EE 9 (which will hopefully be a lightweight core) to current best practices and include new features.
    Users could develop to this new standard API. Legacy deployments would be supported by adaption layers providing the javax.* servlet container, which may also be able to see new APIs and thus mix development.

    Option 5.  Core Jetty

    This option is substantially the same as Option 3 as Jetty 11 would be based around an independent lightweight core API.
    Users could develop to this core API as they do now for embedded Jetty usage. Standard-based deployments would be supported by adaption layers providing the servlet container and implementations would be provided for both javax.* and jakarta.*.

    How You Can Get Involved

    As with any large-scale change, community feedback is paramount. We invite all users of Jetty, regardless of how you consume it, to make your voice heard on this issue.
    For Jetty-specific feedback, we have opened messages on both the jetty-users and jetty-dev mailing lists. These discussions can be found below:
    https://www.eclipse.org/lists/jetty-users/msg08908.html
    https://www.eclipse.org/lists/jetty-dev/msg03307.html
    To provide feedback on the broader change from the javax.* to jakarta.* namespace, you can find a discussion on the jakartaee-platform-dev mailing list:
    https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html

  • Indexing/Listing Vulnerability in Jetty

    If you are using DefaultServlet or ResourceHandler with indexing/listing, then you are vulnerable to a variant of XSS behaviors surrounding the use of injected HTML element attributes on the parent directory link. We recommend disabling indexing/listing or upgrading to a non-vulnerable version.
    To disable indexing/listing:
    If using the DefaultServlet (provided by default on a standard WebApp/WAR), you’ll set the dirAllowed init-param to false.
    This can be controlled in a few different ways:
    Directly in your WEB-INF/web.xml
    Add/edit the following entry …

      <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
        <init-param>
          <param-name>dirAllowed</param-name>
          <param-value>false</param-value>
        </init-param>
        ... (other init) ...
        <load-on-startup>0</load-on-startup>
      </servlet>
    

    Alternatively, you’ll edit your configured web descriptor default (usually declared as webdefault.xml) or your web descriptor override-web.xml.
    The web defaults descriptor can either be configured at the WebAppProvider level, where it will be applied to all webapps being deployed, or at the individual webapp level.
    For the WebAppProvider level, you have several choices.
    If you are managing the XML yourself, you can set the Default Descriptor to your edited version:

            <Call id="webappprovider" name="addAppProvider">
              <Arg>
                <New class="org.eclipse.jetty.deploy.providers.WebAppProvider">
                  <Set name="monitoredDirName">
                     <Property name="jetty.base"/>/other-webapps
                  </Set>
                  <Set name="defaultsDescriptor">
                     <Property name="jetty.base"/>/etc/webdefault.xml
                  </Set>
                  <Set name="extractWars">true</Set>
                  <Set name="configurationManager">
                    <New class="org.eclipse.jetty.deploy.PropertiesConfigurationManager"/>
                  </Set>
                </New>
              </Arg>
            </Call>

    Note: the WebAppProvider cannot set the override-web.xml for all webapps.
    If you are using the jetty.home/jetty.base module system and associated start.d/*.ini or start.ini, then you should be able to just point to your specifically edited webdefault.xml.
    Example:

    $ grep jetty.deploy.defaultsDescriptorPath start.d/deploy.ini
    jetty.deploy.defaultsDescriptorPath=/path/to/fixed/webdefault.xml
    

    If you are using a webapp specific deployment XML, such as what’s found in ${jetty.base}/webapps/<appname>.xml then you’ll edit the XML to point to your specific webdefault.xml or override-web.xml:

    <Configure id="exampleWebapp" class="org.eclipse.jetty.webapp.WebAppContext">
      <Set name="contextPath">/example</Set>
      <Set name="war"><Property name="jetty.webapps"/>/example.war</Set>
      <Set name="defaultsDescriptor">/path/to/fixed/webdefault.xml</Set>
      <Set name="overrideDescriptor">/path/to/override-web.xml</Set>
    </Configure>
    

    Reminder, the load order for the effective web descriptor is …

    1. Default Descriptor – webdefault.xml
    2. WebApp Descriptor – WEB-INF/web.xml
    3. Override Descriptor – override-web.xml

    If using the ResourceHandler (such as in an embedded-jetty setup), you’ll use the ResourceHandler.setDirAllowed(false) method.
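    As a minimal embedded-Jetty sketch (assuming the Jetty 9.4 APIs; the port and resource base path below are just placeholders):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.ResourceHandler;

    public class NoListings
    {
        public static void main(String[] args) throws Exception
        {
            Server server = new Server(8080);

            ResourceHandler resourceHandler = new ResourceHandler();
            resourceHandler.setResourceBase("/path/to/static/content"); // placeholder
            resourceHandler.setDirAllowed(false); // disable directory indexing/listing

            server.setHandler(resourceHandler);
            server.start();
            server.join();
        }
    }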
    Additionally, we discovered that usages of DefaultHandler were susceptible to a similar leak of information. If no webapp was mounted on the root “/” namespace, a page would be generated with links to other namespaces. This has been the default behavior in Jetty for years, but we have removed this to safeguard data.
    As a result of these CVEs, we have released new versions for the 9.2.x, 9.3.x, and 9.4.x branches. The most up-to-date versions of all three are as follows, and are available both on the Jetty website and Maven Central.
    Versions affected:

    • 9.2.27 and older (now EOL)
    • 9.3.26 and older
    • 9.4.16 and older

    Resolved:

    • 9.2.28.v20190418
    • 9.3.27.v20190418
    • 9.4.17.v20190418

     

  • Eat What You Kill without Starvation!

    Jetty 9 introduced the Eat-What-You-Kill execution strategy (the name comes from a hunting proverb, in the sense that one should only kill to eat; its use is not an endorsement of hunting nor of killing wildlife for food or sport) to apply mechanically sympathetic techniques to the scheduling of threads in the producer-consumer pattern that are used for core capabilities in the server. The initial implementations proved vulnerable to thread starvation, and Jetty-9.3 introduced dual scheduling strategies to keep the server running, which in turn suffered from lock contention on machines with more than 16 cores.  The Jetty-9.4 release now contains the latest incarnation of the Eat-What-You-Kill scheduling strategy, which provides mechanical sympathy without the risk of thread starvation in a single strategy.  This blog is an update of the original post with the latest refinements.

    Parallel Mechanical Sympathy

    Parallel computing is a “false friend” for many web applications. The textbooks will tell you that parallelism is about decomposing large tasks into smaller ones that can be executed simultaneously by different computing engines to complete the task faster. While this is true, the issue is that for web application containers there is not an agreement on what is the “large task” that needs to be decomposed.

    From the applications point of view the large task to be solved is how to render a complex page for a user, combining multiple requests and resources, using many services for authentication and perhaps RESTful access to a data model on multiple back end servers. For the application, parallelism can improve quality of service of rendering a single page by spreading the decomposed tasks over all the available CPUs of the server.

    However, a web application container has a different large task to solve: how to provide service to hundreds or thousands, maybe even hundreds of thousands, of simultaneous users. Unfortunately, for the container, the way to optimally allocate its decomposed tasks to CPUs is completely opposite to how the application would like its decomposed tasks to be executed.

    Consider a server with 4 CPUs serving 4 users each which each have 4 tasks. The applications ideal view of parallel decomposition looks like:

    (Figure: labels UxTy represent Task y for User x; tasks for the same user are coloured alike.)

    This view suggests that each user’s combined task will be executed in minimum time. However, some users must wait for prior users’ tasks to complete before their own execution can start, so average latency is higher.

    Furthermore, we know from Mechanical Sympathy that such ideal execution is rarely possible, especially if there is data shared between tasks. Each CPU needs time to load its caches and registers with data before that data can be acted on. If that data is specific to the problem each user is trying to solve, then the real view of the parallel execution looks more like the following, the orange blocks indicating the time taken to load the CPU cache with user and task related data:

    (Figure: labels UxTy represent Task y for User x; tasks for the same user are coloured alike. Orange blocks represent cache load time.)

    So from the container’s point of view, the last thing it wants is the data from one user’s large problem spread over all its CPUs, because that means that when it executes the next task, it will have a cold cache that must be reloaded with the data of the next user.  Furthermore, executing tasks for the same user on different CPUs risks Parallel Slowdown, where the cost of mutual exclusion, synchronisation and communication between CPUs can increase the total time needed to execute the tasks to more than serial execution.  If the tasks are fully mutually excluded on user data (unlikely, but a bounding case), then the execution could look like:

    For optimal execution from the container’s point of view, it is far better if tasks from each user, which use common data, are kept on the same CPU so the cache only needs to be loaded once and there is no mutual exclusion on user data:

    While this style of execution does not achieve the minimal latency and throughput of the idealised application view, in reality it is the fairest and most efficient execution, with all users receiving a similar quality of service and the best average latency.

    In summary, when scheduling the execution of parallel tasks, it is best to keep tasks that share data on the same CPU so that they may benefit from a hot cache (the original blog contains some micro benchmark results that quantify the benefit).

    Produce Consume (PC)

    In order to facilitate the decomposition of large problems into smaller ones, the Jetty container uses the Producer-Consumer pattern:

    • The NIO Selector produces IO events that need to be consumed by reading, parsing and handling the data.
    • A multiplexed HTTP/2 connection produces Frames that need to be consumed by calling the Servlet Container. Note that the producer of HTTP/2 frames is itself a consumer of IO events!

    The producer-consumer pattern adds another way that tasks can be related by data. Not only might they be for the same user, but consuming a task will share the data that results from producing the task. A simple implementation can achieve this by using only a single CPU to both produce and consume the tasks:

    while (true)
    {
      Runnable task = _producer.produce();
      if (task == null)
        break;
      task.run();
    }

    The resulting execution pattern has good mechanical sympathy characteristics:

    (Figure: labels UxPy represent Produce Task y for User x, and labels UxCy represent Consume Task y for User x; tasks for the same user are coloured in similar tones. Orange blocks are cache load times.)

    Here all the produced tasks are immediately consumed on the same CPU with a hot cache!  Cache load times are minimised, but the cost is that the server will suffer from Head of Line (HOL) Blocking, where the serial execution of tasks from a queue means that tasks are forced to wait for the completion of unrelated tasks.  In this case U1C0 need not wait for U0C0, and U2C0 need not wait for U1C1 or U0C1, etc. There is no parallel execution and thus this is not an optimal usage of the server resources.

    Produce Execute Consume (PEC)

    To solve the HOL blocking problem, multiple CPUs must be used so that produced tasks can be executed in parallel and even if one is slow or blocks, the other CPU can progress the other tasks.  To achieve this, a typical solution is to have one Thread executing on a CPU that will only produce tasks, which are then placed in a queue of tasks to be executed by Threads running on other CPUs.   Typically the task queue is abstracted into an Executor:

    while (true)
    {
        Runnable task = _producer.produce();
        if (task == null)
            break;
        _executor.execute(task);
    }

    This strategy could be considered the canonical solution to the producer-consumer problem, where producers are separated from consumers by a queue, and it is at the heart of architectures such as SEDA. This strategy solves the head of line blocking issue well, since all tasks produced can complete independently in different Threads on different CPUs:

    This represents a good improvement in throughput and average latency over the simple Produce Consume solution, but the cost is that every consumed task is executed on a different Thread (and thus likely a different CPU) from the one that produced it.  While this may appear a small cost for avoiding HOL blocking, our experience is that CPU cache misses significantly reduced the performance of early Jetty 9 releases.

    Eat What You Kill (EWYK) AKA Execute Produce Consume (EPC)

    To achieve good mechanical sympathy and avoid HOL blocking, Jetty has developed the Execute Produce Consume strategy, which we have nicknamed Eat What You Kill (EWYK) after the expression that says a hunter should only kill an animal they intend to eat. Applied to the producer-consumer problem, this policy says that a thread should only produce (kill) a task if it intends to consume (eat) it. A task queue is still used to achieve parallel execution, but it is the producer that is dispatched rather than the produced task:

        while (true)
        {
            Runnable task = _producer.produce();
            if (task == null)
                break;
            _executor.execute(this); // dispatch production
            task.run(); // consume the task ourselves
        }

    The result is that a task is consumed by the same Thread, and thus likely the same CPU, that produced it, so that consumption is always done with a hot cache:

    Moreover, because any thread that completes consuming a task will immediately attempt to produce another task, there is the possibility of a single Thread/CPU executing multiple produce/consume cycles for the same user. The result is improved average latency and reduced total CPU time.

    Starvation!

    Unfortunately, a pure implementation of EWYK suffers from a fatal flaw! Since any thread producing a task will go on to consume that task, it is possible for all threads/CPUs to be consuming at once.  This was initially seen as a feature, as it exerted good back pressure on the network: a busy server used all its resources consuming existing tasks rather than producing new tasks. However, in an application server consuming a task may be a blocking process that waits for more data/frames to be produced. Unfortunately, if every thread/CPU ends up consuming such a blocking task, then there are no threads left available to produce the tasks to unblock them. Deadlock!

    A real example of this occurred with HTTP/2, when every Thread from the pool was blocked in an HTTP/2 request because it had used up its flow control window. The windows could be expanded by flow control frames from the other end, but there were no threads available to process the flow control frames!

    Thus the EWYK execution strategy used in Jetty is now adaptive and it can use the most appropriate of the three strategies outlined above, ensuring there is always at least one thread/CPU producing so that starvation does not occur. To be adaptive, Jetty uses two mechanisms:

    • Tasks that are produced can be interrogated via the Invocable interface to determine if they are non-blocking, blocking, or able to run in either mode.  NON_BLOCKING or EITHER tasks can be directly consumed by the PC model.
    • The thread pools used by Jetty implement the TryExecutor interface, which supports the method boolean tryExecute(Runnable task). This allows the scheduler to know if a thread was available to continue producing, and thus allows EWYK/EPC mode; otherwise the task must be passed to an executor to be consumed in PEC mode.  To implement this semantic, Jetty maintains a dynamically sized pool of reserved threads that can respond to tryExecute(Runnable) calls (a simplified sketch of this idea follows below).
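    To illustrate the tryExecute semantic, here is a deliberately simplified sketch (it is not Jetty's actual ReservedThreadExecutor): tryExecute only succeeds while one of a fixed number of reserved slots is free, otherwise the caller knows it must fall back to PEC mode.

    import java.util.concurrent.Executor;
    import java.util.concurrent.Semaphore;

    interface TryExecutor extends Executor
    {
        boolean tryExecute(Runnable task);
    }

    class ReservedSlotExecutor implements TryExecutor
    {
        private final Executor delegate;
        private final Semaphore reserved;

        ReservedSlotExecutor(Executor delegate, int reservedThreads)
        {
            this.delegate = delegate;
            this.reserved = new Semaphore(reservedThreads);
        }

        @Override
        public void execute(Runnable task)
        {
            delegate.execute(task);
        }

        @Override
        public boolean tryExecute(Runnable task)
        {
            // Only accept the task if a reserved slot is free; otherwise the
            // caller must use the Produce Execute Consume (PEC) fallback.
            if (!reserved.tryAcquire())
                return false;
            delegate.execute(() ->
            {
                try
                {
                    task.run();
                }
                finally
                {
                    reserved.release();
                }
            });
            return true;
        }
    }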

    Thus the simple produce consume (PC) model is used for non-blocking tasks; for blocking tasks the EWYK, aka Execute Produce Consume (EPC) mode is used if a reserved thread is available, otherwise the SEDA style Produce Execute Consume (PEC) model is used.

    The adaptive EWYK strategy can be written as:

        while (true)
        {
            Runnable task = _producer.produce();
            if (task == null)
                break;
            if (Invocable.getInvocationType(task)==NON_BLOCKING)
                task.run();                     // Produce Consume
            else if (executor.tryExecute(this)) // recruit a new producer?
                task.run();                     // Execute Produce Consume (EWYK!)
            else
                executor.execute(task);         // Produce Execute Consume
        }
    

    Chained Execution Strategies

    As stated above, in the Jetty use-case it is common for the execution strategy used by the IO layer to call tasks that are themselves an execution strategy for producing and consuming HTTP/2 frames.  Thus EWYK strategies can be chained and, by knowing some information about the mode in which the prior strategy has executed them, the strategies can be even more adaptive.

    The adaptable chainable EWYK strategy is outlined here:

      while (true)
      {
        Runnable task = _producer.produce();
        if (task == null)
          break;
        if (thisThreadIsNonBlocking())
        {
          switch (Invocable.getInvocationType(task))
          {
            case NON_BLOCKING:
              task.run();                 // Produce Consume
              break;
            case BLOCKING:
              _executor.execute(task);    // Produce Execute Consume
              break;
            case EITHER:
              executeAsNonBlocking(task); // Produce Consume
              break;
          }
        }
        else
        {
          switch (Invocable.getInvocationType(task))
          {
            case NON_BLOCKING:
              task.run();                   // Produce Consume
              break;
            case BLOCKING:
              if (_executor.tryExecute(this))
                task.run();                 // Execute Produce Consume (EWYK!)
              else
                _executor.execute(task);    // Produce Execute Consume
              break;
            case EITHER:
              if (_executor.tryExecute(this))
                task.run();                 // Execute Produce Consume (EWYK!)
              else
                executeAsNonBlocking(task); // Produce Consume
              break;
          }
        }
      }

    An example of how the chaining works is that the HTTP/2 task declares itself as invocable EITHER in blocking or non-blocking mode. If the IO strategy is operating in PEC mode, then the HTTP/2 task is in its own thread and free to block, so it can itself use EWYK and potentially execute a blocking task that it produced.

    However, if the IO strategy has no reserved threads, it cannot risk queuing an important Flow Control frame in a job queue. Instead it can execute the HTTP/2 task as a non-blocking task in PC mode.  So even if the last available thread is running the IO strategy, it can use PC mode to execute HTTP/2 tasks in non-blocking mode. The HTTP/2 strategy is then always able to handle flow control frames, as they are non-blocking tasks run as PC, and all other frames that may block are queued with PEC.

    Conclusion

    The EWYK execution strategy has been implemented in Jetty to improve performance through mechanical sympathy, whilst avoiding the issues of Head of Line blocking, Thread Starvation and Parallel Slowdown.  The team at Webtide continues to work with our clients and users to analyse and innovate better solutions for serving high-performance, real-world applications.