Blog

  • Introducing Jetty-12

For the last 18 months, Webtide engineers have been working on the most extensive overhaul of the Eclipse Jetty HTTP server and Servlet container since its inception in 1995. The headline for the release of Jetty 12.0.0 could be “Support for the Servlet 6.0 API from Jakarta EE 10”, but the full story is a root-and-branch overhaul and modernization of the project to set it up for yet more decades of service.

    This blog is an introduction to the features of Jetty 12, many of which will be the subject of further deep-dive blogs.

    Servlet API independent

In order to support the Servlet 6.0 API, we took the somewhat counterintuitive approach of making Jetty independent of the Servlet API. Specifically, we have removed any dependency on the Servlet API from the core Jetty HTTP server and handler architecture. This takes Jetty back to its roots, as it was Servlet API independent for the first decade of the project.

    The Servlet API independent approach has the following benefits:

• There is now a set of jetty-core modules that provide a high-performance and scalable HTTP server. The jetty-core modules are usable directly when there is no need for the Servlet API and the overhead introduced by its features and legacy.
• For projects like Jetty, support must be maintained for multiple versions of the Servlet APIs. We are currently supporting branches for Servlet 3.1 in Jetty 9.4.x, Servlet 4.0 in Jetty 10.0.x, and Servlet 5.0 in Jetty 11.0.x. Adding a fourth branch to maintain would have been intolerable. With Jetty 12, our ongoing support for Servlet 4.0, 5.0 and 6.0 will be based on the same core HTTP server in the one branch.
• The Servlet APIs have many deprecated features that are no longer best practice. With Servlet 6.0, some of these were finally removed from the specification (e.g. Object Wrapper Identity). Removing these features from the Jetty core modules allows for better performance and cleaner implementations of the current APIs.

    Multiple EE Environments

    To support the Servlet APIs (and related Jakarta EE APIs) on top of the jetty-core, Jetty 12 uses an Environment abstraction that introduces another tier of class loading and configuration. Each Environment holds the applicable Jakarta EE APIs needed to provide Servlet support (but not the full suite of EE APIs).

Multiple environments can be run simultaneously on the same server, and Jetty-12 supports:

• EE8 (Servlet 4.0) in the javax.* namespace,
• EE9 (Servlet 5.0) in the jakarta.* namespace with deprecated features,
• EE10 (Servlet 6.0) in the jakarta.* namespace without deprecated features,
• Core environments with no Servlet support or overhead.
The implementations of the EE8 and EE9 environments are substantially taken from the current Jetty-10 and Jetty-11 releases, so that applications that depend on them can be deployed on Jetty-12 with minimal risk of changes in behaviour (i.e. they are somewhat “bug for bug compatible”). Even if there is no need to simultaneously run different environments, upgrading applications to current and future releases of the Jakarta EE specifications will be simpler, as it is decoupled from a major release of the server itself. For example, it is planned that EE 11 support (probably with Servlet 6.1) will be made available in a Jetty 12.1.0 release rather than in a major upgrade to a 13.0.0 release.

    Core Environment

As mentioned above, the jetty-core modules are now available for direct support of HTTP without the need for the overhead and legacy of the Servlet API. As part of this effort many APIs have been updated and refined:
• The core Sessions are now directly usable.
• A core Security model has been developed that is used to implement the Servlet security model, but avoids some of its bizarre behaviours (I'm talking about you, exposed methods!).
• The Jetty WebSocket API has been updated and can be used on top of the core WebSocket APIs.
• The Jetty HttpClient APIs have been updated.

    Performance

Jetty 12 has achieved significant performance improvements. Our continuous performance tracking indicates equal or better CPU utilisation for a given load, with lower latency and no long tail of degraded quality of service.

Our tests currently apply a load of 240,000 requests per second and then measure quality of service by latency (99th percentile and maximum). Below is the plot of latency for Jetty 11:

This shows that the orange 99th percentile latency is almost too small to see in the plot (at 24.1 µs average), and all you do see is the yellow plot of the maximal latency (max 1400 µs). Whilst these peaks look large, the scale is in microseconds, so the longest maximal delay is just over 1.4 milliseconds and 99% of requests are handled in 0.024 ms!

    Below is the same plot of latency for Jetty 12 handling 240,000 requests per second:

The 99th percentile latency is now only 20.2 µs and the peaks are less frequent and rarely over 1 ms, with a maximum of 1100 µs.
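For readers curious how such percentile figures are derived from raw per-request recordings, here is a small self-contained sketch (plain Java with made-up sample data, not the actual Jetty Load Generator recorder) using the nearest-rank method:

```java
import java.util.Arrays;

public class LatencyPercentiles {
    // Returns the value at the given percentile (0-100) of the samples,
    // using the nearest-rank method on a sorted copy.
    static long percentile(long[] latenciesMicros, double pct) {
        long[] sorted = latenciesMicros.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length); // 1-based rank
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Hypothetical per-request latencies in microseconds.
        long[] samples = new long[1000];
        for (int i = 0; i < samples.length; i++)
            samples[i] = 20 + (i % 10);      // most requests around 20-29 µs
        samples[999] = 1400;                 // one long-tail outlier

        long p99 = percentile(samples, 99.0);
        long max = Arrays.stream(samples).max().getAsLong();
        System.out.println("p99=" + p99 + "µs max=" + max + "µs");
    }
}
```

Note how a single outlier dominates the maximum while barely moving the 99th percentile, which is why both figures are tracked.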

    You can see the latest continuous performance testing of jetty-12 here.

    New Asynchronous IO abstraction

    In the jetty-core is a new asynchronous abstraction that is a significant evolution of the asynchronous approaches developed in Jetty over many previous releases.

But “Loom!” I hear some say. Why be asynchronous if Loom will solve all your problems? Firstly, Loom is not a silver bullet, and we have seen no performance benefits from adopting Loom in the core of Jetty. If we were to adopt Loom in the core we'd lose the significant benefits of our advanced execution strategy (which ensures that tasks have a good chance of being executed on a CPU core with a hot cache filled with the relevant data).

However, there are definitely applications that will benefit from the simple scaling offered by Loom's virtual threads, so Jetty has taken the approach of staying asynchronous in the core, but with optional support for Loom in our execution strategy: virtual threads may be used by the execution strategy, rather than submitting blocking jobs to a thread pool. This is a best-of-both-worlds approach, as it lets us deal with the highly complex but efficient/scalable asynchronous core, whilst letting applications be written in a blocking style that can still scale.

But I hear others say: “why yet another async abstraction when there are already so many: reactive, Flow, NIO, Servlet, etc.?” Adopting a simple but powerful core async abstraction allows us to easily adapt to many other abstractions: specifically, Servlet asynchronous IO, Flow and blocking InputStream/OutputStream are trivial to implement. Other features of the abstraction are:

• The input side can be used iteratively, avoiding deep stacks and needless dispatches (borrowed from the Servlet API).
• A demand API simplified from Flow/Reactive.
• Retainable ByteBuffers for zero-copy handling.
• A Content abstraction to handle errors and trailers simply, inline.

The asynchronous APIs are available to be used directly in jetty-core; alternatively, applications may wrap them in other asynchronous or blocking APIs, or simply use Servlets and never see them (but still benefit from them).

      Below is an example of using the new APIs to asynchronously read content from a Content.Source into a string:

public static class FutureString extends CompletableFuture<String> {
    private final CharsetStringBuilder text;
    private final Content.Source source;

    public FutureString(Content.Source source, Charset charset) {
        this.source = source;
        this.text = CharsetStringBuilder.forCharset(charset);
        source.demand(this::onContentAvailable);
    }

    private void onContentAvailable() {
        while (true) {
            Content.Chunk chunk = source.read();
            if (chunk == null) {
                // No content available yet: demand a callback when there is more.
                source.demand(this::onContentAvailable);
                return;
            }

            try {
                if (Content.Chunk.isFailure(chunk))
                    throw chunk.getFailure();

                if (chunk.hasRemaining())
                    text.append(chunk.getByteBuffer());

                if (chunk.isLast() && complete(text.build()))
                    return;
            } catch (Throwable e) {
                completeExceptionally(e);
            } finally {
                chunk.release();
            }
        }
    }
}

The asynchronous abstraction will be explained in detail in a later blog, but some notes about the code above:

• There are no data copies into buffers (as is often needed with read(byte[] buffer) style APIs). The chunk may be a slice of a buffer that was read directly from the network, and there are retain() and release() methods to allow references to be kept if need be.
• All data and metadata flow via pull-style calls to the Content.Source.read() method, including bytes of content, failures and the EOF indication. Even HTTP trailers are sent as Chunks. This avoids the mutual exclusion that can be needed if there are onData and onError style callbacks.
• The read style is iterative, so there is less need to break down code into multiple callback methods.
• The only callback is to the onContentAvailable method that is passed to Content.Source#demand(Runnable) and is called back when demand is met (i.e. read can be called with a non-null return).
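To make the demand/read cycle concrete without pulling in the real Jetty classes, here is a self-contained sketch of the same pull pattern using a hypothetical, greatly simplified source (the names mirror, but are not, the actual jetty-core API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class PullPatternDemo {
    // Hypothetical, simplified stand-in for Content.Chunk.
    record Chunk(String data, boolean last) {}

    // Hypothetical, simplified stand-in for Content.Source.
    static class SimpleSource {
        private final Queue<Chunk> chunks = new ArrayDeque<>();
        private Runnable demand;

        // read() returns null when no chunk is currently available.
        Chunk read() { return chunks.poll(); }

        // demand() registers a callback invoked once content becomes available.
        void demand(Runnable onAvailable) { this.demand = onAvailable; }

        void offer(Chunk chunk) {
            chunks.add(chunk);
            Runnable d = demand;
            demand = null;
            if (d != null) d.run();  // meet any pending demand
        }
    }

    static final StringBuilder text = new StringBuilder();

    // Iterative read loop: read until null (then re-demand) or the last chunk.
    static void onContentAvailable(SimpleSource source) {
        while (true) {
            Chunk chunk = source.read();
            if (chunk == null) {
                source.demand(() -> onContentAvailable(source));
                return;
            }
            text.append(chunk.data());
            if (chunk.last())
                return;
        }
    }

    public static void main(String[] args) {
        SimpleSource source = new SimpleSource();
        source.demand(() -> onContentAvailable(source));
        source.offer(new Chunk("Hello, ", false)); // arrives "from the network"
        source.offer(new Chunk("world!", true));
        System.out.println(text); // accumulated across callbacks
    }
}
```

The key property is the same as in FutureString above: the only callback is the demand callback, and all content, however it arrives, flows through the one iterative read loop.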

Handler, Request & Response design

The core building blocks of a Jetty server are the Handler, Request and Response interfaces. These have been significantly revised in Jetty 12:

• They now fully embrace and support the asynchronous abstraction. The previous Handler design predated asynchronous request handling and thus was not entirely fit for purpose.
• The Request is now immutable, which solves many issues (see “Mutable Request” in Less is More Servlet API) and allows for efficiencies and simpler asynchronous implementations.
• Duplication has been removed from the APIs, so that wrapping requests and responses is now simpler and less error prone (e.g. there is no longer the need to wrap both a sendError and a setStatus method to capture the response status).

Here is an example Handler that asynchronously echoes all of a request's content back to the response, including any trailers:

public boolean handle(Request request, Response response, Callback callback) {
    response.setStatus(200);
    long contentLength = -1;
    for (HttpField field : request.getHeaders()) {
        if (field.getHeader() != null) {
            switch (field.getHeader()) {
                case CONTENT_LENGTH -> {
                    response.getHeaders().add(field);
                    contentLength = field.getLongValue();
                }
                case CONTENT_TYPE -> response.getHeaders().add(field);
                case TRAILER -> response.setTrailersSupplier(HttpFields.build());
                case TRANSFER_ENCODING -> contentLength = Long.MAX_VALUE;
            }
        }
    }
    if (contentLength > 0)
        Content.copy(request, response, Response.newTrailersChunkProcessor(response), callback);
    else
        callback.succeeded();
    return true;
}

      Security

With sponsorship from the Eclipse Foundation and the Open Source Technology Improvement Fund, Webtide was able to engage Trail of Bits for a significant security collaboration. The collaboration discovered 25 issues of varying severity, including several that have resulted in CVEs against previous Jetty releases. The Jetty project has a good security record, and this collaboration is proving a valuable way to continue that.

      Big update & cleanup

Jetty is a 28-year-old project. A bit of cruft and legacy has accumulated over that time, not to mention that many RFCs have been obsoleted (several times over) in that period.

The new architecture of Jetty 12, together with the namespace break of jakarta.* and the removal of deprecated features in Servlet 6.0, has allowed for a big clean-out of legacy implementations and updates to the latest RFCs.

      Legacy support is still provided where possible, either by compliance modes selecting older implementations or just by using the EE8/EE9 Environments.

      Conclusion

      The Webtide team is really excited to bring Jetty 12 to the market. It is so much more than just a Servlet 6.0 container, offering a fabulous basis for web development for decades more to come.  

    • Jetty HTTP/3 Support

      Introduction

      HTTP/3 is the next iteration of the HTTP protocol.

      HTTP/1.0 was released in 1996 and HTTP/1.1 in 1997; HTTP/1.x is a fairly simple textual protocol based on TCP, possibly wrapped in TLS, that experienced over the years a tremendous growth that was not anticipated in the late ’90s.
      With the growth, a few issues in the HTTP/1.x scalability were identified, and addressed first by the SPDY protocol (HTTP/2 precursor) and then by HTTP/2.

The design of HTTP/2, released in 2015 (and also based on TCP), resolved many of the HTTP/1.x shortcomings: the protocol became binary and multiplexed.

The deployment at large of HTTP/2 revealed some issues in the HTTP/2 protocol itself, mainly due to a shift towards mobile devices, where connectivity is less reliable and packet loss more frequent.

      Enter HTTP/3, which ditches TCP for QUIC (RFC 9000) to address the connectivity issues of HTTP/2.
      HTTP/3 and QUIC are inextricably entangled together because HTTP/3 relies heavily on QUIC features that are not provided by any other lower-level protocol.

      QUIC is based on UDP (rather than TCP) and has TLS built-in, rather than layered on top.
      This means that you cannot offload TLS in a front-end server, like with HTTP/1.x and HTTP/2, and then forward the clear-text HTTP/x bytes to back-end servers.

Due to HTTP/3 relying heavily on QUIC features, it is no longer possible to separate the “carrier” protocol (QUIC) from the “semantic” protocol (HTTP). Therefore, reverse proxying should either:

      • decrypt QUIC+HTTP/3, perform some proxy processing, and re-encrypt QUIC+HTTP/3 to forward to back-end servers; or
      • decrypt QUIC+HTTP/3, perform some proxy processing, and re-encode into a different protocol such as HTTP/2 or HTTP/1.x to forward to back-end servers, with the risk of losing features by using older HTTP protocol versions.

The Jetty Project has always been at the forefront of implementing web protocols and standards, and QUIC+HTTP/3 is no exception.

      Jetty’s HTTP/3 Support

      At this time, Jetty’s support for HTTP/3 is still experimental and not recommended for production use.

We decided to use Cloudflare's quiche library because QUIC's use of TLS requires new APIs that are not available in OpenJDK; we could not implement QUIC in pure Java.

We wrapped the native calls to quiche with either JNA or Java 17's Foreign APIs (JEP 412), and retrofitted Jetty's existing I/O library to work with UDP as well.
      A nice side effect of this work is that now Jetty is a truly generic network server, as it can be used to implement any generic protocol (not just web protocols) on either TCP or UDP.

      HTTP/3 was implemented in Jetty 10.0.8/11.0.8 for both the client and the server.
      The implementation is quite similar to Jetty’s HTTP/2 implementation, since the protocols are quite similar as well.

      HTTP/3 on the client is available in two forms:

      • Using the high-level APIs provided by Jetty’s HttpClient with the HTTP/3 specific transport (that only speaks HTTP/3), or with the dynamic transport (that can speak multiple protocols).
      • Using the low-level HTTP/3 APIs provided by Jetty’s HTTP3Client that allow you to deal directly with HTTP/3 sessions, streams and frames.

      HTTP/3 on the server is available in two forms:

      • Using embedded code via HTTP3ServerConnector listening on a specific network port.
      • Using Jetty as a standalone server by enabling the http3 Jetty module.

      In both cases, an incoming HTTP/3 request is processed and forwarded to your standard Web Applications, or to your Jetty Handlers.

      Finally, the HTTP/3 specification at the IETF is still a draft and may change, and we prioritized a working implementation over performance.

    • UnixDomain Support in Jetty

      UnixDomain sockets support was added in Jetty 9.4.0, back in 2015, based on the JNR UnixSocket library.

      The support for UnixDomain sockets with JNR was experimental, and has remained so until now.

      In Jetty 10.0.7/11.0.7 we re-implemented support for UnixDomain sockets based on JEP 380, which shipped with Java 16.

      We have kept the source compatibility at Java 11 and used a little bit of Java reflection to access the new APIs introduced by JEP 380, so that Jetty 10/11 can still be built with Java 11.
      However, if you run Jetty 10.0.7/11.0.7 or later with Java 16 or later, then you will be able to use UnixDomain sockets based on JEP 380.
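As a small illustration of the JEP 380 APIs that this support is built on (plain JDK code on Java 16+, not Jetty's own implementation), a client and an echo server can exchange bytes over a Unix-Domain socket like this (the socket path is arbitrary and hypothetical):

```java
import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnixDomainDemo {
    // Sends a message over a Unix-Domain socket to an in-process echo
    // server and returns the echoed reply.
    public static String roundTrip(String message) throws IOException {
        Path socketPath = Path.of(System.getProperty("java.io.tmpdir"), "jetty-demo.sock");
        Files.deleteIfExists(socketPath);
        UnixDomainSocketAddress address = UnixDomainSocketAddress.of(socketPath);

        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(address);

            // Echo server on a separate thread.
            Thread echo = new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buffer = ByteBuffer.allocate(256);
                    peer.read(buffer);
                    buffer.flip();
                    peer.write(buffer); // echo the bytes back
                } catch (IOException ignored) {
                }
            });
            echo.start();

            try (SocketChannel client = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                client.connect(address);
                client.write(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));
                ByteBuffer reply = ByteBuffer.allocate(256);
                client.read(reply);
                reply.flip();
                return StandardCharsets.UTF_8.decode(reply).toString();
            }
        } finally {
            Files.deleteIfExists(socketPath);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello over a unix socket"));
    }
}
```

Note that the address is a filesystem path rather than a host and port; everything above the channel layer is ordinary NIO, which is what let Jetty's I/O library adopt these sockets with modest changes.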

      The UnixDomain implementation from Java 16 is very stable, so we have switched our own website to use it.
      The page that you are reading right now has been requested by your browser and processed on the server by Jetty using Jetty’s HttpClient to send the request via UnixDomain sockets to our local WordPress.

      We have therefore deprecated the old Jetty modules based on JNR in favor of the new Jetty modules based on JEP 380.

      Note that since UnixDomain sockets are an alternative to TCP network sockets, any TCP-based protocol can be carried via UnixDomain sockets: HTTP/1.1, HTTP/2 and FastCGI.

      We have improved the documentation to detail how to use the new APIs introduced to support JEP 380, for the client and for the server.
      If you are configuring Jetty behind a load balancer (or Apache HTTPD or Nginx) you can now use UnixDomain sockets to communicate from the load balancer to Jetty, as explained in this section of the documentation.

      Enjoy!

    • Jetty & Log4j2 exploit CVE-2021-44228

      The Apache Log4j2 library has suffered a series of critical security issues (see this page at the Log4j2 project).

Eclipse Jetty does not use or depend on Log4j2 by default, so Jetty itself is not vulnerable and no Jetty release is needed to address this CVE.

If you use Jetty embedded (i.e. as a library in your application), and your application uses Log4j2, then you have to take the steps recommended by the CVE to mitigate possible impacts, without worrying about the Jetty version.

      However, Jetty standalone offers an optional Log4j2 Jetty module.
      The following describes how you can test if your Jetty standalone configuration is using Log4j2, and how to upgrade to a fixed Log4j2 version without waiting for a release of Jetty.

      IMPORTANT: You must scan the content of your web applications deployed in Jetty, as they may contain a vulnerable Log4j2 artifact. Consult the CVE details to mitigate the vulnerability.
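As a rough first pass for that scan, here is a small hypothetical helper (not an official Jetty or Log4j tool) that walks a directory tree and flags log4j-core jars older than 2.17.0 by filename. Filename checks alone are not sufficient, since a jar may be renamed or shaded, so treat this only as a starting point:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class Log4jScan {
    private static final Pattern LOG4J_CORE =
        Pattern.compile("log4j-core-(\\d+)\\.(\\d+)\\.(\\d+)\\.jar");

    // True if the version encoded in the filename is below 2.17.0.
    static boolean isVulnerable(String fileName) {
        Matcher m = LOG4J_CORE.matcher(fileName);
        if (!m.matches())
            return false;
        int major = Integer.parseInt(m.group(1));
        int minor = Integer.parseInt(m.group(2));
        return major < 2 || (major == 2 && minor < 17);
    }

    // Walks root and returns the paths of suspicious log4j-core jars.
    static List<Path> scan(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            return files
                .filter(p -> isVulnerable(p.getFileName().toString()))
                .toList();
        }
    }
}
```

For deployed web applications, remember to scan inside WEB-INF/lib of each war as well as any exploded directories.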

      Eclipse Jetty 9

      You can see your configuration of Jetty with a --list-modules command in your $JETTY_BASE directory:

      $ java -jar $JETTY_HOME/start.jar --list-modules
      Enabled Modules:
      ================
      0) bytebufferpool transitive provider of bytebufferpool for server
      init template available with --add-to-start=bytebufferpool
      1) log4j2-api transitive provider of log4j2-api for slf4j-log4j2
      2) resources transitive provider of resources for log4j2-impl
      3) log4j2-impl transitive provider of log4j2-impl for slf4j-log4j2
      4) slf4j-api transitive provider of slf4j-api for slf4j-log4j2
      5) slf4j-log4j2 transitive provider of slf4j-log4j2 for logging-log4j2
      6) logging-log4j2 ${jetty.base}/start.d/logging-log4j2.ini
      7) threadpool transitive provider of threadpool for server
      init template available with --add-to-start=threadpool
      8) server ${jetty.base}/start.d/server.ini
      9) http ${jetty.base}/start.d/http.ini

      Here you can see that the logging-log4j2 module is explicitly enabled and that it transitively depends on log4j2-api.
      The following command will show what version of the library is being used:

      $ java -jar $JETTY_HOME/start.jar --list-config
      Jetty Server Classpath:
      -----------------------
      Version Information on 13 entries in the classpath.
      Note: order presented here is how they would appear on the classpath.
      changes to the --module=name command line options will be reflected here.
      0: 2.14.0 | ${jetty.base}/lib/log4j2/log4j-api-2.14.0.jar
      1: (dir) | ${jetty.base}/resources
      2: 3.4.2 | ${jetty.base}/lib/log4j2/disruptor-3.4.2.jar
      3: 2.14.0 | ${jetty.base}/lib/log4j2/log4j-core-2.14.0.jar
      4: 2.14.0 | ${jetty.base}/lib/log4j2/log4j-slf4j-impl-2.14.0.jar
      5: 1.7.32 | ${jetty.base}/lib/slf4j/slf4j-api-1.7.32.jar
      6: 3.1.0 | ${jetty.home}/lib/servlet-api-3.1.jar
      7: 3.1.0.M0 | ${jetty.home}/lib/jetty-schemas-3.1.jar
      8: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-http-9.4.44.v20210927.jar
      9: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-server-9.4.44.v20210927.jar
      10: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-xml-9.4.44.v20210927.jar
      11: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-util-9.4.44.v20210927.jar
      12: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-io-9.4.44.v20210927.jar

      Here we can see that the vulnerable Log4j2 2.14.0 version is being used.
      The following commands will remove that jar and update the Jetty base to use the fixed Log4j2 2.17.0 jar:

      $ echo 'log4j2.version=2.17.0' >> start.d/logging-log4j2.ini
      $ rm -f lib/log4j2/*
      $ java -jar $JETTY_HOME/start.jar --create-files
      $ java -jar $JETTY_HOME/start.jar --list-config
      Jetty Server Classpath:
      -----------------------
      Version Information on 13 entries in the classpath.
      Note: order presented here is how they would appear on the classpath.
      changes to the --module=name command line options will be reflected here.
      0: 2.17.0 | ${jetty.base}/lib/log4j2/log4j-api-2.17.0.jar
      1: (dir) | ${jetty.base}/resources
      2: 3.4.2 | ${jetty.base}/lib/log4j2/disruptor-3.4.2.jar
      3: 2.17.0 | ${jetty.base}/lib/log4j2/log4j-core-2.17.0.jar
      4: 2.17.0 | ${jetty.base}/lib/log4j2/log4j-slf4j-impl-2.17.0.jar
      5: 1.7.32 | ${jetty.base}/lib/slf4j/slf4j-api-1.7.32.jar
      6: 3.1.0 | ${jetty.home}/lib/servlet-api-3.1.jar
      7: 3.1.0.M0 | ${jetty.home}/lib/jetty-schemas-3.1.jar
      8: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-http-9.4.44.v20210927.jar
      9: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-server-9.4.44.v20210927.jar
      10: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-xml-9.4.44.v20210927.jar
      11: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-util-9.4.44.v20210927.jar
      12: 9.4.44.v20210927 | ${jetty.home}/lib/jetty-io-9.4.44.v20210927.jar

      Eclipse Jetty 10 & 11

      You can see your configuration of Jetty with a --list-modules command in your $JETTY_BASE directory:

      $ java -jar $JETTY_HOME/start.jar --list-modules
      Enabled Modules:
      ----------------
      0) resources transitive provider of resources for logging-log4j2
      1) logging/slf4j dynamic dependency of logging-log4j2
      transitive provider of logging/slf4j for logging-log4j2
      2) logging-log4j2 ${jetty.base}/start.d/logging-log4j2.ini
      3) bytebufferpool transitive provider of bytebufferpool for server
      init template available with --add-module=bytebufferpool
      4) threadpool transitive provider of threadpool for server
      init template available with --add-module=threadpool
      5) server transitive provider of server for http
      init template available with --add-module=server
      6) http ${jetty.base}/start.d/http.ini
      

Here you can see that the logging-log4j2 module is explicitly enabled and that it transitively pulls in the Log4j2 libraries. The following command will show what version of the library is being used:

      $ java -jar $JETTY_HOME/start.jar --list-config
      Properties:
      -----------
      java.version = 14.0.2
      java.version.major = 14
      java.version.micro = 2
      java.version.minor = 0
      java.version.platform = 14
      jetty.base = /tmp/test
      jetty.base.uri = file:///tmp/test
      jetty.home = /opt/jetty-home-11.0.7
      jetty.home.uri = file:///opt/jetty-home-11.0.7
      jetty.webapp.addServerClasses = org.apache.logging.log4j.,org.slf4j.
      log4j.version = 2.14.1
      runtime.feature.alpn = true
      slf4j.version = 2.0.0-alpha5

      Here we can see that the vulnerable Log4j2 2.14.1 version is being used.
      The following commands will remove that jar and update the Jetty base to use the fixed Log4j2 2.17.0 jar:

      $ echo 'log4j.version=2.17.0' >> start.d/logging-log4j2.ini
      $ rm -f lib/logging/*
      $ java -jar $JETTY_HOME/start.jar --create-files
      $ java -jar $JETTY_HOME/start.jar --list-config
      Properties:
      -----------
      java.version = 14.0.2
      java.version.major = 14
      java.version.micro = 2
      java.version.minor = 0
      java.version.platform = 14
      jetty.base = /tmp/test
      jetty.base.uri = file:///tmp/test
      jetty.home = /opt/jetty-home-11.0.7
      jetty.home.uri = file:///opt/jetty-home-11.0.7
      jetty.webapp.addServerClasses = org.apache.logging.log4j.,org.slf4j.
      log4j.version = 2.17.0
      runtime.feature.alpn = true
      slf4j.version = 2.0.0-alpha5
      
      Jetty Server Classpath:
      -----------------------
      Version Information on 11 entries in the classpath.
      Note: order presented here is how they would appear on the classpath.
      changes to the --module=name command line options will be reflected here.
      0: (dir) | ${jetty.base}/resources
      1: 2.0.0-alpha5 | ${jetty.home}/lib/logging/slf4j-api-2.0.0-alpha5.jar
      2: 2.17.0 | ${jetty.base}/lib/logging/log4j-slf4j18-impl-2.17.0.jar
      3: 2.17.0 | ${jetty.base}/lib/logging/log4j-api-2.17.0.jar
      4: 2.17.0 | ${jetty.base}/lib/logging/log4j-core-2.17.0.jar
      5: 5.0.2 | ${jetty.home}/lib/jetty-jakarta-servlet-api-5.0.2.jar
      6: 11.0.7 | ${jetty.home}/lib/jetty-http-11.0.7.jar
      7: 11.0.7 | ${jetty.home}/lib/jetty-server-11.0.7.jar
      8: 11.0.7 | ${jetty.home}/lib/jetty-xml-11.0.7.jar
      9: 11.0.7 | ${jetty.home}/lib/jetty-util-11.0.7.jar
      10: 11.0.7 | ${jetty.home}/lib/jetty-io-11.0.7.jar

      Conclusions

      It is recommended that you update any usage of Log4j2 immediately.

      Once you upgrade your version of Jetty, you will need to edit the start.d/logging-log4j2.ini file to remove the explicit setting of the Log4j2 version, so that you may use newer Log4j2 versions.

    • The Jetty Performance Effort

One can only improve what can be reliably measured. To assert that Jetty's performance is as good as it can be, that it doesn't degrade over time, and to facilitate future optimization work, we need to be able to reliably measure its performance.

      The primary goal

The Jetty project wanted an automated performance test suite. Every now and then some performance measurements were done, with ad-hoc tools and a lot of manual steps. In the past few months an effort has been made to come up with an automated performance test suite that could help us with the above goals and more, such as making it easy to visualize the performance characteristics of the tested scenarios.

We have been working on and off on such a test suite over the past few months. The primary goal was to write a reliable, fully automated test that can be used to measure, understand and compare performance over time.

      A basic load-testing scenario

      A test must be stable over time, and the same is true for performance tests: these ought to report stable performance over time to be considered repeatable. Since this is already a challenge in itself, we decided to start with the simplest possible scenario that is limited in realism but easy to grasp and still useful to get a quick overview of the server’s overall performance.

      The basis of that scenario is a simple HTTPS (i.e.: HTTP/1.1 over TLS) GET on a single resource that returns a few bytes of in-memory hard-coded data. To avoid a lot of complexity, the test is going to run on dedicated physical machines that are hosted in an environment entirely under our control. This way, it is easy to assert what kind of performance they’re capable of, that the performance is repeatable, that those machines are not doing anything else, that the network between them is capable enough and not overloaded, and so on.

      Load, don’t strangle

      As recommended in the Jetty Load Generator documentation, to get meaningful measurements we want one machine running Jetty (the server), one generating a fixed 100 requests/s load (the probe) and four machines each generating a fixed 60K requests/s load (the loaders). This setup is going to load Jetty with around 240K (4 loaders doing 60K each) requests per second, which is a good figure given the hardware we have: it was chosen based on the fact that it is enough traffic to get the server machine to burn around 50% of its total CPU time, i.e.: loading but not strangling it. The way we found this figure simply was by trial and error.

Choosing a load that will not push the server to a constant 100% CPU is important: while a test that applies the heaviest possible load does have its uses, such a test is not a load test but a limit test. A limit test is good for figuring out how software behaves under a load too heavy for the hardware it runs on, for instance to make sure that it degrades gracefully instead of crashing and burning into flames when a certain limit is reached. But such a test is of very limited use for figuring out how fast your software responds under a manageable (i.e. normal) load, which is what we are most commonly interested in.

      Planning the scenario

The server's code is pretty easy, since it's just about setting up Jetty: configuring the connector, the SSL context and the test handler is basically all it takes. For the loaders, the Jetty Load Generator is meant for exactly that task, so it's again fairly easy to write this code using that library. The same is true for the probe, as the Jetty Load Generator can be used for it too and can be configured to record each request's latency. We want to run like this for three minutes to get a somewhat realistic idea of how the server behaves under a flat load.

Deploying and running a test over multiple machines can be a daunting task, which is why we wrote the Jetty Cluster Orchestrator, whose job is to make it easy to write some Java code and then distribute, execute and control it on a set of machines, using only the SSH protocol. Thanks to this tool, getting code to run on the six necessary machines can be done from a plain standard JUnit test.

So we basically have three methods that we run across the six machines:

      void startServer() { ... }
      
      void runProbeGenerator() { ... }
      
      void runLoadGenerator() { ... }

We also need a warmup phase during which the test runs but no recording is made. The Jetty Load Generator is configured with a duration, so the original three-minute duration has to grow by the warmup duration. We decided to go with one minute of warmup, so the total load generation duration is now four minutes, and both runProbeGenerator() and runLoadGenerator() run for four minutes each. After the first minute, a flag is flipped to indicate the end of the warmup phase and make the recording start. Once runProbeGenerator() and runLoadGenerator() return, the test is over: the server is stopped, then the recordings are collected and analyzed.
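In outline, the warmup-flag handshake can be sketched like this (a toy, self-contained version with hypothetical names and drastically shortened durations, not the actual test-suite code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class WarmupRecordingDemo {
    // Runs a load loop for totalMillis, flipping `recording` on after
    // warmupMillis, and records per-iteration latencies only after warmup.
    static List<Long> run(long warmupMillis, long totalMillis) throws InterruptedException {
        AtomicBoolean recording = new AtomicBoolean(false);
        List<Long> latencies = new ArrayList<>();

        long start = System.nanoTime();
        long warmupEnd = start + warmupMillis * 1_000_000;
        long end = start + totalMillis * 1_000_000;

        while (System.nanoTime() < end) {
            if (!recording.get() && System.nanoTime() >= warmupEnd)
                recording.set(true); // warmup done: start recording

            long before = System.nanoTime();
            Thread.sleep(1); // stand-in for issuing one request
            long latency = System.nanoTime() - before;

            if (recording.get())
                latencies.add(latency);
        }
        return latencies;
    }
}
```

In the real test suite the flag is flipped across machines and the recordings are pulled back over SSH, but the shape of the logic is the same.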

      Summarizing the test

      Here’s a summary of the procedure the test is implementing:

      1. Start the Jetty server on one server machine: call startServer().
      2. Start the Jetty Load Generator with a 100/s throughput on one probe machine: call runProbeGenerator().
      3. Start the Jetty Load Generator with a 60K/s throughput on four load machines: call runLoadGenerator().
      4. Wait one minute for the warmup to be done.
      5. Start recording statistics on all six machines.
      6. Wait three minutes for the run to be done.
      7. Stop the Jetty server.
      8. Collect and process the recorded statistics.
      9. (Optional) Perform assertions based on the recorded statistics.

      Results

      It took some iterations to arrive at the above scenario and to get it to run repeatably. Once we were confident that the test’s reported performance figures could be trusted, we started seriously analyzing our latest release (Jetty 10.0.2 at the time) with it.

      We quickly found a performance problem with a stack trace being generated on the fast path, thanks to the Async Profiler flame graph that is generated for each machine on every run. Issue #6157 was opened to track this problem; it has been solved and the fix made it into Jetty 10.0.4.

      After spending more time looking at the reported performance, we noticed that the ByteBuffer pool we use by default is heavily contended and was reported as a major time consumer by the generated flame graphs. Issue #6379 was opened to track this. A quick investigation of that code showed that minor modifications could provide an appreciable performance boost, which made it into Jetty 10.0.6.

      While working on our backlog of general cleanups and improvements, issue #6322 made it to the top of the pile. Investigating it, it became apparent that we could improve the ByteBuffer pool a step further by adopting the RetainableByteBuffer interface everywhere in the input path and slightly modifying its contract, in a way that enabled us to write a much more performant ByteBuffer pool. This work was released as part of Jetty 10.0.7.

      Current status of Jetty’s performance

      Here are a few figures to give you an idea of what Jetty can achieve: while our test server (powered by a 16-core Intel Core i9-7960X) is under a load of 240,000 HTTPS requests per second, the probe measured that most of the time 99% of its own HTTPS requests were served in less than 1 millisecond, as can be seen on this graph.

      Thanks to the collected measurements, we could add performance-related assertions to the test and run it regularly against the 10.0.x and 11.0.x branches to make sure performance does not unknowingly degrade over time. We now run the same test over HTTP/1.1 in clear text and over TLS, as well as HTTP/2 in clear text and over TLS.

      The test also works against the 9.4.x branch, but we do not yet have assertions for it: that branch has a different performance profile, so a different load profile is needed and different performance figures are to be expected. This has yet to happen, but it is on our to-do list.

      More test scenarios will be added to the suite over time as we see fit: to measure certain load scenarios we deem important, to cover particular aspects or features, or for any other reason we might want to measure performance and ensure its stability over time.

      In the end, making Jetty as performant as possible and continuously optimizing it has always been on Webtide’s mind and that trend will continue in the future!

    • Eclipse Jetty Servlet Survey

      This short five-minute survey is being presented to the Eclipse Jetty user community to validate conjectures the Jetty developers have about how users will leverage Jakarta EE servlets and the Jetty project. We have some features we are gauging interest in before supporting them in Jetty 12, and your responses will help shape its forthcoming release.

      We will summarize results in a future blog.

    • Less is More? Evolving the Servlet API!

      With the release of the Servlet API 5.0 as part of Eclipse Jakarta EE 9.0, the standardization process has completed its move from the now-defunct Java Community Process (JCP) to being fully open source at the Eclipse Foundation, including the new Jakarta EE Specification Process (JESP) and the transition of the APIs from the javax.* to the jakarta.* namespace.  The move represents a huge amount of work from many parties, but ultimately it was all meta work: the Servlet 5.0 API is identical to the 4.0 API in all regards but name, licenses, and process, i.e. nothing functional has changed.

      But with the transition behind us, the Servlet API project is now free to develop the standard into a 5.1 or 6.0 release.  So in this blog I will put forward my ideas for how we should evolve the Servlet specification: specifically, I think that before we add new features to the API, it is time to remove some.

      Backward Compatibility

      Version 1.0 of the Servlet API was created in 1997, and it is amazing that over two decades later a Servlet written against that version should still run in the very latest EE container.  So why, with such a great backward-compatibility record, should we even contemplate introducing breaking changes in future Servlet API specifications?  Let’s consider some of the reasons a developer might choose to use EE Servlets over other available technologies:

      Performance
      Not all web applications need high performance, and when they do, it is seldom the Servlet container itself that is the bottleneck.  Yet pure performance remains a key selection criterion for containers, as developers either wish to keep open the future possibility of high request rates or need every spare cycle available to help their application meet an acceptable quality of service. There is also the environmental impact: the carbon footprint of unnecessary cycles wasted across the trillions upon trillions of HTTP requests executed.  Thus application containers always compete on performance, but unfortunately many of the features added over the years have had detrimental effects on overall performance, as they often break the “No Taxation without Representation” principle: all requests should not pay a cost for a feature used by less than 1% of them.
      Features
      Developers seek to have current best-practice features available in their container.  This may be as simple as moving from byte[] to ByteBuffers or Collections, or it may be more fundamental integration of things such as dependency injection, coding by convention, asynchronous or reactive styles, etc.  The specification has done a reasonable job supporting such features over the years, but mistakes have been made and some features now clash, causing ambiguity and complexity. Ultimately, feature integration can be an O(N²) problem, so reducing or simplifying existing features can greatly reduce the complexity of introducing new ones.
      Portability
      The availability of multiple implementations of the Servlet specification is a key selling point.  However, the very same poor integration of many features has resulted in too many dark corners of the specification where the expected behavior of a container is simply not defined, so portability is by no means guaranteed.  Too often we find ourselves needing to be bug-for-bug compatible with other implementations rather than following the actual specification.
      Familiarity
      Any radical departure from the core Servlet API will force developers away from what they know and make them evaluate alternatives.  But there are many non-core features in the API, and this blog will make the case that some features can be removed and/or simplified while hardly being noticed by the bulk of applications.  My aim with this blog is that your typical Servlet developer will think: “why is he making such a big fuss about something I didn’t know was there?”, whilst your typical Servlet container implementer will think: “Exactly! That feature is such a PITA!!!”.

      If the Servlet API is to remain relevant, it needs to be able to compete with state-of-the-art HTTP servers that do not carry decades of EE legacy.  Legacy can be both a strength and a weakness, and I believe now is the time to focus on the former.  The namespace break from javax.* to jakarta.* has already introduced a discontinuity in backward compatibility.  Keeping 5.0 identical in all but name to 4.0 was the right thing to do to support automatic porting of applications.  However, it has also given developers a reason to consider alternatives, so now is the time to act to ensure that Servlet 6.0 is a good basis for the future of EE Servlets.

      Getting Cross about Cross-Context Dispatch

      Let’s just all agree upfront, without going into the details, that cross-context dispatch is a bad thing. For the purposes of the rest of this blog, I’m ignoring the many issues of cross-context dispatch.  I’ll just say that every issue I will discuss below becomes even more complex when cross-context dispatch is considered, as it introduces: additional class loaders; different session values in the same session ID space; different authentication realms; authorization bypass. Don’t even get me started on the needless mind-bending complexities of a context that forwards to another then forwards back to the original…

      Modern web applications are now often broken up into many microservices, so the concept of one webapp invoking another is not in itself bad, but the idea that those services are co-located in the same container instance is neither a general nor a flexible assumption. By all means, the Servlet API should support a mechanism to forward to or include other resources, but ideally this should be done in a way that works equally for co-resident, co-located, and remote resources.

      So let’s just assume cross-context dispatch is already dead.

      Exclude Include

      The concept of including another resource in a response should be straightforward, but the specification of RequestDispatcher.include(...) is just bizarre!

      @WebServlet(urlPatterns = {"/servletA/*"})
      public static class ServletA extends HttpServlet
      {
          @Override
          protected void doGet(HttpServletRequest request,
                               HttpServletResponse response)
              throws ServletException, IOException
          {
              request.getRequestDispatcher("/servletB/infoB").include(request, response);
          }
      }

      The ServletA above includes ServletB in its response.  However, whilst within ServletB, any calls to getServletPath() or getPathInfo() will still return the original values used to call ServletA, rather than the “/servletB” or “/infoB” values for the target Servlet (as is done for a call to forward(...)).  Instead, the container must set an ever-growing list of request attributes to describe the target of the include, and any non-trivial Servlet that acts on the actual URI path must do something like:

      @Override
      protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException
      {
          String servletPath;
          String pathInfo;
          if (request.getAttribute(RequestDispatcher.INCLUDE_REQUEST_URI) != null)
          {
              servletPath = (String)
                  request.getAttribute(RequestDispatcher.INCLUDE_SERVLET_PATH);
              pathInfo = (String)
                  request.getAttribute(RequestDispatcher.INCLUDE_PATH_INFO);
          }
          else
          {
              servletPath = request.getServletPath();
              pathInfo = request.getPathInfo();
          }
          String pathInContext = URIUtil.addPaths(servletPath, pathInfo);
          // ...
      }

      Most Servlets do not do this, so they cannot correctly be the target of an include.  The Servlets that do check correctly are, more often than not, wasting CPU cycles needlessly for the vast majority of requests that are not included.

      Meanwhile, the container itself must set (and then reset) at least 5 attributes, just in case the target resource might look up one of them. Furthermore, the container must disable most of the APIs on the response object during an include, to prevent the included resource from setting headers. So the included Servlet must be trusted to know that it is being included in order to serve the correct resource, but is then not trusted to avoid calling APIs that are inconsistent with that knowledge. Servlets should not need to know the details of how they were invoked in order to generate a response. They should just use the paths and parameters of the request passed to them, regardless of how that response will be used.

      Ultimately, there is no need for an include API given that the specification already has a reasonable forward mechanism that supports wrapping. The ability to include one resource in the response of another can be provided with a basic wrapper around the response:

      @WebServlet(urlPatterns = {"/servletA/*"})
      public static class ServletA extends HttpServlet
      {
          @Override
          protected void doGet(HttpServletRequest request,
                               HttpServletResponse response)
              throws ServletException, IOException
          {
              request.getRequestDispatcher("/servletB/infoB")
                  .forward(request, new IncludeResponseWrapper(response));
          }
      }

      Such a response wrapper could also do useful things, like ensuring the included content type is correct, and could deal with error conditions better than ignoring an attempt to send a 500 status. To assist with porting, include can be deprecated and its implementation replaced with a request wrapper that reinstates the deprecated request attributes:

      @Deprecated
      default void include(ServletRequest request, ServletResponse response)
          throws ServletException, IOException
      {
          forward(new Servlet5IncludeAttributesRequestWrapper(request),
                  new IncludeResponseWrapper(response));
      }
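      What such an IncludeResponseWrapper could do can be sketched as follows. This is a minimal illustration using a stripped-down response interface of my own invention rather than the real HttpServletResponseWrapper; the interface and names are stand-ins, and the fail-fast behavior on errors is one possible design choice, not what the current specification mandates.

```java
// Minimal stand-in for the parts of a response that matter here.
interface Response
{
    void setStatus(int status);
    void setHeader(String name, String value);
    void write(String content);
}

// Wraps a response so that an included resource can contribute content but
// cannot change the status or headers owned by the including resource.
// Unlike the include() rules, which silently ignore such calls, this sketch
// surfaces an attempted server error instead of losing it.
class IncludeResponseWrapper implements Response
{
    private final Response wrapped;

    IncludeResponseWrapper(Response wrapped)
    {
        this.wrapped = wrapped;
    }

    @Override
    public void setStatus(int status)
    {
        if (status >= 500)
            throw new IllegalStateException("included resource failed: " + status);
        // otherwise ignored: the including resource owns the status
    }

    @Override
    public void setHeader(String name, String value)
    {
        // ignored: headers belong to the including resource
    }

    @Override
    public void write(String content)
    {
        wrapped.write(content); // content passes through to the real response
    }
}
```

      With a wrapper like this, the target Servlet can be written as a plain, context-unaware component: it just generates its response, and the wrapper decides which parts of it are allowed to take effect.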

      Dispatch the DispatcherType

      The inclusion of the method Request.getDispatcherType() in the Servlet API is almost an admission of defeat: the specification got it wrong in so many ways that a Servlet is required to know how and/or why it is being invoked in order to function correctly. Why must a Servlet know its DispatcherType? Probably so it knows it has to check the attributes for the corresponding values. But what if an error page is generated asynchronously by including a resource that forwards to another? In such a pathological case, the request will contain attributes for ERROR, ASYNC, and FORWARD, yet the type will just be FORWARD.

      The concept of DispatcherType should be deprecated and it should always return REQUEST.  Backward compatibility can be supported by optionally applying a wrapper that determines the deprecated DispatcherType only if the method is called.
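      How such a backward-compatibility wrapper might work can be sketched with a plain attribute map standing in for the request (the class and method names here are illustrative, not proposed API; the attribute keys are the standard ones the container already sets for forwards, includes and errors):

```java
import java.util.Map;

// Minimal model: the dispatcher type is not tracked eagerly. The core API
// always answers REQUEST; a compatibility wrapper derives the deprecated
// value on demand from attributes the container sets anyway.
class LazyDispatcherTypeRequest
{
    private final Map<String, Object> attributes;

    LazyDispatcherTypeRequest(Map<String, Object> attributes)
    {
        this.attributes = attributes;
    }

    // What the proposal suggests the core API should always return.
    String getDispatcherType()
    {
        return "REQUEST";
    }

    // Deprecated semantics, computed only if a legacy caller actually asks.
    String getDeprecatedDispatcherType()
    {
        if (attributes.containsKey("jakarta.servlet.include.request_uri"))
            return "INCLUDE";
        if (attributes.containsKey("jakarta.servlet.forward.request_uri"))
            return "FORWARD";
        if (attributes.containsKey("jakarta.servlet.error.status_code"))
            return "ERROR";
        return "REQUEST";
    }
}
```

      The cost of determining the type is paid only by legacy code that calls the deprecated method, rather than by every request on every dispatch.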

      Unravelling Wrappers

      A key feature that really needs to be revised is 6.2.2 Wrapping Requests and Responses, introduced in Servlet 2.3. The core concept of wrappers is sound, but the requirement of Wrapper Object Identity (see Object Identity Crisis below) has significant impacts. But first let’s look at a simple example of a request wrapper:

      public static class ForcedUserRequest extends HttpServletRequestWrapper
      {
          private final Principal forcedUser;
          public ForcedUserRequest(HttpServletRequest request, Principal forcedUser)
          {
              super(request);
              this.forcedUser = forcedUser;
          }
          @Override
          public Principal getUserPrincipal()
          {
              return forcedUser;
          }
          @Override
          public boolean isUserInRole(String role)
          {
              return forcedUser.getName().equals(role);
          }
      }

      This request wrapper overrides the existing getUserPrincipal() and isUserInRole(String) methods to force a user identity.  The wrapper can be applied in a filter or in a Servlet as follows:

      @WebServlet(urlPatterns = {"/servletA/*"})
      public static class ServletA extends HttpServlet
      {
          @Override
          protected void doGet(HttpServletRequest request, HttpServletResponse response)
              throws ServletException, IOException
          {
              request.getServletContext()
                  .getRequestDispatcher("/servletB" + request.getPathInfo())
                  .forward(new ForcedUserRequest(request, new UserPrincipal("admin")),
                           response);
          }
      }

      Such wrapping is an established pattern in many APIs and is mostly without significant problems. For Servlets there are some issues: it should be better documented whether the wrapped user identity is propagated if ServletB makes any EE calls (I think not?); and some APIs have become too complex to sensibly wrap (e.g. ServletInputStream with non-blocking IO). But even with these issues, there are good, safe uses of wrapping to override existing methods.

      Object Identity Crisis!

      The Servlet specification allows for wrappers to do more than just override existing methods! In 6.2.2, the specification says that:

      “… the developer not only has the ability to override existing methods on the request and response objects, but to provide new API… “

      So the example above could introduce new API to access the original user principal:

      public static class ForcedUserRequest extends HttpServletRequestWrapper
      {
          // ... getUserPrincipal & isUserInRole as above
          public Principal getOriginalUserPrincipal()
          {
              return super.getUserPrincipal();
          }
          public boolean isOriginalUserInRole(String role)
          {
              return super.isUserInRole(role);
          }
      }
      

      In order for targets to be able to use these new APIs, they must be able to downcast the passed request/response to the known wrapper type:

      @WebServlet(urlPatterns = {"/servletB/*"})
      public static class ServletB extends HttpServlet
      {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
              throws ServletException, IOException
          {
              ForcedUserRequest forced = (ForcedUserRequest)req;
              resp.getWriter().printf("user=%s orig=%s wasAdmin=%b%n",
                  req.getUserPrincipal(),
                  forced.getOriginalUserPrincipal(),
                  forced.isOriginalUserInRole("admin"));
          }
      }

      This downcast will only work if the wrapped object is passed through the container without any further wrapping, thus the specification requires “wrapper object identity”:

      … the container must ensure that the request and response object that it passes to the next entity in the filter chain, or to the target web resource if the filter was the last in the chain, is the same object that was passed into the doFilter method by the calling filter. The same requirement of wrapper object identity applies to the calls from a Servlet or a filter to RequestDispatcher.forward  or  RequestDispatcher.include, when the caller wraps the request or response objects.

      This “wrapper object identity” requirement means that the container is unable to itself wrap requests and responses as they are passed to filters and servlets. This restriction has, directly and indirectly, a huge impact on the complexity, efficiency, and correctness of Servlet container implementations, all for very dubious and redundant benefits:

      Bad Software Components
      In the example above, ServletB is a very bad software component, as it cannot be invoked simply by respecting the signature of its methods. The caller must have a priori knowledge that the passed request will be downcast, and any other caller will be met with a ClassCastException. This defeats the whole point of an API specification like Servlets, which is to define good software components that can be variously assembled according to their API contracts.
      No Multiple Concerns
      It is not possible for multiple concerns to wrap requests/responses: if another filter applies its own wrappers, the downcast will fail. The requirement for “wrapper object identity” forces the application developer to have total control over all aspects of the application, which can be difficult with discovered web fragments and ServletContainerInitializers.
      Mutable Requests
      By far the biggest impact of “wrapper object identity” is that it forces requests to be mutable! Since the container is not allowed to do its own wrapping within RequestDispatcher.forward(...), it must make the original request object mutable so that it can change the value returned from getServletPath() to reflect the target of the dispatch.  It is this mutability that significantly harms complexity, efficiency, and correctness:

      • Mutating the underlying request makes the example implementation of isOriginalUserInRole(String) incorrect, because it calls super.isUserInRole(String), whose result can be mutated if the target Servlet has a run-as configuration.  Thus the method will inadvertently report the target rather than the original role.
      • There is an occasional need for a target Servlet to know details of the original request (often for debugging), but because the original request can mutate, it cannot be consulted directly. Instead, an ever-growing list of request attributes must be set and then cleared on the original request, just in case of the small chance that the target will need one of them.  A trivial forward of a request can thus require at least 12 Map operations just to make the original state available, even though it is very seldom required. Also, some aspects of the event history of a request are not recoverable from the attributes: the isUserInRole method; the original target of an include that does another include.
      • Mutable requests cannot be safely passed to asynchronous processes, because there will be a race between another thread’s call to a request method and any mutations required as the request propagates through the Servlet container (see the “Off to the Races” example below).  As a result, asynchronous applications SHOULD copy all the values from the request that they MIGHT later need… or, more often than not, they don’t: many work by good luck but may fail if timing on the server changes.
      • Using immutable objects can have significant benefits, as the JVM optimizer and GC then know that field values will not change.  By forcing containers to use mutable request implementations, the specification removes the opportunity to access these benefits. Worse still, the complexity of the resulting request objects makes them rather heavyweight, so they are often recycled in object pools to save on the cost of creation. Pooled objects used in asynchronous environments can be a recipe for disaster, as asynchronous processes may reference a request object after it has been recycled into another request.
      Unnecessary
      New APIs can instead be passed on objects set as request attribute values: such objects pass through any further wrappers, coexist with other new APIs in other attributes, and do not require the core request methods to have mutable return values.
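      The attribute-based alternative can be sketched with a plain map standing in for the request (the attribute key, interface and class names here are illustrative; real code would use ServletRequest.setAttribute and getAttribute):

```java
import java.util.HashMap;
import java.util.Map;

// The new API the filter wants to expose to downstream components.
interface OriginalUserApi
{
    String getOriginalUserName();
}

// Instead of downcasting the request to a known wrapper type, the filter
// publishes its new API as an attribute value. The object survives any
// further wrapping, because attribute access is always delegated.
class AttributeApiExample
{
    static final String ATTR = "com.example.originalUser"; // illustrative key

    // The "filter": records the original user before forcing a new one.
    static void applyForcedUser(Map<String, Object> request, String originalUser)
    {
        OriginalUserApi api = () -> originalUser;
        request.put(ATTR, api);
    }

    // The "target servlet": uses the new API if present, no downcast of the
    // request itself is needed, and absence degrades gracefully.
    static String describe(Map<String, Object> request)
    {
        Object api = request.get(ATTR);
        if (api instanceof OriginalUserApi)
            return "orig=" + ((OriginalUserApi)api).getOriginalUserName();
        return "orig=unknown";
    }
}
```

      The instanceof check is on the attribute value, not on the request, so any number of other filters can wrap the request without breaking this contract.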

      The “wrapper object identity” requirement has little utility yet significant impacts on the correctness and performance of implementations. It significantly impairs the implementation of the container for a feature that can be rendered unusable by a wrapper applied by another filter.  It should be removed from Servlet 6.0 and requests passed in by the container should be immutable.

      Asynchronous Life Cycle

      A bit of history

      Jetty continuations were a non-standard feature introduced in Jetty 6 (around 2005) to support thread-less waiting for asynchronous events (typically another HTTP request, e.g. in a chat room). Because the Servlet API had not been designed for thread-safe access from asynchronous processes, the continuations feature did not attempt to let arbitrary threads call the Servlet API.  Instead, it used a suspend/resume model: once the asynchronous wait was over, the request was re-dispatched back into the Servlet container to generate a response, using the normal blocking Servlet API from a well-defined context.

      When the continuation feature was standardized in the Servlet 3.0 specification, the Jetty suspend/resume model was supported with the ServletRequest.startAsync() and AsyncContext.dispatch() methods.  However (against our strongly given advice), a second asynchronous model was also enabled, represented by ServletRequest.startAsync() followed by AsyncContext.complete().  With the start/complete model, instead of generating a response by dispatching a container-managed thread, serialized on the request, into the Servlet container, arbitrary asynchronous threads could generate the response by directly accessing the request/response objects and then call AsyncContext.complete() when the response had been fully generated to end the cycle.  The result is that the entire API, designed not to be thread-safe, was now exposed to concurrent calls. Unfortunately, there was (and is) very little in the specification to help resolve the many races and ambiguities that result.

      Off to the Races

      The primary race introduced by start/complete is that described above caused by mutable requests that are forced by “wrapper object identity”. Consider the following asynchronous Servlet:

      @WebServlet(urlPatterns = {"/async/*"}, asyncSupported = true)
      @RunAs("special")
      public static class AsyncServlet extends HttpServlet
      {
          @Override
          protected void doGet(HttpServletRequest request, HttpServletResponse response)
              throws ServletException, IOException
          {
              AsyncContext async = request.startAsync();
              PrintWriter out = response.getWriter();
              async.start( () ->
              {
                  response.setStatus(HttpServletResponse.SC_OK);
                  out.printf("path=%s special=%b%n",
                             request.getServletPath(),
                             request.isUserInRole("special"));
                  async.complete();
              });
          }
      }

      If invoked via a RequestDispatcher.forward(...), the result produced by this Servlet is a race: will the thread dispatched to execute the lambda run before or after the thread returning from the doGet method (and any applied filters) restores the pre-forward values for the path and role? Not only could the path and role be reported for either the target or the caller, but the race could even split them so they are reported inconsistently.  To avoid this race, asynchronous Servlets must copy any value they may use from the request before starting the asynchronous thread, which is needless complexity and expense. Many Servlets do not actually do this and just rely on happenstance to work correctly.
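      The defensive-copy workaround can be sketched as follows, with a trivially mutable request of my own standing in for the container's (the class and field names are illustrative):

```java
import java.util.concurrent.CompletableFuture;

// A mutable request, as forced by "wrapper object identity": the container
// restores the pre-forward value on the original request object after the
// dispatch returns.
class MutableRequest
{
    volatile String servletPath = "/servletB"; // value while forwarded

    void restoreAfterForward()
    {
        servletPath = "/servletA"; // the container's mutation
    }
}

class DefensiveCopy
{
    // Copy everything the async task may need *before* starting it, so the
    // later mutation by the container thread cannot be observed.
    static String handle(MutableRequest request)
    {
        final String servletPath = request.servletPath; // copied on the container thread
        CompletableFuture<String> response =
            CompletableFuture.supplyAsync(() -> "path=" + servletPath); // uses only the copy
        request.restoreAfterForward(); // container thread mutates the request
        return response.join(); // always reports the forwarded value
    }
}
```

      Because the lambda closes over the copied local rather than the request object, the answer is deterministic regardless of which thread wins the race; reading request.servletPath inside the lambda instead would make the result depend on scheduling.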

      This problem is the result of the start/complete lifecycle of asynchronous Servlets permitting (even encouraging) arbitrary threads to call existing APIs that were not designed to be thread-safe.  The issue is avoided if the request object passed to doGet is immutable: if it is the target of a forward, it will always act as that target. However, there are other issues of the asynchronous lifecycle that cannot be resolved by immutability alone.

      Out of Time

      The example below shows a very typical race that exists in many applications between a timeout and asynchronous processing:

      @Override
      protected void doGet(HttpServletRequest request,
                           HttpServletResponse response) throws IOException
      {
          AsyncContext async = request.startAsync();
          PrintWriter out = response.getWriter();
          async.addListener(new AsyncListener()
          {
              @Override
              public void onTimeout(AsyncEvent asyncEvent) throws IOException
              {
                  response.setStatus(HttpServletResponse.SC_BAD_GATEWAY);
                  out.printf("Request %s timed out!%n", request.getServletPath());
                  out.printf("timeout=%dms%n ", async.getTimeout());
                  async.complete();
              }
          });
          CompletableFuture<String> logic = someBusinessLogic();
          logic.thenAccept(answer ->
          {
              response.setStatus(HttpServletResponse.SC_OK);
              out.printf("Request %s handled OK%n", request.getServletPath());
              out.printf("The answer is %s%n", answer);
              async.complete();
          });
      }

      Because the handling of the result of the business logic may be executed by a non-container-managed thread, it may run concurrently with the timeout callback. The result can be an incorrect status code and/or the response content being interleaved. Even if both lambdas grab a lock to mutually exclude each other, the results are sub-optimal, as both will eventually execute and one will ultimately throw an IllegalStateException, causing extra processing and a spurious exception that may confuse developers/deployers.

      The current specification of the asynchronous life cycle is the worst of both worlds for the container implementation. On one hand, it must implement the complexity of request-serialized events, so that for a given request there can only be a single container-managed thread in service(...), doFilter(...), onWritePossible(), onDataAvailable(), onAllDataRead() and onError(); yet on the other hand, an arbitrary application thread is permitted to concurrently call the API, requiring additional thread-safety complexity. All the benefits of request-serialized threads are lost by the ability of arbitrary other threads to call the Servlet APIs.

      Request Serialized Threads

      The fix is twofold: firstly, make more Servlet APIs immutable (as discussed above) so they are safe to call from other threads; secondly, and most importantly, any API that does mutate state should only be callable from request-serialized threads!  The latter might seem a bit draconian, as it will make the lambda passed to thenAccept in the example above throw an IllegalStateException when it tries to setStatus(int) or call complete(); however, there are huge benefits in complexity and correctness, and only some simple changes are needed to rework existing code.

      Any code running within a call to service(...), doFilter(...), onWritePossible(), onDataAvailable(), onAllDataRead() and onError() will already be in a request-serialized thread, and thus will require no change. It is only code executed by threads managed by other asynchronous components (e.g. the lambda passed to thenAccept() above) that needs to be scoped. There is already the method AsyncContext.start(Runnable), which allows a non-container thread to access the context (i.e. class loader) associated with the request. An additional, similar method AsyncContext.dispatch(Runnable) could be provided that not only scopes the execution but also mutually excludes and serializes it against any call to the methods listed above and any other dispatched Runnable. The Runnables passed may be executed within the scope of the dispatch call if possible (making the thread momentarily container-managed and request-serialized) or scheduled for later execution.  Thus calls that mutate the state of a request can only be made from threads that are serialized.
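      The serializing behavior such a dispatch mechanism needs can be sketched in a few lines of plain Java. This is a simplified illustration of the idea, not Jetty's actual implementation (which handles errors and shutdown more carefully): tasks run one at a time, in submission order, and the submitting thread drains the queue itself when no other task is in flight, so execution may happen within the scope of the dispatch call.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Runs submitted tasks one at a time, in order, with no two tasks ever
// overlapping: the mutual exclusion + serialization that a dispatch(Runnable)
// contract requires. A task submitted while another is running is queued and
// executed by the thread currently draining the queue.
class SerializedExecutor implements Executor
{
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private boolean running;

    @Override
    public void execute(Runnable task)
    {
        synchronized (this)
        {
            tasks.add(task);
            if (running)
                return; // another thread is draining; it will run this task
            running = true;
        }
        // Drain the queue: this thread is momentarily "container managed".
        while (true)
        {
            Runnable next;
            synchronized (this)
            {
                next = tasks.poll();
                if (next == null)
                {
                    running = false;
                    return;
                }
            }
            next.run(); // run outside the lock, but never concurrently
        }
    }
}
```

      The key property is that a task submitted from within another task is deferred rather than run re-entrantly, so state inspected between tasks (such as whether the request has already been completed) is always consistent.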

To make accessing the dispatch(Runnable) method more convenient, an executor with the same semantics can be provided via AsyncContext.getExecutor(). The example above can now be simply updated:

      @Override
      protected void doGet(HttpServletRequest request,
                           HttpServletResponse response) throws IOException
      {
          AsyncContext async = request.startAsync();
          PrintWriter out = response.getWriter();
    async.addListener(new AsyncListener()
    {
        @Override
        public void onTimeout(AsyncEvent asyncEvent) throws IOException
        {
            response.setStatus(HttpServletResponse.SC_BAD_GATEWAY);
            out.printf("Request timed out after %dms%n", async.getTimeout());
            async.complete();
        }

        // No-op implementations so the anonymous AsyncListener compiles.
        @Override
        public void onComplete(AsyncEvent asyncEvent) {}

        @Override
        public void onError(AsyncEvent asyncEvent) {}

        @Override
        public void onStartAsync(AsyncEvent asyncEvent) {}
    });
          CompletableFuture<String> logic = someBusinessLogic();
          logic.thenAcceptAsync(answer ->
          {
              response.setStatus(HttpServletResponse.SC_OK);
              out.printf("The answer is %s%n", answer);
              async.complete();
          }, async.getExecutor());
      }

Because AsyncContext.getExecutor() is used to invoke the business logic consumer, the timeout and business logic response methods are mutually excluded. Moreover, because they are serialized by the container, the request state can be checked between each, so that if the business logic has completed the request, the timeout callback will never be called, even if the underlying timer expires while the response is being generated. Conversely, if the business logic result is generated after the timeout, the lambda to generate the response will never be called. Because both of the tasks in this example call complete, only one of them will ever be executed.

      And Now You’re Complete

      In the example below, a non-blocking read listener has been set on the request input stream, thus a callback to onDataAvailable() has been scheduled to occur at some time in the future.  In parallel, an asynchronous business process has been initiated that will complete the response:

      @Override
      protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException
      {
          AsyncContext async = request.startAsync();
          request.getInputStream().setReadListener(new MyReadListener());
          CompletableFuture<String> logicB = someBusinessLogicB();
          PrintWriter out = response.getWriter();
          logicB.thenAcceptAsync(b ->
          {
              out.printf("The answer for %s is B=%s%n", request.getServletPath(), b);
              async.complete();
          }, async.getExecutor());
      }

The example uses the proposed APIs above so that any call to complete is mutually excluded and serialized with the calls to doGet and onDataAvailable(...). Even so, the current spec is unclear whether complete should prevent any future callback to onDataAvailable(...), or whether the effect of complete() should be delayed until the callback is made (or times out). Given that the actions can now be request-serialized, the spec should require that once a request-serialized thread that has called complete returns, the request cycle is complete and there will be no callbacks other than onComplete(...), thus cancelling any pending non-blocking IO callbacks.

      To Be Removed

      Before extending the Servlet specification, I believe the following existing features should be removed or deprecated:

• Cross-context dispatch is deprecated and the existing methods return null. Once a request is matched to a context, it will only ever be associated with that context, and the getServletContext() method will return the same value no matter what state the request is in.
• The “Wrapper Object Identity” requirement is removed; the request object will be required to be immutable with regard to the methods affected by a dispatch, and may be referenced by asynchronous threads.
• RequestDispatcher.include(...) is deprecated and replaced with utility response wrappers. The existing API can be deprecated and its implementation changed to use a request wrapper to simulate the existing attributes.
• The special attributes for FORWARD, INCLUDE and ASYNC are removed from the normal dispatches. Utility wrappers will be provided that can simulate these attributes if needed for backward compatibility.
• The getDispatcherType() method is deprecated and returns REQUEST, unless a utility wrapper is used to replicate the old behavior.
• Servlet API methods that mutate state will only be callable from request-serialized, container-managed threads and will otherwise throw IllegalStateException. New AsyncContext.dispatch(Runnable) and AsyncContext.getExecutor() methods will provide access to request serialization for arbitrary threads/lambdas/Runnables.

      With these changes, I believe that many web applications will not be affected and most of the remainder could be updated with minimal effort. Furthermore, utility filters can be provided that apply wrappers to obtain almost all deprecated behaviors other than Wrapper Object Identity. In return for the slight break in backward compatibility, the benefit of these changes would be significant simplifications and efficiencies of the Servlet container implementations. I believe that only with such simplifications can we have a stable base on which to build new features into the Servlet specification. If we can’t take out the cruft now, then when?

The plan is to follow this blog up with another proposing some more rationalisation of features (I’m looking at you, sessions and authentication), before another blog proposing some new features and future directions.

    • Introducing Jetty Load Generator

The Jetty Project just released the Jetty Load Generator, a Java 11+ library that can load-test any HTTP server and supports both HTTP/1.1 and HTTP/2.
The project was born in 2016, with specific requirements. At the time, very few load-test tools had support for HTTP/2, but Jetty’s HttpClient did. Furthermore, few tools supported web-page-like resources, which were important to model in order to compare the multiplexed HTTP/2 behavior (up to ~100 concurrent HTTP/2 streams on a single connection) against the HTTP/1.1 behavior (6-8 connections). Lastly, we were more interested in measuring quality of service rather than throughput.
      The Jetty Load Generator generates requests asynchronously, at a specified rate, independently from the responses. This is the Jetty Load Generator core design principle: we wanted the request generation to be constant, and measure response times independently from the request generation. In this way, the Jetty Load Generator can impose a specific load on the server, independently of the network round-trip and independently of the server-side processing time. Adding more load generators (on the same machine if it has spare capacity, or using additional machines) will allow the load against the server to increase linearly.
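This core principle can be illustrated with a minimal sketch in plain Java (hypothetical names; this is an illustration of the idea, not the Load Generator's actual code): requests are fired from a fixed schedule, so a slow response never delays the next request.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of constant-rate load generation: requests fire off a fixed
 * schedule, independently of how long each response takes. Hypothetical
 * names; not the Jetty Load Generator's actual implementation.
 */
public class ConstantRateSketch
{
    public static int generate(int requestsPerSecond, int iterations) throws Exception
    {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger sent = new AtomicInteger();
        long periodNanos = TimeUnit.SECONDS.toNanos(1) / requestsPerSecond;
        // Each tick would asynchronously send one request; response handling
        // happens elsewhere, so it cannot slow down this schedule.
        scheduler.scheduleAtFixedRate(sent::incrementAndGet, 0, periodNanos, TimeUnit.NANOSECONDS);
        // Wait for the configured number of ticks, then stop.
        while (sent.get() < iterations)
            Thread.sleep(1);
        scheduler.shutdownNow();
        return sent.get();
    }
}
```

Because the schedule is independent of the responses, measured response times reflect the server's behavior under a known, constant load rather than the tool's own backpressure.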
Using this core principle, you can set up the load testing by having N load generator loaders that impose the load on the server, and 1 load generator probe that imposes a very light load and measures response times.
      For example, you can have 4 loaders that impose 20 requests/s each, for a total of 80 requests/s seen by the server. With this load on the server, what would be the experience, in terms of response times, of additional users that make requests to the server? This is exactly what the probe measures.
      If the load on the server is increased to 160 requests/s, what would the probe experience? The same response times? Worse? And what are the probe response times if the load on the server is increased to 240 requests/s?
      Rather than trying to measure some form of throughput (“what is the max number of requests/s the server can sustain?”), the Jetty Load Generator measures the quality of service seen by the probe, as the load on the server increases. This is, in practice, what matters most for HTTP servers: knowing that, when your server has a load of 1024 requests/s, an additional user can still see response times that are acceptable. And knowing how the quality of service changes as the load increases.
      The Jetty Load Generator builds on top of Jetty’s HttpClient features, and offers:

      • A builder-style Java API, to embed the load generator into your own code and to have full access to all events emitted by the load generator
• A command-line tool, similar to Apache’s ab or to wrk2, with histogram reporting, for ease of use, scripting, and integration with CI servers.

      Download the latest command-line tool uber-jar from: https://repo1.maven.org/maven2/org/mortbay/jetty/loadgenerator/jetty-load-generator-starter/

      $ cd /tmp
      $ curl -O https://repo1.maven.org/maven2/org/mortbay/jetty/loadgenerator/jetty-load-generator-starter/1.0.2/jetty-load-generator-starter-1.0.2-uber.jar
      

      Use the --help option to display the available command line options:

      $ java -jar jetty-load-generator-starter-1.0.2-uber.jar --help
      

      Then run it, for example:

      $ java -jar jetty-load-generator-starter-1.0.2-uber.jar --scheme https --host your_server --port 443 --resource-rate 1 --iterations 60 --display-stats
      

      You will obtain an output similar to the following:

      ----------------------------------------------------
      -------------  Load Generator Report  --------------
      ----------------------------------------------------
      https://your_server:443 over http/1.1
      resource tree     : 1 resource(s)
      begin date time   : 2021-02-02 15:38:39 CET
      complete date time: 2021-02-02 15:39:39 CET
      recording time    : 59.657 s
      average cpu load  : 3.034/1200
      histogram:
      @                     _  37 ms (0, 0.00%)
      @                     _  75 ms (0, 0.00%)
      @                     _  113 ms (0, 0.00%)
      @                     _  150 ms (0, 0.00%)
      @                     _  188 ms (0, 0.00%)
      @                     _  226 ms (0, 0.00%)
      @                     _  263 ms (0, 0.00%)
      @                     _  301 ms (0, 0.00%)
                         @  _  339 ms (46, 76.67%) ^50%
         @                  _  376 ms (7, 11.67%) ^85%
        @                   _  414 ms (5, 8.33%) ^95%
      @                     _  452 ms (1, 1.67%)
      @                     _  489 ms (0, 0.00%)
      @                     _  527 ms (0, 0.00%)
      @                     _  565 ms (0, 0.00%)
      @                     _  602 ms (0, 0.00%)
      @                     _  640 ms (0, 0.00%)
      @                     _  678 ms (0, 0.00%)
      @                     _  715 ms (0, 0.00%)
      @                     _  753 ms (1, 1.67%) ^99% ^99.9%
      response times: 60 samples | min/avg/50th%/99th%/max = 303/335/318/753/753 ms
      request rate (requests/s)  : 1.011
      send rate (bytes/s)        : 189.916
      response rate (responses/s): 1.006
      receive rate (bytes/s)     : 41245.797
      failures          : 0
      response 1xx group: 0
      response 2xx group: 60
      response 3xx group: 0
      response 4xx group: 0
      response 5xx group: 0
      ----------------------------------------------------
      

      Use the Jetty Load Generator for your load testing, and report comments and issues at https://github.com/jetty-project/jetty-load-generator. Enjoy!

    • A story about Unix, Unicode, Java, filesystems, internationalization and normalization

      Recently, I’ve been investigating some test failures that I only experienced on my own machine, which happens to run some flavor of Linux. Investigating those failures, I ran down a rabbit hole that involves Unix, Unicode, Java, filesystems, internationalization and normalization. Here is the story of what I found down at the very bottom.

      A story about Unix internationalization

      One test that was failing is testAccessUniCodeFile, with the following exception:

      java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: swedish-å.txt
      	at java.base/sun.nio.fs.UnixPath.encode(UnixPath.java:145)
	at java.base/sun.nio.fs.UnixPath.<init>(UnixPath.java:69)
      	at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:279)
      	at java.base/java.nio.file.Path.resolve(Path.java:515)
      	at org.eclipse.jetty.util.resource.FileSystemResourceTest.testAccessUniCodeFile(FileSystemResourceTest.java:335)
      	...
      

This test asserts that Jetty can read files with non-ASCII characters in their names, but the failure happens in Path.resolve, when trying to create the file, before any Jetty code is executed. Why?
      When accessing a file, the JVM has to deal with Unix system calls. The Unix system call typically used to create a new file or open an existing one is int open(const char *path, int oflag, …); which accepts the file name as its first argument.
      In this test, the file name is "swedish-å.txt" which is a Java String. But that String isn’t necessarily encoded in memory in a way that the Unix system call expects. After all, a Java String is not the same as a C const char * so some conversion needs to happen before the C function can be called.
Since Java 9, a Java String is represented internally by a byte[] encoded in either Latin-1 or UTF-16. But how is the C const char * supposed to be represented? Well, that depends. The Unix spec specifies that internationalization depends on environment variables. So the encoding of the C const char * depends on the LANG, LC_CTYPE and LC_ALL environment variables, and the JVM has to transform the Java String to a format determined by these environment variables.
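The conversion failure is easy to reproduce in plain Java by asking a charset whether it can encode the file name (illustrative snippet; the å is written as \u00e5 to keep the source ASCII-safe):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingCheck
{
    // True if every character of the name can be represented in the given
    // charset, i.e. if the JVM's Java-to-C file name conversion can succeed.
    public static boolean encodable(String fileName, Charset charset)
    {
        return charset.newEncoder().canEncode(fileName);
    }

    public static void main(String[] args)
    {
        String name = "swedish-\u00e5.txt"; // "swedish-å.txt"
        System.out.println(encodable(name, StandardCharsets.US_ASCII)); // false
        System.out.println(encodable(name, StandardCharsets.UTF_8));    // true
    }
}
```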
      Let’s have a look at those in a terminal:

      $ echo "LANG=\"$LANG\" LC_CTYPE=\"$LC_CTYPE\" LC_ALL=\"$LC_ALL\""
      LANG="C" LC_CTYPE="" LC_ALL=""
      $
      

C is interpreted by the JVM as a synonym of ANSI_X3.4-1968, which itself is a synonym of US-ASCII.
I’ve explicitly set this variable in my environment, as some commands use it for internationalization and I appreciate that all the command line tools I use then strictly stick to English. For instance:

      $ sudo LANG=C apt-get remove calc
      ...
      Do you want to continue? [Y/n] n
      Abort.
      $ sudo LANG=fr_BE.UTF-8 apt-get remove calc
      Do you want to continue? [O/n] n
      Abort.
      $
      

      Notice the prompt to the question Do you want to continue? that is either [Y/n] (C locale) or [O/n] (Belgian-French locale) depending on the contents of this variable. Up until now, I didn’t know that it also impacted what files the JVM could create or open!
      Knowing that, it is now obvious why the file cannot be created: it is not possible to convert the "swedish-å.txt" Java String to an ASCII C const char * simply because there is no way to represent the å character in ASCII.
      Changing the LANG environment variable to en_US.UTF-8 allowed the JVM to successfully make that Java-to-C string conversion which allowed that test to pass.
Our build has now been changed to force the LC_ALL environment variable (as it is the one that overrides the others) to en_US.UTF-8 before running our tests, to make sure this test passes even on environments with non-Unicode locales.

      A story about filesystem Unicode normalization

There was an extra pair of failing tests that reported the following error:

      java.lang.AssertionError:
      Expected: is <404>
           but: was <200>
      Expected :is <404>
      Actual   :<200>
      

For context, those tests are about creating a file with a non-ASCII name encoded in some way and trying to serve it over HTTP with a request for the same non-ASCII name encoded in a different way. This is needed because Unicode supports different normalization forms, notably Normalization Form Canonical Composition (NFC) and Normalization Form Canonical Decomposition (NFD). For our example string “swedish-å.txt”, this means there are two ways to encode the letter “å”: either U+00E5 LATIN SMALL LETTER A WITH RING ABOVE (NFC) or U+0061 LATIN SMALL LETTER A followed by U+030A COMBINING RING ABOVE (NFD).
Both are canonically equivalent, meaning that a Unicode string with the letter “å” encoded either as NFC or NFD should be considered the same. Is that true in practice?
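The two forms can be observed directly with java.text.Normalizer (illustrative snippet; the å is written as \u00e5 to keep the source ASCII-safe):

```java
import java.text.Normalizer;

public class NormalizationDemo
{
    public static void main(String[] args)
    {
        String name = "swedish-\u00e5.txt"; // "swedish-å.txt"
        String nfc = Normalizer.normalize(name, Normalizer.Form.NFC);
        String nfd = Normalizer.normalize(name, Normalizer.Form.NFD);
        // The code point sequences differ...
        System.out.println(nfc.length()); // 13: å is the single char U+00E5
        System.out.println(nfd.length()); // 14: 'a' followed by U+030A
        System.out.println(nfc.equals(nfd)); // false: equals() compares code units
        // ...but the strings are canonically equivalent: normalizing both to
        // the same form makes them compare equal again.
        System.out.println(Normalizer.normalize(nfd, Normalizer.Form.NFC).equals(nfc)); // true
    }
}
```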
      The failing tests are about creating a file whose name is NFC-encoded then trying to serve it over HTTP with the file name encoded in the URL as NFD and vice-versa.
      When running those tests on MacOS on APFS, the encoding never matters and MacOS will find the file with a NFC-encoded filename when you try to open it with a NFD-encoded canonically equivalent filename and vice-versa.
      When running those tests on Linux on ext4 or Windows on NTFS, the encoding always matters and Linux/Windows will not find the file with a NFC-encoded filename when you try to open it with a NFD-encoded canonically equivalent filename and vice-versa.
      And this is exactly what the tests expect:

      if (OS.MAC.isCurrentOs())
        assertThat(response.getStatus(), is(HttpStatus.OK_200));
      else
        assertThat(response.getStatus(), is(HttpStatus.NOT_FOUND_404));
      

      What I discovered is that when running those tests on Linux on ZFS, the encoding sometimes matters and Linux may find the file with a NFC-encoded filename when you try to open it with a NFD-encoded canonically equivalent filename and vice-versa, depending upon the ZFS normalization property; quoting the manual:

      normalization = none | formC | formD | formKC | formKD
          Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified, names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
      

      So if we check the normalization of the filesystem upon which the test is executed:

      $ zfs get normalization /
      NAME                      PROPERTY       VALUE          SOURCE
      rpool/ROOT/nabo5t         normalization  formD          -
      $
      

      we can understand why the tests fail: due to the normalization done by ZFS, Linux can open the file given canonically equivalent filenames, so the test mistakenly assumes that Linux cannot serve this file. But if we create a new filesystem with no normalization property:

      $ zfs get normalization /unnormalized/test/directory
      NAME                      PROPERTY       VALUE          SOURCE
      rpool/unnormalized        normalization  none           -
      $
      

      and run a copy of the tests from it, the tests succeed.
So we’ve adapted both tests to detect whether the filesystem supports canonical equivalence, and based the assertion on that detection instead of hardcoding which OS behaves in which way.
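Such a detection can be sketched as follows (a hypothetical helper, not the actual Jetty test code): create a file under the target directory with an NFC-encoded name, then probe whether the NFD spelling resolves to the same file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.text.Normalizer;

public class CanonicalEquivalenceCheck
{
    /**
     * Detects whether the filesystem behind {@code dir} treats canonically
     * equivalent file names as the same file. Hypothetical helper, not the
     * actual Jetty test code.
     */
    public static boolean supportsCanonicalEquivalence(Path dir) throws IOException
    {
        String nfc = Normalizer.normalize("probe-\u00e5.txt", Normalizer.Form.NFC);
        String nfd = Normalizer.normalize("probe-\u00e5.txt", Normalizer.Form.NFD);
        Path nfcFile = dir.resolve(nfc);
        Files.writeString(nfcFile, "probe");
        try
        {
            // If the NFD spelling resolves to the NFC-named file, the
            // filesystem normalizes names on comparison (e.g. APFS, or ZFS
            // with normalization=formC/formD); otherwise names are compared
            // byte-wise (e.g. ext4, NTFS).
            return Files.exists(dir.resolve(nfd));
        }
        finally
        {
            Files.deleteIfExists(nfcFile);
        }
    }
}
```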

    • Community Projects & Contributors Take on Jakarta EE 9

With the recent release of Jakarta EE 9, the future for Java has never been brighter. In addition to headline projects moving forward into the new jakarta.* namespace, there has been a tremendous amount of work done throughout the community to stay at the forefront of the changing landscape. These efforts are the summation of hundreds of hours by just as many developers and highlight the vibrant ecosystem in the Jakarta workspace.
      The Jakarta EE contributors and committers came together to shape the 9 release. They chose to create a reality that benefits the entire Jakarta EE ecosystem. Sometimes, we tend to underestimate our influence and the power of our actions. Now that open source is the path of Jakarta EE, you, me, all of us can control the outcome of this technology. 
      Such examples that are worthy of emulation include the following efforts. In their own words:

Eclipse Jetty – The Jetty project recently released Jetty 11, which has worked towards full compatibility with Jakarta EE 9 (Servlet, JSP, and WebSocket). We are driven by a mission statement of “By Developers, For Developers”, and the Jetty team has worked since the announcement of the so-called “Big Bang” approach to move Jetty entirely into the jakarta.* namespace. Not only did this position Jetty as a platform for other developers to push their products into the future, but it also allowed the project to quickly adapt to innovations that are sure to come.

[Michael Redlich] The Road to Jakarta EE 9, an InfoQ news piece, was published this past October to highlight the efforts by Kevin Sutter, Jakarta EE 9 Release Lead at IBM, and to describe the progress made this past year in making this new release a reality. The Java community should be proud of their contributions to Jakarta EE 9, especially implementing the “big bang,” and discussions have already started for Jakarta EE 9.1 and Jakarta EE 10. The Q&A with Kevin Sutter in the news piece covers the certification and voting process for all the Jakarta EE specifications, plans for upcoming releases of Jakarta EE, and how Java developers can get involved in contributing to Jakarta EE. Personally, I am happy to have been involved in Jakarta EE, having authored 14 Jakarta EE-related InfoQ news items over the past three years, and I look forward to taking my Jakarta EE contributions to the next level. I have committed to contributing to the Jakarta NoSQL specification, which is currently under development. The Garden State Java User Group (in which I serve as one of its directors) has also adopted Jakarta NoSQL. I challenge anyone who still thinks that the Java programming language is dead, because these past few years have been an exciting time to be part of this amazing Java community!

WildFly 22 Beta1 contains a tech preview EE 9 variant called WildFly Preview that you can download from the WildFly download page. The WildFly team is still working on passing the needed (Jakarta EE 9) TCKs (watch for updates via the wildfly.org site). WildFly Preview includes a mix of native EE 9 APIs and implementations (i.e. ones that use the jakarta.* namespace) along with many APIs and implementations from EE 8 (i.e. ones that use the javax.* namespace). This mix of namespaces is made possible by using the Eclipse community’s excellent Eclipse Transformer project to bytecode-transform legacy EE 8 artifacts to EE 9 when the server is provisioned. Applications that are written for EE 8 can also run on WildFly Preview, as a similar transformation is performed on any deployments managed by the server.

Apache TomEE is a Jakarta EE application server based on Apache Tomcat. The project’s main focus has been the Web Profile up until Jakarta EE 8. However, with Jakarta EE 9 and some parts being optional or pruned, the project is considering the full platform for the future. TomEE is so far a couple of tests short (99% coverage) of reaching compatibility with Jakarta EE 8 (see Introducing TCK Work and how it helps the community to jump into the effort). For Jakarta EE 9, the community decided to pick a slightly different path than other implementations. We have already produced a couple of Apache TomEE 9 milestones for Jakarta EE 9 based on a customised version of the Eclipse Transformer. It fully supports the new jakarta.* namespace. Not to forget, the project also implements MicroProfile.

Open Liberty is in the process of completing a Compatible Implementation for Jakarta EE 9. For several months, the Jakarta EE 9 implementation has been rolling out via the “monthly” Betas. Both the Platform and Web Profile TCK testing efforts are progressing very well, with 99% success rates. The expectation is to declare one (or more) of the early Open Liberty 2021 Betas a Jakarta EE 9 Compatible Implementation. Due to Open Liberty’s flexible architecture and “zero migration” goal, customers can be assured that their current Java EE 7, Java EE 8, and Jakarta EE 8 applications will continue to execute without any changes required to the application code or server configuration. But, with a simple change to their server configuration, customers can easily start experimenting with the new “jakarta” namespace in Jakarta EE 9.

Jelastic PaaS is the first cloud platform to have made the Jakarta EE 9 release available to customers across a wide network of distributed hosting service providers. For the last several months the Jelastic team has been actively integrating Jakarta EE 9 within the cloud platform, and in December made an official release. The certified container images with the following software stacks are already updated and available to customers across over 100 data centers: Tomcat, TomEE, GlassFish, WildFly and Jetty. Jelastic PaaS provides an easy way to create environments with the new Jakarta EE 9 application servers for deep testing, compatibility checks and running live production environments. It is also now possible to redeploy existing containers from old versions to the newest ones, in order to reduce the necessary migration effort and to expedite adoption of cutting-edge cloud-native tools and products.


[Amelia Eiras] Pull Request 923 – Jakarta EE 9 Contributors Card is a formidable example of eleven Jakartees coming together to create, innovate and collaborate on an integration feature that ensures no contributor who helped on the Jakarta EE 9 release is forgotten on the new landing page for the EE 9 release. Who chose those contributors? No one. That is the sole point of the existence of PR 923. I chose to lead the work on the PR and worked openly: the day Tomitribe submitted the PR, a message was delivered to the Jakarta EE Working Group forum inviting other Jakartees to provide input on the creation of the new feature. With Triber Andrii, who wrote the code, and the feedback of those involved, the feature is active and used in the EE 9 contributors cards: YOU ROCK WALL!
      The Integration-Feature will be used in future releases.  We hope that it is also adopted by any project, community, or individual in or outside the Eclipse Foundation to say ThankYOU with actions to those who help power & maintain any community. 

• PR logistics: 11 Jakartees came together and produced 116 exchanges that helped merge the code. Thank you, Chris (Eclipse Webmaster), for helping check the INFRA side. The PR’s exchanges led us to choose the activity from 2 GitHub sources: 1) https://github.com/jakartaee/specifications/pulls (all merged pulls) and 2) https://github.com/eclipse-ee4j (all repositories).
• PR timeframe: the contributors’ work accomplished from October 31st, 2019 to November 20th, 2020 was boxed and is frozen. The result is that the Contributors Cards highlight 6 different Jakartees at a time, every 15 seconds. A total of 171 Jakartee contributors (committers and contributors, leveled) belong to the amazing people behind the EE 9 code. While working on that PR, other necessary improvements became obvious. A good example is the visual tweaks PR #952 we submitted, which improved the landing page’s formatting, the cards’ visuals, etc.

Via actions, we chose not to “wait & see”, saving the project budget, but also enabling the openness to tackle things that could otherwise have been dismissed as “nonsense”.

       
In open source, our actions project a temporary part of ourselves, with no exceptions. Those actions affect any ecosystem, positively or negatively. Thank you for taking the time to read this #SharingIsCaring blog.