Andres Almiray (aalmiray) interviewed me at the JCrete unconference.
We spoke about the history of the Jetty project (which is 22 years old – like Java itself), how Jetty has been able to stay on the edge all these years, how contribution to Open Source Projects works these days, and what is the simplest thing that a person can do to contribute to an Open Source Project.
Enjoy the video interview (about 16 mins):
Contributing to Open Source (and Jetty!)
-
CometD and NodeJS, part 2
In our previous blog, we presented the case of a Webtide customer, Genesys, that needed to integrate CometD in NodeJS and how we developed a CometD client capable of running in the NodeJS environment.
In this article we present the other side of the solution, that is, how we implemented the CometD NodeJS Server. Leveraging this, Genesys was able to use the standard CometD JavaScript Client in the browser front-end application to talk to the CometD NodeJS server application, which in turn used the CometD NodeJS Client to talk to the Java CometD server application.
The CometD NodeJS Server is based on the same CometD concepts present in the CometD Java server.
In particular, there is a central object, the CometDServer, that handles HTTP requests and responses provided by the NodeJS environment. The CometDServer object is also a repository for sessions and channels, which are the two primary concepts used in a server-side CometD application. Both sessions and channels emit events that an application can listen to in order to implement the required business logic.
Installing the CometD NodeJS Server is easy:

npm install cometd-nodejs-server
The minimal setup of a CometD NodeJS Server application is the following:
var http = require('http');
var cometd = require('cometd-nodejs-server');

var cometdServer = cometd.createCometDServer();
var httpServer = http.createServer(cometdServer.handle);
httpServer.listen(0, 'localhost', function() {
    // Business logic here.
});

Now you can use the CometD NodeJS Server APIs to be notified when a message arrives on a certain channel:
var channel = cometdServer.createServerChannel('/service/chat');
channel.addListener('message', function(session, channel, message, callback) {
    // Your message handling here.
    // Invoke the callback to signal that handling is complete.
    callback();
});

Further examples of API usage can be found at the CometD NodeJS Server project.
With the CometD NodeJS Client and Server projects, Genesys was able to leverage CometD throughout the whole process chain, from the browser to NodeJS to the Java CometD server. This gave Genesys a consistent API throughout the whole architecture, with the same concepts and a very smooth learning curve for developers.
-
CometD and NodeJS, part 1
In addition to our Lifecycle Support offerings, Webtide is also committed to helping develop new functionality to meet customer needs for the open source projects Webtide supports, CometD and Eclipse Jetty.
Recently Genesys, a global leader in customer experience solutions and one of Webtide’s customers, reached out regarding their usage of CometD, looking for help integrating CometD with NodeJS.
Their architecture had a browser front-end application talking to a NodeJS server application, which in turn talked to a Java CometD server application. Server-side events emitted by the CometD application needed to travel through NodeJS all the way down to the front-end, and the front-end needed a way to register interest for those events.
At the time the CometD project did not have any NodeJS integration, so Genesys partnered with Webtide to develop the integration as a sponsored effort, leveraging our knowledge as the experts behind CometD.
This resulted in two new CometD sub-projects, CometD NodeJS Client and CometD NodeJS Server, and in publishing CometD artifacts in NPM.
The first step was to publish the CometD JavaScript Client to NPM. Starting with CometD 3.1.0, you can now do:

npm install cometd
and have the CometD JavaScript Client available for developing your front-end applications.
However, the CometD JavaScript Client does not run in NodeJS because it assumes a browser environment. In particular, it assumes the existence of the window global object, of the XMLHttpRequest APIs, and of functionality such as HTTP cookie handling.
Initially, rewriting a pure NodeJS CometD client was considered, but discarded as it would have duplicated a lot of code written with years of field experience. It turned out that implementing the parts of the browser environment needed by the CometD JavaScript Client was simpler, and the CometD NodeJS Client was born.
The CometD NodeJS Client implements the minimum requirements to run the CometD JavaScript Client inside a NodeJS environment. It uses the NodeJS HTTP facilities to implement XMLHttpRequest, and exposes a window global object along with a few other features present in a browser environment, such as timers (window.setTimeout(...)) and logging (window.console).
Writing a CometD NodeJS client application is now very simple. First, install the CometD client libraries:

npm install cometd-nodejs-client
npm install cometd
Second, write your application:
require('cometd-nodejs-client').adapt();
var lib = require('cometd');
var cometd = new lib.CometD();
...

Following this framework, Genesys was able to use CometD from within NodeJS to talk to the Java CometD server application and vice versa.
In the next blog we will take a look at the CometD NodeJS Server, which allows the front-end application to talk to the NodeJS server application, therefore using CometD from the front-end application through NodeJS to the Java CometD server.
-
HTTP Trailers in Jetty
HTTP/1.1 and HTTP/2 have the concept of trailers, that is, HTTP headers that can be sent after the message body, in both requests and responses.
In HTTP/1.1, trailers can be sent using the chunked transfer coding, for example in requests (but the same is valid for responses):

POST / HTTP/1.1\r\n
Host: host\r\n
Transfer-Encoding: chunked\r\n
\r\n
A\r\n
0123456789\r\n
0\r\n
Trailer-Name: trailer-value\r\n
Foo: bar\r\n
\r\n
As you can see, HTTP/1.1 allows the trailers to be placed between the indication of the terminal chunk length (0\r\n) and the terminal empty line (\r\n).
In HTTP/2, the situation is similar:

HEADERS - end_stream=false
DATA    - length=10, end_stream=false
HEADERS - end_stream=true
The first HEADERS frame contains the request line and the headers, followed by a DATA frame that does not end the stream yet, followed by a HEADERS frame that contains the trailers and ends the stream.
A typical use of trailers would be to add dynamically generated metadata about the content, for example message integrity checksums.
Another typical use is for applications that stream content: in case of problems during the streaming, they can add trailers with information about what went wrong.
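As a concrete illustration of the checksum use case, a plain-Java sketch (independent of any servlet API; the TrailerChecksum class name is ours): the digest is updated incrementally as the content is streamed out, and the final value – which only exists after the last chunk – is exactly what a trailer can carry.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class TrailerChecksum {
    // Computes a Content-MD5 trailer value incrementally, chunk by
    // chunk, as the content is streamed; the checksum is only known
    // once the last chunk has been processed, hence the trailer.
    public static String contentMD5(byte[][] chunks) {
        try {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            for (byte[] chunk : chunks) {
                // In a real application, each chunk would also be
                // written to the response output stream here.
                digest.update(chunk);
            }
            return Base64.getEncoder().encodeToString(digest.digest());
        } catch (NoSuchAlgorithmException x) {
            throw new AssertionError(x);
        }
    }
}
```

Streaming the content in two chunks or in one yields the same trailer value, so the receiver can verify integrity regardless of how the content was chunked.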
Other protocols such as gRPC make use of trailers, and can therefore be mapped on top of HTTP.
The Servlet APIs, up to version 3.1, do not expose a standard API to access trailers. HTTP trailer APIs are, however, now being discussed for inclusion in Servlet 4.0.
The recently released Jetty 9.4.4.v20170414 includes support for HTTP trailers, for both HTTP/1.1 and HTTP/2, via custom Jetty APIs.
This is how you can use them in a Servlet:

public class TrailerServlet extends HttpServlet {
    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
        Request jettyRequest = (Request)request;

        // Read the content first.
        ServletInputStream input = jettyRequest.getInputStream();
        while (true) {
            int read = input.read();
            if (read < 0) {
                break;
            }
        }

        // Now the request trailers can be accessed.
        HttpFields requestTrailers = jettyRequest.getTrailers();
        // Use the request trailers.

        HttpFields trailers = new HttpFields();
        trailers.put("trailer1", "foo");

        // Set the trailer Supplier to tell the container
        // that there will be response trailers.
        Response jettyResponse = (Response)response;
        jettyResponse.setTrailers(() -> trailers);

        // Write some content and commit the response.
        ServletOutputStream output = response.getOutputStream();
        output.write("foo_bar_baz".getBytes());
        output.flush();

        // Add another trailer.
        trailers.put("trailer2", "bar");

        // Write more content.
        output.write("done".getBytes());

        // Add a last trailer.
        trailers.put("last", "baz");
    }
}

Request trailers will only be available after the request content has been fully read.
For the response trailers, the reason to use a Supplier in the response APIs is to tell the container to use the chunked transfer coding (in the case of HTTP/1.1), even if the response content length is known. In this way, the container can prepare for sending the trailers, and send them once the whole content has been sent.
Try out HTTP trailers in Jetty 9.4.4, and report back on how you use them and how you like them (so that we can make them even better), either on the Jetty mailing lists or in a Jetty GitHub issue (open one just for the discussion).
Enjoy!
-
Jetty, Cookies and RFC6265 Compliance
Starting with Jetty 9.4.3, Jetty will be fully compliant with RFC6265, which introduces changes to cookie handling that may have a significant impact for some users.
Up until now, Jetty has supported Version=1 cookies defined in RFC2109 (and continued in RFC2965), which allows special/reserved characters (control, separator, et al) to be enclosed within double quotes when declared in a Set-Cookie response header.
Example:

Set-Cookie: foo="bar;baz";Version=1;Path="/secur"
This was added to the HTTP response headers using the following calls:
Cookie cookie = new Cookie("foo", "bar;baz");
cookie.setPath("/secur");
response.addCookie(cookie);

This allowed normally non-permitted characters (such as the ; separator found in the example above) to be used as part of a cookie value. With the introduction of RFC6265 (replacing the now obsolete RFC2965 and RFC2109), this use of double quotes to enclose special characters is no longer possible.
This change was made as a reaction to the strict RFC6265 validation rules present in Chrome/Chromium.
As such, users are now required to encode their cookie values to use these characters.
Utilizing javax.servlet.http.Cookie, this can be done as:

Cookie cookie = new Cookie("foo", URLEncoder.encode("bar;baz", "utf-8"));

Starting with Jetty 9.4.3, we will now validate all cookie names and values when they are added to the HttpServletResponse via the addCookie(Cookie) method. If there is something amiss, Jetty will throw an IllegalArgumentException with the details.
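For example, a minimal sketch using only the standard java.net utilities (the CookieValueCodec class name is ours): the ; separator survives the round-trip once encoded.

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class CookieValueCodec {
    public static void main(String[] args) throws Exception {
        String raw = "bar;baz";
        // The ';' separator is not a valid RFC6265 cookie value
        // character, so encode it before storing it in the cookie...
        String encoded = URLEncoder.encode(raw, "utf-8");
        System.out.println(encoded); // bar%3Bbaz
        // ...and decode it when reading the cookie back.
        String decoded = URLDecoder.decode(encoded, "utf-8");
        System.out.println(decoded.equals(raw)); // true
    }
}
```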
Of note, this new addCookie(Cookie) validation is applied via the ServerConnector, and works on HTTP/1.0, HTTP/1.1, and HTTP/2.
Additionally, Jetty has added a CookieCompliance property to the HttpConfiguration object, which can be used to define which cookie policy the ServerConnectors will adhere to. By default, this is set to RFC6265.
In the standard Jetty Distribution, this can be found in the server’s jetty.xml as:

<Set name="cookieCompliance">
    <Call class="org.eclipse.jetty.http.CookieCompliance" name="valueOf">
        <Arg><Property name="jetty.httpConfig.cookieCompliance" default="RFC6265"/></Arg>
    </Call>
</Set>
Or, if you are utilizing the module system in the Jetty distribution, you can set the jetty.httpConfig.cookieCompliance property in the appropriate start INI for your ${jetty.base} (such as ${jetty.base}/start.ini or ${jetty.base}/start.d/server.ini):

## Cookie compliance mode of: RFC6265
# jetty.httpConfig.cookieCompliance=RFC6265
Or, for older Version=1 Cookies, use:
## Cookie compliance mode of: RFC2965
# jetty.httpConfig.cookieCompliance=RFC2965
-
Patch for a Patch!
Are you an Eclipse Jetty user who enjoys contributing to the open source project and wants to let the rest of the world know? Of course you are! As a thank you to our great community, we’ve had some fancy patches made up and have launched a Patch for a Patch program. If you submit a patch to the Jetty project and it is accepted, we will send you a jetty:// iron-on patch that you can attach to your bag, coat, house, pet…etc. Show friends, family and strangers your dedication to the open source community!
If you have submitted a patch in the last year and want to take advantage of this offer, please fill out this form, which will ask for your contact information and a link to the patch you submitted. Supplies are limited! We will ship anywhere worldwide that we can reach for a reasonable amount.

-
CometD 3.1.0 Released
The CometD Project is happy to announce the availability of CometD 3.1.0.
CometD 3.1.0 builds on top of the CometD 3.0.x series, bringing improvements and new features.
You can find a migration guide at the official CometD documentation site.

What’s new in CometD 3.1.0
CometD 3.1.0 now supports HTTP/2.
HTTP/2 support should be transparent for applications: the browser on the client side and the server (such as Jetty) on the server side take care of handling HTTP/2, so that nothing changes for applications.
However, CometD applications may now leverage the fact that the application is deployed over HTTP/2 and remove the limit of only one outstanding long poll per client.
This means that CometD applications that are opened in multiple browser tabs and using HTTP/2 can now have each tab performing the long poll, rather than just one tab.
CometD 3.1.0 brings support for messages containing binary data.
Now that JavaScript has evolved to support binary data types, use cases such as uploading or downloading files or other binary data have become more common.
CometD 3.1.0 allows applications to specify binary data in messages, and the CometD implementation will take care of converting the binary data into the textual format (using the Z85 encoding) required to send the message, and of converting the textual format back into binary data when the message is received.
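To give an idea of what the Z85 encoding does, here is a sketch of the standard Z85 encoding algorithm (illustrative only, not CometD’s internal code): every 4 bytes of binary data become 5 printable characters, for a 25% size overhead, compared to 33% for Base64.

```java
public class Z85 {
    // The 85-character Z85 alphabet, as defined by the ZeroMQ Z85 spec.
    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyz" +
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#";

    // Encodes binary data (whose length must be a multiple of 4)
    // into the Z85 textual format.
    public static String encode(byte[] bytes) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < bytes.length; i += 4) {
            // Accumulate 4 bytes into one unsigned 32-bit value.
            long value = 0;
            for (int j = 0; j < 4; j++) {
                value = value * 256 + (bytes[i + j] & 0xFF);
            }
            // Emit 5 base-85 digits, most significant first.
            long divisor = 85L * 85 * 85 * 85;
            for (int j = 0; j < 5; j++) {
                result.append(ALPHABET.charAt((int)(value / divisor % 85)));
                divisor /= 85;
            }
        }
        return result.toString();
    }
}
```

The well-known Z85 test vector 86 4F D2 6F B5 59 F7 5B encodes to the string "HelloWorld".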
Binary data support is available in both the JavaScript and Java CometD libraries.
In the JavaScript library, several changes have been made to support both the CommonJS and AMD module styles.
CometD 3.1.0 is now also deployed to NPM and Bower.
The package name for both NPM and Bower is cometd; please make sure you filter out all the other variants, such as cometd-jquery, that are not directly managed by the CometD Project.
The CometD JavaScript library has been designed in a way that leverages bindings to JavaScript toolkits such as jQuery or Dojo.
This is because JavaScript toolkits are really good at working around browser quirks/differences/bugs and we did not want to duplicate all those magic workarounds in CometD itself.
In CometD 3.1.0 a new binding is available, for Angular 1. As a JavaScript toolkit, Angular 1 requires tight integration with other libraries that make XMLHttpRequest calls, and the binding architecture of the CometD JavaScript library fits in just nicely.
You can now use CometD from within Angular 1 applications in a way that is very natural for Angular 1 users.
The JavaScript library now also supports vanilla transports. This means that you are not bound to use bindings: you can write applications without using any framework or toolkit, or using just the bare minimum support given by module loaders such as RequireJS or build-time tools such as Browserify or webpack.
Supporting vanilla transports was possible because recent browsers have finally fixed all the quirks and agreed on the XMLHttpRequest events that a poor JavaScript developer should use to write portable-across-browsers code.
A couple of new Java APIs have been added, detailed in the migration guide.

What’s changed in CometD 3.1.0
In the JavaScript library, browser evolution also brought support for window.sessionStorage, so the CometD reload extension now uses the SessionStorage mechanism rather than cookies.
You can find the details on the CometD reload extension documentation.
It is now forbidden to invoke handshake() multiple times without disconnecting in-between, so applications need to ensure that the handshake operation is performed only once.
In order to better support CommonJS, NPM and Bower, the location of the JavaScript files has changed.
Applications will probably need to change paths that were referencing the CometD JavaScript files and bindings as detailed in the migration guide.
Adding support for binary data revealed a mistake in the processing of incoming messages. While this has not been fixed in CometD 3.0.x to avoid breaking existing code, it had to be fixed in CometD 3.1.0 to correctly support binary data.
This change affects only applications that have written custom extensions implementing either BayeuxServer.Extension.send(...) or ServerSession.Extension.send(...). Refer to the migration guide for further details.
CometD 3.1.0 now supports all Jetty versions from the 9.2.x, 9.3.x and 9.4.x series.
While before only the Jetty 9.2.x series was officially supported, now we have decided to support all the above Jetty series to allow CometD users to benefit from bug fixes and performance improvements that come when upgrading Jetty.
Do not mix Jetty versions, however. If you decide to use Jetty 9.3.15, make sure that all the Jetty libraries used in your CometD application reference that Jetty version, and not other Jetty versions.

What’s been removed in CometD 3.1.0
CometD 3.1.0 drops support for Jackson 1.x, since Jackson 2.x is now mainstream.
Server-side parameter allowMultiSessionsNoBrowser has been removed, since sessions not identified by the CometD cookie are no longer allowed, for security reasons.

Conclusions
CometD 3.1.0 is now the mainstream CometD release, and will be the primary focus for development and bug fixes.
CometD 3.0.x enters the maintenance mode, so that only urgent or sponsored fixes will be applied to it, possibly leading to new CometD 3.0.x releases – although these will be rare.
Work on CometD 4.x will start soon, using issue #647 as the basis to review the CometD APIs to be fully non-blocking, and to investigate the possibility of adding backpressure.
-
Thread Starvation with Eat What You Kill
This is going to be a blog of mixed metaphors as I try to explain how we avoid thread starvation when we use Jetty’s eat-what-you-kill[n]The EatWhatYouKill strategy is named after a hunting proverb in the sense that one should only kill to eat. The use of this phrase is not an endorsement of hunting nor killing of wildlife for food or sport.[/n] scheduling strategy.
Jetty has several instances of a computing pattern called Produce-Consume, where a task is run that produces other tasks which need to be consumed. An example of a Producer is the HTTP/1.1 Connection, where the Producer task looks for IO activity on any connection. Each IO event detected is a Consumer task which will handle the IO event (typically an HTTP request). In Java NIO terms, the Producer in this example is running the NIO Selector, and the Consumers are handling the HTTP protocol and the application’s Servlets. Note that the split between producing and consuming can be rather arbitrary, and we have tried to have the HTTP protocol as part of the Producer, but as we have previously blogged, that split has poor mechanical sympathy. The key point about the Produce-Consume pattern for Jetty is that we use it when the produced tasks can be executed in any order or in parallel: HTTP requests from different connections, or HTTP/2 frames from different streams.

Eat What You Kill
Mechanical sympathy affects not only where the split between producing and consuming lies, but also how the Producer task and the Consumer tasks should be executed (typically by a thread pool), and such considerations can have a dramatic effect on server performance. For example, if one thread produced a task, then it is likely that that CPU’s cache is now hot with all the data relating to the task, and so it is best that the same CPU consumes the task using the hot cache. This could be achieved with a complex core-locking mechanism, but it is far more straightforward to consume the task using the same thread.
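The core of the idea can be sketched as follows (an illustrative toy, not Jetty’s actual EatWhatYouKill implementation, and the class and method names are ours): a thread that produces a task hands further production off to another thread and consumes its own task itself.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;

// Toy sketch of the eat-what-you-kill idea: the thread that
// produces (kills) a task also consumes (eats) it, keeping the
// CPU cache hot, while another thread takes over producing.
class EatWhatYouKill {
    private final ExecutorService pool;
    private final BlockingQueue<Runnable> tasks;

    EatWhatYouKill(ExecutorService pool, BlockingQueue<Runnable> tasks) {
        this.pool = pool;
        this.tasks = tasks;
    }

    void produce() {
        // Produce: look for the next task (in Jetty this would be
        // selecting for IO events).
        Runnable task = tasks.poll();
        if (task == null)
            return;
        // Hand off further production to another thread...
        pool.execute(this::produce);
        // ...and eat what we killed, on this very thread.
        task.run();
    }
}
```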
Jetty has an ExecutionStrategy called Eat-What-You-Kill (EWYK) that has excellent mechanical sympathy properties. We have previously explained this strategy in detail, but in summary it follows the hunter’s ethic[n]The EatWhatYouKill strategy is named after a hunting proverb in the sense that one should only kill to eat. The use of this phrase is not an endorsement of hunting nor killing of wildlife for food or sport.[/n] that one should only kill (produce) something that you intend to eat (consume). This strategy allows a thread to run the producing task only if it is immediately able to run any consumer task that is produced (using the hot CPU cache). To allow other consumer tasks to run in parallel, another thread (if available) is dispatched to do more producing and consuming.

Thread Starvation
EWYK is an excellent execution strategy that has given Jetty significantly better throughput and reduced latency. That said, it is susceptible to thread starvation when it bites off more than it can chew.
The issue is that EWYK works by using the same thread that produced a task to immediately consume the task and it is possible (even likely) that the consumer task will block as it is often calling application code which may do blocking IO or which is set to wait for some other event. To ensure this does not block the entire server, EWYK will dispatch another task to the thread pool that will do more producing.
The problem is that if the thread pool is empty (because all the threads are in blocking application code), then the last non-blocked producing thread may produce a task which it then runs and which also blocks. A task to do more producing will have been dispatched to the thread pool, but as it was generated by the last available thread, the producing task will be waiting in the job queue for an available thread. All the threads are blocked, and it may be that they are all blocked on IO operations that will only be unblocked if some data is read/written. Unless something calls the NIO Selector, the read/write will not be seen. Since the Selector is called by the Producer task, that task is waiting in the queue, and the queue is stalled because all the threads are blocked waiting for the Selector: the server is now deadlocked by thread starvation!

Always two there are!
Jetty’s solution to this problem is to run not only our EWYK execution strategy, but also the alternative ProduceExecuteConsume strategy, where one thread does all the producing and always dispatches any produced tasks to the thread pool. Because this is not mechanically sympathetic, we run this producer task at low priority. This effectively reserves one thread from the thread pool to always be a producer, but because it is low priority it will seldom run unless the server is idle – or completely stalled due to thread starvation. This means that Jetty always has a thread available to produce, thus there is always a thread available to run the NIO Selector, and any IO events that will unblock any threads will be detected. This needs one more trick to work: the producing task must be able to tell if a detected IO task is non-blocking (i.e. a wakeup of a blocked read or write), in which case it executes the task itself rather than submitting it to any execution strategy. Jetty uses the InvocationType interface to tag such tasks and thus avoid thread starvation.
This is a great solution when a thread can be dedicated to always producing (e.g. NIO selecting). However, Jetty has other Produce-Consume patterns that cannot afford a dedicated thread. HTTP/2 Connections are consumers of IO events, but are themselves producers of parsed HTTP/2 frames, which may be handled in parallel due to the multiplexed nature of HTTP/2. So each HTTP/2 connection is itself a Produce-Consume pattern, but we cannot allocate a Producer thread to each connection, as a server may have many tens of thousands of connections!
Yet, to avoid thread starvation, we must also always call the Producer task for HTTP/2. This is done as it may parse HTTP/2 flow control frames that are necessary to unblock the IO being done by application threads that are blocked and holding all the available threads from the pool.
Even if there is a thread reserved as the Producer/Selector by a connector, it may detect IO on an HTTP/2 connection and use the last thread from the thread pool to consume that IO. If that thread produces an HTTP/2 frame and the EWYK strategy is used, then the last thread may consume that frame, and it too may block in application code. So even if the reserved thread detects more IO events, there are no more available threads to consume them!
So the solution in HTTP/2 is similar to the approach used in the Connector. Each HTTP/2 connection has two execution strategies: EWYK, which is used when the calling thread (the Connector’s consumer) is allowed to block, and the traditional ProduceExecuteConsume strategy, which is used when the calling thread is not allowed to block. The HTTP/2 Connection then advertises itself to the Connector as an InvocationType of EITHER. If the Connector is running normally, an EWYK strategy is used and the HTTP/2 Connection does the same. However, if the Connector is running the low priority ProduceExecuteConsume strategy, it invokes the HTTP/2 connection as non-blocking. This tells the HTTP/2 Connection that, when it is acting as a Consumer of the Connector’s task, it must not block – so it uses its own ProduceExecuteConsume strategy, as it knows the Producer will only parse the HTTP/2 frame and not perform the Consume task itself (which may block).
The final part is that the HTTP/2 frame Producer can look at the frames produced. If they are frames that will not block when handled (i.e. Flow Control frames), they are handled by the Producer and not submitted to any strategy to be consumed. Thus, even if the server is on its last thread, Flow Control frames will be detected, parsed and handled – unblocking other threads and avoiding starvation!
-
HTTP/2 at JAX
I was invited to speak at the JAX conference in Mainz about HTTP/2.
Jetty has always been a front-runner when it comes to web protocols: first with WebSocket, then with SPDY, and finally with HTTP/2.
We believe that HTTP/2 is going to make the web much better, and we try to spread the word at conferences.
The JAX conference was great, and despite most of the sessions being in German, I had the chance to network with various speakers – it is always great to be able to speak to top notch people over breakfast or dinner, or while waiting for the next session.
Below you can find Oracle’s Yolande Poirier’s video interview with me about HTTP/2, and the JAX textual interview on the same topic.
Enjoy!
-
Unix Sockets for Jetty 9.4?
In the 20th year of Jetty development we are finally considering a bit of native code integration to provide Unix Domain Sockets in Jetty 9.4!
Typically the IO performance of pure Java has been close enough to that of native code for all the use cases of an HTTP server, with the one key exception of SSL/TLS. I’m not exactly sure why the JVM has never provided a decent implementation of TLS – I’m guessing it is not a technical problem. Historically, this has never been a huge issue, as most large scalable deployments have offloaded SSL/TLS to the load balancer, and the pure Java server has been more than sufficient to receive the unencrypted traffic from the load balancer.
However, there is now a move to increase the depth that SSL/TLS penetrates the data centre, and some very large Jetty users are looking to have all internal traffic encrypted to improve internal security and integrity guarantees. In such deployments, it is not possible to offload the TLS to the load balancer, and encryption needs to be applied locally on the server. Jetty of course fully supports TLS, but that currently means we need to use the slow Java TLS implementation.
Thus we are looking at alternative solutions, and it may be possible to plug in a native JSSE implementation backed by OpenSSL. While conceptually attractive, the JSSE API is actually a very complex one that is highly stateful and somewhat fragile to behaviour changes from implementations. While still a possibility, I would prefer to avoid such a complex semantic over a native interface (perhaps I just answered my own question about why there is no performant JSSE provider?).
The other key option is to offload TLS to a local native instance of something like haproxy or nginx, and then make a local connection to pure Java Jetty. This is a viable solution, and the local connector is typically highly performant and low latency. Yet this architecture also opens the option of using Unix Domain Sockets to further optimize that local connection – to reduce data copies and avoid dispatch delays. Thus I have used the JNR unix socket implementation to add unix sockets to Jetty 9.4 (currently in a branch, but soon to be merged to master).
My current target for a frontend for this is haproxy, primarily because it can work at the TCP level rather than at the HTTP level, and we have already used it in offload situations with both HTTP/1 and HTTP/2. We need only a TCP-level proxy, since in this scenario any parsing of HTTP done in the offloader can be considered wasted effort… unless it is being used for something like load balancing… which in this scenario is not appropriate, as you will rarely load balance to a local connection (NB there have been some deployment styles that did load balance to multiple server instances on the same physical server, but I believe that was to get around JVM limitations on large servers and I’m not sure those still apply).
So the primary target for this effort is terminating SSL on the application server rather than on the load balancer, in an architecture like (where [x] is a physical machine and [[x]] is multiple physical machines):
[[Client]] ==> [Balancer] ==> [[haproxy-->Jetty]]

It is very early days for this, so our most important goal ahead is to find some test scenarios where we can check the robustness and the performance of the solution. Ideally we are looking for a loaded deployment that we could test like:
                               +-> [Jetty]
                              /
[[Client]] ==> [Balancer] ----+-> [haproxy--lo0-->Jetty]
                              \
                               +-> [haproxy--usock-->Jetty]

Also, from a Webtide perspective, we have to consider how something like this could be commercially supported, as we can’t directly support the JNR native code. Luckily the developers of JNR are sure that development of JNR will continue and be supported in the (j)Ruby community. Also, as JNR is just a very thin veneer over the standard POSIX APIs, there is limited scope for complex problems within the JNR software, and a very well known, simple semantic that needs to be supported. Another key benefit of the unix socket approach is that it is an optimization of an already efficient local connection model, which would always be available as a fallback if there was some strange issue in the native code that we could not immediately support.
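As a sketch of what the haproxy side of such a deployment might look like (hypothetical names and paths, a TCP-level frontend terminating TLS and forwarding the decrypted stream to Jetty’s unix socket):

```
frontend tls-in
    mode tcp
    bind :443 ssl crt /etc/haproxy/server.pem
    default_backend jetty

backend jetty
    mode tcp
    server local-jetty unix@/var/run/jetty.sock
```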
So it is early days with this approach, but the initial effort looks promising. As always, we are keen to work with real users to better direct the development of new features like this in Jetty.