After some discussions on spdy-dev and some experience with our current push implementation, we’ve decided to change a few things for the better.
Jetty now sends all push resources non-interleaved to the client. That means the push resources are sent sequentially, one after the other.
The ReferrerPushStrategy automatically detects which resources need to be pushed for a specific main resource. See SPDY – we push! for details. Previously we just sent the push resources back to the client in random order. With the change to send resources sequentially, however, it’s best to keep the order in which the first browser client requested them. So we changed the implementation of ReferrerPushStrategy accordingly.
This all aims at improving the time needed to render the page in the browser by sending the data in the order the browser needs it.
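As a rough sketch of the idea (this is illustrative code, not Jetty’s actual implementation): remembering sub resources in the order the first client requested them only needs an insertion-ordered set per main resource.

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: remember the sub resources of a main resource in the
// order the first browser requested them, so they can later be pushed
// sequentially in that same order.
public class OrderedPushCache {
    // main resource -> sub resources, kept in insertion order
    private final Map<String, Set<String>> resources = new ConcurrentHashMap<>();

    public void onSubResourceRequested(String mainResource, String subResource) {
        resources.computeIfAbsent(mainResource,
                k -> Collections.synchronizedSet(new LinkedHashSet<>()))
            .add(subResource);
    }

    public Set<String> pushResourcesFor(String mainResource) {
        return resources.getOrDefault(mainResource, Collections.emptySet());
    }
}
```

A LinkedHashSet both deduplicates repeated sub requests and preserves first-request order, which is exactly what sequential pushing wants.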
Author: Thomas Becker
-
Jetty SPDY push improvements
-
Jetty SPDY to HTTP Proxy
We have had SPDY to SPDY and HTTP to SPDY proxy functionality in Jetty for a while now.
An important and very common use case, however, is a SPDY to HTTP proxy. Imagine a network architecture where network components like firewalls need to inspect application layer contents. If those components are not SPDY aware and able to read the binary protocol, you need to terminate SPDY before passing the traffic through them. The same goes for other network components like load balancers, etc.
Another common use case is that you might not be able to migrate your legacy application from an HTTP connector to SPDY, maybe because you can’t use Jetty for your application or your application is not written in Java.
Quite a while ago we implemented SPDY to HTTP proxy functionality in Jetty; we just hadn’t blogged about it yet. Using that proxy, it’s possible to gain all the SPDY benefits where they really count…on the slow internet with high latency, while terminating SPDY on the frontend and talking plain HTTP to your backend components.
Here’s the documentation to set up a SPDY to HTTP proxy:
http://www.eclipse.org/jetty/documentation/current/spdy-configuring-proxy.html#spdy-to-http-example-config -
Why detecting concurrency issues can be difficult
Jetty 9’s NIO code is a nearly complete rewrite with an improved architecture and a cleaner, clearer code base, and best of all it’ll be even faster and more efficient than Jetty 7/8’s NIO layer. Detecting concurrency issues is usually not trivial. In today’s blog I will describe how it took us 4 days to resolve a single concurrency issue in our brand new NIO code. The fix is in Jetty 9 Milestone 1.
I will try to keep this blog entry as general as possible and won’t go into too much detail about this single issue or the Jetty code, but rather describe how I usually try to resolve concurrency issues and what I did to debug this one.
However, doing NIO right is not trivial, and neither is writing code that is absolutely thread safe under highly concurrent execution. We’ve been pleased with how well the new NIO code has worked from scratch. That was due to good test coverage and the great skills of the people who wrote it (mainly Simone Bordet and Greg Wilkins). However, last week we found a SPDY load test failing occasionally.
Have a look at the test if you’re interested in the details. For this blog it’s sufficient to know that there’s a client that opens a SPDY connection to the server, then opens a huge number of SPDY streams and sends some data back and forth. The streams are opened by 50 concurrent threads as fast as possible.
Most of the time the test ran just fine. Occasionally it got completely stuck at a certain point and timed out.
When debugging such concurrency issues, you should always first try to make the test fail more consistently. If you manage that, it’s much easier to determine whether a fix you try is successful. If only every 10th run fails, you apply a fix and then the test passes twenty times in a row, it might have been your fix, or you might just have had 20 lucky runs. So once you think you’ve fixed an intermittently failing concurrency issue, run the test in a loop until it either fails or has run often enough that you can be confident it succeeds.
This is the bash one-liner I usually use:

export x=0 ; while [ $? -eq "0" ] ; do ((x++)) ; echo $x ; mvn -Dtest=SynDataReplyDataLoadTest test ; done
It’ll run the test in a loop until an error occurs or you stop it. I leave it running until I’m totally sure that the problem is fixed.
For my specific issue I raised the test iterations from 500 to 1500, which made the test fail about every 2nd run, pretty good for debugging. Sometimes you can’t make the test fail more often and you have to rely on running the test often enough, as described above.
Then, whenever something gets stuck, you should take a few thread dumps of the JVM while it’s stuck and look for something as obvious as a deadlock, a busy-looping thread, etc. In this case, everything looked fine.
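If you don’t want to hunt for the right jstack/kill -3 moment, a thread dump can also be taken programmatically, e.g. from a watchdog thread inside the test. A small sketch using the standard ThreadMXBean (illustrative, not part of the actual Jetty test code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch: take a thread dump from inside the JVM. findDeadlockedThreads()
// directly answers the "is there an obvious deadlock?" question.
public class ThreadDumper {
    public static String dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        StringBuilder out = new StringBuilder();
        long[] deadlocked = threads.findDeadlockedThreads();
        out.append("deadlocked threads: ")
           .append(deadlocked == null ? 0 : deadlocked.length)
           .append('\n');
        // true, true = include locked monitors and locked synchronizers
        for (ThreadInfo info : threads.dumpAllThreads(true, true))
            out.append(info);
        return out.toString();
    }
}
```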
The next thing you usually do is carefully add some debug output to gain more information about the cause of the problem. I say carefully, because every change you make, and especially expensive operations like writing a log message, might affect the timing of your concurrent execution and make the problem occur less often, or in the worst case not at all. Indeed, simply turning on the DEBUG log level made the problem disappear once and for all. I tried to convince Greg that we simply have to ship Jetty with DEBUG enabled and blame customers who turn it off… 😉
Even a single log message printed per iteration affected the timing enough to make the problem occur far less often. With too much logging, the problem didn’t occur at all.
So instead of logging the information I needed, we kept the desired information in memory by adding some fields, made them accessible from the test, and printed them at a later stage.
I suspected that we might be missing a call to flush() in our SPDY StandardSession.java, which writes DataFrames from a queue through Jetty’s NIO layer down to the TCP layer. So for debugging I stored some information about the last calls to append(), prepend(), flush(), write() and completed(). Most important for me was to know who the last callers of those methods were, the state of StandardSession.flushing(), the queue size, etc.
Simone told me the trick of having a scheduled task running in parallel which can print all the additional information once the test gets stuck. Usually you know how long a normal test run takes; add some safety margin and have the scheduled task print the desired information after enough time has passed that you can be sure the test is stuck. In my case, after about 50s I could be sure that the test should normally have finished, and I raised the timeouts (2*50 seconds, for example) to make sure the test had been stuck long enough before the scheduled task executed. But even collecting too much data this way made the test fail less often, giving me a hard time debugging this. Having to do 10 test runs of about 2 min each before one fails already wastes 20 min…
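The watchdog trick can be sketched like this (class and method names are made up for illustration, this is not the real test code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the watchdog trick: schedule a diagnostics task well past the
// normal test duration. If the test finishes in time it disarms the
// watchdog; otherwise the task fires and prints the collected state.
public class StuckTestWatchdog {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void arm(long normalRunMillis, Runnable printDiagnostics) {
        // fire at 2x the normal run time to be sure the test is really stuck
        scheduler.schedule(printDiagnostics, 2 * normalRunMillis, TimeUnit.MILLISECONDS);
    }

    public void disarm() {
        // test finished normally, nothing to report
        scheduler.shutdownNow();
    }
}
```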
I had a hypothesis: a missing call to flush(), leaving everything stuck in the server’s SPDY queue. And the information I collected as described above seemed to prove it. I found:
– a pretty big queue size on the server
– the server stuck sending SPDY data frames
Everything looked obvious. But in the end, this is concurrent code. I double-checked the code in StandardSession.java to make sure it really is thread safe and that we don’t miss a call to flush() in any concurrent scenario. The code looked good to me, but concurrency issues are rarely obvious. I triple-checked it: nothing. So let’s prove the hypothesis by calling flush() from my scheduled task once the test is stuck; that should get the StandardSession to resume sending the queued data frames. However, it didn’t. So my hypothesis was wrong.
I added some more debug information about the state StandardSession was in, and I could see that it was stuck sending a SPDY frame to the client. StandardSession commits a single frame to the underlying NIO code and waits until the NIO code calls a callback (StandardSession.completed()) before it flushes the next SPDY frame. However, completed() had not been called by the NIO layer, indicating that a single frame was stuck somewhere between the server’s NIO layer and the client. I was printing some debug information for the client as well, and I could see that the last frame successfully sent by the server had not reached the client’s SPDY layer. In fact, the client was usually about 10,000 to 30,000 frames behind?!
So I used Wireshark + spdyshark to investigate some network traces and see which frames were on the wire. We compared several TCP packets and their hex-encoded SPDY frame bytes on the server and client with what we saw in our debug output. It looked like the server didn’t even send the 10k-30k frames missing on the client, again indicating an issue on the server side.
So I went through the server code and tried to identify why so many frames might not have been written, and whether we queue them somewhere I was not aware of. We don’t. As described above, StandardSession commits a single SPDY frame to the wire and waits until completed() is called. completed() is only called once the data frame has been committed to the OS’s TCP stack.
After a couple of hours of finding nothing, I went back to investigating the TCP dumps. In the dumps I saw several TCP ZeroWindow and TCP Window Full flags set by client and server, indicating that the sender of the flag had a full RX (receive) buffer. See the Wireshark wiki for details. As long as client and server update the window size once they have read from the RX buffer and freed up some space, everything is fine. Since I saw that happening, I didn’t pay too much attention to those flags, as this is pretty normal behavior, especially considering that the new NIO layer is pretty fast at sending and receiving data.
Now it was time to google a bit for JDK issues causing this behavior. And hey, I found a problem which looked pretty similar to ours:
https://forums.oracle.com/forums/thread.jspa?messageID=10379569
The only problem was, I had no idea how setting -Djava.net.preferIPv4Stack=true could affect an existing IPv4 connection, and the suggested solution didn’t help. 🙂
As I had no better ideas about what to investigate, I spent some more hours on the Wireshark traces I had collected. With the help of some filters, and by reading the traces from the last successfully transferred frame upwards, I figured out that at a certain point the client stopped updating its RX window. That means the client’s RX buffer was full and the client had stopped reading from it. Thus the server was not allowed to write to the TCP stack, and thus the server got stuck writing, but not because of a problem on the server side. The problem was on the client!
Given that information, Simone finally found the root cause of the problem (dang, it wasn’t me who found the cause! Still, I’m glad Simone found it).
Now a short description of the problem for the more experienced developers of concurrent code. The problem was a non-thread-safe update to a variable (_interestOps):

private void updateLocalInterests(int operation, boolean add)
{
    int oldInterestOps = _interestOps;
    int newInterestOps;
    if (add)
        newInterestOps = oldInterestOps | operation;
    else
        newInterestOps = oldInterestOps & ~operation;

    if (isInputShutdown())
        newInterestOps &= ~SelectionKey.OP_READ;
    if (isOutputShutdown())
        newInterestOps &= ~SelectionKey.OP_WRITE;

    if (newInterestOps != oldInterestOps)
    {
        _interestOps = newInterestOps;
        LOG.debug("Local interests updated {} -> {} for {}", oldInterestOps, newInterestOps, this);
        _selector.submit(_updateTask);
    }
    else
    {
        LOG.debug("Ignoring local interests update {} -> {} for {}", oldInterestOps, newInterestOps, this);
    }
}

Multiple threads call updateLocalInterests() in parallel. The problem is caused by Thread A calling:
updateLocalInterests(1, true)
trying to set/add read interest on the underlying NIO connection, while Thread B, returning from a write on the connection, tries to reset write interest by calling:
updateLocalInterests(4, false)
at the same time.
If Thread A gets preempted by Thread B in the middle of its call to updateLocalInterests() at just the right line of code, Thread B might overwrite Thread A’s update to _interestOps in this line:

newInterestOps &= ~SelectionKey.OP_WRITE;

which clears the write-interest bit by ANDing with the bitwise complement of OP_WRITE.
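The lost update can be replayed deterministically by spelling out the interleaving (illustrative code, not the actual Jetty class):

```java
// Deterministic replay of the lost update. Thread A wants to add read
// interest, Thread B wants to remove write interest; B works on a stale
// snapshot and clobbers A's update.
public class LostUpdateDemo {
    static final int OP_READ = 1, OP_WRITE = 4;
    static int interestOps = OP_WRITE; // write interest currently set

    public static int replay() {
        int oldSeenByB = interestOps;         // B reads 4 ...and is paused
        int oldSeenByA = interestOps;         // A reads 4
        interestOps = oldSeenByA | OP_READ;   // A publishes 5 (READ|WRITE)
        interestOps = oldSeenByB & ~OP_WRITE; // B publishes 0: A's OP_READ is lost!
        return interestOps;
    }
}
```

With proper synchronization the result would be OP_READ (1): read interest added, write interest removed. The lost update leaves it at 0, so read interest is never registered with the selector, the connection stops reading, and the RX buffer fills up, exactly what the traces showed.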
This is definitely not an obvious issue, and one that happens to the best programmers writing concurrent code. And it proves how important very good test coverage of any concurrent code is. Testing concurrent code is not trivial either, and often enough you can’t write tests that reproduce a concurrency issue 100% of the time. Even running 50 parallel threads, each doing 500 iterations, revealed the issue only in about every 5th to 10th run. Running other stuff in the background on my MacBook made the test fail less often, as it affected the timing by making the whole execution a bit slower. Overall I spent 4 days on this single issue, and many hours of that together with Simone on Skype calls investigating it.
Simone finally fixed it by making the method thread safe with a well-known non-blocking algorithm (see Brian Goetz – Java Concurrency in Practice, chapter 15.4, if you have no idea how the fix works):
http://git.eclipse.org/c/jetty/org.eclipse.jetty.project.git/commit/?h=jetty-9&id=39fb81c4861d4d88436539ce9675d8f3d8b7be74
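For those who don’t have the book at hand, the general shape of such a non-blocking fix is a compare-and-set retry loop. This sketch shows the idea, not the actual Jetty commit:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the well-known non-blocking algorithm: read, compute the new
// value, and publish it with compareAndSet; retry if another thread raced us.
public class InterestOps {
    private final AtomicInteger interestOps = new AtomicInteger();

    public void update(int operation, boolean add) {
        int oldOps, newOps;
        do {
            oldOps = interestOps.get();
            newOps = add ? (oldOps | operation) : (oldOps & ~operation);
            // compareAndSet only succeeds if no other thread changed the value
            // since our read; otherwise we re-read and retry with fresh state.
        } while (!interestOps.compareAndSet(oldOps, newOps));
    }

    public int get() {
        return interestOps.get();
    }
}
```

Because every update re-reads the current value before publishing, no thread can clobber another thread’s change with a stale snapshot.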
I’ve seen in numerous projects that if such problems occur on production servers, you’ll definitely have a hard time finding the root cause. In production environments these kinds of issues happen rarely. Maybe you get something in the logs, maybe a customer complains. You investigate, everything looks good. You ignore it. Then another customer complains, and so on.
In tests you limit the area of code you have to investigate. It can still be, and most of the time will be, hard to debug concurrency issues. In production code it is way more difficult to isolate the problem or to write a test for it afterwards.
If you write concurrent code, make sure you test it very well and take extra care about thread safety. Think about every variable, every bit of state, twice and then a third time: is this really thread safe?
Conclusions: detecting concurrency issues is not trivial (well, I knew that before), I need a faster MacBook (filtering 500k packets in Wireshark is CPU intensive), Jetty 9’s NIO layer written by Greg and Simone is great, and Simone Bordet is a concurrent-code rockstar (well, I knew that before as well)!
Cheers,
Thomas -
Fully functional SPDY-Proxy
We keep pushing our SPDY implementation, and with the upcoming Jetty release we provide a fully functional SPDY proxy server out of the box.
Simply by configuration you can set up Jetty to provide a SPDY connector that clients connect to via SPDY, and they will be transparently proxied to a target host speaking SPDY or another web protocol.
Here are some details about the internals. The implementation is modular and can easily be extended. There’s an HTTPSPDYProxyConnector that accepts incoming requests and forwards them to a ProxyEngineSelector. The ProxyEngineSelector forwards each request to an appropriate ProxyEngine for the given target host’s protocol.
Which ProxyEngine to use is determined by the configured ProxyServerInfos, which hold the information about known target hosts and the protocol they speak.
So far we only have a ProxyEngine implementation for SPDY, but implementations of other protocols like HTTP should be pretty straightforward and will follow. Contributions are, as always, highly welcome!
https://www.webtide.com is already served through a proxy connector forwarding to a plain SPDY connector on localhost.
For more details and an example configuration, check out the SPDY proxy documentation. -
SPDY – non representative benchmark for plain http vs. spdy+push on webtide.com
I’ve done a quick run with the Page Benchmarker extension on Chromium to measure the difference between HTTP and SPDY + push. Enabling benchmarks restricts Chromium to SPDY draft 2, so we run without flow control.
Note that the website is not the fastest (in fact it’s pretty slow). But if these results prove valid in real benchmarks, then a reduced latency of ~473ms is pretty awesome.
Here’s the promising result:

I’ve done several iterations of this benchmark with ten runs each. The advantage of SPDY was always between 350-550ms.
Disclaimer: this is in no way a representative benchmark. It has neither been run in an isolated test environment, nor is webtide.com the right website for such benchmarks! This is just a promising result, nothing more. We’ll do proper benchmarking soon, I promise. -
SPDY – we push!
SPDY, Google’s web protocol, is gaining momentum. Intended to improve the user’s web experience, it aims at severely reducing page load times.
We’ve already blogged about the protocol and Jetty’s straightforward SPDY support: Jetty-SPDY is joining the revolution! and SPDY support in Jetty.
Now we’re taking this a step further: we push!
SPDY push is one of the coolest features in the SPDY protocol portfolio.
In the traditional HTTP approach, the browser has to request an HTML resource (the main resource) and make subsequent requests for each sub resource. Every request/response roundtrip adds latency.
E.g.:
GET /index.html – wait for the response before the browser can request sub resources
GET /img.jpg
GET /style.css – wait for response before we can request sub resources of the css
GET /style_image.css (referenced in style.css)
This means a single request/response roundtrip for each resource (main and sub resources). Worse, some of them have to be done sequentially. For a page with lots of sub resources, the number of connections to the server (traditionally browsers open up to 6) also limits the number of sub resources that can be fetched in parallel.
SPDY reduces the need to open multiple connections by multiplexing requests over a single connection, and makes further improvements to reduce latency, as described in previous blog posts and the SPDY spec.
SPDY push enables the server to push resources to the browser/client without having received a request for them. For example, if the server knows that index.html references img.jpg and style.css, and that style.css references style_image.css, the server can push those resources to the client.
To take the previous example:
GET /index.html
PUSH /img.jpg
PUSH /style.css
PUSH /style_image.css
That means only a single request/response roundtrip for the main resource, while the server immediately sends out the responses for all sub resources. This heavily reduces overall latency, especially for pages with high roundtrip delays (bad/busy network connections, etc.).
We’ve written a unit test to benchmark the differences between plain HTTP, SPDY and SPDY + push. Note that this is not a real benchmark and the roundtrip delay is emulated! Proper benchmarks are already in our task queue, so stay tuned. However, here are the results:
HTTP: roundtrip delay 100 ms, average = 414
SPDY(None): roundtrip delay 100 ms, average = 213
SPDY(ReferrerPushStrategy): roundtrip delay 100 ms, average = 160
Sounds cool? Yes, I guess that sounds cool! 🙂
Even better, in Jetty this means nothing more than exchanging one connector for another and providing our implementation of the push strategy. Yes, that’s it. Just by changing a few lines of Jetty config you get SPDY and SPDY + push without touching your application.
Have a look at the Jetty docs to enable SPDY (they will be updated soon on how to add a push strategy to a SPDY connector).
Here’s the only thing you need to configure in Jetty to get your application served with SPDY + push transparently:
<New id="pushStrategy">
    <Arg type="List">
        <Array type="String">
            <Item>.*.css</Item>
            <Item>.*.js</Item>
            <Item>.*.png</Item>
            <Item>.*.jpg</Item>
            <Item>.*.gif</Item>
        </Array>
    </Arg>
    <Set name="referrerPushPeriod">15000</Set>
</New>
<Call name="addConnector">
    <Arg>
        <New>
            <Arg>
                <Ref id="sslContextFactory" />
            </Arg>
            <Arg>
                <Ref id="pushStrategy" />
            </Arg>
            <Set name="Port">11081</Set>
            <Set name="maxIdleTime">30000</Set>
            <Set name="Acceptors">2</Set>
            <Set name="AcceptQueueSize">100</Set>
            <Set name="initialWindowSize">131072</Set>
        </New>
    </Arg>
</Call>
So how do we push?
We’ve implemented a pluggable mechanism to add a push strategy to a SPDY connector. Our default strategy, called ReferrerPushStrategy, uses the “Referer” header to identify push resources the first time a page is requested.
The browser requests the main resource and, shortly afterwards, usually requests all sub resources needed for that page. ReferrerPushStrategy uses the Referer header of those sub requests to associate the sub resources with the main resource named in the header. It remembers those sub resources, and on the next request for the main resource it pushes all the sub resources it knows about to the client.
Now if the user clicks a link on the main resource, that request also contains a Referer header pointing at the main resource. However, linked resources should not be pushed to the client in advance! To avoid that, ReferrerPushStrategy has a configurable push period: it only remembers sub resources if they were requested within that period after the very first request for the main resource since application start.
So this is a kind of best-effort strategy. It does not know which resources to push at startup, but it learns on a best-effort basis.
What does best effort mean? If the browser doesn’t request the sub resources fast enough (within the push period) after the initial request for the main resource, the strategy will never learn those sub resources. And if the user is fast enough clicking links, it might push resources that should not be pushed.
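A stripped-down sketch of that learning logic (field and method names are made up; this is not Jetty’s actual ReferrerPushStrategy):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Illustrative sketch: learn main resource -> sub resource associations from
// the Referer header, but only within a configurable push period after the
// very first request for the main resource.
public class ReferrerLearner {
    private final long pushPeriodMillis;
    private final Map<String, Long> firstSeen = new ConcurrentHashMap<>();
    private final Map<String, Set<String>> pushMap = new ConcurrentHashMap<>();

    public ReferrerLearner(long pushPeriodMillis) {
        this.pushPeriodMillis = pushPeriodMillis;
    }

    public void onRequest(String uri, String referer, long now) {
        firstSeen.putIfAbsent(uri, now);
        if (referer == null)
            return;
        Long mainFirstSeen = firstSeen.get(referer);
        // learn the sub resource only if it was requested within the push
        // period after the very first request for the main resource
        if (mainFirstSeen != null && now - mainFirstSeen <= pushPeriodMillis)
            pushMap.computeIfAbsent(referer, k -> new CopyOnWriteArraySet<>()).add(uri);
    }

    public Set<String> toPush(String mainResource) {
        return pushMap.getOrDefault(mainResource, Collections.emptySet());
    }
}
```

A late request with the same Referer, e.g. the user clicking a link long after the page loaded, falls outside the push period and is not learned, which is exactly the linked-resource case described above.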
Now you might wonder what happens if the browser already has the resources cached. Aren’t we sending data over the wire which the browser actually already has? Well, usually we don’t. First, we use the If-Modified-Since header to decide whether to push sub resources or not, and second, the browser can refuse push streams. If the browser gets a SYN for a sub resource it already has, it can simply reset the push stream. Then the only thing that has been sent is the SYN frame for the push stream. Not a big drawback considering the advantages this has.
There have to be more drawbacks?!
Yes, there are. The SPDY implementation in Jetty is still experimental. The whole protocol is bleeding edge, and implementations in browsers as well as in the server still have some rough edges. There is already broad support for the SPDY protocol among browsers: stable releases of Firefox and Chromium/Chrome support SPDY draft 2 out of the box, and it already works really well. SPDY draft 3, however, is only supported in more recent builds of the current browsers, and SPDY push seems to work properly only with SPDY draft 3 and the latest Chrome/Chromium browsers. We’re all working hard on smoothing out the rough edges, and I presume SPDY draft 3 and push will be working in all stable browsers soon.
We also had to disable push for draft 2, as it seemed to have negative effects on Chromium, up to regular browser crashes.
Try it!
As we keep eating our own dog food, https://www.webtide.com is already updated with the latest code and has push enabled. If you want to test the push functionality, get a Chrome Canary or a Chromium nightly build and access our company’s website.
This is how it looks in the developer tools and on the chrome://net-internals page.
Developer tools (note that the request was done with an empty cache and the pushed resources are marked as read from cache):

net-internals (note the pushed and claimed resource count):

Pretty exciting! We keep “pushing” for more and better SPDY support, improving our push strategy and helping to make SPDY a better protocol. Stay tuned for more stuff to come.
Note that the SPDY code is not in any official Jetty release yet, but it most probably will be in the next release. The Jetty documentation will be updated soon as well. -
Jetty JMX Webservice
Jetty JMX Webservice is a webapp providing a RESTful API to query JMX MBeans and invoke MBean operations without the hassle that comes with RMI. No more arguments with your firewall admin: just a single HTTP port.
That alone might not be a killer feature, but Jetty JMX Webservice also aggregates multiple MBeans having the same ObjectName in the same JVM (e.g. JMX beans for multiple webapps), as well as across multiple Jetty instances. That way you have a single REST API aggregating the JMX MBeans of one or more Jetty instances, which you can use, for example, to feed your favourite monitoring system.
The whole module is in an early development phase and may contain some rough edges, but we wanted to check the community’s interest early and get some early feedback.
We’ve started a very simple jQuery-based web frontend as a showcase of what the REST API can be used for.
Instance Overview:

Or a realtime memory graph gathering memory consumption from the REST API. This is an accordion-like view: you see an accordion line for each node showing the current heap used. You can open each line and get a realtime graph of memory consumption. The memory figures in the accordion and the graphs are updated in realtime, which is hard to show in a picture:

Pretty cool. Note that this is just a showcase of how the REST API can be used.
URL paths of the webservice
/ws/ – index page
/ws/nodes – aggregated basic node information
/ws/mbeans – list of all aggregated mbeans
/ws/mbeans/[mbean objectName] – detailed information about all attributes and operations this mbean offers
/ws/mbeans/[mbean objectName]/attributes – aggregate page containing the values of all attributes of the given mbean
/ws/mbeans/[mbean objectName]/attributes/[attributeName] – aggregated values of a specific attribute
/ws/mbeans/[mbean objectName]/operations/[operationName] – invoke the specified operation
Example URLs:
/ws/mbeans/java.lang:type=Memory
/ws/mbeans/java.lang:type=Memory/operations/gc
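Since it’s all plain HTTP, any HTTP client can consume the API. A minimal Java sketch (the base URL, host and webapp context are assumptions matching the examples above; adjust them to your own deployment):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a client for the REST API.
public class JmxWsClient {
    // builds the attribute URL following the path scheme documented above
    public static String attributeUrl(String base, String objectName, String attribute) {
        return base + "/ws/mbeans/" + objectName + "/attributes/" + attribute;
    }

    public static String get(String url) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = connection.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String base = "http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT";
        // fetches the aggregated HeapMemoryUsage XML shown later in this post
        System.out.println(get(attributeUrl(base, "java.lang:type=Memory", "HeapMemoryUsage")));
    }
}
```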
How to get it running
Here’s all you need to do to get jetty-jmx-ws running in your Jetty instance, plus some examples of what the REST API looks like and how it can be used. It should take less than 15 min.
- Check out the sandbox project:
svn co https://svn.codehaus.org/jetty-contrib/sandbox/jetty-jmx-ws
- cd into the new directory and build the project:
cd jetty-jmx-ws && mvn clean install
- Make sure you got the [INFO] BUILD SUCCESS message
- Copy the war file you’ll find in the project’s target directory into the webapps directory of your Jetty instance:
cp target/jetty-jmx-ws-[version].war [pathToJetty]/webapps
- Access the webapp by browsing to:
http://[jettyinstanceurl]/jetty-jmx-ws-[version]/ws/
e.g.:
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/
- You’re done. 🙂
How to use it: Starting point
As it is a RESTful API, it will guide you from the base URL to more detailed pages. The base URL will return:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Index>
  <mBeans>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans</mBeans>
  <nodes>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/nodes</nodes>
</Index>
It shows you two URLs.
The first one will guide you through the list of aggregated MBeans. This means it shows MBeans which exist on ALL configured instances; MBeans which exist only on a single instance are filtered out.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MBeans>
  <MBean>
    <ObjectName>JMImplementation:type=MBeanServerDelegate</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/JMImplementation:type=MBeanServerDelegate</URL>
  </MBean>
  <MBean>
    <ObjectName>com.sun.management:type=HotSpotDiagnostic</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/com.sun.management:type=HotSpotDiagnostic</URL>
  </MBean>
  <MBean>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory</URL>
  </MBean>
  SNIPSNAP - lots of mbeans
</MBeans>
The second URL shows you basic node information for all configured nodes:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Index>
  <nodes>
    <name>localhost:1099</name>
    <jettyVersion>7.4.1-SNAPSHOT</jettyVersion>
    <threadCount>42</threadCount>
    <peakThreadCount>45</peakThreadCount>
    <heapUsed>41038176</heapUsed>
    <heapInit>0</heapInit>
    <heapCommitted>85000192</heapCommitted>
    <heapMax>129957888</heapMax>
    <jmxServiceURL>service:jmx:rmi:///jndi/rmi://localhost:1099/jettyjmx</jmxServiceURL>
  </nodes>
  <nodes>
    <name>localhost:1100</name>
    <jettyVersion>7.4.1-SNAPSHOT</jettyVersion>
    <threadCount>45</threadCount>
    <peakThreadCount>47</peakThreadCount>
    <heapUsed>73915872</heapUsed>
    <heapInit>0</heapInit>
    <heapCommitted>129957888</heapCommitted>
    <heapMax>129957888</heapMax>
    <jmxServiceURL>service:jmx:rmi:///jndi/rmi://localhost:1100/jettyjmx</jmxServiceURL>
  </nodes>
</Index>
How to query a single MBean
This example shows how the REST API guides you to query a specific MBean, in this case the memory MBean.
- From the base URL follow the link to the MBeans list:
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans
- Search for the MBean name you’re looking for:
<MBean>
  <ObjectName>java.lang:type=Memory</ObjectName>
  <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory</URL>
</MBean>
- Open the link inside the URL tag
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory
- You’ll get a list of all operations which can be executed on that MBean and all attributes which can be queried:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MBean>
  <ObjectName>java.lang:type=Memory</ObjectName>
  <Operations>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Operation>
      <Name>gc</Name>
      <Description>gc</Description>
      <ReturnType>void</ReturnType>
      <URL>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/operations/gc</URL>
    </Operation>
  </Operations>
  <Attributes>
    <Attribute>
      <Name>HeapMemoryUsage</Name>
      <description>HeapMemoryUsage</description>
      <type>javax.management.openmbean.CompositeData</type>
      <isReadable>true</isReadable>
      <isWritable>false</isWritable>
      <uri>http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes/HeapMemoryUsage</uri>
    </Attribute>
    SNIPSNAP - lots of attributes cut
  </Attributes>
</MBean>
- Besides some information about all operations and attributes you’ll find URLs to invoke operations like:
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/operations/gc
which will invoke a garbage collection. And URLs to display the attributes’ values like:
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes/HeapMemoryUsage
It will show you:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<mBeanAttributeValueJaxBeans>
  <Attribute>
    <AttributeName>HeapMemoryUsage</AttributeName>
    <NodeName>localhost:1099</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85000192, init=0, max=129957888, used=28073528})</Value>
  </Attribute>
  <Attribute>
    <AttributeName>HeapMemoryUsage</AttributeName>
    <NodeName>localhost:1100</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=129957888, init=0, max=129957888, used=69793976})</Value>
  </Attribute>
</mBeanAttributeValueJaxBeans>
- You can also get an aggregated view of all attributes of an MBean by just appending attributes to the MBean’s URL:
http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes
will return:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<mBeanAttributeValueJaxBeans>
  <Attribute>
    <AttributeName>HeapMemoryUsage</AttributeName>
    <NodeName>localhost:1099</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85000192, init=0, max=129957888, used=30005472})</Value>
  </Attribute>
  <Attribute>
    <AttributeName>HeapMemoryUsage</AttributeName>
    <NodeName>localhost:1100</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=129957888, init=0, max=129957888, used=68043064})</Value>
  </Attribute>
  <Attribute>
    <AttributeName>NonHeapMemoryUsage</AttributeName>
    <NodeName>localhost:1099</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=85356544, init=24317952, max=136314880, used=52749944})</Value>
  </Attribute>
  <Attribute>
    <AttributeName>NonHeapMemoryUsage</AttributeName>
    <NodeName>localhost:1100</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=java.lang.management.MemoryUsage,items=((itemName=committed,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=init,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=max,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=used,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={committed=92868608, init=24317952, max=136314880, used=78705952})</Value>
  </Attribute>
  <Attribute>
    <AttributeName>ObjectPendingFinalizationCount</AttributeName>
    <NodeName>localhost:1099</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>0</Value>
  </Attribute>
  <Attribute>
    <AttributeName>ObjectPendingFinalizationCount</AttributeName>
    <NodeName>localhost:1100</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>0</Value>
  </Attribute>
  <Attribute>
    <AttributeName>Verbose</AttributeName>
    <NodeName>localhost:1099</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>false</Value>
  </Attribute>
  <Attribute>
    <AttributeName>Verbose</AttributeName>
    <NodeName>localhost:1100</NodeName>
    <ObjectName>java.lang:type=Memory</ObjectName>
    <Value>false</Value>
  </Attribute>
</mBeanAttributeValueJaxBeans>
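Putting the URL scheme above together, a quick way to poke at the service from the command line is curl. This is just a minimal sketch, assuming the webapp is deployed locally under the context path used in the examples above; adjust host, port and path to your own deployment:

```shell
# Base URL of the JMX web service (matches the examples above).
BASE="http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans"
MBEAN="java.lang:type=Memory"

# URL that invokes a garbage collection on all connected nodes:
echo "$BASE/$MBEAN/operations/gc"

# URL for a single attribute, and for the aggregated attribute view:
echo "$BASE/$MBEAN/attributes/HeapMemoryUsage"
echo "$BASE/$MBEAN/attributes"

# Against a running instance you would fetch them with, e.g.:
# curl "$BASE/$MBEAN/attributes"
```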
Security
It’s a webapp with some servlets, so you can secure it the same way you would secure any other servlet.
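For example, standard declarative security in web.xml works. Here is a hedged sketch that puts the service behind BASIC authentication; the url-pattern, role name and realm name are made up for illustration and need to match your deployment:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>JMX web service</web-resource-name>
    <!-- Assumes the service is mapped under /ws/* as in the URLs above -->
    <url-pattern>/ws/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- Hypothetical role; map it to real users in your realm -->
    <role-name>jmx-admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>JMX Realm</realm-name>
</login-config>
<security-role>
  <role-name>jmx-admin</role-name>
</security-role>
```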
What’s next?
There are a few more features which I didn’t describe here. I will write a follow-up soon describing how to use them:
- You can filter by nodes with query parameters
- Invoke operations with parameters
- Configuration for multiple instances
- Get JSON instead of XML by setting the Accept header
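For the JSON variant, the representation is selected by content negotiation. A small sketch, assuming the service inspects a standard Accept header (the exact media type it expects is an assumption here):

```shell
# Same aggregated attribute URL as in the examples above.
URL="http://localhost:8080/jetty-jmx-ws-7.4.1-SNAPSHOT/ws/mbeans/java.lang:type=Memory/attributes"
echo "$URL"

# XML is the default:
# curl "$URL"
# JSON, assuming the service honors the Accept header:
# curl -H "Accept: application/json" "$URL"
```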
If there’s interest in this project, I will also write some manuals and add them to the wiki pages.
- Check out the sandbox project