Saturday, August 17, 2013

jersey server-side client and resource with in-memory request handling

who's to say who's crazy and who's not?

This is a question from one of my favorite comedians, Otis Lee Crenshaw. He answers his own question with this (paraphrased):

If you can play the guitar and harmonica at the same time like Bob Dylan or Neil Young you're considered a genius. Make that extra effort to strap a pair of cymbals to your knees and people will cross the street just to get the hell away from you.

I guess what I'm saying is, that in this post, I think the cymbals may be firmly affixed.

I've spent a fair amount of time thinking about the problem this blog post is centered on, and I think that the solution I'm suggesting has value, though I could certainly understand people not liking it. At all. So, without further ado...

the problem

Let's say you have a web application, and you have some data (a resource, if you will) that you're pulling in on the server side and spitting out into a view using a typical MVC pattern. For the sake of giving some context to this problem, let's say your data is messages posted by a user. Since there's no massive website that deals with this kind of data at unfathomable scale, I thought this would be a nice unique use case.

Your MVC application has worked well, but it could stand to be improved. At some point, you decide to pursue one or both of the following (since the net result is the same):

  • There's not much value in displaying more than the last 20 messages for any given user up front on a page. Fetching additional posts on the client side via an AJAX call makes more sense.
  • This website should have a mobile presence, but data consumption is a concern. Loading smaller groups of messages on a user by user basis on demand is preferable for the user experience.

In either case, a popular solution is to create a RESTful endpoint that exposes those messages. A simple call to /messages/{user}?offset=x&limit=y returning a nice lightweight JSON representation, and we've satisfied both cases. Problem solved!

Or is it...

You don't want to pull down everything this way, or at least not on the desktop site. There's value in pulling some of this data server side: your clients don't have to make additional connections to get the initial view, you don't have to consume additional resources on the server side for handling network traffic, and you may be able to more easily reuse other resources like database sessions.

At the same time, (I think) there's value in being consistent. Is the controller of your desktop view accessing your messages the same way your resources are? Are you sure you're serializing and deserializing to the same values client side as you are server side? You could programmatically invoke your resources and get typesafe return values, but is the behavior different when it's a request vs a method call? How are the execution paths different from one another?

Do they have to be different at all?

an idea for a solution

There's an obvious problem with making calls to the resource via an actual HTTP request when you're talking about a client and resource that exist in the same JVM: all the overhead of actually making a network connection via localhost. That may be on the order of a few milliseconds, but those milliseconds add up, and studies have shown end users have little patience for latency.

If you make enough calls, this can easily turn into tens or even hundreds of milliseconds of latency.

As it turns out, you can avoid this network hop entirely, but it requires some (mildly kludgy) work.

Jersey provides a type called Connector for its client API. This type is used to handle connections that are created from ClientRequest instances. I mentioned in a previous post how to set up an Apache HTTP Client connector if you'd like to see an example of this.

More interestingly, the type ApplicationHandler can be injected using Jersey's @Context based injection system, and represents a hook into the Jersey server instance that is only a few levels removed from the methods of the servlet container itself. All of Jersey's routing logic is downstream from ApplicationHandler, so sending requests to it means you're largely getting the full HTTP request taste without the calories.

We're going to need to capture the ApplicationHandler instance at startup. Unfortunately, the code to do this is quite ugly in its current state. You could no doubt do this more cleanly using dependency injection, but I think you get the point. First we'll need a provider for Jersey that will allow us to capture the instance, and a way to construct Connector instances with that instance:
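Something along these lines does the job. This is a sketch, not gospel: the class names (ApplicationHandlerCapture, ServerSideConnector) are placeholders of mine, and one way to grab the instance is a ContainerLifecycleListener registered as a provider:

```java
import javax.ws.rs.ext.Provider;

import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.spi.Container;
import org.glassfish.jersey.server.spi.ContainerLifecycleListener;

// Ugly static holder: captures the ApplicationHandler when the container starts.
@Provider
public class ApplicationHandlerCapture implements ContainerLifecycleListener {

    private static volatile ApplicationHandler handler;

    @Override
    public void onStartup(Container container) {
        handler = container.getApplicationHandler();
    }

    @Override
    public void onReload(Container container) {
        handler = container.getApplicationHandler();
    }

    @Override
    public void onShutdown(Container container) {
        handler = null;
    }

    // The way to construct Connector instances with the captured instance.
    public static ServerSideConnector newConnector() {
        if (handler == null) {
            throw new IllegalStateException("ApplicationHandler not captured yet");
        }
        return new ServerSideConnector(handler);
    }
}
```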

Then we'll need to wire it up to the application:
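Wiring it up is just registering the capturing listener with the application (the application class and package names here are hypothetical):

```java
import org.glassfish.jersey.server.ResourceConfig;

public class MyApplication extends ResourceConfig {
    public MyApplication() {
        // scan for resources, then register the listener that captures the ApplicationHandler
        packages("com.example.resources");
        register(ApplicationHandlerCapture.class);
    }
}
```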

Now, we need the connector itself.

translating different request and response representations

Most of the work done in the connector is ugly get/set logic; nothing sophisticated or glamorous. It still needs some explanation though.

At a high level, we need to implement the apply method, and inside get a ContainerRequest from a ClientRequest, pass it to the ApplicationHandler, and then convert a ContainerResponse to a ClientResponse. Here's the skeleton code:
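As a sketch, with the translation work pushed into helper methods:

```java
import org.glassfish.jersey.client.ClientRequest;
import org.glassfish.jersey.client.ClientResponse;
import org.glassfish.jersey.client.spi.Connector;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ContainerRequest;
import org.glassfish.jersey.server.ContainerResponse;

public class ServerSideConnector implements Connector {

    private final ApplicationHandler applicationHandler;

    public ServerSideConnector(ApplicationHandler applicationHandler) {
        this.applicationHandler = applicationHandler;
    }

    @Override
    public ClientResponse apply(ClientRequest clientRequest) {
        // ClientRequest -> ContainerRequest -> ApplicationHandler -> ContainerResponse -> ClientResponse
        ContainerRequest containerRequest = toContainerRequest(clientRequest);
        ContainerResponse containerResponse = dispatch(containerRequest);
        return toClientResponse(clientRequest, containerResponse);
    }

    // ... the helper methods, plus the async apply, getName, and close ...
}
```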

Let's dig into building the request first:
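Roughly, it looks like this (a sketch against the Jersey 2.0-era ContainerRequest constructor; how the base URI gets derived is my own simplification):

```java
private ContainerRequest toContainerRequest(ClientRequest clientRequest) {
    // the URI data: Jersey wants both the base URI and the full request URI
    URI requestUri = clientRequest.getUri();
    URI baseUri = UriBuilder.fromUri(requestUri).replacePath("/").replaceQuery(null).build();

    // the two nulls are the SecurityContext and PropertiesDelegate arguments
    ContainerRequest containerRequest = new ContainerRequest(
            baseUri, requestUri, clientRequest.getMethod(), null, null);

    // the headers
    containerRequest.getHeaders().putAll(clientRequest.getStringHeaders());

    // the request body (or entity), buffered into memory
    if (clientRequest.hasEntity()) {
        try {
            final ByteArrayOutputStream entityBytes = new ByteArrayOutputStream();
            clientRequest.setStreamProvider(new OutboundMessageContext.StreamProvider() {
                @Override
                public OutputStream getOutputStream(int contentLength) throws IOException {
                    return entityBytes;
                }
            });
            clientRequest.writeEntity();
            containerRequest.setEntityStream(new ByteArrayInputStream(entityBytes.toByteArray()));
        } catch (IOException e) {
            throw new ProcessingException(e);
        }
    }
    return containerRequest;
}
```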

We end up copying the main three pieces of data we need:

  • The URI data (needed to construct the ContainerRequest)
  • The headers
  • The request body (or entity)

I should call out that two of the arguments to the ContainerRequest constructor are null. The first is a reference to a SecurityContext instance, which is outside the scope of this post. The second is called PropertiesDelegate, which isn't actually Javadoc'd. The example works without it, though I may go back and dig into what it does later.

Now that we have the request, we need to send it to the ApplicationHandler:
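The dispatch itself is tiny. ApplicationHandler.apply hands back a Future<ContainerResponse>, and since this is the synchronous path we just block on it:

```java
private ContainerResponse dispatch(ContainerRequest containerRequest) {
    try {
        // no network hop: this walks straight into Jersey's routing logic
        return applicationHandler.apply(containerRequest).get();
    } catch (InterruptedException | ExecutionException e) {
        throw new ProcessingException(e);
    }
}
```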

As you can see, we get a ContainerResponse instance back, which we'll need to convert to a Response, which is then used to create the ClientResponse instance we have to return:
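A sketch of that conversion; it leans on ClientResponse having a constructor that takes the ClientRequest and a Response, which is what makes the whole thing workable:

```java
private ClientResponse toClientResponse(ClientRequest clientRequest,
                                        ContainerResponse containerResponse) {
    // copy the status, entity, and headers onto a JAX-RS Response...
    Response.ResponseBuilder builder = Response
            .status(containerResponse.getStatus())
            .entity(containerResponse.getEntity());
    for (Map.Entry<String, List<Object>> header : containerResponse.getHeaders().entrySet()) {
        for (Object value : header.getValue()) {
            builder = builder.header(header.getKey(), value);
        }
    }
    // ...which is then used to create the ClientResponse we have to return
    return new ClientResponse(clientRequest, builder.build());
}
```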

Full disclosure: I don't know if I'm covering all the bases as far as what needs to be set on Response. I have for my example, but I don't know what I may be missing for other use cases at this point. I will update this post as I discover other data points that may need to be handled.

With this in place, we have apply fully implemented.

There's a full implementation of the ServerSideConnector type located here. Another method that closely resembles apply has to be implemented for asynchronous functionality, along with close and getName methods that can be seen in the linked code.

use within a client and performance

Here's an example client that handles resources that deal with Messages. As you can see, the invocation of the client is pretty cookie cutter, and the Connector implementation can be swapped out per the lines in the constructor:
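A sketch of that client; Message, the base URI, and the class name are stand-ins of mine, and the commented-out line is the HTTP alternative:

```java
public class MessageClient {

    private final Client client;

    public MessageClient(ApplicationHandler applicationHandler) {
        ClientConfig config = new ClientConfig();
        // swap these two lines to go back to real HTTP via the Apache connector:
        config.connector(new ServerSideConnector(applicationHandler));
        // config.connector(new ApacheConnector(config));
        this.client = ClientBuilder.newClient(config);
    }

    public List<Message> getMessages(String user, int offset, int limit) {
        return client.target("http://localhost:8080")   // base URI is a placeholder
                .path("messages").path(user)
                .queryParam("offset", offset)
                .queryParam("limit", limit)
                .request(MediaType.APPLICATION_JSON)
                .get(new GenericType<List<Message>>() {});
    }
}
```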

At the beginning of this post I called out performance as a concern due to the latency of passing data over a network connection when it could be passed directly via memory instead. Let's see if the in-memory solution indeed performs better.

HTTP client requests:

In-memory requests:

Yep, I'd say it's faster.

is this really necessary?

Probably not, but I wanted to figure out how to do it. I do think there's value in hitting a consistent set of execution points for a single type of transaction, and that one way to keep it consistent is to have a single entry point, which this accomplishes (though with some overhead).

One major motivation for this was a continual challenge I've faced in regard to supporting both desktop and mobile versions of a website, and keeping those sites consistent.

Another use case I can think of for this is being able to break apart an application into more of a service-oriented architecture as time permits. Up front it may not make sense to split an application into too many services, but as your organization, web traffic, and resources grow, your need to split your application up to scale, either to your customer base or your development staff, will grow as well. Using a client right out of the gate, and being able to switch the resources backing it from internal to external by changing two lines of code, has real value in my opinion.

I'm interested to hear feedback about this solution, because I know it strays from the norm considerably. Like I said, the cymbals are firmly attached; are you planning on crossing the street?

Sunday, August 11, 2013

setting up jersey client 2.0 to use httpclient, timeouts, and max connections

the goal

I've been trying to get my head wrapped around Jersey 2.0 client after playing around with the server a fair amount, having some experience configuring the client for Jersey 1.x. There's typically a standard set of things I look to configure:

  • Connection pooling via HttpClient
  • The read timeout
  • The connection timeout
  • Maximum number of connections
  • Maximum number of connections per host (or route)
  • JSON support

I decided to figure out how to accomplish this in Jersey 2.0, which was less obvious than I had anticipated. It's important to note that you absolutely should set these values up in your application, because the defaults are a combination of being overly restrictive and overly generous. The defaults when using HttpClient are as follows:

  • Infinite connection and read timeouts
  • By default when using Jersey's ApacheConnector you will get an instance of BasicClientConnectionManager which limits you to a single connection (though it is thread safe)
  • If you're configuring a PoolingClientConnectionManager instead, you'll have a maximum of 20 total connections with a maximum of 2 connections per host (or route)

As I'm sure you've already realized, these defaults are not going to scale at all.

for every parameter a config, and for every config a parameter

I've never been a fan of classes that contain a bunch of String based properties for an API because it can be difficult to figure out what goes where. There's no type to easily match against, so any method like setProperty(String key, Object value) could have anything set, and unless it's Javadoc'd that could be something from FooProperties.* or BarProperties.* for example.

(For the record, I like to define values like this with enum instances, sometimes with a common interface, to make it easier to find what should be used via an IDE)

I'm going to break down each part of the list above and how to go about configuring that feature by creating a class called ClientFactory one piece at a time.

connection and read timeouts

To begin, we need to start with a class called ClientConfig. Jersey uses this to configure client instances via its ClientBuilder API. We can set the connection and read timeouts with this class, as shown below:
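A start on the ClientFactory, setting both timeouts in milliseconds (the specific values are just examples):

```java
import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;

public class ClientFactory {

    public static ClientConfig baseConfig() {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.property(ClientProperties.CONNECT_TIMEOUT, 2000); // ms to establish a connection
        clientConfig.property(ClientProperties.READ_TIMEOUT, 5000);    // ms to wait for data once connected
        return clientConfig;
    }
}
```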

using httpclient with connection pooling

Now let's set up a ClientConnectionManager that uses pooling. We should also set the limits on the number of connections a little bit higher, since 20 total and 2 per host is on the low side (we'll use 100 and 20 instead):
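With HttpClient 4.2's pooling manager, that looks like:

```java
import org.apache.http.impl.conn.PoolingClientConnectionManager;

PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
connectionManager.setMaxTotal(100);          // up from the default of 20 total
connectionManager.setDefaultMaxPerRoute(20); // up from the default of 2 per host
```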

You have to use the implementation PoolingClientConnectionManager to set the max values; the interface ClientConnectionManager doesn't provide these methods. You can also be much more fine-grained about how many connections per host are allowed with the setMaxPerRoute method. For example, let's say it was ok to have 40 max connections when hitting localhost:
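That override looks like this:

```java
import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;

// allow up to 40 concurrent connections to localhost, overriding the per-route default of 20
connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost("localhost")), 40);
```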

configuring the jersey connector... with the config... which needs the connector

This code is a little obtuse, but it's how you need to have ClientConfig know about HttpClient and have HttpClient know about the timeout settings. We need to create a new ApacheConnector from the configuration, which specifies the pooled connection manager, and then set the connector in the configuration. Hopefully this makes more sense looking at the code:
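In code (assuming the Jersey 2.0-era ApacheConnector constructor that takes a Configuration):

```java
import org.glassfish.jersey.apache.connector.ApacheClientProperties;
import org.glassfish.jersey.apache.connector.ApacheConnector;

// tell the connector which connection manager to use...
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);
// ...build the connector from the config, then set the connector back on the config
clientConfig.connector(new ApacheConnector(clientConfig));
```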

constructing the client and adding json support

Now we have all the configuration we need to actually create the Client instance and set up JSON support:
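Using Jackson for the JSON support (via the jersey-media-json-jackson module):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.glassfish.jersey.jackson.JacksonFeature;

Client client = ClientBuilder.newClient(clientConfig);
client.register(JacksonFeature.class);
```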

putting it all together

Here's the full view of everything we set up to build the client. We're configuring the timeouts, setting up a pooled connection manager instance, restricting the maximum total number of connections and connections per host, adding an override for the maximum connections to localhost, and configuring JSON handling:
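Assembled into one sketch of the ClientFactory (timeout and connection values are examples, not recommendations):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.glassfish.jersey.apache.connector.ApacheClientProperties;
import org.glassfish.jersey.apache.connector.ApacheConnector;
import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;
import org.glassfish.jersey.jackson.JacksonFeature;

public class ClientFactory {

    public static Client create() {
        ClientConfig clientConfig = new ClientConfig();

        // connection and read timeouts, in milliseconds
        clientConfig.property(ClientProperties.CONNECT_TIMEOUT, 2000);
        clientConfig.property(ClientProperties.READ_TIMEOUT, 5000);

        // pooled connection manager: 100 total, 20 per host, 40 for localhost
        PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
        connectionManager.setMaxTotal(100);
        connectionManager.setDefaultMaxPerRoute(20);
        connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost("localhost")), 40);
        clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);

        // the connector is built from the config, then set on the config
        clientConfig.connector(new ApacheConnector(clientConfig));

        // build the client and register JSON handling
        return ClientBuilder.newClient(clientConfig).register(JacksonFeature.class);
    }
}
```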

As a point of reference, here are the relevant pieces for a Maven pom to get the correct dependencies for this example:
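Assuming Jersey 2.0 coordinates (HttpClient comes in transitively via the connector):

```xml
<dependency>
    <groupId>org.glassfish.jersey.core</groupId>
    <artifactId>jersey-client</artifactId>
    <version>2.0</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.connectors</groupId>
    <artifactId>jersey-apache-connector</artifactId>
    <version>2.0</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-jackson</artifactId>
    <version>2.0</version>
</dependency>
```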

Saturday, August 10, 2013

using generic types in response entities with jersey 2.0 client

a special type for a special case

Java Generics are great, but they can truly be a beast to deal with once you start dealing with Class references that can't carry a generic type with them. In the case of using Jersey 2.0's client API, you may come into a situation where a resource returns a generified type, such as List<Message>. You can't map the entity to List<Message>.class, but thankfully Jersey has a very easy way to handle this case:
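It looks like this (the target URI and the Message type are placeholders):

```java
List<Message> messages = client
        .target("http://localhost:8080/api")
        .path("messages")
        .request(MediaType.APPLICATION_JSON)
        .get(new GenericType<List<Message>>() {});
```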

That's it! Just binding the entity to new GenericType<List<Message>>(){} takes care of it!

Source: http://jersey.576304.n2.nabble.com/How-can-I-parse-a-java-util-List-lt-gt-Is-it-supported-by-the-Jersey-client-td2300852.html

understanding java's native heap (or c-heap) and java heap

inspiration for this post

Not that long ago, I was diagnosing an issue on Jenkins where I was seeing an OutOfMemoryError in a native API. "What hijinks be these?" I thought to myself, since while the memory footprint was high GC wasn't getting out of control. Of course, like so many things, I had to learn what the cause of these errors was within the context of a service outage.

my first exposure to java's native heap

When we think of the Java heap, we usually think of this chunk of memory that is kept in order for us by the Garbage Collector, and why wouldn't we? Any call to the new operator is allocating memory within the heap for whatever instance we're creating, and the Garbage Collector is keeping tabs on that instance so that, when it's no longer in use, the memory can be reclaimed within the heap. That last bit is important. I'm sure most people already know this, but it's still worth calling out that the heap doesn't shrink once it's grown, and will grow up to its max heap (-Xmx) size.

If you're using a 32-bit JVM, the max you can set your heap to is 4GB (or less depending on the OS), which is inclusive of the max heap and permgen size. Conversely, on a 64-bit JVM, you're limited by the machine as to what you set as the boundaries to your heap (depending on JVM implementation and CompressedOops).

What you have left to work with, in both of these limitations, is the free space available to the native heap (or c-heap). I'm calling out that this is the free space available because the Java heap we've all grown to know and love is a section of the native heap; they're not mutually exclusive areas of memory. This space is used for native APIs and data, and it can most definitely run out.

Let's say you're using a 32-bit JVM, your OS can handle a 4GB heap, and you've allocated 3.5GB as the max heap and 384MB to permgen. Should you max those out, you've left your native heap with 128MB to do everything it needs to. In some applications this may not be a problem, but under certain conditions, say if you're heavily using IO, you could end up exhausting this memory, leaving you with an out of memory error in a native method. For example:

java.lang.OutOfMemoryError
  at java.util.zip.ZipFile.open(Native Method)
  ...

There are a few more interesting details about the native heap that are worth pointing out:

  • The native heap isn't managed by the garbage collector. The portion that makes up the Java heap is, of course.
  • Using -XX:+HeapDumpOnOutOfMemoryError won't actually work on OutOfMemoryErrors caused by exhaustion of the native heap. There's a bug ticket logged for this which was, in my opinion, incorrectly closed as not reproducible.
  • A heap dump won't actually reveal what's happening in the native heap; you'll need process monitoring to figure that out.
  • Loading anything into or out of the native heap from the Java heap that isn't already a byte array requires serialization for insertion and deserialization for retrieval.

can you store stuff in the native heap from within your application?

Honestly, I could write an entire blog post just about off-heap storage (in fact I started to write one here and stopped). I may very well write a post about that, but I'll leave you with the following advice: yes, you can, and you probably shouldn't do it on your own. There are a couple of ways to do this: one being ByteBuffer (the "legit" way), the other being sun.misc.Unsafe, which you have to pry out of the JVM using reflection backdoors.

One detail that may not be obvious is that there are other settings for direct memory in the JVM. There's a flag that can be passed to the JVM called -XX:MaxDirectMemorySize, which is separate from the heap size. Terracotta has an excellent write-up about this which, while it's for their product BigMemory, touches on a lot of interesting data points that have to do with off-heap memory management.

I'd also like to point out that ByteBuffer delegates to a class called Bits which has some really sketchy implementation details when you allocate memory, so you shouldn't make calls to allocate any more often than necessary. Rather than type out the details, I'll just show you the code in all its glory (I put my name in the comments for the lines I wanted to draw your attention to):
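From memory, the problematic path is in Bits.reserveMemory, and it looks roughly like the following. This is a paraphrase of the JDK 6/7-era logic with the fields simplified, not the actual source, but the behavior I want you to notice (a global lock, a forced full GC, and a hard-coded sleep on every starved allocation) is real:

```java
// Rough paraphrase of java.nio.Bits.reserveMemory (JDK 6/7 era); NOT the real source.
public class BitsParaphrase {

    static long maxMemory = 64L * 1024 * 1024; // stands in for -XX:MaxDirectMemorySize
    static long totalCapacity = 0;

    static void reserveMemory(long size) {
        synchronized (BitsParaphrase.class) {  // one global lock for every direct allocation
            if (totalCapacity + size <= maxMemory) {
                totalCapacity += size;
                return;
            }
        }
        System.gc();                           // force a full GC, hoping stale buffers get collected
        try {
            Thread.sleep(100);                 // then just sleep 100ms before trying again
        } catch (InterruptedException x) {
            Thread.currentThread().interrupt();
        }
        synchronized (BitsParaphrase.class) {
            if (totalCapacity + size > maxMemory) {
                throw new OutOfMemoryError("Direct buffer memory");
            }
            totalCapacity += size;
        }
    }
}
```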

If you were trying to store large blocks of data represented by byte arrays without blowing up your old generation heap space, using off-heap storage can be very beneficial. You can put all of the data in a ByteBuffer and read it from an InputStream, though that involves keeping track of offsets of data in the buffer and either writing your own InputStream to support the buffer or finding one that's already implemented in another project.
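A minimal sketch of that offset bookkeeping (pure JDK, no InputStream support, and the names are mine): each record's position and length get tracked so the bytes can be read back out of the direct buffer later.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Stores byte[] records in a single direct buffer and reads them back by index.
public class OffHeapStore {

    private final ByteBuffer buffer;
    private final List<int[]> offsets = new ArrayList<int[]>(); // {position, length} per record

    public OffHeapStore(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity); // off-heap allocation
    }

    // Appends a record and returns its index.
    public int put(byte[] data) {
        offsets.add(new int[] { buffer.position(), data.length });
        buffer.put(data);
        return offsets.size() - 1;
    }

    // Copies a record back onto the Java heap.
    public byte[] get(int index) {
        int[] entry = offsets.get(index);
        byte[] out = new byte[entry[1]];
        ByteBuffer view = buffer.duplicate(); // independent position, so reads don't disturb writes
        view.position(entry[0]);
        view.get(out);
        return out;
    }
}
```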

If you were trying to use off-heap storage as a cache for Java objects, you should probably look at something preexisting like Hazelcast or Terracotta's BigMemory. The two challenges you'll end up with trying to handle this yourself are serialization/deserialization since all objects must be converted to/from byte arrays, and managing how you're accessing that data directly from memory. The serialization and deserialization aspects can be painful from a performance standpoint, especially using Java's built in serialization. You can get significantly better performance using something like Kryo or Jackson Smile which serializes to binary JSON. There's also fast-serialization, which claims to be faster than Kryo and has some off-heap storage implemented with more in the works. Hazelcast recently did a comparison of Kryo and Smile, and the results clearly show a noticeable improvement in performance. Accessing the data is also non-trivial, since you need to allocate large chunks of data and manage offsets yourself to fetch the correct data.

If you were trying to use off-heap storage for dealing with IO, you should check out Netty, which not only works very well and intuitively, but also does the job better than ByteBuffer.

There's a really nice blog post at http://mishadoff.github.io/blog/java-magic-part-4-sun-dot-misc-dot-unsafe/ that goes through the many things you can and shouldn't do with Unsafe if you're interested. There's also a fantastic writeup about using ByteBuffer and dealing with all of its idiosyncrasies at http://mindprod.com/jgloss/bytebuffer.html

Wednesday, August 7, 2013

having trouble with jersey 2.0 and servlet 3.0? you need jersey-core-servlet!

a subtle problem

I just came across this with some example code I'm working on, and the problem is easy to miss.

Let's say you want to use the JAXRS @ApplicationPath annotation for your Jersey application, and you don't want to use a web.xml file anymore, i.e. you want to programmatically define your servlet using Servlet 3.0 mechanisms. You have everything set up just like you've seen in all of the examples online, you run mvn jetty:run, and... Nothing.

two dependencies

There are two dependencies that serve similar purposes: adapting Jersey 2.0's Application class to a servlet instance. One of them includes compatibility for Servlet 2.x, and the other doesn't:
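For reference, here are the two artifacts (assuming Jersey 2.0 coordinates):

```xml
<!-- pulls in -core, and adds the Servlet 3.0 auto-registration support -->
<dependency>
    <groupId>org.glassfish.jersey.containers</groupId>
    <artifactId>jersey-container-servlet</artifactId>
    <version>2.0</version>
</dependency>

<!-- works on Servlet 2.x containers, but without the Servlet 3.0 hooks -->
<dependency>
    <groupId>org.glassfish.jersey.containers</groupId>
    <artifactId>jersey-container-servlet-core</artifactId>
    <version>2.0</version>
</dependency>
```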

You'd think that this just means one has the ability to support Servlet 2.x and the other doesn't. In my experience, the automatic Servlet 3.0 hooks won't actually work with jersey-container-servlet-core at all; they only work using jersey-container-servlet. Amusingly, the comment in the pom that is generated by the Jersey archetype (not the Grizzly one) is, at least in my mind, equally misleading:

In my mind, it makes more sense for this to say:

proof

I set up a very basic example of this behavior that you can feel free to mess around with if you like. The following two classes make up what is just about the most basic example for a Jersey app (though I use ResourceConfig instead of Application just because I like its flexibility better):
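A sketch of those two classes (class names and the application path are mine; each would live in its own file):

```java
// ExampleApplication.java -- picked up automatically via Servlet 3.0 + @ApplicationPath
import javax.ws.rs.ApplicationPath;
import org.glassfish.jersey.server.ResourceConfig;

@ApplicationPath("/")
public class ExampleApplication extends ResourceConfig {
    public ExampleApplication() {
        register(ExampleResource.class);
    }
}

// ExampleResource.java -- about as basic as a resource gets
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("hello")
public class ExampleResource {
    @GET
    public String getHello() {
        return "hello!";
    }
}
```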

To make this work, here's the pom we're going to use. In comments in the dependencies section, you can see which line you need to comment to experience the problem:
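The relevant dependencies look something like this (the version is an assumption); swap which artifact is commented out to reproduce the problem:

```xml
<dependencies>
    <!-- comment this out and uncomment the -core dependency below to see the failure -->
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-servlet</artifactId>
        <version>2.0</version>
    </dependency>
    <!--
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-servlet-core</artifactId>
        <version>2.0</version>
    </dependency>
    -->
</dependencies>
```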

simple, but not so obvious, solution

Hopefully this helps you. This actually had me scratching my head (which means grinding my teeth as I gradually type harder) for the better part of a day before I realized I'd been bamboozled by similar dependencies.