Sunday, December 22, 2013

kryo vs smile vs json part 1: a misguided shootout

this may be my most frustrating post so far

First, a little background.

At some point, even when you can scale horizontally, you start to examine aspects of your application that are easy to take for granted, looking for performance gains. One of those points when dealing with web services is serialization. There's general knowledge that Java serialization is slow, and XML is bloated compared to JSON. JSON is a pretty safe pick in general: it's readable, lightweight, and fast. That said, what happens when you want to do better than JSON in your RESTful web service?

A colleague and I came to this point recently, where the majority of his transaction overhead was spent unmarshalling requests and marshalling responses. This application comes under very high load, so the obvious conclusion was "well, there's a clear place to start to improve things." From there, we started looking at Apache Thrift, Google ProtoBuf (or Protocol Buffers), Kryo, Jackson Smile and, of course as a control, JSON. Naturally, we wanted to invest some time comparing these to each other.

I looked around online a lot at performance benchmarks and found some data dealing with Kryo, ProtoBuf and others located at https://github.com/eishay/jvm-serializers/wiki. The data presented there was very low level, and my goal was quite literally to produce the least sophisticated comparison of these frameworks possible, ideally using the 4-6 line samples on their respective wikis. My reasoning for this was that there is likely a common case of people not investing a huge amount of time trying to optimize their serialization stack, but rather trying to seek out a drop-in boost in the form of a library.

This is where the frustration comes into play. My results don't quite match what I've seen elsewhere, which caused me to question them several times and revisit the benchmarks I was performing. They still don't quite match, and to be honest I'm questioning the benchmark code I linked to after discovering calls to System.gc() all over the place, but I feel like I have enough data that it's worth posting something up here.

the experiment: use cases, setup, metrics, and the contenders

Let's talk about the use cases I was trying to cover first:

  • Don't go over the network. Do everything in memory to avoid external performance influences in the benchmark.
  • Serialize an object that is reasonably complex and representative of something a web service may use.
  • Serialize objects that have both small and large data footprints.
  • Use the most basic setup possible to perform the serialization and deserialization.

The setup was:

  • Run a "warm up" pass before gathering metrics to remove initial load factors on JVM startup that won't be a constant issue, and to fragment the heap slightly to both simulate real-world conditions and not give a potential advantage to a single framework.
  • Run a series of batches of entities to gather enough data to arrive at a reasonable conclusion of performance.
  • Randomize the data a bit to try and keep things in line with real-world conditions. The data is randomized from a small data set, with the assumption being that the differences in size are small enough and the batches are large enough to get a reasonably even distribution, meaning the metrics will converge on a figure that is a reasonable measurement of performance.

The following metrics were recorded:

  • Measure the average time to serialize and deserialize a batch of 100,000 entities.
  • Measure the average size of a response.
  • Measure the average time of an individual serialization/deserialization.

Lastly, the contenders:

The use of the Jackson Smile JAXRS provider may seem odd, but I have a good reason. The basic Smile example is only a few lines, while the Smile JAXRS provider class is almost 1000 (!!!) lines. There's a lot of extra work going on in that class, and I felt it was worth comparing because 1) many people could end up using this adapter in the wild and 2) perhaps there are some optimizations that should be benchmarked.

code

All of the code used in this post can be found at https://github.com/theotherian/serialization-shootout/tree/master/serialization-shootout

Here's a tree representation of what the entity being serialized/deserialized, Car, looks like:

Here are the harnesses being used:
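
The harness gists aren't embedded here, but to give a sense of scale, a minimal sketch of the Kryo harness follows. The Smile and JSON harnesses do the same round trip through Jackson's ObjectMapper; names here are illustrative rather than the exact code from the repo:

    import java.io.ByteArrayOutputStream;

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;

    public class KryoHarness {
        private final Kryo kryo = new Kryo();

        // serialize a Car to bytes
        public byte[] serialize(Car car) {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            Output output = new Output(bytes);
            kryo.writeObject(output, car);
            output.close();
            return bytes.toByteArray();
        }

        // read the Car back out of the bytes
        public Car deserialize(byte[] data) {
            return kryo.readObject(new Input(data), Car.class);
        }
    }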

the results: normal size objects

By normal, I mean on the smaller side; most data is on the order of tens of bytes:

Key data points:

  • Kryo and Smile are clearly more performant than JSON in terms of time spent and size of payload.
  • Kryo and Smile are close: Kryo performs better but Smile is slightly smaller.
  • Kryo has the fastest raw serialization/deserialization performance by a significant amount over both Smile and JSON.
  • The Smile JAXRS provider is significantly slower than its raw counterpart.

the results: large size objects

For this comparison, I added portions of Wikipedia articles as part of the object, all equal in length:

Key data points:

  • Kryo is best in breed by a wide margin here, handling batches in 1.2s vs 1.9s for both Smile and JSON. Serialization and deserialization are both significantly faster.
  • Variance in size is practically nonexistent between all the frameworks.
  • Smile JAXRS really looks like a dog here, taking 2.6s to handle a batch and showing surprisingly poor deserialization performance.

the winner: kryo (with HUGE MASSIVE caveats)

Kryo clearly has some advantages here, but it also has one major disadvantage: Kryo instances are not thread safe. Did you hear that?

KRYO INSTANCES ARE NOT THREAD SAFE!

This caused me to show the same rage DateFormat drew from me years ago. BFD, you may say, thinking "Just create a Kryo instance each time!" Well, what if I told you that each batch of the normal size objects took a whopping NINE SECONDS when I moved the creation of the Kryo object inside the harness' method?

No sir; if you're going to use Kryo you need to have thread local storage for your Kryo instances or you are going to be in for some serious pain. Depending on the load of your application, you may want to pre-create them as a pool within a servlet initializer that is scaled to the number of threads you have in your container.
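
A minimal sketch of the thread local approach (names are mine, not from the benchmark code):

    // pay the Kryo construction cost once per thread instead of once per request
    private static final ThreadLocal<Kryo> KRYO = new ThreadLocal<Kryo>() {
        @Override
        protected Kryo initialValue() {
            return new Kryo();
        }
    };

    // usage anywhere in request handling code:
    Kryo kryo = KRYO.get();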

Quite frankly I'm astonished that there's so much overhead encountered on an instance that isn't thread safe, but I also haven't delved into the API enough to know what the reasons are behind this. Still though, it creates some very annoying design implications that you'll need to make sure are accounted for correctly in your application.

Part of me would sooner call Smile the winner since it doesn't have this particular issue, but after looking at the JAXRS provider for it I'm left scratching my head.

However, when it comes to larger entities, Smile offered marginal improvement over JSON, whereas Kryo clearly won that round.

Based on the results in the first pass, I think Kryo showed the most improvement, but also a fair number of warts.

next steps

I'm far from finished here, but felt compelled to get something published. I plan on doing the following things next:

  • Getting feedback from others about my approach and the data to see if I'm way off the mark.
  • Potentially benchmarking ProtoBuf here too. It's more painful to set up, but worth experimenting with to get more data.
  • Figuring out why Smile JAXRS is so miserably slow.
  • Messing around with Kryo's optimization (an example of this is here).
  • Looking at other BSON libraries.

I do genuinely feel like I'm missing some critical piece of data or type of test here, so if you see anything that could stand to be addressed, please let me know in the comments!

Monday, December 2, 2013

making guava cache better with jmx

caching with jmx is just so much better

If you've never used this before, you're missing out. Being able to remotely check statistics on your cache to measure its effectiveness, as well as being able to purge it at runtime is invaluable. Sadly Guava doesn't have this baked in the way ehcache does, but it's relatively easy to add.

Most of my work is a slightly different take on some work a fellow Github user named kofemann produced (located here) which contains the JMX beans and bean registration logic. I made a few alterations to the code, pulling the registration out into a separate class (I really didn't like the bean doing all that work in the constructor) and adding a refreshAll method.

taking advantage of refresh after write functionality

If you've read my previous blog post about the awesomeness that is Guava's refresh after write functionality, then you'll see how it can be advantageous when it comes to JMX management. If you didn't read my post (shame on you), then it's worth calling out that refresh after write allows for asynchronous loading of cache values, meaning you never block barring the initial load of the cache.

This can be used via JMX management as well by iterating through the keys of the cache and calling refresh for each one, which will load new values without causing clients of the cache to block (as opposed to purging the cache).

Purging a cache is a dangerous thing to do under certain circumstances, since missing values will trigger loading events that will block clients at runtime and potentially overwhelm either your application server or even your underlying data storage. I would argue that ehcache is particularly bad here because of potential read contention caused by write blocking. To clarify: several threads in your application can block waiting for cache values to be reloaded, and all of those blocking threads will then compete over a limited number of read locks after the write lock has been released, potentially causing a CPU spike and considerable latency in your application under the worst conditions. When I say worst conditions, I'm speaking from very recent and harrowing experience, so I have the lumps to say with the utmost certainty this can happen. :)

the implementation

For JMX you need an interface and an implementation. The interface can be found on my Gist and doesn't really need to be shown in the post. The implementation is below; it's really a wrapper around Guava's CacheStats object and the cleanup/invalidateAll methods, as well as my refreshAll method:
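
The Gist itself isn't embedded here, but a condensed sketch of that implementation looks like the following (the MXBean method names are illustrative):

    import com.google.common.cache.LoadingCache;

    public class GuavaCacheMXBeanImpl implements GuavaCacheMXBean {
        private final LoadingCache<Object, Object> cache;

        public GuavaCacheMXBeanImpl(LoadingCache<Object, Object> cache) {
            this.cache = cache;
        }

        // statistics delegate to Guava's CacheStats
        public long getHitCount() { return cache.stats().hitCount(); }
        public long getMissCount() { return cache.stats().missCount(); }
        public double getHitRate() { return cache.stats().hitRate(); }
        public long getEvictionCount() { return cache.stats().evictionCount(); }

        // management operations delegate to the cache itself
        public void cleanUp() { cache.cleanUp(); }
        public void invalidateAll() { cache.invalidateAll(); }

        // the refreshAll addition: reload every key without blocking readers
        public void refreshAll() {
            for (Object key : cache.asMap().keySet()) {
                cache.refresh(key);
            }
        }
    }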

As I said before, refreshAll has the advantage of not causing your application to potentially lock up due to cache contention; everything will load up in the background. Depending on how you have your thread pool set up for performing refreshes, you can also throttle how hard you're hitting your data store by restricting the number of concurrent fetches of data by limiting the threads available.

registering your cache in jmx

This is pretty straightforward: just pass your cache (in this case a LoadingCache because of refreshAll) to the method shown below and you'll expose it via JMX for statistics and management:
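
A sketch of that registration logic; the JMX domain and naming scheme here are my own choices:

    import java.lang.management.ManagementFactory;

    import javax.management.JMException;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    import com.google.common.cache.LoadingCache;

    public class CacheJmxRegistration {
        public static void register(String cacheName, LoadingCache<Object, Object> cache)
                throws JMException {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("com.example.cache:type=GuavaCache,name=" + cacheName);
            server.registerMBean(new GuavaCacheMXBeanImpl(cache), name);
        }
    }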

feedback

Let me know if this works for you; I plan on using this soon in a high load environment, so I'll follow up with any results I find to help out my readers. I feel kind of bad bagging on ehcache so much recently, but it's caused me enough gray hair over the last month that I plan on focusing several blog posts around caching.

Thursday, November 14, 2013

non-blocking cache with guava and listenable futures (or why I hate ehcache some days)

trying to scale? just throw more cache at the problem!

Yes, I'm going to use the term "cache" in lots of ironic and financially metaphoric ways in this post. My apologies.

Caching makes a lot of things possible in everything we do on the Internet and on computer systems in general. That said, caching can also get you into trouble for a variety of reasons such as how wisely you use memory, how performant your cache is under contention, and how effective your cache is (i.e. cache-hit ratio).

If you're using Java, chances are you've heard of ehcache at some point. While there's a lot that ehcache does well, there's a particular aspect of it that in my experience doesn't scale well, and under certain conditions can take down your application. In fact, part of the reason I'm writing this blog post is the aftermath of raising my arms in the air and screaming after examining a performance issue related to ehcache which caused a failed load test today.

mo' (eh)cache, mo' problems

When reading data from ehcache, you end up blocking until a result can be returned (more on that here). While this is necessary the first time you fetch data from the cache, since something has to be loaded in order to be returned, you probably don't need to block for subsequent requests. To clarify: if you're caching something for 1 hour and it takes 5 seconds to load, you probably don't mind serving data that's 1 hour and 5 seconds old, especially if the alternative is blocking a request to your application for 5 seconds to reload it, along with every other request trying to load that data.

Unfortunately, and if I'm wrong here I hope someone will call me out in the comments, ehcache blocks every time the data needs to be reloaded. Furthermore, it uses a ReadWriteLock for all reads along with a fixed number of mutexes (2048 by default), so you can end up with read contention as well given enough load. While I understand the decisions that were made and why, there are cases where it isn't ideal and you don't want to grab any locks and create blocking conditions.

making your cache work for you

To be fair, this problem really manifests itself when you have high contention on specific keys, particularly when reloading events occur. In most cases ehcache performs perfectly fine; this post isn't meant to be a general condemnation of a very popular and useful library. That said, in order to solve the problem we don't really want to block on reads or writes; we want to refresh our data in the background, and only update what consumers of the cache see when we have that refreshed data.

This can be accomplished by having a thread go and reload the data while the cache returns stale entries until new data is available. This accomplishes the goal of not requiring read or write locks outside of the initial population of the cache, which is unavoidable. Even better, Guava Cache has all of this functionality baked in.

refresh after write and listenable futures in guava cache

Guava Cache can handle this by telling the CacheBuilder via refreshAfterWrite to refresh entries by calling the reload method in the CacheLoader instance used to construct your LoadingCache instance. The reload method returns a ListenableFuture, which is the same as a regular Future but exposes a method to register a callback. In this case, the callback is used to update the value in the cache once we've finished retrieving it.

Here's an example of this in action:
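
The original Gist isn't embedded, so here's a sketch that captures the important parts; expensiveLookup is a stand-in for whatever actually loads your data:

    import java.util.concurrent.Callable;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import com.google.common.util.concurrent.ListenableFuture;
    import com.google.common.util.concurrent.ListeningExecutorService;
    import com.google.common.util.concurrent.MoreExecutors;
    import com.google.common.util.concurrent.ThreadFactoryBuilder;

    public class RefreshAfterWriteExample {
        // a named daemon thread, so the JVM can exit and debugging is easier
        private static final ListeningExecutorService REFRESH_POOL =
                MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(1,
                        new ThreadFactoryBuilder().setNameFormat("cache-refresher-%d").setDaemon(true).build()));

        private static final LoadingCache<String, String> CACHE = CacheBuilder.newBuilder()
                .refreshAfterWrite(10, TimeUnit.SECONDS)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) throws Exception {
                        return expensiveLookup(key); // blocks, but only for the initial load
                    }

                    @Override
                    public ListenableFuture<String> reload(final String key, String oldValue) {
                        // asynchronous refresh: readers keep seeing oldValue until this completes
                        return REFRESH_POOL.submit(new Callable<String>() {
                            public String call() throws Exception {
                                return expensiveLookup(key);
                            }
                        });
                    }
                });

        private static String expensiveLookup(String key) throws Exception {
            Thread.sleep(5000); // artificial latency for demonstration
            return key + "-" + System.currentTimeMillis();
        }
    }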

The sleeps are in there to create artificial latency to show what this looks like in action. If you run this you'll see the asynchronous load events kick off and can witness the five seconds of latency between that event firing and the data being updated. You should also notice that reads keep succeeding in the meantime. There is a small spike the first time an asynchronous load fires, which I assume is a one-time resource allocation cost within Guava Cache.

There is one point to consider when doing this, which is how to shut down your refresh thread. In my example I used a ThreadFactory (courtesy of ThreadFactoryBuilder) to set my refresh thread as a daemon thread, which allows the JVM to shut down at the end. I also used the ThreadFactory to name the thread, which I would recommend as a general practice to make debugging easier on yourself whenever you're creating thread pools. In my example there aren't any resource concerns, so it doesn't matter if the thread is terminated, but if you had resource cleanup to perform for some reason you'd want to wire up a shutdown hook to your ExecutorService in your application since the pool would exist eternally.

For a use case like this, you'd want to be judicious about how many threads you're willing to allocate to this process as well. The number should scale somewhat to the maximum number of entries and refresh interval you choose so that you can refresh in a timely manner without consuming too many resources in your application.

conclusion

If you've come across this problem, then I hope this post helps you get past it. To reiterate what I said before, ehcache is a solid API overall, it just doesn't handle this case well. I haven't tested the Guava Cache implementation under high load conditions yet, so it's certainly possible that it has issues I've left out of the post, but from a face value standpoint it addresses the issues I've seen with ehcache in a way that doesn't involve rolling your own solution from scratch.

Feel free to share any feedback or things I may have missed in the comments!

Saturday, September 28, 2013

handlebars server side and client side with jersey

uniting the client and the server side... sort of

In this post, I'm sharing some code I've been working on to try and help address an issue I think is prone to happen with many web applications: keeping server side and client side rendering somewhat consistent.

Web users have grown accustomed to rich web experiences with plenty of nice interactions without page refreshes. Search engines, on the other hand, continue to require that data be rendered directly in the markup you serve (crawlable AJAX is orthogonal to this detail) if you want to have any SEO relevance. You end up left with a dilemma: you have to render some of your data server side to get people coming to your site, and you need to be able to render additional data client side to keep people on your site. While both are easily achievable using loads of technologies, keeping them consistent is slightly more challenging.

handlebars and helper functions

Disclaimer: I'm a handlebars fan boy, and an equally big fan boy of Edgar Espina's handlebars-java, a server-side implementation of the templating engine. I think handlebars adds just enough to mustache to get around some of the violations of DRY and provides better modularity of the view overall.

The biggest, most versatile tool in handlebars' shed is, in my opinion, helper functions. They provide a way to modularize view logic in a nice reusable way. A good example of this is alternating classes when rendering a list of items.

Helper functions get even more powerful in handlebars-java, where you can implement them in Java on the server side. Need to fetch data from the database? Knock yourself out! Need to make a web service call? Go for it! You can also use helper functions written in JavaScript on the server side as well (Edgar was nice enough to implement this after I made a request), so you don't have to worry about your Front End Developers having their code left out.

compilation and precompilation via handlebars-java

This probably sounds confusing at first blush; how could there be compilation and precompilation? Typically for the client side, you'd want to precompile your templates; asking your clients to do the compilation for you is, well, a bit rude. This turns any handlebars template into a JavaScript function that will perform all the rendering you set up in your template. JSON objects are passed to the function to resolve the values referenced with the {{}} syntax.

Everything in handlebars-java is compiled via an ANTLR grammar, and all object graphs passed into it when rendering have their values resolved via reflection; no serialization into JSON takes place. Precompilation within handlebars-java does the same thing that precompilation with handlebars.js would do, except that it can render the precompiled template inline on the page for you on the server side. This may not sound like anything to write home about, but it provides several advantages:

  • Your server can automatically recompile and re-precompile your templates as you make changes; effectively automating the process.
  • The caching mechanisms in handlebars-java are well designed, so you don't end up unnecessarily recompiling anything.
  • Upgrading handlebars.js can be automated since all templates will be re-precompiled (though this could also be undesirable since it could involve major regression testing).
  • Anything compiled server side for server side rendering can be compiled into JavaScript for client side rendering.

Ultimately, you can reuse a good portion of your server side stuff on the client side as well. An example of this is presented further down.

bringing jersey into the mix and mapping helper functions to resources

If we're going to have all sorts of nice client side functionality, pulling down data after the page is rendered, we'll need some resources on the server side to handle it. At the same time though, if we're pulling in data via helper functions, it would be advantageous to keep the code execution paths the same. My proposal to you is that a Java handlebars helper function could shadow a Jersey resource if it's something that is expected to be available client side as well. My suggestion requires some amount of dogmatic conventions that may not be suitable for all audiences:

  • Your @Path annotation and the name of the helper method should be the same.
  • The argument to your resource's GET should be the same as the argument to your handlebars function's context argument.
  • The helper method and the resource should exist in the same class.

I think this makes more sense with an example. Let's say we have a site that allows people to post messages, and other people can subscribe to the messages they post. Since no large website has cornered the market on this yet, I figured it would provide a fun "what if" example.

Here's an example of what I would have written as a resource for messages:
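
The Gist isn't embedded here, but the shape I'm describing is roughly this; Message and the lookup behind it are hypothetical:

    import java.io.IOException;
    import java.util.List;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    import com.github.jknack.handlebars.Options;

    @Path("latestmessages")
    public class LatestMessagesResource {

        // the helper function: invoked server side as {{#latestmessages name}}
        public CharSequence latestmessages(String name, Options options) throws IOException {
            return options.fn(findMessages(name));
        }

        // the resource: invoked client side as GET /latestmessages/{name}
        @GET
        @Path("{name}")
        @Produces(MediaType.APPLICATION_JSON)
        public List<Message> getLatestMessages(@PathParam("name") String name) {
            return findMessages(name);
        }

        private List<Message> findMessages(String name) {
            return MessageStore.latestFor(name); // hypothetical data access
        }
    }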

using handlebars templates with our... err... "functional resource"

Let's take a look at what a Handlebars template rendered serverside might look like:

As you can see, we're using the server-side helper function within the {{#latestmessages}} tag, and passing the data to a partial called messages that looks like this:

Now, ironically, the mobile example I have is larger than the desktop one because of the JavaScript involved with making the calls. Chances are that you would be dealing with a lot more data than my example shows under real conditions, so I'm asking for your forgiveness on the silliness of this disparity. That said, the mobile version:

Allow me to call out the important bits of the page:

  • We're pulling in the handlebars JavaScript (full for the example rather than runtime) and jQuery. It's worth mentioning templates rendered server side don't have to pull in the handlebars JavaScript.
  • Inside of a script tag, we're calling the precompile function that exists server side, and precompiling our messages partial into the page as a JavaScript function.
  • On the server side, we're rendering a series of names we've looked up, and a link on each name. (Note: we're not loading any messages server side)
  • We're binding a function to click events on each name link that will hide other people's messages, and will then either fetch the selected person's messages or display messages that were already fetched for that person.
  • When fetching messages, we're calling latestmessages as a URI and appending the name to the URI, the same way we called {{#latestmessages name}} on the server side. This is the consistency that I've been trying to drive towards.

automatically mapping "functional" resources to their helper functions

I'm really not thrilled at the notion of having resources representing functions, but in this case the direction I'm trying to go is having a function really just provide a means of fetching data and nothing else. Ultimately any Jersey resource maps an HTTP method to a Java method in the resource class, so in my mind they're not so far removed, with the exception that handlebars effectively maintains all helper functions at a single level.

That aside, per another blog post I wrote about resource filters in Jersey, I'm going to create a filter that will register any resource that is also a helper function within handlebars-java and use the value in @Path as the name. In its current form I feel like this is a bit crude, though it is effective if you follow the convention:
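
A crude sketch of the idea follows. Rather than reproduce the Jersey filter wiring, this just shows the reflective registration itself, run once at startup over the resource instances (names are mine):

    import java.io.IOException;
    import java.lang.reflect.Method;

    import javax.ws.rs.Path;

    import com.github.jknack.handlebars.Handlebars;
    import com.github.jknack.handlebars.Helper;
    import com.github.jknack.handlebars.Options;

    public class HelperRegistrar {
        // register each resource whose @Path value matches a method name as a helper of that name
        public static void registerHelpers(Handlebars handlebars, Object... resources) {
            for (final Object resource : resources) {
                Path path = resource.getClass().getAnnotation(Path.class);
                if (path == null) {
                    continue;
                }
                final String name = path.value();
                for (final Method method : resource.getClass().getMethods()) {
                    if (method.getName().equals(name)) {
                        handlebars.registerHelper(name, new Helper<Object>() {
                            @Override
                            public CharSequence apply(Object context, Options options) throws IOException {
                                try {
                                    return (CharSequence) method.invoke(resource, context, options);
                                } catch (ReflectiveOperationException e) {
                                    throw new IOException(e);
                                }
                            }
                        });
                    }
                }
            }
        }
    }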

By doing this, you're creating a one-to-one relationship of a helper function to a resource for consistency purposes.

an example project to mess around with

I set up a project on Github that you can use to mess around with this pattern of doing things. It's by no means perfect, and I plan on playing around with it more, but I think it can offer a fair amount of flexibility if you're willing to buy into the pattern as a whole.

Please feel free to share your thoughts on this approach; I'm interested to hear your feedback!

Saturday, August 17, 2013

jersey server side client and resource with in memory request handling

who's to say who's crazy and who's not?

This is a question from one of my favorite comedians, Otis Lee Crenshaw. He answers his own question with this (paraphrased):

If you can play the guitar and harmonica at the same time like Bob Dylan or Neil Young you're considered a genius. Make that extra effort to strap a pair of cymbals to your knees and people will cross the street just to get the hell away from you.

I guess what I'm saying is, that in this post, I think the cymbals may be firmly affixed.

I've spent a fair amount of time thinking about the problem this blog post is centered on, and I think that the solution I'm suggesting has value, though I could certainly understand people not liking it. At all. So, without further ado...

the problem

Let's say you have a web application, and you have some data (a resource, if you will) that you're pulling in on the server side and spitting out into a view using a typical MVC pattern. For the sake of giving some context to this problem, let's say your data is messages posted by a user. Since there's no massive website that deals with this kind of data at unfathomable scale, I thought this would be a nice unique use case.

Your MVC application has worked well, but it could stand to be improved. At some point, you decide on pursuing either one or both of the following (since the net result is the same):

  • There's not much value in displaying more than the last 20 messages for any given user up front on a page. Fetching additional posts on the client side via an AJAX call makes more sense.
  • This website should have a mobile presence, but data consumption is a concern. Loading smaller groups of messages on a user by user basis on demand is preferable for the user experience.

In either case, a popular solution is to create a RESTful endpoint that exposes those messages. A simple call to /messages/{user}?offset=x&limit=y returning a nice lightweight JSON representation, and we've satisfied both cases. Problem solved!

Or is it...

You don't want to pull down everything this way, or at least not on the desktop site. There's value in pulling some of this data server side: your clients don't have to make additional connections to get the initial view, you don't have to consume additional resources on the server side for handling network traffic, and you may be able to more easily reuse other resources like database sessions.

At the same time, (I think) there's value in being consistent. Is the controller of your desktop view accessing your messages the same way your resources are? Are you sure you're serializing and deserializing to the same values client side as you are server side? You could programmatically invoke your resources and get typesafe return values, but is the behavior different when it's a request vs a method call? How are the execution paths different from one another?

Do they have to be different at all?

an idea for a solution

There's an obvious problem with making calls to the resource via an actual HTTP request when you're talking about a client and resource that exist in the same JVM: all the overhead of actually making a network connection via localhost. That may be on the order of a few milliseconds, but those milliseconds add up, and studies have shown end users have little patience for latency.

If you make enough calls, this can easily turn into tens or even hundreds of milliseconds of latency.

As it turns out, you can avoid this network hop entirely, but it requires some (mildly kludgy) work.

Jersey provides a type called Connector for its client API. This type is used to handle connections that are created from ClientRequest instances. I mentioned in a previous post how to set up an Apache HTTP Client connector if you'd like to see an example of this.

More interestingly, the type ApplicationHandler can be injected using Jersey's @Context based injection system, and represents a hook into the Jersey server instance that is only a few levels removed from the methods of the servlet container itself. All of Jersey's routing logic is downstream from ApplicationHandler, so sending requests to it means you're largely getting the full HTTP request taste without the calories.

We're going to need to capture the ApplicationHandler instance at startup. Unfortunately, the code to do this is quite ugly in its current state. You could no doubt do this more cleanly using dependency injection, but I think you get the point. First we'll need a provider for Jersey that will allow us to capture the instance, and a way to construct Connector instances with that instance:
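
Here's a sketch of one way to capture it; note this uses a ContainerLifecycleListener rather than the exact provider from my Gist, so treat the mechanism as an approximation:

    import org.glassfish.jersey.server.ApplicationHandler;
    import org.glassfish.jersey.server.spi.Container;
    import org.glassfish.jersey.server.spi.ContainerLifecycleListener;

    public class ApplicationHandlerCapture implements ContainerLifecycleListener {
        private static volatile ApplicationHandler handler;

        @Override
        public void onStartup(Container container) {
            handler = container.getApplicationHandler();
        }

        @Override
        public void onReload(Container container) {
            handler = container.getApplicationHandler();
        }

        @Override
        public void onShutdown(Container container) {
            handler = null;
        }

        // a way to construct Connector instances with the captured handler
        // (ServerSideConnector is built below)
        public static ServerSideConnector newConnector() {
            return new ServerSideConnector(handler);
        }
    }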

Then we'll need to wire it up to the application:
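
Assuming a ResourceConfig based setup, that amounts to a registration call:

    ResourceConfig config = new ResourceConfig(MessageResource.class);
    config.register(new ApplicationHandlerCapture());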

Now, we need the connector itself.

translating different request and response representations

Most of the work done in the connector is ugly get/set logic; nothing sophisticated or glamorous. It still needs some explanation though.

At a high level, we need to implement the apply method, and inside get a ContainerRequest from a ClientRequest, pass it to the ApplicationHandler, and then convert a ContainerResponse to a ClientResponse. Here's the skeleton code:
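
A sketch of that skeleton (the full version is linked at the end of this section):

    import org.glassfish.jersey.client.ClientRequest;
    import org.glassfish.jersey.client.ClientResponse;
    import org.glassfish.jersey.client.spi.Connector;
    import org.glassfish.jersey.server.ApplicationHandler;
    import org.glassfish.jersey.server.ContainerRequest;
    import org.glassfish.jersey.server.ContainerResponse;

    public class ServerSideConnector implements Connector {
        private final ApplicationHandler applicationHandler;

        public ServerSideConnector(ApplicationHandler applicationHandler) {
            this.applicationHandler = applicationHandler;
        }

        @Override
        public ClientResponse apply(ClientRequest clientRequest) {
            // 1. translate the client request into a container request
            ContainerRequest containerRequest = toContainerRequest(clientRequest);
            // 2. hand it straight to the ApplicationHandler; no network involved
            ContainerResponse containerResponse = invoke(containerRequest);
            // 3. translate the container response back into a client response
            return toClientResponse(containerResponse, clientRequest);
        }

        // toContainerRequest, invoke, and toClientResponse are broken down below;
        // the async apply, getName, and close methods are in the linked implementation
    }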

Let's dig into building the request first:
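
A hedged sketch of that translation; the base URI derivation and header copying here are approximations of the linked code:

    private ContainerRequest toContainerRequest(ClientRequest clientRequest) {
        URI requestUri = clientRequest.getUri();
        URI baseUri = UriBuilder.fromUri(requestUri).replacePath("/").build();

        // the two nulls are the SecurityContext and PropertiesDelegate discussed below
        ContainerRequest containerRequest =
                new ContainerRequest(baseUri, requestUri, clientRequest.getMethod(), null, null);

        // copy the headers
        for (Map.Entry<String, List<String>> header : clientRequest.getStringHeaders().entrySet()) {
            containerRequest.getHeaders().put(header.getKey(), header.getValue());
        }

        // copying the request body is the fiddly part; the linked implementation
        // serializes the client entity and hands it over as the request's entity stream
        return containerRequest;
    }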

We end up copying the main three pieces of data we need:

  • The URI data (needed to construct the ContainerRequest)
  • The headers
  • The request body (or entity)

I should call out that two of the arguments to the ContainerRequest constructor are null. The first is a reference to a SecurityContext instance, which is outside the scope of this post. The second is called PropertiesDelegate, which isn't actually Javadoc'd. The example works without it, though I may go back and dig into what it does later.

Now that we have the request, we need to send it to the ApplicationHandler:
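
In sketch form, with the asynchronous result collapsed back into a blocking call:

    private ContainerResponse invoke(ContainerRequest containerRequest) {
        try {
            // ApplicationHandler.apply is asynchronous; block to finish the in-memory round trip
            Future<ContainerResponse> future = applicationHandler.apply(containerRequest);
            return future.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new ProcessingException(e);
        }
    }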

As you can see, we get a ContainerResponse instance back, which we'll need to convert to a Response, which is then used to create the ClientResponse instance we have to return:

Full disclosure: I don't know if I'm covering all the bases as far as what needs to be set on Response. I have for my example, but I don't know what I may be missing for other use cases at this point. I will update this post as I discover other data points that may need to be handled.

With this in place, we have apply fully implemented:

There's a full implementation of the ServerSideConnector type located here. Another method has to be implemented for asynchronous functionality that closely resembles the apply method shown, along with a close and getName method that can be seen in the linked code.

use within a client and performance

Here's an example client that handles resources that deal with Messages. As you can see, the invocation of the client is pretty cookie cutter, and the Connector implementation can be swapped out per the lines in the constructor:
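
The Gist isn't embedded, but the important part looks roughly like this; Message is hypothetical, and the connector(...) call reflects the Jersey 2.0-era ClientConfig API:

    public class MessageClient {
        private final Client client;

        // swap the Connector here: an HTTP-based connector, or the ServerSideConnector from above
        public MessageClient(Connector connector) {
            ClientConfig config = new ClientConfig();
            config.connector(connector);
            this.client = ClientBuilder.newClient(config);
        }

        public List<Message> getMessages(String user, int offset, int limit) {
            return client.target("http://localhost:8080")
                    .path("messages").path(user)
                    .queryParam("offset", offset)
                    .queryParam("limit", limit)
                    .request(MediaType.APPLICATION_JSON)
                    .get(new GenericType<List<Message>>() {});
        }
    }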

At the beginning of this post I called out performance as a concern due to the latency of passing data over a network connection when it could be passed directly via memory instead. Let's see if the in-memory solution indeed performs better.

HTTP client requests:

In-memory requests:

Yep, I'd say it's faster.

is this really necessary?

Probably not, but I wanted to figure out how to do it. I do think there's value in hitting a consistent set of execution points for a single type of transaction, and that one way to keep it consistent is to have a single entry point, which this accomplishes (though with some overhead).

One major motivation for this was a continual challenge I've faced in regard to supporting both desktop and mobile versions of a website, and keeping those sites consistent.

Another use case I can think of for this is being able to break apart an application into more of a service oriented architecture as time permits. Up front it may not make sense to split an application up into too many services, but as your organization, web traffic, and resources grow, your need to split your application up to scale either to your customer base or your development staff will grow as well. Using a client right out of the gate, and being able to abstract interactions with the resources backing it from internal to external by changing two lines of code, does, in my opinion, have value.

I'm interested to hear feedback about this solution, because I know it strays from the norm considerably. Like I said, the cymbals are firmly attached; are you planning on crossing the street?

Sunday, August 11, 2013

setting up jersey client 2.0 to use httpclient, timeouts, and max connections

the goal

I've been trying to get my head wrapped around Jersey 2.0 client after playing around with the server a fair amount, having some experience configuring the client for Jersey 1.x. There's typically a standard set of things I look to configure:

  • Connection pooling via HttpClient
  • The read timeout
  • The connection timeout
  • Maximum number of connections
  • Maximum number of connections per host (or route)
  • JSON support

I decided to figure out how to accomplish this in Jersey 2.0, which was less obvious than I had anticipated. It's important to note that you absolutely should set these values up in your application, because the defaults are a combination of being overly restrictive and overly generous. The defaults when using HttpClient are as follows:

  • Infinite connection and read timeouts
  • By default when using Jersey's ApacheConnector you will get an instance of BasicClientConnectionManager which limits you to a single connection (though it is thread safe)
  • If you're configuring a PoolingClientConnectionManager instead, you'll have a maximum of 20 total connections with a maximum of 2 connections per host (or route)

As I'm sure you've already realized, these defaults are not going to scale at all.

for every parameter a config, and for every config a parameter

I've never been a fan of classes that contain a bunch of String based properties for an API because it can be difficult to figure out what goes where. There's no type to easily match against, so any method like setProperty(String key, Object value) could have anything set, and unless it's Javadoc'd that could be something from FooProperties.* or BarProperties.* for example.

(For the record, I like to define values like this with enum instances, sometimes with a common interface, to make it easier to find what should be used via an IDE)

I'm going to break down each part of the list above and how to go about configuring that feature by creating a class called ClientFactory one piece at a time.

connection and read timeouts

To begin, we need to start with a class called ClientConfig. Jersey uses this to configure client instances via its ClientBuilder API. We can set the connection and read timeouts with this class, as shown below:
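
A sketch, with the timeout values chosen arbitrarily for illustration:

    ClientConfig clientConfig = new ClientConfig();
    clientConfig.property(ClientProperties.CONNECT_TIMEOUT, 500); // milliseconds
    clientConfig.property(ClientProperties.READ_TIMEOUT, 2000);   // milliseconds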

using httpclient with connection pooling

Now let's set up a ClientConnectionManager that uses pooling. We should also set the limits on the number of connections a little bit higher, since 20 total and 2 per host is on the low side (we'll use 100 and 20 instead):
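
Along these lines:

    PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
    connectionManager.setMaxTotal(100);          // max connections overall
    connectionManager.setDefaultMaxPerRoute(20); // max connections to any single host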

You have to use the implementation PoolingClientConnectionManager to set the max values; the interface ClientConnectionManager doesn't provide these methods. You can also be much more fine grained about how many connections per host are allowed with the setMaxPerRoute method. For example, let's say it was ok to have 40 max connections when hitting localhost:
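
That override looks like this:

    connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost("localhost")), 40);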

configuring the jersey connector... with the config... which needs the connector

This code is a little obtuse, but it's how you need to have ClientConfig know about HttpClient and have HttpClient know about the timeout settings. We need to create a new ApacheConnector from the configuration, which specifies the pooled connection manager, and then set the connector in the configuration. Hopefully this makes more sense looking at the code:
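
Roughly the following, per the Jersey 2.0-era API (newer Jersey releases moved to ApacheConnectorProvider):

    // the config has to know about the connection manager before the connector is built from it
    clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);
    ApacheConnector connector = new ApacheConnector(clientConfig);
    clientConfig.connector(connector);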

constructing the client and adding json support

Now we have all the configuration we need to actually create the Client instance and set up JSON support:
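
Which amounts to registering Jersey's Jackson support and building the client:

    clientConfig.register(JacksonFeature.class); // from jersey-media-json-jackson
    Client client = ClientBuilder.newClient(clientConfig);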

putting it all together

Here's the full view of everything we set up to build the client. We're configuring the timeouts, setting up a pooled connection manager instance, restricting the maximum total number of connections and connections per host, adding an override for the maximum connections to localhost, and configuring JSON handling:

As a point of reference, here are the relevant pieces for a Maven pom to get the correct dependencies for this example:

Saturday, August 10, 2013

using generic types in response entities with jersey 2.0 client

a special type for a special case

Java Generics are great, but they can truly be a beast to deal with once you start dealing with Class references that can't carry a generic type with them. In the case of using Jersey 2.0's client API, you may come into a situation where a resource returns a generified type, such as List<Message>. You can't map the entity to List<Message>.class, but thankfully Jersey has a very easy way to handle this case:
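
A sketch, assuming an existing Client instance and a hypothetical Message resource:

    List<Message> messages = client.target("http://localhost:8080")
            .path("messages")
            .request(MediaType.APPLICATION_JSON)
            .get(new GenericType<List<Message>>() {});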

That's it! Just binding it to GenericType<List<Message>>(){} takes care of it!

Source: http://jersey.576304.n2.nabble.com/How-can-I-parse-a-java-util-List-lt-gt-Is-it-supported-by-the-Jersey-client-td2300852.html

understanding java's native heap (or c-heap) and java heap

inspiration for this post

Not that long ago, I was diagnosing an issue on Jenkins where I was seeing an OutOfMemoryError in a native API. "What hijinks be these?" I thought to myself, since while the memory footprint was high GC wasn't getting out of control. Of course, like so many things, I had to learn what the cause of these errors was within the context of a service outage.

my first exposure to java's native heap

When we think of the Java heap, we usually think of this chunk of memory that is kept in order for us by the Garbage Collector, and why wouldn't we? Any call to the new operator is allocating memory within the heap for whatever instance we're creating, and the Garbage Collector is keeping tabs on that instance so that, when it's no longer in use, the memory can be reclaimed within the heap. That last bit is important. I'm sure most people already know this, but it's still worth calling out that the heap doesn't shrink once it's grown, and will grow up to its max heap (-Xmx) size.

If you're using a 32-bit JVM, the max you can set your heap to is 4GB (or less depending on the OS), which is inclusive of the max heap and permgen size. Conversely, on a 64-bit JVM, you're limited by the machine as to what you set as the boundaries to your heap (depending on JVM implementation and CompressedOops).

What you have left to work with, in both of these limitations, is the free space available to the native heap (or c-heap). I'm calling out that this is the free space available because the Java heap we've all grown to know and love is a section of the native heap; they're not mutually exclusive areas of memory. This space is used for native APIs and data, and it can most definitely run out.

Let's say you're using a 32-bit JVM, your OS can handle a 4GB heap, and you've allocated 3.5GB as the max heap and 384MB to permgen. Should you max those out, you've left your native heap with 128MB to do everything it needs to. In some applications this may not be a problem, but under certain conditions, say if you're heavily using IO, you could end up exhausting this memory, leaving you with an out of memory error in a native method. For example:

java.lang.OutOfMemoryError
  at java.util.zip.ZipFile.open(Native Method)
  ...

There are a few more interesting details about the native heap that are worth pointing out:

  • The native heap isn't managed by the garbage collector. The portion that makes up the Java heap is, of course.
  • Using -XX:+HeapDumpOnOutOfMemoryError won't actually work on OutOfMemoryErrors caused by exhaustion of the native heap. There's a bug ticket logged for this which was, in my opinion, incorrectly closed as not reproducible.
  • A heap dump won't actually reveal what's happening in the native heap; you'll need process monitoring to figure that out.
  • Loading anything into or out of the native heap from the Java heap that isn't already a byte array requires serialization for insertion and deserialization for retrieval.

can you store stuff in the native heap from within your application?

Honestly, I could write an entire blog post just about off-heap storage (in fact I started to write one here and stopped). I may very well write a post about that, but I'll leave you with the following advice: yes you can, and you probably shouldn't do it on your own. There are a couple of ways to do this: one is ByteBuffer (the "legit" way), the other is sun.misc.Unsafe, which you have to pry out of the JVM using reflection backdoors.

One detail that may not be obvious is that there are other settings for direct memory in the JVM. There's another flag that can be passed to the JVM called -XX:MaxDirectMemorySize which is different than heap size. Terracotta has an excellent write up about this, which while it's for their product BigMemory touches on a lot of interesting data points that have to do with off-heap memory management.

I'd also like to point out that ByteBuffer delegates to a class called Bits which has some really sketchy implementation details when you allocate memory, so you shouldn't make calls to allocate any more often than necessary. Rather than type out the details, I'll just show you the code in all its glory (I put my name in the comments for the lines I wanted to draw your attention to):

If you were trying to store large blocks of data represented by byte arrays without blowing up your old generation heap space, using off-heap storage can be very beneficial. You can put all of the data in a ByteBuffer and read it from an InputStream, though that involves keeping track of offsets of data in the buffer and either writing your own InputStream to support the buffer or finding one that's already implemented in another project.
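
To make the offset bookkeeping concrete, a minimal sketch (loadSomeBytes is a hypothetical data source):

    // allocate once and reuse; as noted above, allocation is the expensive part
    ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);

    // write: record where the payload starts and how long it is
    byte[] payload = loadSomeBytes();
    int offset = buffer.position();
    int length = payload.length;
    buffer.put(payload);

    // read: jump back to the recorded offset and copy the payload out
    byte[] copy = new byte[length];
    buffer.position(offset);
    buffer.get(copy);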

If you were trying to use off-heap storage as a cache for Java objects, you should probably look at something preexisting like Hazelcast or Terracotta's BigMemory. The two challenges you'll end up with trying to handle this yourself are serialization/deserialization since all objects must be converted to/from byte arrays, and managing how you're accessing that data directly from memory. The serialization and deserialization aspects can be painful from a performance standpoint, especially using Java's built in serialization. You can get significantly better performance using something like Kryo or Jackson Smile which serializes to binary JSON. There's also fast-serialization, which claims to be faster than Kryo and has some off-heap storage implemented with more in the works. Hazelcast recently did a comparison of Kryo and Smile, and the results clearly show a noticeable improvement in performance. Accessing the data is also non-trivial, since you need to allocate large chunks of data and manage offsets yourself to fetch the correct data.

If you were trying to use off-heap storage for dealing with IO, you should check out Netty, which not only works very well and intuitively, but also does the job better than ByteBuffer.

There's a really nice blog post at http://mishadoff.github.io/blog/java-magic-part-4-sun-dot-misc-dot-unsafe/ that goes through the many things you can and shouldn't do with Unsafe if you're interested. There's also a fantastic writeup about using ByteBuffer and dealing with all of its idiosyncrasies at http://mindprod.com/jgloss/bytebuffer.html

Wednesday, August 7, 2013

having trouble with jersey 2.0 and servlet 3.0? you need jersey-container-servlet!

a subtle problem

I just came across this with some example code I'm working on, and the problem is easy to miss.

Let's say you want to use the JAXRS @ApplicationPath annotation for your Jersey application, and you don't want to use a web.xml file anymore, i.e. you want to programmatically define your servlet using Servlet 3.0 mechanisms. You have everything set up just like you've seen in all of the examples online, you run mvn jetty:run, and... Nothing.

two dependencies

There are two dependencies that serve similar purposes; adapting Jersey 2.0's Application class to a servlet instance. One of them includes compatibility for Servlet 2.x, and the other doesn't:

You'd think that this just means one has the ability to support Servlet 2.x and the other doesn't. In my experience, automatic Servlet 3.0 hooks won't actually work with jersey-container-servlet-core at all; they only work using jersey-container-servlet. Amusingly, the comment in the pom that is generated by the Jersey archetype (not the Grizzly one) is, at least in my mind, equally misleading:

In my mind, it makes more sense for this to say:

proof

I set up a very basic example of this behavior that you can feel free to mess around with if you like. The following two classes make up what is just about the most basic example for a Jersey app (though I use ResourceConfig instead of Application just because I like its flexibility better):

To make this work, here's the pom we're going to use. In comments in the dependencies section, you can see which line you need to comment to experience the problem:

simple, but not so obvious, solution

Hopefully this helps you. This actually had me scratching my head (which means grinding my teeth as I gradually type harder) for the better part of a day before I realized I'd been bamboozled by similar dependencies.

Sunday, July 28, 2013

hibernate, joins, and max results: a match made in hell

what are you ranting about now?

Hibernate's ability to handle joins when specifying a max number of results via Criteria can cause some serious pain. The real problem here is that nothing in the mappings will look wrong. In fact, in several cases the mappings will return the correct data, but with some serious consequences. This is a case where you should probably learn how the hot dogs are being made.

give me something to go off of

For this example, we're going to be using the following two classes, which represent a one to many relationship:
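
The actual classes are in the linked project; a representative sketch of their shape follows (the @OneToMany mapping on things is what changes throughout the examples below):

    @Entity
    public class Person {
        @Id
        private long id;

        private String name;

        // mapping annotations vary per example below
        private List<Thing> things;
    }

    @Entity
    public class Thing {
        @Id
        private long id;

        private String name;

        @ManyToOne
        private Person person;
    }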

The tables those entities represent contain the following data, and will exist in an HSQLDB instance:

In most cases, we're going to try and get data out of these tables with the following Criteria:
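
Reconstructed from the description, the Criteria looks like this:

    List<Person> persons = session.createCriteria(Person.class)
            .addOrder(Order.asc("id"))
            .setMaxResults(2)
            .list();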

Note that an Order is being added here to prevent any lack of determinism in result sets from obfuscating what's happening.

alright, so what happens?

Nothing good. Let's start off mapping the @OneToMany as such:
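
Reconstructed from the description that follows:

    @OneToMany(mappedBy = "person", fetch = FetchType.EAGER)
    @Fetch(FetchMode.JOIN)
    private List<Thing> things;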

We're using eager loading with a JOIN fetch type, and we're defining that the relationship is mapped by the person field in Thing.

Now, let's run our Criteria and take a look at the SQL generated:

Looks like what would be expected. What does persons contain?

Hold on just a second. Why are there two instances of 'Ian' being returned?

Well, the limit applies to the entire result set, not just the parent entities. Since the join is matching the two children, those are the first two rows returned. As a result, not only are we just getting one person back, but we're getting two of the same instance in the list.

Typically, if you're using a JOIN fetch with a Criteria, there's a way to eliminate duplicate root entities:
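
That mechanism is the distinct root entity result transformer, applied to the Criteria from earlier:

    criteria.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);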

Unfortunately, it doesn't work here. Since maxResults applies to the underlying result set and not the entities Hibernate is pulling in, adding this restriction actually just limited the list to be a single instance:

that was using join. what about subselect?

I've seen many instances where people switch their OneToMany associations to use SUBSELECT rather than JOIN because they see duplicate parent entities and don't know about the DISTINCT_ROOT_ENTITY transformer. That said, let's take a look at what SUBSELECT does. Here's the mapping:
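
Reconstructed from the description, the JOIN fetch mode simply becomes SUBSELECT:

    @OneToMany(mappedBy = "person", fetch = FetchType.EAGER)
    @Fetch(FetchMode.SUBSELECT)
    private List<Thing> things;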

Let's run it through the original Criteria setup again:

Alright, how's that output looking?

Great! What queries did it run to get the data this time?

WHAT?! Now we have four queries? DO NOT WANT!

Breaking down the SQL a bit, we can see that the first query only selects records from PERSON, while the second selects only records from THING. But...

It didn't just select the THING records for the PERSON records we wanted, it selected every record from THING. The series of ids that the IN clause is selecting against is totally unbounded; the limit statement isn't applied.

Still, that doesn't explain the third and fourth queries, which are indeed the same. The reason these execute is that the Thing instances loaded reference Person instances which don't exist in the session yet. Since they're mapped by the person field in Thing, Hibernate will select each missing Person with a separate query.

Imagine if we had 1000 records in PERSON, each having two corresponding records in THING. We would end up executing 1000 queries to get the data! Generally the enemy with Hibernate is n+1 selects; in this case you have m-n+2 selects, where n is the number of results and m is the total number of records in the parent table.

what if you use a join column instead of mapped by?

Very well, let's change the mapping again:
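
Reconstructed from the description, mappedBy gives way to @JoinColumn:

    @OneToMany(fetch = FetchType.EAGER)
    @Fetch(FetchMode.SUBSELECT)
    @JoinColumn(name = "PERSON_ID")
    private List<Thing> things;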

Side note: the name used in @JoinColumn appears to be case sensitive, at least in HSQLDB and Oracle. Using "person_id" instead of "PERSON_ID" will yield the same additional selects in the last example.

Now what queries ran?

If you think we're out of the woods here, I assure you we're not. Using @JoinColumn changed the relationship such that we're not issuing individual statements for parents; however, we're still fetching in everything. Hibernate still runs the same subquery for IN as before, except that we join to the parent table up front, which avoids the additional individual selects:

It's important to realize that the entire contents of both tables are still being pulled into memory here. Again, if you're dealing with a huge number of records, you could easily run out of heap space.

what about using hql instead?

You can try and run this query with HQL. However, the result is not much different than before. Here's the code to do so:
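
A sketch of the HQL equivalent; the join fetch is the crucial part:

    List<Person> persons = session
            .createQuery("select distinct p from Person p join fetch p.things order by p.id asc")
            .setMaxResults(2)
            .list();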

Running it will generate the following SQL query:

The results returned are actually correct; the first two people and their associated things are in the list. Notice again that we have an unbounded query; the limit statement was never applied. Just before the SQL is logged, Hibernate logs a warning:
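
From memory, the warning Hibernate 4.x logs here reads along these lines (treat the exact wording as approximate):

    HHH000104: firstResult/maxResults specified with collection fetch; applying in memory!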

This means that Hibernate is fetching everything and then trying to apply the first/max result restrictions in memory. As you may imagine, this is also undesirable. This happens because of the join fetch used in the HQL query. Even more comforting is what the JPA spec has to say about this kind of interaction:

The effect of applying setMaxResults or setFirstResult to a query involving fetch joins over collections is undefined. (JPA "Enterprise JavaBeans 3.0, Final Release", Chapter 3.6.1 Query Interface)

well, if subselect, join, and hql don't work, that just leaves select

Correct, however, FetchMode.SELECT by itself will cause an n+1 select problem.

Luckily there's a way to mitigate that problem.

Hibernate has another annotation called @BatchSize. Unfortunately, the JavaDoc for this annotation is simply "The batch size for SQL loading" and doesn't really explain what it's doing. What the annotation really does is generate a select for the association against multiple ids (up to the number specified), automatically using the ids of entities in the session that haven't had their association loaded yet. In other words, if you were to get 5 instances of Person, and you had your batch size set to 5, Hibernate would issue a single select for instances of Thing matching the 5 ids of the Person instances in the session when any association needed to be loaded. If you had 7 Person instances in the session with a batch size of 5, Hibernate would issue two selects: one for the first 5 and another for the other 2.

Here's the code for this in action (you can use FetchType.LAZY or FetchType.EAGER, they both work):
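
A sketch of the mapping, with a deliberately tiny batch size for illustration:

    @OneToMany(mappedBy = "person", fetch = FetchType.LAZY)
    @Fetch(FetchMode.SELECT)
    @BatchSize(size = 2)
    private List<Thing> things;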

Here's the SQL generated:

And here are the results, and importantly, the ones we wanted without anything extra being loaded:

The example above does use lazy loading just to call out that it will work in this case since the other examples were eager. If you were to try lazy loading in any other capacity, you'd end up running into other issues:

  • Using FetchMode.SELECT will cause n+1 selects without @BatchSize (and would still do so if you had a batch size of 1)
  • Using FetchMode.SUBSELECT will cause the same unbounded query we saw when using SUBSELECT eagerly, and will generate m-n+2 selects if it uses @OneToMany's mappedBy attribute, where m is the total number of records in the parent table

ok, so the problem can be solved. why the rant?

The main reason I tried to outline so much in this post is that the other examples don't look wrong. Common sense would lead most to believe that the mappings were correct and that the limit would be applied as expected to the parent and children. You don't even see the issue unless you have show_sql turned on, your DBA yells at you for DDoSing the database with tiny individual selects, or your application runs out of memory due to loading huge result sets in memory. In all of those cases, you won't really notice the issue until your data gets a little larger, and by then you could be staring down a production outage depending on the scope of the problem.

I'd never fault someone for getting this wrong, because on paper the annotations seem so logical. There have been several bugs logged for this issue (HB-520, HHH-304 and HHH-2666), and even Gavin King himself says in one of the tickets:

Any limit that gets applied to a query should also get applied to the subselect of any subsequent collection fetches.

Using @BatchSize is syntactically trivial, but it does require some thought. You don't want to pick too high or too low a number for the batch size. Too low and you'll generate many selects. Too high and you could end up loading more data than you needed, though this is probably more applicable to lazy loading than eager, since eager is going to pull in the association for every parent in the session anyway. Having too high of a batch size could also adversely affect database performance, since the range of the total number of arguments provided to the query is going to be greater unless you have some unusual degree of uniformity in your data; i.e. a batch size of 100 will generate as many as 100 unique statements. As long as you have some notion of the number of entities you're fetching in areas where the number of results is bound by a max or a where clause, you should be able to pick a sensible number that will keep the total number of queries you run low.

Alternatively, you could create a separate entity to handle different association loading strategies. I've used this before to enforce a type safe contract that can control what mechanism you're using to load associations, as well as in a polymorphic capacity. If you had an area of your application where you knew you'd be using a max results limitation, you could have your persister logic look up entities using FetchMode.SELECT with @BatchSize, and use another FetchMode and/or FetchType on a different entity when you didn't need batching. Leveraging Hibernate's @MappedSuperclass annotation means you can leverage polymorphism for this use case nicely; declaring the getter for the association in the superclass and mapping the association at the field level in the subclass.

As far as my knowledge of Hibernate is concerned, using batching (or batching with separate entities) is the optimal case for dealing with joins combined with max result limitations. If anyone reading this knows of a better way, or can see a flaw in my logic or code above, please let me know in the comments!

Also, all the files used in this project can be found at https://github.com/theotherian/hibernate-gotchas. Feel free to check out the project and mess around with this yourself!

Saturday, July 27, 2013

creating an in-memory hsqldb instance for using hibernate in a maven build

a simple setup for testing

Lately I've been working on some Hibernate examples I plan on sharing soon, and I wanted to create a simple in-memory instance of HSQLDB to test against. The bare necessities of what I needed proved to be scattered across several resources, so I thought I'd aggregate everything in one place.

This article is assuming a few things:

  • You're using a Maven build
  • You're using Hibernate
  • You don't care about persisting to disk or database state

creating the server

Setting up the server is mostly straightforward, but I do have a subtle change to make the Maven aspect easier:
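
Here's a sketch of the server setup; the database alias and port are assumptions, but the target/ prefix on the path is the important part:

import org.hsqldb.server.Server;

public class InMemoryDatabaseServer {

    public static void main(String[] args) {
        Server server = new Server();
        server.setDatabaseName(0, "test"); // assumption: any alias will do
        // "mem:" makes the database in-memory only; the "target/" prefix pushes
        // HSQLDB's log, script, and properties files into Maven's build directory.
        server.setDatabasePath(0, "target/mem:test");
        server.setPort(9001); // HSQLDB's default port, stated explicitly
        server.start();
    }
}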

Note that the location is set to target/mem:test. This tells HSQLDB to create an in-memory-only database, which will never persist changes to disk. That said, HSQLDB still writes out a few files (a log, a script, and a properties file) whose names are based on the database name. Prefixing the name with target/ writes them to Maven's default build directory, so your workspace isn't littered with log data you'd have to ignore from version control.

configuring hibernate

Now that we can fire up the database, we can connect to it via Hibernate. The config file below will connect, but also has a subtlety similar to what was used above:
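
The original config was a plain hibernate.cfg.xml; expressed programmatically it amounts to something like the sketch below. The driver class, dialect, and sa credentials are the stock HSQLDB values, the hbm2ddl setting is an assumption, and the connection URL is my reading of the connection string described below:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
                .setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbc.JDBCDriver")
                .setProperty("hibernate.connection.url", "jdbc:hsqldb:target/hsql:mem:test")
                .setProperty("hibernate.connection.username", "sa")
                .setProperty("hibernate.connection.password", "")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                .setProperty("hibernate.hbm2ddl.auto", "create-drop") // assumption: rebuild schema per run
                .setProperty("hibernate.show_sql", "true");
        // Annotated entity classes would be registered here via cfg.addAnnotatedClass(...).
        return cfg.buildSessionFactory();
    }
}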

Here's where things get a little screwy, and there's a decent chance it's because I don't understand certain aspects of HSQLDB. In this case, the connection string is target/hsql:mem:test. As far as I can tell, both the HSQLDB server and the driver write out their own set of files, prefixed by the connection string or database name. When you start up the database and connection pool, you end up with files in target named mem:test.* and hsql:mem:test.*. Since both sets of files end up being written to, I figure it's not a bad thing that they're named differently. I do find it a little odd that the client logs data like this, but at the moment I don't care enough to see if this can be adjusted. If it can be, I'll update this post to reflect how to do that.

wrapping up...

As stated above, I wanted a sandbox to demonstrate a few Hibernate interactions with, so I kept the setup as barebones as possible. There are probably certain things I could or should have configured differently, but the above will work to get you going. I rarely use HSQLDB, so if you're reading this article and have some input on a better way to do this, or any corrections that should be made, please leave your feedback in the comments :)

Friday, July 12, 2013

how to name your github gists

yes, this is a bit of a hack

I started using Gists to encapsulate all of my code, command line, and log related content rather than using <pre> tags or some styling tool to format code inline. I found that using Gists cleaned up the source of my blog considerably and made it easier to edit content in general; converting to an HTML-escaped version and then having to make changes afterwards was a pain at times.

That said, I ran into a different pain: Gist's bizarre way of organizing sets of files.

description? check. name? well...

Gists allow you to set a description for a set of files, which can help you identify what's in there from a list of all your gists. However, the list is organized by the alphabetically first file name in the set. If you had, for example, a bunch of Maven examples, each with a file called 'pom.xml', you'd have a hard time navigating your list of Gists. The list also shows the contents of that file, further confusing things.

However, that identification system can be used to your advantage if you're clever. If you create another file in your Gist which is actually a name for the set, but precede that name with a single space, it will be sorted to the top alphabetically. You can't have empty files though, so my advice would be to actually add your description there instead. As an example of how this looks, take a look at my Gists and how they're labeled.

how can that file be excluded when using gists remotely?

Gists offer a script tag based solution for embedding a Gist elsewhere. For example:

<script src="https://gist.github.com/theotherian/5954592.js"></script>

would embed all the files contained in the Gist at https://gist.github.com/theotherian/5954592

But what if you just want to embed an individual file, in this case togglewifi.applescript? You can append ?file=filename to the URL:

<script src="https://gist.github.com/theotherian/5954592.js?file=togglewifi.applescript"></script>

The end result is that only the togglewifi.applescript file is rendered.

ditching blogger's dynamic views

not all they're cracked up to be

After starting this blog with the new Dynamic Views blogger offers, I've decided to go back to the classic templates.

Oddly enough, the sobering realization that they weren't for me came when I tried to direct a colleague to my blog for an answer to a question. Trying to explain where to click and how to navigate was surprisingly non-intuitive, whereas the classic templates follow the model most blogs use. They're straightforward, easy to navigate, and lightweight.

Another reason for reverting was Github Gists. While you can shoehorn them into Dynamic Views with JavaScript others have written to consume specially crafted code tags, I'd rather just use the direct links Github provides.

The other views that Dynamic Views offer didn't really do much for me or the blog either. Most of the views are largely inapplicable to a technical blog, and just add a layer of complication that's unnecessary. While I could see value in using them for other non-technical blogs, I think they're a poor fit for blogs heavy on text rather than images.

Tuesday, July 2, 2013

creating resource filters with jersey and jaxrs 2.0

in what situations would I use this feature?

Let's say you have a series of preconditions for certain resources in a web application. For the sake of this example, let's assume we're using Jersey as the controller for a web application rather than just a purely RESTful web service. How can we solve this requirement?

For some simple cases it's not unheard of to see an inheritance model leveraged for this, where resources extend other resources that provide methods for certain preconditions. If single inheritance started to make this unwieldy, a composition or functional approach could be used instead, but you may end up inundating your resource with additional logic. In certain cases, it may be preferable to define this precondition with metadata, perhaps with an annotation.

In cases like this, you can leverage Jersey's dynamic binding of resource filters along with your own annotations to both define conditions as metadata and decouple resources from other business logic in your code base. Below is source code to demonstrate this.

the annotation

Pretty straightforward: no fields, just retained at runtime and able to annotate methods.
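
A sketch of what that looks like:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation: no fields, retained at runtime, applicable to methods.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface IE6NotSupported {
}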

the filter

This is a little more interesting. Here we check the user agent to see if it's IE6, and if it is, we abort the request directly in the filter. Aborting the request is a new feature in jaxrs 2.0. In this case, we're sending a 412 response code (PRECONDITION_FAILED), and passing back an error page as the entity via the response.
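
A sketch of the filter; the class name, the exact user-agent token, and the error page markup are illustrative:

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class IE6NotSupportedFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        String userAgent = requestContext.getHeaderString(HttpHeaders.USER_AGENT);
        if (userAgent != null && userAgent.contains("MSIE 6.0")) {
            // Abort the request (new in jaxrs 2.0) with a 412 and an error page entity.
            requestContext.abortWith(Response
                    .status(Response.Status.PRECONDITION_FAILED)
                    .entity("<html><body>IE6 is not supported</body></html>")
                    .type(MediaType.TEXT_HTML)
                    .build());
        }
    }
}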

As you might have guessed, since we're showing a ContainerRequestFilter here, there's also a ContainerResponseFilter that can be used with resources. The response counterpart is passed both the ContainerRequestContext and a ContainerResponseContext. A typical example of a use for response filters is attaching supplemental headers to a response.
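
A minimal sketch of that counterpart (the class name and header are just examples):

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;

public class SupplementalHeaderFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        // Attach a supplemental header to the outgoing response.
        responseContext.getHeaders().add("X-Example-Header", "example-value");
    }
}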

Now that we have our filter, we need a way of binding it to any resources annotated with @IE6NotSupported

the binding
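
Here's a sketch of the binding; the filter class name matches the sketch above:

import javax.ws.rs.container.DynamicFeature;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

@Provider
public class ResourceFilterBindingFeature implements DynamicFeature {

    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        // Bind the filter only to resource methods carrying the annotation.
        if (resourceInfo.getResourceMethod().isAnnotationPresent(IE6NotSupported.class)) {
            context.register(IE6NotSupportedFilter.class);
        }
    }
}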

Now we're getting somewhere. Here we have an implementation of a DynamicFeature, another class that's new to jaxrs 2.0. For those unfamiliar with this class, here's a quick snippet of the javadoc:

/**
 * A JAX-RS meta-provider for dynamic registration of post-matching providers
 * during a JAX-RS application setup at deployment time.
 *
 * Dynamic feature is used by JAX-RS runtime to register providers that shall be applied
 * to a particular resource class and method and overrides any annotation-based binding
 * definitions defined on any registered resource filter or interceptor instance.

This is invoked for every method of every resource in your application when the application starts up. In the example above, we're specifically checking whether the method is annotated with @IE6NotSupported. We could also change our annotation to support targets of TYPE as well as METHOD, call resourceInfo.getResourceClass(), and perform the same check.

the bootstrap

Now that we have an annotation, a filter, and a way to link the filter to a resource, we need to tell our application to invoke this upon startup.
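
A sketch of the bootstrap using a jaxrs Application subclass (the application class name is illustrative):

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add(ResourceFilterBindingFeature.class);
        // Resources and other providers would be added here as well.
        return classes;
    }
}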

Typically you'd have many classes mapped here for all the resources and providers in your application, but for the sake of the example we're just mapping the ResourceFilterBindingFeature class.

summary

I hope this helps introduce you to the world of Jersey and jaxrs 2.0 filters. Personally, I find them hugely beneficial for mapping out both pre- and post-conditions that should be honored for resources without coupling them to the resource itself. I'm a big fan of annotations, and for things like resources I think there are several cases where a precondition can be expressed via metadata rather than in the resource's code.

Thursday, April 25, 2013

This just in: dope with blog forgets to renew domain, more at 11

If any of you tried to hit my blog in the last week, I apologize for the lame domain parking page that came up.  I thought I ordered 5 years of service for my domain, but apparently I only ordered one and it expired.

I've now renewed it and will pay for the domain for 5 years in advance for real this time.

Sorry...

Tuesday, February 12, 2013

Why can't I find a highly rated gigabit wireless router that disables wireless administration?

I've used Linksys routers for quite some time.  After spending a good five years going through Netgear routers like toilet paper in the early 2000's, Linksys saved my sanity with the bullet proof WRT54G.

One of the best, yet potentially overlooked, features of that router (as well as other Linksys routers) is the ability to turn off wireless administration.

After all, why would I want someone to be able to administer the router over the air?  The biggest issue with wireless access is locking it down; hence WPA2, MAC filtering, etc.  If it's a vulnerability that doesn't involve someone having physical access to your home, then why would I want to bet the house on red and allow settings to be changed?  I like my DNS settings just as they are, thank you.

From what I can gather, Linksys and I are the only people who seem to share awareness of this issue and the feature that prevents it.

My current router (a Linksys e1200) has been functioning just fine, but I want to update my network to gigabit ethernet.  That said, finding a router with favorable ratings and the ability to disable wireless administration is proving fruitless.

So far, I've tried:

  • Apple Airport Extreme: solid as solid gets in terms of hardware; very fast, excellent UI, very responsive, but no lockdown of wireless administration.  Even more disturbing was the Airport Utility's ability to maintain a session for administering the router that didn't expire, or at least didn't appear to.  I asked about this feature on the Apple discussion board, and was largely disregarded as a tinfoil hat wearing clown.  If Apple fixes this issue, I'll buy one the day it's fixed.
  • D-Link DIR-655: I tried this one last summer, and I can't remember all of the details of my experience.  The only thing that stands out in my mind was that I was laughing out loud at the firmware when I was trying to configure it.  Perhaps I should give it another try given that it has largely positive feedback.  I don't know if you can lock down the wireless admin setting though.
  • Asus RT-N66U (and RT-N66R): I have the R; the U and R are functionally equivalent except one is sold in brick and mortar stores and the other online only.  The firmware is pretty decent overall; considerably more responsive than that of the Linksys when using SSL.  However, I still can't block wireless admin.  It does let me set what port the router listens to for admin, so I'd be content if I could block wireless clients from hitting certain ports, but alas this option doesn't exist.  Admittedly, I haven't tried it out yet, so I can't attest to how good of a router it is in general.

That said, I haven't tried:

  • Linksys E4200: I have a feeling this has the setting I want, but it really seems to draw the ire of online review writers everywhere.  That said, it appears to have been succeeded by the...
  • Linksys EA4500: yet another Linksys product that is drawing some serious negative attention.  I generally consider anything that has over 50 reviews and a score below 4 stars to be of mediocre quality, and generally unacceptable for a mission critical device like a router.  This is managing 3 to 3.5 star reviews on both Amazon and Newegg (though the crowd on Best Buy appears to be happy), with issues ranging from constant connectivity issues requiring reboots, to outright failures within a few months.  Also, it sounds like the GUI was updated, so who knows if my precious wifi admin banhammer still exists.

I'm open to suggestions from others for routers I should try or avoid.  I'm a self-confessed pedant and elitist when it comes to consumer electronics (which is likely evident from the above post), so don't be offended if I end up not being a fan.