Sunday, November 2, 2014

a java developer's interpretation of depression as one's own heap and garbage collector

this will be more personal than usual

Most of my posts are in the style of "here's how you accomplish x using y", but this post is going to be a significant departure. I, like many other people, suffer from depression; more so in the past few months than ever before in my life. I won't go into all the details as to why, because not only are they irrelevant to the post, but they're also not something I would dump on my audience.

Instead, I'm going to draw some similarities to the Java Virtual Machine, and how the heap and garbage collector, in my mind at least, closely resemble the ways I struggle to deal with depression.

if you have no idea what the heap, garbage collector, or JVM is...

Maybe you found this post by accident while searching for articles about depression. Maybe you use a different programming language. Maybe you're not a developer at all. In these cases, I'm going to give you an abstract, 10,000-foot view of what it is I'm comparing depression to. If you already know these concepts, skip ahead to the next section. If you don't, or you want an entertaining analogy, read on.

If you indeed found this article because you're struggling with depression as well, then I'm sorry to hear that. Depression sucks, and it's crazy how debilitating something invisible to others can truly be.

I want you to imagine a house, filled with the typical things a house has, and a person living in it. To associate the technical ideas I'm going to delve into below, the house is the Java Virtual Machine, the person is a Java Application, and the things are the resources the person (or application) uses.

Some things in the house are quite large: sofas, beds, appliances. These are things people tend to keep for a long time, and are usually things a person uses frequently. Some things are small and get used and discarded frequently: food, packaging, disposable wares, etc; they're short lived possessions. A person's house is only so big, and eventually they can run out of space to put things. The more stuff you have, the longer it can take to clean your house as well. Bigger houses also take longer to clean (or are at least more expensive to clean; I suppose you could hire an army of cleaning people).

Some large items in your house may be things that offer little value that you decide to hold on to: that pool table you never use or that drum set you played a handful of times and abandoned after realizing the learning curve. Sometimes you buy a lot of new stuff that you plan on keeping around for a while, but there's too much large old stuff in the way to use it how you want to. It's even possible to get to a point where you don't really have room for new stuff at all and it's imperative you clear some stuff out, which can take serious time and effort.

All of this is analogous to how a Java Application runs: it's a person in a house with limited space, it needs some large things that are used frequently, but also takes in many small things that are consumed and discarded quickly or new large things that require space to exist. The problems are real too: you can run out of space and spend an awful lot of time trying to juggle what you have in a way that makes sense. In the examples below, the heap is the space you have, and the garbage collector is the process of clearing up space when you need to. In a Java application, the garbage collector will pause the application to reclaim those resources.

similarities to my own mind

I spent a fair amount of time thinking about this analogy, and it both amused and bothered me how much it fit the problem of depression.

I have plenty of experience troubleshooting Java applications that are performing poorly, often due to memory issues. Tell me what's happening, give me access to the server it's running on, and I can pull the problem apart and come up with what's wrong and how to fix it. At the risk of sounding pompous, as long as I have the right tools at my disposal, I'm really good at it too. Sometimes the application was holding on to large things that prevented new things from coming in. Sometimes the application had way too much disposable stuff coming in, or wasn't letting go of stuff it should dispose of. Whatever the case may be, these issues prevented the application from accomplishing what it needed to do, and I was able to identify the problem and correct the behavior once I understood the root cause.

Depression can have similar effects on my mind. There are large things that take up a significant amount of space, but really offer little value; they're a burden on the system and can prevent new things from coming in. Sometimes there's an abundance of short lived stimuli entering your mind, and all you can do is focus on those, even if they're noise: sort of like a mental DDoS. Some things are actually very disposable but you end up holding on to them because of an issue with how they're processed; I think of it almost like being a thought hoarder. Ultimately all of these things can slow you down to a crawl and prevent you from accomplishing the things you need to do.

These things are surprisingly similar in my mind, which ultimately raises the question: why can I see it when it's a process on a server, yet be completely blindsided and overwhelmed by it when it's happening in my own head?

resource contention, overload, and thrashing

There's a key similarity here in how the problems manifest: the resources of the system under duress are unavailable. The application isn't sorting things out itself: it may have a way of complaining that something is wrong via a failed health check or some error but can't adjust. Ultimately you're the one coming in, analyzing the problem, and figuring out how to adjust what's happening to make those resources available again. Meanwhile, the application is struggling and can't fulfill what it's designed to do; in extreme cases it becomes completely nonresponsive.

When you break it down, resources are being contended for and overloaded, which means they're not available as they should be for all the normal functions the application has to perform. Thrashing takes place when what little resources are left end up being consumed trying to alleviate the stress placed on the resources that are overburdened. This is what can cause things to come to a screeching halt for considerable periods of time.

Applications are lucky though; they can usually be restarted at little cost, and a restart can alleviate the problem at least temporarily. Not so in humans. I don't know what would even constitute the idea of a restart in that case. Moving to a new place and getting a new job? Getting baptized in a new church and going through some kind of metaphysical restart? Who knows...

Still though, a restart is a stopgap. Usually restarting an application only buys time before the problem comes back; the problem still hasn't been solved. The same can be true of us: you can do whatever it is you think constitutes a "restart" and still only delay the inevitable.

What can we do?

the application has to change

If you're talking about fixing an application, it means changing it. True, the literal Java application can't change itself, but I think you can see where I'm going with this.

The application needs to be able to manage its resources more effectively. The application has to be able to identify when considerable resources are being used for things that offer no value. The application has to understand when there's an abundance of noise coming in and try to discard it instead of hold on to it. The application has to be able to accept new things. The application needs to manage its resources in a way that allows it to perform what it's designed to do.

Sometimes small design tweaks can provide big gains. Some design changes are enormous and take time. In between these changes, the application still has to run though: it performs an important duty and can't just shut down during construction, so to speak.

so what has to change?

I keep using terms like "large things that offer no value" as well as "lots of noisy stimuli" and I'd like to expand on that, at a personal level, to provide context. I'm going to emphasize that I'm not a therapist, I'm an engineer, and that everyone's problems are different, so this is purely an example and not some kind of empirical data.

I struggle a lot with the unknown. I dwell on the "what ifs" and "should haves" and "don't knows", struggling to reconcile what I should or could have done differently, or how I need to prepare for them. Most of them I can't do anything about though; life will always do unexpected things that you can't be prepared for. I could get sick. I could be robbed. I could lose someone close to me. Can I prevent any of it? Well, there are some things I can do, like eating well or having a security system. The more important question is: can I control any of it? The answer to that is simply: no. Despite that answer, I allow myself to be overwhelmed. I research compulsively, trying to find answers to questions I can't hope to answer, and I consume significant amounts of time (and as a resource, time really means my life) doing so, when I could spend that resource on enjoyment instead. The end result of this process, for me, is depression. Not only do I become depressed contemplating all these things, but the realization of how much time and effort I spent in futility depresses me as well.

That struggle corresponds to a few of the analogies I've been using. For me, these thoughts accumulate as a significant amount of noise; to use the term I shared before, they're a mental DDoS. They're also large in scope and consume a lot of resources, so much so that my brain stops focusing on other large things. What do I mean by that? Sometimes I forget to eat because I'm so absorbed in dwelling on the unknown. Sometimes I spend so much time on it that I don't go to the store to buy food, or talk to my friends. Sometimes I lose so much time on something of so little value that I either go to sleep late or stay awake thinking about it. All of those are vital functions of myself as an application, and I'm unable to perform them due to a lack of resources.

the metaphysical gc pause

I don't always manage to do this, but sometimes I'm able to realize just how much of my resources I'm consuming on, well, garbage, and I have an opportunity to get back in control. For me, I equate this to a GC pause.

Like I mentioned above for those unfamiliar, the Java garbage collector will pause the application to reclaim resources so that the application can continue to run. To do this, it has to identify what should stay and what should go, and remove the things that should go. I try to go through the same process. I look at the state of my mind and think "what is currently circulating in here that's not helping me do what I need to do?" and try to purge it. Some of it is still strongly referenced and I can't quite get rid of it yet. A fair amount of it is new enough though, that I can clear it out. I'm not saying this is easy to do; it's only something I've started to do recently and I'm slowly getting better at it. What I am saying though is that it helps me to pause, examine the context of what's happening in my head, and truly identify the garbage so that my resources are available both for what I need to do to keep running and to take new things in.

so who's the person looking at the heap dumps and profiling things?

One of the points I made before is that, with an application, I'm the one figuring out what's wrong with it, rather than with myself, and that makes a huge difference in my ability to understand the problem.

If we can do that for computer programs, why can't computer programs do the same for us? I have a plan for this, and that's why I'm asking for a small donation to start my work towards creating an algorithm to implement a cerebral garbage collector that...

I'm kidding. :) (but be honest, you were thinking "man, wouldn't that be nice!")

In all seriousness, talking to a therapist has been helping me tremendously. Doing so isn't an easy step to take; in fact I avoided it for a long time, and can honestly say that in my case it was due to a bunch of irrational fears. It's important to find a therapist you click with as well: one who seems to have a decent grasp on who you are as a person and can communicate with you beyond just words. I was lucky in that I knew someone who recommended a therapist who was perfect for me, and I know others may not be so lucky.

Having a therapist has been hugely beneficial because, to come up with another programming metaphor, I was able to hand over all of my runtime statistics to him, and he was able to identify the bottlenecks and resource contention that was causing me not to run properly. Gradually I've been able to adjust to account for this and have made improvements, but I know it's a long process, and that it's important to fix the bigger problems that require major design changes and not just rely on the small tweaks.

I should mention too that having a therapist doesn't imply taking medication. In my case it was the total opposite: I told my therapist from the start that medication was only on the table if I arrived at an impasse that I absolutely had to get past but couldn't without a prescription, and he was completely open minded and empathetic towards my position. That's not a one-size-fits-all situation for anyone, nor would I ever judge anyone for agreeing or disagreeing to medicate. I have my reasons for why that works for me; I wanted to call it out for those who may fear a therapist for this reason as I once did. You have choices in how you get better, and data will help you with those choices. The most important choice, though, is deciding you want to get better, and taking steps to get there.

the road ahead

I know the "life's too short" expression is something people have heard to death by now, but I wanted to call it out here. It's not worth suffering and making yourself miserable if you can avoid it. I plan on continuing to give my therapist the data he needs, to get feedback on where the resource contention is the worst, to figure out what data is garbage, and to tune my garbage collection to make the most use of the resources I have. I know that someday, like all other "applications", I will terminate (it feels like there's a joke there about developers being nicer to operations people). When I do, I want to be remembered as an application that, while it may never have run perfectly, accomplished many great things and improved the health and function of the collective system overall. I don't want to be the application that doesn't produce much because its resources are tied up doing things that don't benefit it or anything else. I don't want to be the application that stops running, but others don't notice, because it prevented itself from really standing out and was its own worst enemy.

I guess what I'm trying to say is: I don't want to be Internet Explorer.

And no, this article wasn't just a massive run towards an Internet Explorer joke. It just felt good to end on a comedic note.

Thursday, October 30, 2014

how do I select message data from my iPhone backup?

don't worry, it's easy (promise)

Messages on the iPhone are stored in a SQLite file inside of the backup. If you have no idea what a SQLite file is, well, then you have a little more homework to do, namely installing SQLite so you can... read a SQLite file. :)

Once you have that installed, you need to grab the file called 3d0d7e5fb2ce288813306e4d4636395e047a3d28 from the most recent subdirectory inside the directory Library/Application\ Support/MobileSync/Backup/. This file has your texts in it. I suggest you copy it somewhere else in case you corrupt it by mistake using SQLite. Once you have it, just run sqlite3 [your backup filename] to load the data.

getting the data

The following query will retrieve the texts stored in this file, in a nice pipe-delimited format:
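
Something along these lines works; I'm assuming the iOS 6/7-era schema here (dates stored as seconds since 2001-01-01, senders living in the handle table), so adjust if your columns differ. sqlite3's default list mode gives you the pipe delimiters for free:

```sql
SELECT datetime(m.date + strftime('%s', '2001-01-01'), 'unixepoch') AS sent,
       CASE m.is_from_me WHEN 1 THEN 'me' ELSE h.id END AS sender,
       m.text
FROM message m
LEFT JOIN handle h ON m.handle_id = h.ROWID
ORDER BY m.date;
```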

If there's more data you're looking for, this will tell you the columns in the table (SQLite doesn't use desc or describe):
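
Either of these works from a sqlite3 prompt:

```sql
-- dumps the CREATE TABLE statement for the table
.schema message

-- or, list each column with its type
PRAGMA table_info(message);
```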

The output you get should be usable in Excel or any other number of tools if you want a different backup of your texts.

Monday, July 14, 2014

underscan problems using hdmi with your macbook pro retina? here's a potential fix

an answer buried in the middle of nowhere

Just went to hook up my monitors to my Macbook Pro Retina, one with DVI to Display Port, the other with HDMI, and my problem was immediately apparent: I was in underscan town...

Luckily I happened to stumble on an answer online, but it was a little obscure. Mind you, my monitor (a Samsung in my case) had the same option as the poster's monitor in the post below, so your results may vary, but it fixed the issue:

Okay, I was having the same problem... Trying to connect Macbook Pro to a 24in monitor... And I was SO disappointed I had to use the "underscanning" feature in the Displays box to make the bigger screen slightly smaller so it fit everything in there... This was horrible to me because it also made things less clear.

BUT I FIXED THE PROBLEM!

Go into your little settings within your ACTUAL MONITOR... And there's a section that talks about "Input"... Mine says "Input Select".. And where it mentions "HDMI", it gives an option of PC or AV... It was set to AV!! So I set it to PC and life became wonderful again! Spread the word. This seems to be a common problem but not everyone realizes it's the monitor.

The answer was found at http://forums.macrumors.com/showthread.php?t=1313094. I figured I'd toss a post up to try and make the solution a little more SEO relevant to those suffering from the same problem.

Saturday, May 31, 2014

caching in java: getting better performance and precision from your golden hammer

inspiration

This post is actually a written form of a presentation I did at a corporate tech convention. The inspiration for it came from a common tendency for caching to be treated as a golden hammer for performance: if we were executing a database query frequently, we were caching it.

To be blunt, it's just not that simple. Sure, memory is relatively cheap, and it may make things go faster out the door, but that doesn't mean you're using your resources well. It also doesn't mean your caching approach makes sense in the long run either: how effective is your hit ratio when that data grows? Sometimes people don't think about acceptable levels of staleness in data either and just blindly cache for hours when it can negatively impact the experience of your application.

All of these data points mean there is no simple answer to caching. This isn't just a one-size-fits-all solution: you have to be precise and think about how your data is going to be used, how it will grow over time, and how stale it can get. To reference Idiocracy, I view caching as a toilet water solution. If you haven't seen the movie, here's the gist (spoilers ahead): 500 years in the future, a man from the past wakes up from cryostasis and is the smartest man in the world. He finds out the crops are dead (I won't say why) and suggests that people start watering them instead of what they were doing. In this dystopian future, people only associate water with toilets, so the response from everyone is "Like from the toilet?" After water solves the problem, discussion of other problems leads people to suggest "Hey, let's try putting toilet water on it!" The moral of the story is this: you can't blindly apply the same solution to every problem just because it solved one that sounds similar.

This post is designed to suggest questions to ask rather than provide answers for everything. At the end I'll present a checklist you can try out to see if it helps you.

what are the costs of caching in java?

Java memory management is a massive topic by itself, so we're going to only go skin deep here. If you're not familiar with how the heap is organized, here's a diagram (note: permanent generation is going away in Java 8, replaced by a native-memory area called Metaspace):

We're not going to worry about eden versus survivor space in this post. What we're most interested in is the difference in young vs tenured, and how things end up getting stored over time. So...

  • The young generation stores short lived objects, and is garbage collected (or GC'd) very frequently. For an example of short lived objects, think of all the objects you create during a request in a web application that are unnecessary after the request is complete. Young generally uses far less space than the tenured generation, because most applications don't create so many short lived objects that you need a massive young generation.
  • As the garbage collector scans the objects in the young generation, if it sees objects that are still in use after checking several times, it will eventually move those objects over to the tenured generation.
  • As your tenured generation has more data added, the amount of memory you use will grow to the maximum number you specified in your -Xmx setting for your JVM instance. Once you've reached that point, the garbage collector will collect from the tenured generation when necessary.
  • Different garbage collectors have different approaches when it comes to the following statement, but at a high level, once something has to move out of young generation and into tenured, if there isn't enough space in tenured, the garbage collector has to clear some. This means your application will suffer from some pausing, and this pausing will increase (sometimes dramatically) in either frequency, duration, or both with the size of your heap. The garbage collector has to rummage through the objects in the tenured generation, see if they're still being used by your application (strongly reachable), and remove them if they're not. Sometimes it has to shift the objects that remain (compacting vs non-compacting garbage collection) if there are several small blocks of memory free, but a single large object needs to move.
  • Generally, data you cached will live indefinitely in the tenured generation. I won't get into other cases here (though if you're interested check out my posts on other reference types and off-heap storage), because mainstream stuff like Guava Cache and ehcache will put stuff here.

So there you have it: ultimately the garbage collector is going to have to manage what you cache, which isn't necessarily a bad thing, but can become one if you're not careful. If you're going to force the garbage collector to do more work, you need to get value out of keeping that data in memory.

so what should we cache?

To the best of our ability: things that are frequently used and/or expensive to load. The former is the difference between a high and a low cache hit ratio, and we want a high cache hit ratio if possible. We need to know what data is accessed frequently or most expensive to load though in order to understand what to cache. Here are some criteria I encourage people to apply to their data:

  • Is there some data that is explicitly more popular? For example: if we're a retail website and there's a set of products we know we're going to promote in a highly visible place, that data is going to be explicitly popular because we're directing people to it.
  • Is there some data that is implicitly more popular? For example: if we're a social media site, we probably have a bias towards new content rather than old due to the way we present it. In that case, we can prioritize newer content since we know that, at least for some period of time, it will be popular until newer content replaces it.
  • If you were to graph traffic against some unique identifier in your data, where does the tail start? Often times the tail of the graph starts early on in the data set, which should give you an indication of what data is very valuable to cache, and what data will often be a cache miss if you were to cache it.
  • Is your cache distributed or local? If it's local, you're going to increase the cache misses to be equal to the number of hosts running that cache for any given entry. If it's distributed you may be able to cache more of the tail effectively.
  • Is the data expensive to load? If it is you may want to consider reloading that data asynchronously once it expires, rather than blocking on a reload; there's a sketch of this right after the list. I've written more about blocking and non-blocking caches in another post.
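
Here's a sketch of that asynchronous reload using Guava's refreshAfterWrite with an async loader; Product and ProductDao are hypothetical stand-ins for whatever you're loading:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class ProductCacheFactory {

  public LoadingCache<String, Product> build(final ProductDao dao) {
    return CacheBuilder.newBuilder()
        .maximumSize(10000)
        .refreshAfterWrite(5, TimeUnit.MINUTES) // serve the stale value while reloading
        .build(CacheLoader.asyncReloading(
            new CacheLoader<String, Product>() {
              @Override
              public Product load(String sku) {
                return dao.findBySku(sku); // the expensive load
              }
            },
            Executors.newFixedThreadPool(4))); // reloads run here, not on request threads
  }

  // hypothetical types so the sketch stands alone
  public static class Product {}
  public interface ProductDao { Product findBySku(String sku); }
}
```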

Bear in mind the cost of loading data, and take preventative measures so you don't expose yourself to a DDoS, or Distributed Denial of Service attack. Chances are you can't cache all of your data, and you should be careful not to put yourself in a position where cache misses and/or expensive load times can make your application unavailable if hit frequently enough. This subject is beyond the scope of this post, though I may write about it at some point in the future.

why will data live indefinitely if cached?

Interestingly, I had a discussion with a colleague about this just a few weeks ago, and it revealed a massive misconception that he and probably other people have about caching.

Usually, caches don't eagerly expire data by default; they do so lazily depending on their caching algorithm. Here's an example: say we have a cache that can hold 10 items, and that the max age is 30 minutes. If we load 10 items into the cache around the same time, and see how many items are in the cache 40 minutes later without having accessed them at all in between, we'll still have 10 items in the cache. By default, with technologies like ehcache and Guava Cache, there's no thread running in the background checking for expired entries. Since the entries are stored in such a way that they're strongly reachable, they're not eligible for garbage collection and will always stay in your tenured generation.
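
Here's that exact scenario as a runnable sketch with Guava:

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class LazyExpiryDemo {
  public static void main(String[] args) {
    Cache<Integer, String> cache = CacheBuilder.newBuilder()
        .maximumSize(10)
        .expireAfterWrite(30, TimeUnit.MINUTES)
        .build();

    for (int i = 0; i < 10; i++) {
      cache.put(i, "item-" + i);
    }

    // ...pretend 40 minutes pass with no reads or writes...

    System.out.println(cache.size()); // still 10: expiry is lazy, nothing triggered cleanup
    cache.cleanUp();                  // force pending maintenance, including expiry
    System.out.println(cache.size()); // now 0
  }
}
```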

You may think this isn't a big deal, but it can be in practice. Let's say someone didn't pay attention when they set up a cache and decided it could hold 10,000 entries. Maybe the data set in the backing storage was small at first and no one saw a problem. Let's say that, over time, the data set grows to 10,000 entries, and that those entries are gradually hit over the runtime of the application, but only 100 or so are hit with any reasonable frequency. In that case, you're going to shove the entire data set in memory over time, and you're never going to get that memory back, even if no one accesses that data again for as long as the application runs.

Guava lets you pair the cache with a maintenance thread of your own that calls cleanUp(), making old data eligible for garbage collection (I haven't found the same in ehcache; comment if you're aware of this feature); a sketch of this is below. Still though, you're probably better off sizing your cache more effectively for your traffic and data than you are eagerly evicting, depending on what problem you're trying to solve.
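
A sketch of what that looks like, using a scheduled executor to sweep the cache periodically:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;

public class CacheJanitor {

  // sweeps the cache once a minute so expired entries actually get released
  public static ScheduledExecutorService sweep(final Cache<?, ?> cache) {
    ScheduledExecutorService janitor = Executors.newSingleThreadScheduledExecutor();
    janitor.scheduleWithFixedDelay(new Runnable() {
      @Override
      public void run() {
        cache.cleanUp(); // expired entries become unreachable and collectable
      }
    }, 1, 1, TimeUnit.MINUTES);
    return janitor;
  }
}
```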

Bottom line: think about how much stuff you're tossing in memory, how much space it will take up, and how valuable it is to have it in memory.

wrapping up for now

This post is less of a deep dive into caching and more of a contextual overview of the impact of caching. I hope it helps clear up some confusion that I know I've run into multiple times. The goal of this post is really to identify why caching can actually go wrong over time, and why all of us really need to put thought into how we're using memory. The golden hammer principle can often apply to caching, and if you're not careful, you can really hurt yourself if you're swinging that hammer around a little too freely. :)

Wednesday, April 30, 2014

pro tip for developers: take some time to code up some well known data structures and algorithms yourself

remember all that stuff you learned in school?

It's probably stuff you take for granted when you use data structures built into higher level languages like Java, Scala and C# or libraries such as Apache Commons, Google Guava, and the like. Ultimately that stuff is the foundation of your knowledge as a developer: you don't think about it all the time but the fundamentals are ingrained in you.

That said, I bet if you cracked open a textbook today and decided to read about one of the classics, it may stump you for a bit. Do you remember the details of Dijkstra's algorithm? You may remember it's used for shortest paths in a graph, but you may not remember how it works. What about hash based data structures? Do you remember what makes a good hash function, or different strategies for handling hash collisions? How about string sorting or searching approaches?

If some of these aren't fresh in your head, you shouldn't feel bad. You may have a notion of how they work on a high level still, along with the computational and storage impacts, but you may not remember how to implement them offhand. Going back a few sentences, if you've ever reread these from a textbook, I'm willing to bet you didn't grasp all of it on the first pass. It's pretty dense material, and usually the best algorithms somehow cheat or have some type of trick that gets around some of the computational expense of the problem that you need to get your brain in sync with again.

Now you may be thinking "what's the point of all of this? I almost never have to be hands on with this in my day to day job, that's why it's not at the ready." This is probably the case, but it very well might not be if you're in an interview. High caliber companies will probably ask you about this kind of stuff, and may very well ask you indirectly as a word problem instead of saying "show me how a trie works."

why you should take the time to write some of this code yourself

Ultimately, we all learn a little bit differently, and we speak about problems in ways that only we understand. How much easier do you find your own code to read and comprehend versus someone else's?

For that reason, I'd recommend writing up some solutions yourself, with your own comments and flair, so that if you ever need to refer to them, you don't end up cracking open a big textbook and trying to relearn from someone else's words. I had to do this in recent history for a task, we'll call it "homework", where having my own implementation helped me pull the solution out of the archived storage in my head and back into my brain's cache. Everything was familiar to me once I read my own code and my own comments; I had even called out the things that tripped me up the first time around, the very same things that tripped me up again when I went back to my textbook.

Big players will ask you this kind of stuff; Google is notorious for this and will typically suggest study guides to candidates, and there's also a famous blog post about how to prepare. I've known people who studied for months for Google interviews, and I'm willing to bet that they wrote out these algorithms themselves to prepare. So, given that, why not be ahead of the game?

stuff worth focusing on

Perhaps I've sold you on this idea. If I have, here's what I'd suggest for resources and problems to focus on to help you.

First, you should buy Algorithms by Robert Sedgewick and Kevin Wayne. It's one of the better algorithm books I've read (still dense, but reasonable), and has a lot of code samples and illustrations to help you understand.

Second, I'd recommend dabbling in the following as focus points:

  • Time-order complexity and Big-O notation (document the code you write with this, as fine grained as you can)
  • Sorting
  • Hashing and hash based data structures
  • Trees (binary/red-black)
  • Graphs and graph traversal
  • String sorting and searching
  • Data structures with interesting properties like min heaps and tries
  • Concurrency, for example map/reduce or a blocking work queue with a poison pill termination condition

All of these are pretty broad subjects, but they yield a lot in terms of the number of computer science problems related to them. Again, further emphasizing the point, writing code for the subjects above on your own gives you a persistent way to communicate to yourself in a very high fidelity capacity, and provides you some nice examples if you're tasked with having this knowledge at the ready.
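
To make this concrete, here's the sort of thing I mean: a minimal trie, written in your own words with your own comments (supporting only lowercase ASCII is a deliberate simplification):

```java
/** A trie over lowercase a-z: add() and contains() both run in O(word length). */
public class Trie {

  private static class Node {
    Node[] next = new Node[26]; // one slot per letter
    boolean terminal;           // true if a word ends at this node
  }

  private final Node root = new Node();

  public void add(String word) {
    Node current = root;
    for (char c : word.toCharArray()) {
      int i = c - 'a';
      if (current.next[i] == null) {
        current.next[i] = new Node(); // lazily create the path
      }
      current = current.next[i];
    }
    current.terminal = true;
  }

  public boolean contains(String word) {
    Node current = root;
    for (char c : word.toCharArray()) {
      current = current.next[c - 'a'];
      if (current == null) {
        return false; // the path doesn't exist, so neither does the word
      }
    }
    return current.terminal;
  }
}
```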

I mean what I said about that textbook too; it's a very well written book that covers a wide variety of classic problems and algorithms, and you owe it to yourself to add it to your collection. :)

No matter what position you're in, I hope this advice helps you in your career in the long haul. If you have questions, or if you've done this and had it help you, please feel free to comment below!

Monday, March 31, 2014

using spring boot, spring websockets and stomp to create a browser chat service

when automagical becomes autotragical...

I have a love/hate relationship with frameworks that are automagical.

On one hand, they can save you a lot of time and effort, preventing you from writing tons of boilerplate code, and just less code in general (every line of code is one you have to support, right?).

There's a dark side to this however. Often times automagical frameworks have rules, usually modeled by conventions. If you violate these, the framework will slap you across the face with a stacktrace. Sometimes the stacktrace is meaningful. Sometimes it's outright esoteric. Sometimes it tells you something that you can't accurately discern without digging into the bowels of the code that you're trying to leverage without creating bowels yourself.

In this case, Spring was guilty of said crimes on several occasions.

Don't get me wrong; I'm quite fond of Spring. However, in this case the solution involved going roughly shoulder deep into Spring's bowels to find simple, yet not obvious solutions to problems. Here's a list of some of the problems I faced, in the event that someone out on the intarwebs is searching for a way to make sense of it all:

  • Spring Boot, at least when I wrote my code, seems to despise Tomcat 7.0.51 (though this appears to have been resolved now). This is particularly important, because in 7.0.51 they fixed a bug that prevented pure websockets from working correctly with... Spring (among other things no doubt).
  • When using Spring's WebSocket API, their scheduling API ends up hijacked. If you need to add your own scheduled bean, you need to override things (as shown in my code).
  • If you have a queue for your messages, and you don't have the URI start with 'queue', you can't use user specific message queues. Regular queues will work just fine.
  • The STOMP heartbeat configuration doesn't work for browser clients since you have to deterministically configure which clients you have that will need to support a heartbeat on the backend. I had to roll my own here.
  • You can't inject the SimpMessagingTemplate into your configuration class or its bootstrapping path if you're using constructor injection. This can be important if you're trying to set up a ChannelInterceptor to filter out messages before they hit a resource in your application. (Also, how important is it to shave off the 'le' exactly? It really can't just be called SimpleMessagingTemplate???)
  • When I was using Spring 4.0.0.RELEASE, none of this was working correctly. Switching to 4.0.1.RELEASE magically fixed all my problems. I won't ask how or why.

There were several other issues that I've long since repressed. I wrote this code a while ago, and became so irritated with the number of gotchas that I didn't feel motivated at all to blog about it.

All of that aside, Spring's implementation is still leaps and bounds better than what Jetty is offering. For reasons I can't comprehend, the people who write the Jetty API seem to think that every class should be polymorphic to every other class, making it ridiculously confusing to set up. I cannot emphasize enough how many times during my use of their API that I thought "wait... this class is an instance of that?"

if you just want a working demo to play with

Then feel free to grab my code on Github. Did I mention that every example I found online was broken out of the box?

getting your configurations built

We're going to use four different features: web sockets, security, scheduling, and auto configuration. Each one will have its own configuration class, as shown below:

auto configuration, aka spring boot
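
A minimal sketch of the Boot entry point; the class name is my own:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration // Boot wires up the embedded container and sensible defaults
@ComponentScan           // pick up our controllers and services
public class ChatApplication {
  public static void main(String[] args) {
    SpringApplication.run(ChatApplication.class, args);
  }
}
```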

security, because we need to log in to chat right?
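
A sketch using in-memory users; the user names are just demo fodder:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests().anyRequest().authenticated() // everything requires a login
        .and().formLogin();
  }

  @Override
  protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.inMemoryAuthentication() // demo users; swap in a real user store
        .withUser("alice").password("password").roles("USER")
        .and()
        .withUser("bob").password("password").roles("USER");
  }
}
```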

scheduling, because we have a heartbeat to schedule with the web socket api
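
A sketch; declaring a TaskScheduler of our own is the kind of override mentioned in the gotchas list above:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
@EnableScheduling // lets our heartbeat publisher use @Scheduled
public class SchedulingConfig {

  // the WebSocket support registers its own scheduler; declaring one of our own
  // keeps the heartbeat job from depending on it (names here are illustrative)
  @Bean
  public TaskScheduler heartbeatScheduler() {
    return new ThreadPoolTaskScheduler();
  }
}
```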

web sockets, because obviously
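
A sketch of the broker configuration, lining up with the destinations discussed below:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

  @Override
  public void registerStompEndpoints(StompEndpointRegistry registry) {
    // the SockJS client connects here; this is also why /chat/info exists
    registry.addEndpoint("/chat").withSockJS();
  }

  @Override
  public void configureMessageBroker(MessageBrokerRegistry registry) {
    registry.enableSimpleBroker("/queue", "/topic");    // pub/sub destinations
    registry.setApplicationDestinationPrefixes("/app"); // prefix for @MessageMapping methods
  }
}
```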

There are a few things worth pointing out in this configuration. The /chat and /activeUsers paths will correspond to Spring MVC controllers, while the /queue and /topic paths will correspond to the messaging API's pub/sub model.

designing the application

Before we get into the implementation, I'd like to address the design and talk about some of the details of using STOMP here.

First, you may be wondering what STOMP is. In this case, it's not an off-broadway production; it stands for Simple (or Streaming) Text Oriented Messaging Protocol. There's more about this on Wikipedia and the STOMP website, but in a nutshell it's a command based protocol for sending messages, interoperable with any provider or client implementing the STOMP spec. In the case of this example, it can use web sockets or not; it doesn't matter since the spec works either way.

STOMP and Spring allow us to set up queues, and more interestingly user specific queues. Semantically this is what you'd want in a chat application: each user can subscribe to a queue of messages coming to them, and the front end can route them into the correct chat window. In this case, our front end will simply subscribe to /user/queue/messages, and on the back end Spring will create a dynamic URL to map those messages to that matches the session of the user. The nice thing here is that each user subscribes to the same queue, but they only get their own messages. Even more sophisticated is that Spring can map multiple sessions for one user, so you can be signed in at multiple locations and get the same chat. All of this maps up with Spring Security as well: as long as you have that configured, all of your sessions will be mapped to the correct user queue, and all messages received by the client will show up in a chat window corresponding to the other person in the chat.

STOMP and Spring also allow us to set up topics, where every subscriber will receive the same message. This is going to be very useful for tracking active users. In the UI, each user subscribes to a topic that reports back which users are active, and in our example that topic will produce a message every 2 seconds. The client will reply to every message containing a list of users with its own heartbeat, which then updates the message being sent to other clients. If a client hasn't checked in for more than 5 seconds (i.e. missed two heartbeats), we consider them offline. This gives us near real time resolution of users being available to chat. Users will appear in a box on the left hand side of the screen, clicking on a name will pull up a chat window for them, and names with an envelope next to them have new messages.

writing an implementation

Since we're writing a chat client, we need some way of modeling a message, as shown below:
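
A bare-bones bean is all this takes; the field names below are my choice, not gospel:

```java
public class Message {

  private String sender;    // set server side from the authenticated session
  private String recipient; // who the message is going to
  private String content;   // the text typed into the chat window

  public String getSender() { return sender; }
  public void setSender(String sender) { this.sender = sender; }
  public String getRecipient() { return recipient; }
  public void setRecipient(String recipient) { this.recipient = recipient; }
  public String getContent() { return content; }
  public void setContent(String content) { this.content = content; }
}
```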

We also need a place to send messages to; in this case a controller just like you would create in Spring MVC:
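
A sketch of what that controller looks like:

```java
import java.security.Principal;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class MessageController {

  private final SimpMessagingTemplate template;

  @Autowired
  public MessageController(SimpMessagingTemplate template) {
    this.template = template;
  }

  @MessageMapping("/chat") // /app/chat once the application prefix is applied
  public void handleMessage(Message message, Principal principal) {
    message.setSender(principal.getName()); // never trust the client's claimed sender
    template.convertAndSendToUser(message.getRecipient(), "/queue/messages", message);
    if (!message.getSender().equals(message.getRecipient())) {
      // echo back to the sender; skipped when you message yourself to avoid doubles
      template.convertAndSendToUser(message.getSender(), "/queue/messages", message);
    }
  }
}
```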

There's some interesting stuff happening here. First, we're injecting an instance of SimpMessagingTemplate into our controller. This class is what allows us to send messages to individual user queues via the method convertAndSendToUser. In our controller we're also assigning the sender ourselves based on the session information that Spring Security has identified, since allowing the client to specify who the sender was produces the same problem the email spec currently has. It's worth noting here that the message is sent to both the sender and the recipient, meaning the message has passed through the server before it appears in the sender's own chat window. The one exception is messaging yourself for some reason: we skip the second send so you don't get the message twice (though I should probably just ditch that case altogether).

Lastly, I'd like to draw your attention to the @MessageMapping annotation, which binds the method the annotation is on to the path we set up in our configuration class earlier.

That's actually all we need server side to send and receive messages believe it or not. Next is determining who's signed in.

We're going to start with a controller that receives a heartbeat from users who are signed into chat:
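
A sketch of the heartbeat controller:

```java
import java.security.Principal;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.stereotype.Controller;

@Controller
public class ActiveUserController {

  @Autowired
  private ActiveUserService activeUserService;

  @MessageMapping("/activeUsers")
  public void heartbeat(Principal principal) {
    // the user's identity rides along in the message headers via Spring Security
    activeUserService.mark(principal.getName());
  }
}
```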

In this case, we're receiving a message that contains information in the header about who the user is, and we mark their most recent heartbeat in a class called the ActiveUserService, which can be seen below:
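
A sketch of the service; the one-minute expiry is arbitrary, it just keeps dead entries from piling up:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.TimeUnit;

import org.springframework.stereotype.Service;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

@Service
public class ActiveUserService {

  private static final long TIMEOUT_MILLIS = TimeUnit.SECONDS.toMillis(5);

  // last check-in time per user; entries that stop being written age out on their own
  private final LoadingCache<String, Long> lastSeen = CacheBuilder.newBuilder()
      .expireAfterWrite(1, TimeUnit.MINUTES)
      .build(new CacheLoader<String, Long>() {
        @Override
        public Long load(String username) {
          return 0L; // a user we've never seen counts as checked in at the epoch
        }
      });

  public void mark(String username) {
    lastSeen.put(username, System.currentTimeMillis());
  }

  public Set<String> getActiveUsers() {
    long cutoff = System.currentTimeMillis() - TIMEOUT_MILLIS;
    Set<String> active = new TreeSet<String>();
    for (Map.Entry<String, Long> entry : lastSeen.asMap().entrySet()) {
      if (entry.getValue() >= cutoff) {
        active.add(entry.getKey());
      }
    }
    return active;
  }
}
```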

When we call the mark method, we set an updated time for when the user last checked in, which is stored in a Google Guava LoadingCache instance. We also have a method that will aggregate the user names of all the active users, based on the metric of them checking in within the last 5 seconds. This is the information we want to send back to our topic, which is handled with the code below:
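
A sketch of that publisher; the /topic/activeUsers destination is my naming:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class ActiveUserBroadcaster {

  @Autowired
  private ActiveUserService activeUserService;

  @Autowired
  private SimpMessagingTemplate template;

  @Scheduled(fixedDelay = 2000) // every two seconds, per the design above
  public void broadcastActiveUsers() {
    template.convertAndSend("/topic/activeUsers", activeUserService.getActiveUsers());
  }
}
```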

In this class, we combine the service and the template to send a message to a topic every two seconds, letting all clients know which users are available to chat. In return, users can reply when receiving a message, telling the server they're still active, creating a round trip heartbeat.

putting it all together with a user interface

And here's where I step out into the realm of things I actively try to avoid: creating a user interface. I am using jQuery here, which makes the task far easier, but I'm still admittedly not great in this department.

Before we go down that path, it's worth calling out that I used two JavaScript files that were present in the Spring guide for using their messaging API with websockets: sock.js and stomp.js. The first sets up the connection to the server, while the second implements the STOMP spec. They're both available for download (and originally found) at Github.

Warning and full disclosure: I'm a lousy JavaScript developer. The code below is likely horribly inefficient and could probably be rewritten by a UI developer in about 10 lines. That said, it does work, and that's good enough for me for now. If anyone reading this would like to advise me on improvements I'm open to feedback. I'll probably try to clean it up a bit over time.

connecting

Before we can do anything, we need to connect (this fires when the document is ready):

In this method, we do three very important things:

  1. We capture the user name for the session (this is used for figuring out who we're chatting with and for coloring text in the chat window).
  2. We subscribe to a queue that's bound to our user, and call showMessage when we receive one.
  3. We subscribe to a topic that sends out all the active user names, and call showActive when we get an update.

There's something slightly peculiar here worth calling out: notice that we construct the SockJS instance with /chat as the URL, yet you may notice that the other JavaScript below sends messages to /app/chat. Spring seems to have some kind of handshake here that I don't fully understand. If you change the value in this constructor, it will fail saying that /app/chat/info doesn't exist, so Spring appears to expose the resource /chat/info here to facilitate a handshake with SockJS.

showing a message

Whenever we receive a message, we want to render it in the chat window for that user, which is done in the code below:

First, we figure out which chat window we should load based on comparing the recipient to our username. We then create a span with the new message, colored to indicate the sender, append it to the window, and scroll to the bottom of the window to show the latest message. If we're updating a chat window that's not currently visible to the user, we render an envelope character next to their name in the user list to indicate that there are pending messages. Below is the code to create the envelope:

getting the chat window

In the example above we obtain a chat window to update with a message, but we have to create it if it doesn't already exist, as shown below:

We create a div to wrap everything, create a div for displaying messages, and then create a textarea for typing in new messages. We automatically hide created chat windows if another one already exists, because otherwise we'd interrupt the user's current chat with another person. We create binds for two events: hitting 'enter' and clicking 'Submit'. In both cases, since the event references a DOM object that has a unique id referencing the user you want to send the message to, I'm capturing that and using it to route. JavaScript's variable scoping is something I often mix up in practice, so I'm relying on the DOM to tell me who to send the message to instead. The method for doing this is shown below:

We send the message to /app/chat, which routes to our MessageController class, which will then produce a message to the appropriate queues. After sending the message, we clear out the textarea for the next message.

showing active users

As I mentioned above, whenever the server tells us about the active users, the client reports back that it's active, allowing us to have a heartbeat. The first method in the code below sends the message back to the server, while the second renders the active users:

There's a lot going on here, so let me break it down:

  • First, capture who was previously selected.
  • Second, capture which users had pending messages. We'll want to re-render this with the new list.
  • Create a new div for the next user list.
  • As we process users, preserve the one that was previously selected by assigning the appropriate CSS class (.user-selected).
  • Bind a click event to each user that will hide the current chat window, remove the select class from all users and show them as unselected, remove the pending messages status, pull up the chat window for that user, and mark that user as selected.
  • If a user was added to the new list that previously had messages pending, redisplay the envelope icon.

There's really almost no HTML to this example; virtually everything is generated as messages come in. I opted for a lot of re-rendering of data to avoid an overabundance of conditional checking, though I don't know if this is good in practice or not: again, I'm not particularly good with JavaScript.

try it out!

Like I mentioned, this is available on Github for you to play with. I know there's a lot covered in this blog post, and I'll probably update it several times to fill in gaps as I read it and/or get feedback. I hope this helps you get started using Spring and STOMP for doing some pretty sophisticated messaging between the browser and the server.

Friday, February 28, 2014

using jackson mixins and modules to fix serialization issues on external classes

the problem

Let's say you have some classes coming from another library that you need to serialize into JSON using Jackson. You can't manipulate the source code of these classes for one reason or another, and you have a problem:

These classes don't serialize correctly

There are a number of reasons this can happen, but in this post we're going to focus on two examples: two classes that have a recursive relationship to one another, and a class that doesn't conform to the bean spec.

dealing with recursive relationships

Let's say we have two classes, User and Thing, as shown below. User has a one to many relationship with Thing, and Thing has a many to one relationship back to its parent, User:
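
A sketch of the two classes (imagine they live in a library we can't touch; field names are my choice):

```java
import java.util.ArrayList;
import java.util.List;

// User.java
public class User {
  private String name;
  private List<Thing> things = new ArrayList<Thing>();

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  public List<Thing> getThings() { return things; }
  public void setThings(List<Thing> things) { this.things = things; }
}

// Thing.java
public class Thing {
  private String name;
  private User user; // the back-reference to the owning user

  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  public User getUser() { return user; }
  public void setUser(User user) { this.user = user; }
}
```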

Given these classes, let's say in a unit test we create users using the following code, establishing the recursive relationship:
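
Something like this helper in our test class, with names of my choosing:

```java
private User createUser() {
  User user = new User();
  user.setName("ian");

  Thing thing = new Thing();
  thing.setName("something");
  thing.setUser(user);          // the back-reference...
  user.getThings().add(thing);  // ...and the forward reference: a cycle

  return user;
}
```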

Now that we can create a user, let's try serializing:
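
The test itself is one line of serialization (same test class as the helper above):

```java
@Test
public void serializeUser() throws Exception {
  ObjectMapper mapper = new ObjectMapper();
  // User -> things -> user -> things -> ... until the stack gives out
  mapper.writeValueAsString(createUser());
}
```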

If you run that method, your test will fail, and you'll see a nice CPU spike on your computer: Jackson bounces between User and Thing, serializing in circles until the stack blows.

fixing recursive relationship issues with mixins

As it turns out, Jackson has an awesome feature called mixins that can address this type of problem (remember, we're assuming User and Thing are not modifiable).

Mixins allow you to create another class that has additional Jackson annotations for special serialization handling, and Jackson will allow you to map that class to the class you want the annotations to apply to. Let's create a mixin for Thing that specifies a @JsonFilter:
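
The mixin is nothing more than an annotation carrier:

```java
import com.fasterxml.jackson.annotation.JsonFilter;

// never instantiated; it exists purely to carry the annotation onto Thing
@JsonFilter("thing filter")
public abstract class ThingMixin {
}
```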

Now you might be thinking, "What's that 'thing filter' string referencing?" We have to add this mixin to the object mapper, binding it to the Thing class, and then we have to create a filter called "thing filter" that excludes Thing's field of user, as shown in the test below:
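
A sketch of that test; note that addMixIn is called addMixInAnnotations on Jackson versions before 2.5:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ser.FilterProvider;
import com.fasterxml.jackson.databind.ser.impl.SimpleBeanPropertyFilter;
import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider;

@Test
public void serializeUserWithMixin() throws Exception {
  ObjectMapper mapper = new ObjectMapper();
  mapper.addMixIn(Thing.class, ThingMixin.class);

  FilterProvider filters = new SimpleFilterProvider()
      .addFilter("thing filter", SimpleBeanPropertyFilter.serializeAllExcept("user"));

  String json = mapper.writer(filters).writeValueAsString(createUser());
  // the back-reference is gone, so there's no more recursion
  assertFalse(json.contains("\"user\""));
}
```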

If you run this test, you'll see that it passes.

dealing with a class that doesn't conform to the bean spec

Let's say we have two other classes, Widget and WidgetName. For some reason, the person who wrote WidgetName decided not to conform to the bean spec, meaning that when we serialize an instance of Widget, we can't see the data in WidgetName:
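
A sketch of the two classes; I'm assuming WidgetName hides its value behind a nonstandard accessor:

```java
// Widget.java: a perfectly normal bean
public class Widget {
  private WidgetName name;

  public WidgetName getName() { return name; }
  public void setName(WidgetName name) { this.name = name; }
}

// WidgetName.java: no getters, no public fields, nothing Jackson can see
public class WidgetName {
  private final String value;

  public WidgetName(String value) { this.value = value; }

  public String asString() { return value; } // a nonstandard accessor
}
```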

Let's say we're creating widgets using the code below:
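
Something like this helper (the name is arbitrary):

```java
private Widget createWidget() {
  Widget widget = new Widget();
  widget.setName(new WidgetName("whatsit"));
  return widget;
}
```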

If we try to create a widget like this and serialize it, we won't see the name. Here's a test that will serialize:
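
A sketch, again in the same test class. One assumption worth flagging: I disable FAIL_ON_EMPTY_BEANS (com.fasterxml.jackson.databind.SerializationFeature) so Jackson serializes the property-less WidgetName as an empty object instead of throwing:

```java
@Test
public void serializeWidget() throws Exception {
  ObjectMapper mapper = new ObjectMapper();
  // without this, Jackson errors out on WidgetName because it finds no properties at all
  mapper.disable(SerializationFeature.FAIL_ON_EMPTY_BEANS);
  System.out.println(mapper.writeValueAsString(createWidget()));
}
```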

And here's the output: the widget serializes, but the name comes out as an empty object with none of its data.

fixing classes that don't conform to the bean spec with modules and custom serializers

We can address this problem pretty easily using a Jackson module that provides a custom serializer for WidgetName. The serializer can be seen below, and it uses the JsonGenerator instance to write the value from the WidgetName argument:
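
A sketch of the serializer, using the asString() accessor assumed above:

```java
import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;

public class WidgetNameSerializer extends JsonSerializer<WidgetName> {

  @Override
  public void serialize(WidgetName name, JsonGenerator generator, SerializerProvider provider)
      throws IOException {
    // write the raw value instead of trying to introspect the bean
    generator.writeString(name.asString());
  }
}
```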

We now need to wire this up in order to use it. Below is a test that creates a SimpleModule instance, wires up our custom serializer to it, and registers the module within our ObjectMapper instance:
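
A sketch of that test; SimpleModule lives in com.fasterxml.jackson.databind.module:

```java
@Test
public void serializeWidgetWithModule() throws Exception {
  SimpleModule module = new SimpleModule();
  module.addSerializer(WidgetName.class, new WidgetNameSerializer());

  ObjectMapper mapper = new ObjectMapper();
  mapper.registerModule(module);
  System.out.println(mapper.writeValueAsString(createWidget()));
}
```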

If you run this test, you can see the correct output for the serialization: something along the lines of {"name":"whatsit"}, with the name's value finally present.

conclusion and resources

All the resources used in this example can be found on Github at https://github.com/theotherian/jackson-mixins-and-modules. If you end up having to serialize data that's outside of your control and is causing you problems, you should be able to get a lot of mileage out of these two solutions.

Thursday, January 23, 2014

writing your own tech blog part 1: the first 10k hits are the toughest

a milestone and a retrospective

Today, my blog exceeded 10,000 views. I'll concede that some of them are spam/bots, but most aren't. So, in a flagrant self endorsement, I'm going to write a blog post about writing blog posts. It's like Inception but your dead wife isn't trying to kill you in your subconscious.

I'm pretty excited that I've been able to not just hit that milestone, but that I've been keeping at it and getting some comments and feedback. I had a lot of motivation and ideas when I started writing, and I've had some opportunities to figure out what works, what doesn't, and what goals work well. I wanted to share these in an effort to help others who are thinking about writing a blog, or have one and are stuck or not feeling motivated.

things that work

Let's start off on a positive note: ways to succeed personally and publicly with this. Here's a list of observations and advice that I found worked well for me and was reflected in the comments I received and the page views I've attracted.

  • Try picking topics or problems that are somewhat mainstream, but also something people tend to get stuck on. For example, the post I wrote about Jersey 2.0 resource filters gets a decent amount of traffic, and it's the kind of thing that people often set and forget or run into issues getting started with. I found the documentation around this feature of Jersey to be lacking and had to do a fair amount of trial and error to get things working correctly, so I figured others had the same problem. I also use Jersey a lot, which brings me to my next point...
  • Pick things you're interested in. This probably sounds obvious, but I want to emphasize it. If you write about something you're genuinely interested in, it'll help with your motivation. It can also help broaden your horizons, because as you're writing you may start to think of new directions and features you want to explore. Sometimes it'll help you solve a problem, which leads me to...
  • If you solved a problem that seemed tricky to figure out, like it was something you'll forget and need again, or can help someone else in the same boat, just write about it. Seriously; make yourself a quick set of rough notes in a text editor and write about it later. I was surprised at how often I came back to one of my posts by doing this, and it's helped me professionally in that I've referred coworkers to my blog for guides on how to do certain things. You may be seeing a pattern here, because my next point is:
  • When you write a post, try to be as complete as possible with your examples and resources. Usually I try to do things with Maven and try to include a pom file which makes it easier to reproduce my work on your machine should you choose to mess with it. Try to avoid leaving out steps, or better yet, after you post something, start from a blank slate in an IDE and try running through your example; you may be surprised by what you realize you left others to figure out on their own.
  • Spend some time on self promotion and SEO. I typically tweet my blog posts, often times to coworkers to help get feedback since I'm lucky enough to work with some incredibly smart people. I also look at what people search on that leads them to my site, and what I search for when I'm researching for a post. Sometimes I will custom craft the URL for a post to try and get the most lift. As one example of having strong SEO, if you search for Jersey 2.0 filters, my post is the 4th result after a java.net article and two dzone articles. Against titans like that, I'll consider 4th place an achievement. (On an amusing note, my colleague Sanjay's post is 8th in the results. I'll be sure to tease him about this tomorrow, hehe)

setting goals

There are really two goals that I would advocate you take into consideration when writing a blog. I'm sure you'll have lots of goals, but there are two I think are particularly important.

  1. In the words of Interpol, pace is the trick. Set some modest, achievable goals for yourself, and don't try to go at your blog at some frenetic pace. You don't want to burn yourself out; you want to find that balance of productivity and desire that gives you a steady flow of work. For me, I set a goal of writing one post a month. Some months I don't write one, some I write two. By doing that though, I feel like I can achieve that goal and never end up dreading it. I don't feel like I'm setting an unreasonable goal, and I don't feel like I'm being lax on myself: it's enough to keep things moving.
  2. Once you get into a groove and get some solid material up, your blog can be a massively important extension of your resume. I've had multiple interviews since putting my URL in the header of my resume, and in several of them one or more technical interviewers told me "I took some time to look at your blog, and I really like the work you shared." It's a serious advantage to your cause, because you're demonstrating expertise in your field. I see 8+ page resumes all the time, and they're a nightmare to deal with. Often times they're crafted to get past filters, listing every technology and buzzword possible, but really they're just a massive obfuscation of what the candidate is good at. If you can stick with a blog and show what you're made of, that 1-2 page resume can focus on a smaller, stronger set of accomplishments, and your many useful, well written blog posts can do the rest of the talking. More than just a manifestation of skill, it demonstrates that you care about your craft, and are contributing something that can help others.

these ways lead to madness

There are ways to put yourself on the road to failure as well. Some of these are the counter case to my points above and may have already been inferred by you, but I think they're still worth discussing.

  • Don't overdo it. Like I said before, find a pace that works for you, but more importantly recognize what pace doesn't. For a while I tried to do two posts a month, and I started to feel burdened with it. Luckily I was able to realize this and have since backed down (as I said I shoot for one a month), but I think had I not realized this I may have put myself off.
  • Avoid overcomplicated material or examples. If you want to do something large, break it up into smaller manageable parts. I did this with my multipart guide on Maven archetypes, because writing all of that at once would have been repulsive. I broke up the subject into three logical portions that weren't so short that they added no value but also not so long that people gave up on them.
  • Don't skimp on research for your posts. If you're writing something and you think someone may have already solved it in a different way, or you feel like you're missing something, dig deeper. I've had multiple experiences where I was pretty far into writing a post, felt something was off, and that feeling turned out to be justified. I often spend 10-20 hours researching something before I write about it, to understand as much as possible before I publish. Throwing some half-baked scribble up just to add a post could discredit you in the eyes of your audience.
  • Try not to ignore, disregard, or fight in comments. I'll concede that I'm guilty of this in one case: I replied but didn't follow through on uploading more of my code to help someone. You want people to contribute back to you; sometimes they may offer additional resources you didn't know about, and other times they may be struggling with something you wrote, which could help you realize you glossed over some important details. If people read your blog and just see a bunch of comments with no answers, they could think that you don't care or have abandoned the blog. You could get the occasional troll as well. If you do, try to keep things civil, as your blog is a reflection of you. People may argue or even be combative; do your best to remain objective. If there's no resolution to the argument, there's no shame in calling that out. Saying "I think we just have two different ways of approaching this" or "I understand your point, but I don't agree with it. I do appreciate you commenting though" is a perfectly professional way to wrap things up.

next stop: 100k (hopefully)

I hope sharing these observations and opinions helps. I'm a big fan of knowledge sharing and seeing developers help and contribute to one another, and I think blogging is a fantastic way to accomplish that. I encourage anyone who takes their career as a developer seriously to give writing their own tech blog a shot; you may be surprised just how much you'll learn in the process.

Monday, January 20, 2014

random things I learned about solr today

sometimes I just go where the search results take me

I was doing some research on SolrCloud tonight, and wound up learning about enough disparate things that I figured I'd put together a quick page summarizing what I'd read along with some links. If nothing else this post is going to just end up being notes for my own memory, but if somehow this helps someone else along the way all the better.

so, what did you learn?

regarding data consistency and cap theorem

SolrCloud (or distributed Solr) claims to use a CP model for data, which surprised me. CP means consistent and partition tolerant, and refers to CAP theorem; if you aren't familiar with it, you should read about it. The more I read about this, though, the more I'd disagree that "CP" is correct, unless my understanding of CAP is flawed.

According to this, SolrCloud "is a CP system - in the face of partitions, we favor consistency over availability." This discussion makes things a little clearer, clarifying that SolrCloud "favors consistency over availability (mostly concerning writes)."

To expand on what this means, you need at least a high level understanding of Solr's sharding capabilities, which is about all I have at this point. When you shard, you have a leader for certain documents as well as replicas. When you update a document, Solr routes the request to the leader and then propagates the change to the replicas. If you happen to look up data from replicas as well as the leader, then you're actually using an eventual consistency model: a request that hits a replica, such as a real time get, can return a stale document compared to what the leader has if the leader hasn't finished distributing an update to the replicas.

The "A" is missing in this equation because it's possible that update requests will be rejected under certain conditions. SolrCloud uses ZooKeeper to elect a leader, and ZooKeeper will not allow a split brain condition to happen if part of the cluster goes down. If ZooKeeper doesn't agree on a leader due to a partition of the cluster and a potential split brain condition, update requests will be rejected, i.e. availability is sacrificed in favor of remaining consistent and being partition tolerant. However, availability is still maintained for read operations; the cluster will not reject those requests unless you've partitioned in such a way that there's no shard or replica corresponding to a particular document.

To wrap things up, I found the assertion of a CP model surprising when SolrCloud uses the same eventual consistency model as AP data stores such as CouchDB. To Solr's credit, changes should be distributed to replicas extremely fast, and soft commits happen within seconds, meaning the eventual consistency window is quite small; the odds that it will create a problem are slim.

soft commits, hard commits, real time gets and the transaction log

This is merely a terse summary of the documentation around real time gets and near real time searching, but since it falls under the "things I learned and may likely forget tomorrow morning" umbrella I'm writing about it.

First, it's important to call out that updating a document in Solr doesn't automatically make it available in searches. As of Solr 4, you can access a fresh version of a resource after it's been updated by using a real time get, as long as you have the transaction log enabled. The transaction log is not unlike what databases use to track changes, and to be honest, with this feature enabled, Solr can behave much more like a NoSQL database than I thought.
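
As a rough sketch of what a real time get looks like from SolrJ (this assumes a Solr 4.x-era client, so the class is HttpSolrServer, and the core URL and document id are made up):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class RealTimeGetExample {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            // Target the /get handler instead of the default /select.
            SolrQuery query = new SolrQuery();
            query.setRequestHandler("/get");
            query.set("id", "book-42");

            // Returns the latest version of the document, even before a
            // commit, as long as the transaction log is enabled.
            QueryResponse response = solr.query(query);
            System.out.println(response.getResponse().get("doc"));
        }
    }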

If you've updated a document, then you have two options to make the changes searchable: a hard commit or soft commit. A hard commit is expensive: it pushes changes to the file system (making them persistent) and has a significant performance impact. A soft commit is less expensive but not persistent. All updates are persistent if you have the transaction log enabled. According to Solr's documentation, it's reasonable to have soft commits automatically happen within seconds while hard commits are restricted to a much longer interval (maybe 10-15 minutes).
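
Normally you'd let autoCommit and autoSoftCommit settings in solrconfig.xml drive this, but for illustration, here's roughly what the two flavors look like when issued explicitly from SolrJ (again a 4.x-era sketch with made-up values):

    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitExample {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "book-42");
            doc.addField("title", "An Illustrative Title");
            solr.add(doc);

            // Soft commit: cheap, makes the update searchable, but not
            // persistent by itself (waitFlush, waitSearcher, softCommit).
            solr.commit(true, true, true);

            // Hard commit: expensive, flushes changes to the file system.
            solr.commit();
        }
    }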

You need to be aware of a few things when using the transaction log, as documented here. First, all your updates are written to the transaction log before a successful response is returned to a client. Second, performing a hard commit persists all changes in the transaction log. Third, not performing a hard commit periodically can result in a huge transaction log that can kick the crap out of your Solr instance on startup, should it try to replay changes on the order of gigs. So, keep an eye on how large you're allowing your transaction log to become, lest you send Solr into a tailspin on startup.

block joins make searching relational

If you've ever wanted a nice parent-child relationship on your documents, it's here. I'm not going to talk about this too much myself because I have a tenuous understanding of how to query this in Solr so far, and there are awesome resources here, here, here and here. One thing worth calling out is that apparently this won't work correctly in JSON until version 4.7 according to this jira ticket.

that's it for now

There's a lot more I'm planning on reading up on regarding Solr in the next few weeks, meaning there's a decent chance of more posts like this as well as in-depth follow ups to help people get started with certain features. In the meantime, feel free to share anything you think I or others should dedicate some time to learning about Solr next!

Wednesday, January 1, 2014

strong, soft, weak, and phantom references: the double rainbows of java

did you just make a double rainbow metaphor?

Yes, because like the double rainbow video, you may look at soft, weak or phantom references and say to yourself "what does it mean?!" It's easy to mix up soft vs weak if you're new to them, and it's also easy to be confused by soft references since they are often referred to as a "poor man's cache." Phantom references, on the other hand, offer such a different type of functionality that you'll probably never need to use them... unless you need to at which point they're literally the only thing that can do what they do in Java.

Hopefully, by the end of this post, you'll just look at them and say "full on!" (another video reference). This isn't even a triple rainbow; this is a quadruple rainbow we're dealing with. The goal is to help provide a clear but concise summary of what these do, when you would use them, and what their impact is on the JVM and garbage collector.

Before we continue though, a few words of warning about these types:

  • These types have a direct impact on Java's very sophisticated garbage collector. Use them wrong, and you can end up regretting it.
  • Since these types expedite when something becomes eligible for garbage collection, you can end up getting null back where you previously wouldn't have.
  • These should only be used in specific cases, where you're absolutely positive you need the behavior they offer. You should by no means look at these and see a general replacement for what you're doing, or a myriad of ways to change your code.
  • If you're going to use these at all, do a code review with someone else, preferably a senior developer, principal developer, or architect. They're powerful, impactful, and easy to use incorrectly, so even if you quadruple checked your code and think to yourself "Nailed it!", have someone else go over it with you; a pair of fresh eyes on code can make all the difference in the world.

strong references, and a brief jvm crash course

Any reference (or object) you create is a strong reference; this is the default behavior of Java. Strong references exist in the heap for as long as they're reachable, which means some thread in the application can reach that reference (object) without actually using a Reference instance, and potentially longer depending on the necessity for a full GC cycle. Any reference you create (barring things that are interned, which is another discussion) is added to the heap, first in an area called the young generation. The garbage collector keeps an eye on the young generation all the time, since most objects (references) that get created are short lived and eligible to be garbage collected shortly after their creation. Once an object survives the young generation's GC cycles, it's promoted to the old generation and sticks around.

Once something ends up in the old generation, the garbage collection characteristics are different. Full GC cycles will free up memory in the old generation, but to do so they have to pause the application to know what can be freed up. There's a lot more to talk about here, but it's beyond the scope of this post. Full GC pauses can be very slow depending on the size of your heap/old generation, and generally only happen when it's absolutely necessary to free up space (I say generally because the JVM's -client and -server options have an effect on this behavior). An object can exist in the old generation and no longer be strongly reachable in your application, but that doesn't mean it's necessarily going to be garbage collected if your application doesn't have to free up memory.

There are multiple reasons why the JVM may need to free up memory. You may need to move something from the young generation to the old and not have enough space. You may have an old generation that's highly fragmented from many small objects being collected, while your application needs a larger block of contiguous space to store something. Whatever the reason, if you can't free up the space you need in the heap, you'll be in OutOfMemoryError town. Even short of that, if you're low on memory you can end up with a barrage of young and old generation collection passes, often referred to as thrashing, which can tank the performance of your application.

weak references

I'm going to deviate from the canonical ordering in the title of this post and explain weak references before soft, because I think it's far easier to understand soft after you understand weak.

Think of a WeakReference as a way of giving a hint to the garbage collector that something is not particularly important and can be aggressively garbage collected. An object is considered "weakly reachable" if it's no longer strongly reachable and only reachable via the referent field of a WeakReference instance. You can wrap something inside of a WeakReference, which is then accessible via the get() method, as shown in the sketch below (the wrapped StringBuilder is just an arbitrary stand-in):
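
    import java.lang.ref.WeakReference;

    public class WeakReferenceExample {
        public static void main(String[] args) {
            // Wrap an object in a WeakReference; get() hands back the referent.
            WeakReference<StringBuilder> value =
                    new WeakReference<>(new StringBuilder("some data"));

            // After a GC, get() may return null if nothing else strongly
            // references the StringBuilder.
            StringBuilder data = value.get();
            if (data != null) {
                System.out.println(data);
            } else {
                System.out.println("referent was garbage collected");
            }
        }
    }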

If the instance inside of value is only reachable via value, it's eligible for garbage collection. If it's garbage collected, then value.get() will return null. The garbage collector has a level of awareness of weak references (and of all Reference types, for that matter) and can be more strategic about reclaiming memory as a result.

Now, you may be asking yourself: "when would I use weak references?" Most of the other resources on the web give one of two answers: pointing to WeakHashMap as an example of how to use them, or suggesting you use them for canonicalized mappings. I think both of these are poor answers for a few reasons: WeakHashMap is dangerous if used incorrectly (read the JavaDoc), and I highly doubt that the average person who is just learning about weak references will read "use them for canonicalized mappings", slap their hand on their forehead and exclaim "Oh! Of course!"

That said, there's a very practical example of using weak references via WeakHashMap written by Brian Goetz that I will attempt to paraphrase. When you store a key-value pair in a Map, the key and value are strongly reachable as long as the map is. Let's say we have a case where, once the key is garbage collected, the value should be too: a clear example of this is a parent-child relationship where we don't need the children if we don't have the parent. If we use the parent as the key to a WeakHashMap instance, it ends up wrapped in a WeakReference, meaning that once the parent is no longer strongly reachable anywhere else in the application it can be garbage collected. The WeakHashMap can then go back and clean up the value stored with the key by using a ReferenceQueue, which I explain further down in this post.
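
Here's a minimal sketch of that parent-child idea (Parent and Child are hypothetical classes, and System.gc() is only a hint, so the output isn't guaranteed):

    import java.util.Map;
    import java.util.WeakHashMap;

    public class WeakHashMapExample {
        static class Parent {}
        static class Child {}

        public static void main(String[] args) {
            Map<Parent, Child> children = new WeakHashMap<>();

            Parent parent = new Parent();
            children.put(parent, new Child());
            System.out.println("size before: " + children.size()); // 1

            // Drop the only strong reference to the key...
            parent = null;
            System.gc();

            // ...and once the key is collected, the map can expunge the
            // entry (size() polls the map's ReferenceQueue internally).
            System.out.println("size after: " + children.size()); // likely 0
        }
    }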

A couple of paragraphs back, I mentioned WeakHashMap can be dangerous, and I'd like to expand on that. It's not uncommon for someone to think a WeakHashMap is a good candidate for a cache, which is likely a recipe for problems. Usually a cache is used as a means to store data in memory that has a (potentially huge) cost to load, meaning the value is what you want to have long-lived and not necessarily the key, which is probably quite dynamic in nature. If you use a WeakHashMap without long-lived keys, you'll be purging stuff out of it quite often, probably causing a large amount of overhead in your application. So, if you're going to use WeakHashMap, the first question you must ask yourself is: how long-lived is the key to this map?

soft references, sometimes referred to as the "poor man's cache"

The differences between a SoftReference and a WeakReference are straightforward on the surface but quite complex behind the scenes. Just like the definition of "weakly reachable", a reference is considered "softly reachable" if it's no longer strongly reachable and is only reachable via the referent field of a SoftReference instance. While a weak reference will be GC'd as aggressively as possible, a soft reference will be GC'd only if an OutOfMemoryError would otherwise be thrown, or if it hasn't been used recently. The former case is pretty easy to understand: none of your strongly referenced objects are eligible for GC and you can't grow the heap any more, so you have to clear your soft references to keep your application running. The latter case is more complex: a SoftReference records the time of the last garbage collection whenever you call get(), and the garbage collector itself records the time of the most recent collection in a global field in SoftReference. Comparing these two timestamps tells the garbage collector how long, in GC terms, it's been since the reference was last accessed, which is how it decides whether the referent has been used recently.

Here's an example of using a SoftReference (a minimal sketch; the byte array stands in for something expensive to load):
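
    import java.lang.ref.SoftReference;

    public class SoftReferenceExample {
        public static void main(String[] args) {
            // Wrap something costly to rebuild in a SoftReference.
            SoftReference<byte[]> cached = new SoftReference<>(loadExpensiveData());

            // Later: try the soft reference first, and rebuild it if the
            // garbage collector reclaimed the referent under memory pressure.
            byte[] data = cached.get();
            if (data == null) {
                data = loadExpensiveData();
                cached = new SoftReference<>(data);
            }
            System.out.println(data.length + " bytes available");
        }

        // Hypothetical stand-in for a costly load (disk, network, etc).
        private static byte[] loadExpensiveData() {
            return new byte[1024 * 1024];
        }
    }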

The JVM also provides a tuning parameter related to soft references called -XX:SoftRefLRUPolicyMSPerMB=(some time in millis). This parameter (set to 1000ms by default) indicates how long the value in a SoftReference (also called the referent) may survive when it's no longer strongly reachable in the application, based on the number of megabytes of free memory. So, if you have 100MB of free memory, your "softly reachable" object may last an additional 100 seconds by default within the heap. The reason I say "may" is that it's completely subject to when garbage collection takes place. If the softly reachable referent kicked around for 120 seconds and then became strongly reachable again, that time would reset and the referent wouldn't be available for garbage collection until the conditions I've mentioned were met again.
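
For instance, you might shorten that window like so (250 here is just an arbitrary value for illustration):

    java -XX:SoftRefLRUPolicyMSPerMB=250 MyApplication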

Now, regarding the "poor man's cache" label...

Sometimes you'll find questions online where someone asks about building a cache where data can be expired automatically, the topic of soft references comes up, and the asker gets scolded and told to use a cache library with Least Recently Used (LRU) semantics, like ehcache or Guava cache. While both of those, as well as many other caching libraries, have far more sophisticated ways of managing data than just relying on soft references, that doesn't mean soft references don't have value in regard to caching.

In fact, ehcache has a bit of a problem in this regard: everything it caches is strongly referenced, and while it does have LRU eviction, that eviction is lazy rather than eager. This means that you could have data that isn't being used sitting around in memory, strongly referenced and not eligible for GC, and not forced out of the cache because you haven't exceeded the maximum number of entries. Guava cache, on the other hand, has a builder method, CacheBuilder.softValues(), that allows you to specify that values be wrapped in SoftReference instances. If you're using a loading cache, the value can be repopulated automatically if it's been garbage collected. In this case, soft references play nicely with a robust caching solution: you get the advanced semantics of LRU and maximum capacity, along with lazy cleanup by the garbage collector of values that aren't being used frequently.
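
As a sketch of what that looks like with Guava (loadExpensively is a hypothetical costly load, and the sizes are arbitrary):

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;

    public class SoftValuesCacheExample {
        public static void main(String[] args) throws Exception {
            LoadingCache<String, byte[]> cache = CacheBuilder.newBuilder()
                    .maximumSize(1000)  // LRU-style capacity bound
                    .softValues()       // values get wrapped in SoftReferences
                    .build(new CacheLoader<String, byte[]>() {
                        @Override
                        public byte[] load(String key) {
                            return loadExpensively(key);
                        }
                    });

            // If the softly referenced value was collected, get() quietly
            // invokes load() again to repopulate it.
            System.out.println(cache.get("some-key").length);
        }

        // Hypothetical costly load; imagine a database or remote call here.
        private static byte[] loadExpensively(String key) {
            return key.getBytes();
        }
    }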

phantom references: the tool you'll never need until you need it

Think of phantom references as what the finalize() method should have been in the first place.

Similarly to WeakReference and SoftReference, you can wrap an object in a PhantomReference instance. However, unlike the other two types, the constructor for PhantomReference requires a ReferenceQueue instance as well as the instance you're wrapping. Also unlike the other two types, the get() method of a PhantomReference always returns null. So, why does get() always return null, and what does a ReferenceQueue do?

A phantom reference only serves one purpose: to provide a way to find out if its referent has been garbage collected. An object is said to be "phantom reachable" if it is no longer strongly reachable in the application and is only reachable via the referent field of a PhantomReference instance. When the referent is garbage collected, the phantom reference is put on the reference queue instance passed into its constructor. By polling the queue, you can find out if something has been garbage collected.

Extending PhantomReference lets you provide metadata about what was garbage collected. For example, let's say we have a CustomerPhantomReference class that has a referent of type Customer and also stores a numeric id for that customer. Let's also assume there's some resource cleanup we can do once a customer is no longer in memory. By having a background thread poll the reference queue used by the CustomerPhantomReference instance, we can get the phantom reference back, giving us the numeric id of the customer that was garbage collected so we can perform cleanup based on that id. At face value this may sound very similar to the example I provided with weak references, so allow me to clarify: in the weak reference case, we were making other data available to be GC'd, while in this case we're performing resource cleanup that's functional in nature rather than just making something no longer strongly reachable.
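
A minimal sketch of that scenario (the Customer class, the id, and the cleanup are all hypothetical, and as always System.gc() is just a hint):

    import java.lang.ref.PhantomReference;
    import java.lang.ref.Reference;
    import java.lang.ref.ReferenceQueue;

    public class PhantomReferenceExample {
        static class Customer {
            final long id;
            Customer(long id) { this.id = id; }
        }

        // Carries the customer's id as metadata; get() still returns null.
        static class CustomerPhantomReference extends PhantomReference<Customer> {
            final long customerId;
            CustomerPhantomReference(Customer customer, ReferenceQueue<? super Customer> queue) {
                super(customer, queue);
                this.customerId = customer.id;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ReferenceQueue<Customer> queue = new ReferenceQueue<>();
            Customer customer = new Customer(42L);
            CustomerPhantomReference ref = new CustomerPhantomReference(customer, queue);

            customer = null; // drop the strong reference to the referent
            System.gc();

            // Blocks (up to 1s here) until the collector enqueues the reference.
            Reference<? extends Customer> collected = queue.remove(1000);
            if (collected == ref) {
                System.out.println("clean up resources for customer " + ref.customerId);
            }
        }
    }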

Given that, it should be clear that the reason the constructor of a PhantomReference instance requires a ReferenceQueue is that a phantom reference is useless without the queue: the only thing it tells you is that something has been garbage collected. Still though, what about get() returning null?

One of the dangers of the finalize() method is that you can reintroduce strong reachability by leaking a reference to the instance the method is being executed from. Since PhantomReference will only return null from its get() method, it doesn't provide a way for you to make the referent strongly reachable again.

so what do reference queues do in regard to weak and soft references?

We already know that soft and weak references provide a way to have things garbage collected when they would normally be strongly reachable. We also know that phantom references use a reference queue as a way to provide feedback when something is garbage collected, which is really the purpose of phantom references to begin with. So why would we want soft and weak references to be queued up too?

The reason is actually quite simple: your soft and weak references are themselves strongly referenced. That's right, you could potentially end up hitting an OutOfMemoryError because of an overabundance of now useless SoftReference or WeakReference instances that are still strongly referenced even though the values they effectively proxied have been garbage collected.

Using a ReferenceQueue allows you to poll for Reference instances whose referents have been garbage collected, and remove them (or null them out). There's an example of this visible in WeakHashMap.expungeStaleEntries(), where the map polls its ReferenceQueue whenever you call size(), or whenever getTable() or resize() is called internally.
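
To make that concrete, here's a rough sketch of that polling pattern (the numbers are arbitrary):

    import java.lang.ref.Reference;
    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.WeakReference;
    import java.util.ArrayList;
    import java.util.List;

    public class ExpungeExample {
        public static void main(String[] args) {
            ReferenceQueue<Object> queue = new ReferenceQueue<>();
            List<WeakReference<Object>> refs = new ArrayList<>();

            // Each WeakReference registers itself with the queue.
            for (int i = 0; i < 1000; i++) {
                refs.add(new WeakReference<>(new Object(), queue));
            }
            System.gc(); // just a hint; collection isn't guaranteed

            // Drop the now-useless WeakReference wrappers so they can be
            // garbage collected too, instead of piling up.
            Reference<?> stale;
            while ((stale = queue.poll()) != null) {
                refs.remove(stale);
            }
            System.out.println(refs.size() + " live references remain");
        }
    }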

additional resources

Garbage Collection by Bill Venners
Understanding Weak References by Ethan Nicholas
Memory Thrashing by Steven Haines
Understanding Java Garbage Collection by Sangmin Lee
WeakHashMap is Not a Cache! by Domingos Neto
Plugging Memory Leaks with Weak References by Brian Goetz
How Hotspot Decides to Clear Soft References by Jeremy Manson