Heap in the Cloud: Using Amazon EC2 To Analyze Large JVM Heaps

Earlier this week I had to analyze a heap bigger than 1GB, and it was frustrating: several commercial and freeware profilers I tried either ran out of their own heap or crawled to a halt (the one notable exception was Eclipse MAT, which performed surprisingly well when started with -Xmx1100m). My development machine has 3GB of RAM, but the maximum achievable Java heap on a 32-bit system is somewhat below 1.5GB, for two reasons: Windows does not give more than 2GB of address space to any process, and the heap must be allocated as one contiguous block, so the achievable maximum depends on how the process address space is laid out. With all the in-memory data structures a profiler builds from the heap being analyzed, it is not hard to run out of memory. I noticed that MAT saves its indexes to disk, so it may have a more efficient approach to handling large structures.
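For reference, the standalone MAT distribution reads its JVM options from the MemoryAnalyzer.ini file next to its executable; raising its heap means editing the -Xmx value under the -vmargs line (a sketch, the rest of the file varies by MAT version):

    -vmargs
    -Xmx1100m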

What if the heap is really big, though, such as one from a 64-bit production server? Without a 64-bit machine at hand it might be impossible to analyze. Enter Amazon’s cloud. It costs US$0.80 an hour to have a 64-bit Linux machine with 15GB of RAM – and you get it up and running within a minute.

Here are the steps:

    • Sign up for Amazon Web Services. If you have already bought something from amazon.com, go into My Account; you’ll see the option there.
    • Follow this excellent tutorial. In particular, rejoice that instead of using the WS API you can use the ElasticFox Firefox add-on. For offline reading, there is a PDF manual.
    • Decide which machine image (AMI) you want (you can also build your own, but that is not necessary to begin with). I’ve used a pre-made base 64-bit Ubuntu 8.04 image prepared by Alestic; a search of the AMI list for ‘ubuntu’ shows it among the results.

Do not skip the step in the tutorial that sets up the security (firewall) group. By default, instantiated machines are not reachable from the Internet. It’s very easy to add rules to allow SSH and HTTP access, and you can do so even after starting the instance. It’s all described in the above-mentioned tutorial.
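If you prefer Amazon’s command-line API tools over ElasticFox, opening the two ports used below (22 for SSH, 7000 for jhat’s web server) looks roughly like this (a sketch, assuming your instances run in the default security group):

    ec2-authorize default -p 22
    ec2-authorize default -p 7000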

  • Right-click on an image and instantiate it. Choose an instance type with enough memory, such as c1.xlarge. It takes less than a minute to have it running.
  • SSH to the instance (you are root) and install Java.
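    Connecting uses the keypair created during the tutorial’s setup (a sketch; the key file name and the public host name are placeholders):
        ssh -i id_rsa-mykeypair root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com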
    root@domU-12-31-38-00-F1-31:~# apt-get update
    <snip>
    root@domU-12-31-38-00-F1-31:~# apt-get install -y sun-java6-jdk
  • Transfer the heap over SCP (compressed).
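    Gzipping the dump first pays off, as hprof files compress well (a sketch; the key file and host names are placeholders):
        gzip heapdump-1227559143095.hprof
        scp -i id_rsa-mykeypair heapdump-1227559143095.hprof.gz root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/tmp/
    Then gunzip it on the instance before running jhat.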
  • Run the jhat heap analysis tool coming with the JDK. While not as developed as a full-blown profiler, it has a built-in web server and thus can be used remotely. To speed up the analysis even more, I disabled the calculation of incoming references to objects.
    root@domU-12-31-38-00-F1-31:/tmp# jhat -J-Xmx7000m -refs false heapdump-1227559143095.hprof
    Reading from heapdump-1227559143095.hprof...
    Dump file created Mon Nov 24 20:39:10 UTC 2008
    Snapshot read, resolving...
    Resolving 19142623 objects...
    Snapshot resolved.
    Started HTTP server on port 7000
    Server is ready.
  • All that was left was to start the browser on my machine and point it to the instance, using the default port that jhat listens on (7000).
    As mentioned above, jhat does not claim to be a full profiler, yet its object navigation and the nifty SQL-like Object Query Language are pretty decent for a first foray into the heap (browse to server:7000/oql/ and server:7000/oqlhelp/; you can search for things like “all classes with more than 500,000 instances” or “all instances of class X larger than a given size”). Small gotcha: make sure to type the final / in those URLs, as jhat returns an error otherwise.
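    For instance, these two queries follow the syntax documented at /oqlhelp/ (the second uses a made-up class name as a placeholder; sizeof() returns the shallow size of an object):
        select s from java.lang.String s where s.count > 1000
        select o from instanceof com.example.Cache o where sizeof(o) > 4096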
  • When everything is done, terminate the instance. If you want persistent storage, use Amazon S3.
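    Termination also works from the command-line API tools (a sketch; the instance ID is a placeholder):
        ec2-terminate-instances i-10a64379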
  • Total bill, as copied from Amazon My Account: $1.65. The time shows as 2 hours because I fiddled around after creating the instance and because the 200MB transfer of the heap to Amazon was slow; there was other network traffic through the link at the time.
    $0.80 per High-CPU Extra Large Instance (c1.xlarge) instance-hour (or partial hour) 2 Hrs         $1.60
    $0.100 per GB Internet Data Transfer - data transfer into Amazon EC2                0.297 GB      $0.03
    $0.170 per GB Internet Data Transfer - data transfer out of Amazon EC2              0.019 GB      $0.01
    $0.010 per GB Regional Data Transfer - in/out between Availability Zones
                             or when using public IP or Elastic IP addresses            0.000035 GB   $0.01
    
    TOTAL:                                                                                            $1.65

Happy clouding 🙂
Razvan

