32-bit or 64-bit JVM? How about a Hybrid? Tuesday, Oct 14 2008 

Before x86-64 came along, the decision on whether to use 32-bit or 64-bit mode on architectures that supported both was relatively simple: use 64-bit mode if the application requires the larger address space, 32-bit mode otherwise. After all, there's no point in reducing the amount of data that fits into the processor cache while increasing memory usage and bandwidth if the application doesn't need the extra addressing space.

When it comes to x86-64, however, there's also the fact that the number of named general-purpose registers has doubled from 8 to 16 in 64-bit mode. For CPU-intensive apps, this may mean better performance at the cost of extra memory usage. On the other hand, for memory-intensive apps, 32-bit mode might be better if you manage to fit your application within the address space provided. Wouldn't it be nice if there was a single JVM that would cover the common cases?

It turns out that the HotSpot engineers have been working on doing just that through a feature called Compressed Oops. The benefits:

  • Heaps up to 32GB (instead of the theoretical 4GB of a 32-bit JVM, which in practice is closer to 3GB)
  • 64-bit mode so we get to use the extra registers
  • Managed pointers (including Java references) are 32-bit so we don’t waste memory or cache space

The main disadvantage is that encoding and decoding are required to translate between compressed references and native addresses. HotSpot tries to avoid these operations as much as possible, and they are relatively cheap. The hope is that the extra registers give enough of a boost to offset the extra cost introduced by the encoding/decoding.
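
To make the translation concrete, below is a minimal sketch of the arithmetic involved, assuming the commonly described scheme of 32-bit heap offsets scaled by the 8-byte object alignment; the class and method names are mine, not HotSpot's. It also shows where the 32GB limit comes from: 2^32 possible offsets times 8 bytes.

  // Illustrative only: how a 32-bit compressed reference maps to a
  // 64-bit native address, assuming 8-byte object alignment.
  final class CompressedOopSketch {
      static final int SHIFT = 3; // log2 of the 8-byte alignment
      static long heapBase;       // native address where the heap starts

      // Decode: 32-bit compressed reference -> 64-bit native address.
      // The largest offset is (2^32 - 1) << 3, i.e. just under 32GB.
      static long decode(int compressed) {
          return heapBase + ((compressed & 0xFFFFFFFFL) << SHIFT);
      }

      // Encode: 64-bit native address -> 32-bit compressed reference.
      static int encode(long address) {
          return (int) ((address - heapBase) >>> SHIFT);
      }
  }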

Compressed Oops have been included (but disabled by default) in the performance release JDK6u6p (which requires you to fill in a survey), so I decided to try it in an internal application and compare it with plain 64-bit mode and 32-bit mode.

The tested application has two phases: a single-threaded one followed by a multi-threaded one. Both phases do a large amount of allocation, so memory bandwidth is very important. All tests were done on a machine with two quad-core Xeon 5400 series processors and 10GB of RAM. I should note that a different JDK version had to be used for 32-bit mode (JDK6u10rc2) because there is no Linux x86 build of JDK6u6p. I chose the largest heap size that would allow the 32-bit JVM to run the benchmark to completion without crashing.

I started by running the application with a smaller dataset:

JDK6u10rc2 32-bit
Single-threaded phase: 6298ms
Multi-threaded phase (8 threads on 8 cores): 17043ms
Used Heap after full GC: 430MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

JDK6u6p 64-bit with Compressed Oops
Single-threaded phase: 6345ms
Multi-threaded phase (8 threads on 8 cores): 16348ms
Used Heap after full GC: 500MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC -XX:+UseCompressedOops

The performance numbers are similar, and the memory usage of the 64-bit JVM with Compressed Oops is about 16% higher.

JDK6u6p 64-bit
Single-threaded phase: 6463ms
Multi-threaded phase (8 threads on 8 cores): 18778ms
Used Heap after full GC: 700MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

The performance is again similar, but the memory usage of the plain 64-bit JVM is much higher: over 60% more than that of the 32-bit JVM.

Let’s try the larger dataset now:

JDK6u10rc2 32-bit
Single-threaded phase: 14188ms
Multi-threaded phase (8 threads on 8 cores): 73451ms
Used Heap after full GC: 1.25GB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

JDK6u6p 64-bit with Compressed Oops
Single-threaded phase: 13742ms
Multi-threaded phase (8 threads on 8 cores): 76664ms
Used Heap after full GC: 1.45GB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC -XX:+UseCompressedOops

The performance difference and memory overhead are the same as with the smaller dataset. The benefit of Compressed Oops here is that we still have plenty of headroom, while the 32-bit JVM is getting closer to its limits. This may not be apparent from the heap size after a full GC, but during the multi-threaded phase the peak memory usage is quite a bit higher, and the high allocation rate does not help. This becomes more obvious when we look at the results for the plain 64-bit JVM.

JDK6u6p 64-bit
Single-threaded phase: 14610ms
Multi-threaded phase (8 threads on 8 cores): 104992ms
Used Heap after full GC: 2GB
JVM Args: -XX:MaxPermSize=256m -Xms4224m -Xmx4224m -server -XX:+UseConcMarkSweepGC

I had to increase the Xms/Xmx to 4224m for the application to run to completion. Even so, the multi-threaded phase took a substantial hit compared to the other two JVM configurations. All in all, the 64-bit JVM without Compressed Oops does not do well here.

In conclusion, Compressed Oops is a feature with a lot of promise: it allows the 64-bit JVM to be competitive even in cases that favour the 32-bit JVM. It would be interesting to test applications with different characteristics to compare the results. It's also worth mentioning that since this is a new feature, performance may improve further before it's integrated into the normal JDK releases. As it is, though, it already hits a sweet spot, and if it weren't for the potential for instability, I would be ready to ditch my 32-bit JVM.

Update: The early access release of JDK 6 Update 14 also contains this feature.
Update 2: This feature is enabled by default since JDK 6 Update 23.

Local variables scope in HotSpot Saturday, Oct 11 2008 

Assume the following code:

  public void foo() {
    C c = new C();
    bar(c.baz); // assume that baz does not reference c
  }

I was under the impression that HotSpot would not garbage collect c before the local variable went out of scope, although it is legally allowed to do so. I had even heard of issues where unnecessary garbage was retained as a result (the usual workarounds are to null the variable or to inline it).

According to this bug report (Server JIT optimization can cause objects to go out of scope prematurely), the Server VM is actually able to GC the object before the local variable goes out of scope. It would be interesting to know whether it can detect such cases reliably.
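
Out of curiosity, a WeakReference makes it possible to probe for this behaviour. The sketch below reuses the names from the snippet above and is hypothetical: whether the reference is actually cleared depends on the VM, the compilation level and GC timing.

  import java.lang.ref.WeakReference;

  public class EarlyGcProbe {
      static class C {
          Object baz = new Object(); // baz does not reference the C instance
      }

      static void bar(Object o) { /* keeps no reference to o */ }

      public static void main(String[] args) {
          C c = new C();
          WeakReference<C> ref = new WeakReference<C>(c);
          bar(c.baz);
          // c is never used again, so the Server VM may treat it as
          // unreachable here, even though the variable is still in scope.
          System.gc();
          System.out.println("collected early? " + (ref.get() == null));
      }
  }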

AOT and JVM Wednesday, Oct 8 2008 

There have been a few AOT compilers for Java for some time. Two of the better known examples are GCJ and Excelsior JET. Even though Excelsior JET also has a JIT, its main focus is on the AOT aspect.

There have been many suggestions that HotSpot should cache the JIT-generated code to improve start-up performance, avoid the need for a warm-up phase on every invocation of the application and possibly share more data between applications. This is particularly relevant for desktop applications. HotSpot engineers claim that this is a complex task and that the benefits might not be as great as expected, because AOT-generated code would be slower than JIT-generated code, so they implemented a simpler solution in JDK 5: Class Data Sharing. It's pretty limited because it only works with the Client VM and the serial garbage collector, and because it only loads a set of classes from the system jar.
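
For reference, Class Data Sharing is driven from the command line; a minimal illustration, with the application name as a placeholder:

  # regenerate the shared archive (typically done once at install time)
  java -Xshare:dump

  # require or disable the archive for a particular run of the Client VM
  java -client -Xshare:on MyApp
  java -client -Xshare:off MyApp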

Interestingly, IBM did some work in this area in the IBM JRE for Java 5 and improved it further in the one for Java 6. This is described in some detail in a developerWorks article. It’s worth reading if you’re interested in this sort of thing, but I’ll list some points that I found interesting:

  • AOT code can be stored into a shared class cache, allowing another JVM to use it to reduce startup time (see the command-line sketch after this list).
  • Class compression is used to increase the number of classes that can be stored in the cache.
  • AOT code is also subject to JIT compilation if it’s invoked often in order to optimise it further.
  • AOT code executed by a JVM is copied from the shared class cache, so there is no direct footprint benefit, but there are memory and CPU savings from reusing this code rather than repeating the compilation.
  • AOT compilation is based on heuristics that select methods that are likely to improve startup time.
  • Eclipse without any additional plugins took 3.313 seconds to start with AOT and shared classes versus 4.204 seconds without. Larger improvements would be possible with more plugins installed.
  • Tomcat startup time improved from 1138ms to 851ms. Using shared classes without AOT resulted in 950ms, which means that both AOT and shared classes contributed to the improvement.
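
The shared class cache mentioned above is controlled in the IBM JRE through the -Xshareclasses family of options. A rough illustration follows; the cache name and size are made up, and the developerWorks article covers the details:

  # run with a named shared class cache (created on first use),
  # capped at 50MB
  java -Xshareclasses:name=demoCache -Xscmx50M MyApp

  # print statistics about an existing cache
  java -Xshareclasses:name=demoCache,printStats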

Downloading the IBM JRE/JDK requires registration, and if you need the Windows version, you must download a bundle that includes Eclipse. Links to the various downloads can be found in the developerWorks article.

These are promising results, and it would be interesting to find out whether HotSpot would also benefit from similar enhancements.