Code reuse by copy and paste Friday, Feb 13 2009 

I was investigating what would need to be done to implement camel-case code assist for the Scala plugin when I ran across the following gem in CharOperation (Eclipse JDT):

public static final boolean camelCaseMatch(char[] pattern, int patternStart, int patternEnd, char[] name, int nameStart, int nameEnd, boolean samePartCount) {

	/* !!!!!!!!!! WARNING !!!!!!!!!!
	 * The content of this method has been fully copied to
	 * SearchPattern#camelCaseMatch(String, int, int, String, int, int, boolean).
	 * 
	 * So, if current method is modified, do NOT forget to copy again its content
	 * to SearchPattern corresponding method!
	 */

And in SearchPattern:

public static final boolean camelCaseMatch(String pattern, int patternStart, int patternEnd, String name, int nameStart, int nameEnd, boolean samePartCount) {

	/* !!!!!!!!!! WARNING !!!!!!!!!!
	 * The algorithm of this method has been entirely copied from
	 * CharOperation#camelCaseMatch(char[], int, int, char[], int, int, boolean).
	 * Array lengths have been replaced with call to {@link String#length()} and
	 * array direct access have been replaced with call to {@link String#charAt(int)}.
	 * 
	 * So, do NOT modify this method directly to fix any bug but modify first the
	 * corresponding CharOperation method and do the copy again to be sure that
	 * these two methods are kept synchronized.
	 */

I am sure there is a reason why this was done, but how ugly…
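
As an aside, in case you're wondering what camelCaseMatch actually does, here's a minimal, greedy sketch in Scala (my own simplification, not the JDT algorithm, which also handles digits, backtracking and the samePartCount flag):

object CamelCase {
  // "NPE" and "NuPoEx" should both match "NullPointerException"
  def camelCaseMatch(pattern: String, name: String): Boolean = {
    def loop(p: Int, n: Int): Boolean =
      if (p == pattern.length) true
      else if (n == name.length) false
      else if (pattern(p) == name(n)) loop(p + 1, n + 1)
      else
        // on a mismatch, jump to the next upper-case part boundary in name
        (n + 1 until name.length).find(i => name(i).isUpper).exists(i => loop(p, i))
    loop(0, 0)
  }
}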

Update: At least JDT made it easy to implement what I wanted.

Objects with no allocation overhead Wednesday, Dec 17 2008 

We have all heard about how HotSpot is really good at dealing with short-lived objects (both allocation and GC), but the truth is that object allocation is still pretty costly when compared to operations like addition or multiplication. Allocating an object for each step of an iteration over a large collection to make a simple computation might sound like the kind of thing no-one would ever do, but it’s actually quite common in languages like Scala (as described in a previous post). In Java-land, if you use the Function class from Google Collections with primitive wrappers, the same issue may occur. There are many JVM improvements that could help depending on the specific case (generic specialisation, value types, fixnums to name a few), but it’s unclear if/when we’ll get them.

So, what about that title? Escape analysis was introduced during Java 6, but the information gathered was only used for lock elision. However, this information can also be used for other interesting optimisations like scalar replacement and stack allocation. There have been doubts about the benefits of stack allocation (discussed in the comments), so the focus has been on scalar replacement, where the object is never materialised in memory at all. At least that's the theory.
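
To make this more concrete, here's a hypothetical sketch of the transformation scalar replacement performs (my illustration, not HotSpot's actual mechanics):

object ScalarReplacementSketch {
  class Point(val x: Int, val y: Int)

  def dist2(p: Point) = p.x * p.x + p.y * p.y

  // If dist2 is inlined and the JIT can prove the Point never escapes,
  //   dist2(new Point(a, b))
  // can be rewritten to operate on the constructor arguments directly:
  //   a * a + b * b
  // so no Point instance is ever allocated.
  def example(a: Int, b: Int): Int = dist2(new Point(a, b))
}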

Edward Lee started a couple of threads in the Hotspot-dev mailing list about scalar replacement here and here which reminded me to do some experiments. Note that this feature is still in development so the results posted here are preliminary and not indicative of how it will perform once it’s final. Still, it’s interesting to see how well it works at this time. I picked the latest JDK7 build (build 41) and ran a few tests with the following arguments passed to java “-XX:MaxPermSize=256m -Xms128m -Xmx3328m -server -XX:+UseConcMarkSweepGC” and either -XX:-DoEscapeAnalysis or -XX:+DoEscapeAnalysis.
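
For the record, each run looked roughly like this (a sketch; the classpath details assume a local Scala installation and depend on the Scala version):

java -cp $SCALA_HOME/lib/scala-library.jar:. -XX:MaxPermSize=256m -Xms128m \
  -Xmx3328m -server -XX:+UseConcMarkSweepGC -XX:+DoEscapeAnalysis Test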

I started by choosing the simplest test possible. Note that either testSimpleAllocation or testNoAllocation would be commented out.

class C(val a: Int, val b: Int)

object Test {
  def main(args: Array[String]) {
    for (i <- 1 to 10) testSimpleAllocation()
    //for (i <- 1 to 10) testNoAllocation()
  }
  
  def testSimpleAllocation() = {
    System.gc()
    val time = System.currentTimeMillis
    var i = 0
    var sum = 0
    while (i < 1000000000) {
      sum += baz(new C(i + 1, i + 2))
      i += 1
    }
    println(sum)
    println(System.currentTimeMillis - time)
  }
  
  def testNoAllocation() = {
    System.gc()
    val time = System.currentTimeMillis
    var i = 0
    var sum = 0
    while (i < 1000000000) {
      sum += baz(i + 1, i + 2)
      i += 1
    }
    println(sum)
    println(System.currentTimeMillis - time)
  }
  
  def baz(a: Int, b: Int): Int = a + b
  
  def baz(c: C): Int = c.a + c.b
}

The results were:


testNoAllocation: 403
testSimpleAllocation with EA: 1006
testSimpleAllocation without EA: 9190

As we can see, escape analysis has a tremendous effect and the method finishes in 11% of the time taken with it disabled. However, the version with no allocation is still substantially faster.

Next, I decided to test a foreach method that takes a Function object (this time in Java, because it does less magic behind the scenes):

package test;

public class EscapeAnalysis {
  
  interface Function<T, R> {
    R apply(T value);
  }
  
  interface IntFunction {
    void apply(int value);
  }
  
  static class BoxedArray {
    private final int[] array;
    
    public BoxedArray(int length) {
      array = new int[length];
    }
    
    public int length() {
      return array.length;
    }
    
    public void foreach(Function<Integer, Void> function) {
      for (int i : array)
        function.apply(new Integer(i));
    }
    
    public void foreach(IntFunction function) {
      for (int i : array)
        function.apply(i);
    }

    public void set(int index, int value) {
      array[index] = value;
    }

    public void foreachWithAutoboxing(Function<Integer, Void> function) {
      for (int i : array)
        function.apply(i);
    }
  }
  
  public static void main(String[] args) {
    BoxedArray array = new BoxedArray(100000000);
    /* We are careful not to restrict our ints to the ones in the Integer.valueOf cache */
    for (int i = 0; i < array.length(); i++)
      array.set(i, i);
    
    for (int i = 0; i < 10; i++)
      test(array);
  }

  private static void test(BoxedArray array) {
    System.gc();
    long time = System.currentTimeMillis();
    final int[] sum = new int[] { 0 };
    
    /* Uncomment the one that should be executed */
    testIntegerForeach(array, sum);
//    testIntegerForeachWithAutoboxing(array, sum);
//    testIntForeach(array, sum);

    System.out.println(System.currentTimeMillis() - time);
    System.out.println(sum[0]);
  }
  
  private static void testIntegerForeachWithAutoboxing(BoxedArray array, final int[] sum) {
    array.foreachWithAutoboxing(new Function<Integer, Void>() {
      public Void apply(Integer value) {
        sum[0] += value;
        return null;
      }
    });
  }
  
  private static void testIntegerForeach(BoxedArray array, final int[] sum) {
    array.foreach(new Function<Integer, Void>() {
      public Void apply(Integer value) {
        sum[0] += value;
        return null;
      }
    });
  }

  private static void testIntForeach(BoxedArray array, final int[] sum) {
    array.foreach(new IntFunction() {
      public void apply(int value) {
        sum[0] += value;
      }
    });
  }
}

The results were:


testIntForeach: 130
testIntegerForeachWithAutoboxing with EA: 1064
testIntegerForeach with EA: 224
testIntegerForeachWithAutoboxing without EA: 1039
testIntegerForeach without EA: 1024

This test shows something interesting: EA gives no improvement if Integer.valueOf (called by auto-boxing) is used instead of new Integer. Apart from that, the results are somewhat similar to the first ones (EA provides a substantial boost, but not enough to match the specialised implementation). After quickly verifying that the boxing methods in ScalaRunTime had the same effect as Integer.valueOf, I decided that it was not worth testing more complex scenarios.
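
My guess at why valueOf defeats EA: the allocation is hidden behind a cache lookup, so the JIT can no longer prove that the returned object is a fresh, non-escaping allocation. A minimal sketch of the valueOf pattern (mirroring in spirit what Integer.valueOf and ScalaRunTime's boxing methods do; details assumed):

object BoxCache {
  private val cache = new Array[java.lang.Integer](256)
  for (i <- 0 until 256) cache(i) = new java.lang.Integer(i - 128)

  def valueOf(i: Int): java.lang.Integer =
    if (i >= -128 && i < 128) cache(i + 128) // shared, pre-allocated instance
    else new java.lang.Integer(i)            // fresh allocation
}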

It seems like there’s a lot of potential for scalar replacement, but HotSpot needs to do a better job at detecting cases where it can be used safely. If nothing else, at least knowledge of the valueOf methods should be hardcoded into the system. I hope that a more general solution is found, though, because other languages on the JVM may use different methods (as mentioned earlier, Scala uses methods in ScalaRunTime instead). It will also be interesting to see if the performance can get even closer to the no-allocation case. Since the feature is still in development, we can hope. :)

Update: The early access release of JDK 6 Update 14 also contains this feature.
Update 2: This feature is enabled by default since JDK 6 Update 23.

JDK6u12, Java 7 and Devoxx Saturday, Dec 13 2008 

With JavaFX 1.0 out of the way, it seems like the Java 7 train has started moving again.

Concurrency plans posted by Doug Lea. Two points of interest are that the parallel collections API is probably not going to get included (but the core fork-join framework will) and that a low-level fences API might get included. Without closures, it’s hard to include the parallel collections API with good performance for primitives while avoiding interface explosion. The positive aspect is that it will continue to be developed outside the JDK giving it more flexibility in terms of API evolution.
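
As a taste of the core framework, here's a sketch of a recursive task in Scala (written against the java.util.concurrent names planned for JDK 7; the preview jars ship the same API in the jsr166y package, so the import would differ):

import java.util.concurrent.{ForkJoinPool, RecursiveTask}

class SumTask(a: Array[Int], lo: Int, hi: Int) extends RecursiveTask[Long] {
  override def compute(): Long =
    if (hi - lo < 1000) {
      // small enough: sum sequentially
      var s = 0L; var i = lo
      while (i < hi) { s += a(i); i += 1 }
      s
    } else {
      // split in half, fork the left part and compute the right one here
      val mid = (lo + hi) / 2
      val left = new SumTask(a, lo, mid)
      left.fork()
      new SumTask(a, mid, hi).compute() + left.join()
    }
}

// usage: new ForkJoinPool().invoke(new SumTask(array, 0, array.length))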

Devoxx summary by Danny Coward containing various links covering JavaFX, JDK7 and Languages in the JVM.

A summary, written by Hamlet D’Arcy, of the Java 7 update given by Mark Reinhold. Small language changes and Project Jigsaw had already been announced, and many of the items described there had been known for a while, but some new ones are null dereference expressions, the XRender pipeline for Java 2D and the chance of an MVM-lite. The XRender pipeline is interesting because it was developed by Clemens Eisserer as part of the OpenJDK challenge and would be a nice example of an external contribution to OpenJDK.

One thing that is clear from the various announcements is that Sun intends to use OpenJDK as the place where work for Java 7 will take place (even if JSRs are also planned). Some people have expressed concern that this is bad because it circumvents the JCP and because OpenJDK is a project controlled by Sun, since the vast majority of the committers are from Sun. Given how opaque the JCP is, I don’t see how OpenJDK could be worse, but my hope is that Sun will run these projects transparently as promised, with the end result being a more diverse set of committers thanks to greater external participation.

Finally, for the people who still use the proprietary JDK from Sun, there’s an early access release of JDK6u12 that includes a 64-bit Java Plugin.

Not just for academics? Friday, Dec 5 2008 

At least ClariFI seems to think so. Paul Chiusano announced in the Scala mailing list that ClariFI is looking to hire developers “with a strong background in functional programming to do a mixture of Scala and Java programming”. The mailing list thread contains more details including a link to the official job posting.

It will be interesting to see if posts like that will become a common occurrence in the future.

32-bit or 64-bit JVM? How about a Hybrid? Tuesday, Oct 14 2008 

Before x86-64 came along, the decision on whether to use 32-bit or 64-bit mode for architectures that supported both was relatively simple: use 64-bit mode if the application requires the larger address space, 32-bit mode otherwise. After all, no point in reducing the amount of data that fits into the processor cache while increasing memory usage and bandwidth if the application doesn’t need the extra addressing space.

When it comes to x86-64, however, there’s also the fact that the number of named general-purpose registers has doubled from 8 to 16 in 64-bit mode. For CPU intensive apps, this may mean better performance at the cost of extra memory usage. On the other hand, for memory intensive apps 32-bit mode might be better if you manage to fit your application within the address space provided. Wouldn’t it be nice if there were a single JVM that covered the common cases?

It turns out that the HotSpot engineers have been working on doing just that through a feature called Compressed oops. The benefits:

  • Heaps up to 32GB (instead of the theoretical 4GB in 32-bit that in practice is closer to 3GB)
  • 64-bit mode so we get to use the extra registers
  • Managed pointers (including Java references) are 32-bit so we don’t waste memory or cache space

The main disadvantage is that encoding and decoding is required to translate from/to native addresses. HotSpot tries to avoid these operations as much as possible and they are relatively cheap. The hope is that the extra registers give enough of a boost to offset the extra cost introduced by the encoding/decoding.
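
A rough sketch of the arithmetic involved (my illustration; it assumes the default 8-byte object alignment and ignores the heap base for simplicity):

object CompressedOopsSketch {
  // With 8-byte alignment the low 3 bits of every object address are zero,
  // so a 64-bit address can be stored in 32 bits by shifting it right...
  def encode(address: Long): Int = (address >>> 3).toInt

  // ...and shifted back on every use. 2^32 slots of 8 bytes each is where
  // the 32GB heap limit comes from.
  def decode(compressed: Int): Long = (compressed.toLong & 0xFFFFFFFFL) << 3
}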

Compressed Oops have been included (but disabled by default) in the performance release JDK6u6p (which requires you to fill in a survey), so I decided to try it in an internal application and compare it with plain 64-bit mode and 32-bit mode.

The tested application has two phases, a single threaded one followed by a multi-threaded one. Both phases do a large amount of allocation so memory bandwidth is very important. All tests were done on a dual quad-core Xeon 5400 series with 10GB of RAM. I should note that a different JDK version had to be used for 32-bit mode (JDK6u10rc2) because there is no Linux x86 build of JDK6u6p. I chose the largest heap size that would allow the 32-bit JVM to run the benchmark to completion without crashing.

I started by running the application with a smaller dataset:

JDK6u10rc2 32-bit
Single-threaded phase: 6298ms
Multi-threaded phase (8 threads on 8 cores): 17043ms
Used Heap after full GC: 430MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

JDK6u6p 64-bit with Compressed Oops
Single-threaded phase: 6345ms
Multi-threaded phase (8 threads on 8 cores): 16348ms
Used Heap after full GC: 500MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC -XX:+UseCompressedOops

The performance numbers are similar and the memory usage of the 64-bit JVM with Compressed Oops is 16% larger.

JDK6u6p 64-bit
Single-threaded phase: 6463ms
Multi-threaded phase (8 threads on 8 cores): 18778ms
Used Heap after full GC: 700MB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

The performance is again similar, but the memory usage of the 64-bit JVM is much higher, over 60% higher than the 32-bit JVM one.

Let’s try the larger dataset now:

JDK6u10rc2 32-bit
Single-threaded phase: 14188ms
Multi-threaded phase (8 threads on 8 cores): 73451ms
Used Heap after full GC: 1.25GB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC

JDK6u6p 64-bit with Compressed Oops
Single-threaded phase: 13742ms
Multi-threaded phase (8 threads on 8 cores): 76664ms
Used Heap after full GC: 1.45GB
JVM Args: -XX:MaxPermSize=256m -Xms3328m -Xmx3328m -server -XX:+UseConcMarkSweepGC -XX:+UseCompressedOops

The performance difference and memory overhead are the same as with the smaller dataset. The benefit of Compressed Oops here is that we still have plenty of headroom while the 32-bit JVM is getting closer to its limits. This may not be apparent from the heap size after a full GC, but during the multi-threaded phase the peak memory usage is quite a bit larger and the fact that the allocation rate is high does not help. This becomes more obvious when we look at the results for the 64-bit JVM.

JDK6u6p 64-bit
Single-threaded phase: 14610ms
Multi-threaded phase (8 threads on 8 cores): 104992ms
Used Heap after full GC: 2GB
JVM Args: -XX:MaxPermSize=256m -Xms4224m -Xmx4224m -server -XX:+UseConcMarkSweepGC

I had to increase the Xms/Xmx to 4224m for the application to run to completion. Even so, the multi-threaded phase took a substantial performance hit compared to the other two JVM configurations. All in all, the 64-bit JVM without compressed oops does not do well here.

In conclusion, it seems that compressed oops is a feature with a lot of promise and it allows the 64-bit JVM to be competitive even in cases that favour the 32-bit JVM. It might be interesting to test applications with different characteristics to compare the results. It’s also worth mentioning that since this is a new feature, it’s possible that performance will improve further before it’s integrated into the normal JDK releases. As it is though, it already hits a sweet spot and if it weren’t for the potential for instability, I would be ready to ditch my 32-bit JVM.

Update: The early access release of JDK 6 Update 14 also contains this feature.
Update 2: This feature is enabled by default since JDK 6 Update 23.

Local variables scope in HotSpot Saturday, Oct 11 2008 

Assume the following code:

  public void foo() {
    C c = new C();
    bar(c.baz); // assume that baz does not reference c
  }

I was under the impression that HotSpot would not garbage collect c before the local variable went out of scope although it is legally allowed to do so. I even heard of issues where unnecessary garbage was retained as a result (usual workarounds are to null the variable or to inline it).
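
A sketch of the nulling workaround, in Scala this time (same hypothetical names as the snippet above; doLotsOfOtherWork stands in for the rest of a long-running method):

  def foo() {
    var c = new C()
    bar(c.baz)
    c = null // drop the last reference so the object can be collected
             // before the rest of this method runs
    doLotsOfOtherWork()
  }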

According to this bug report (Server JIT optimization can cause objects to go out of scope prematurely), the Server VM is actually able to GC the object before the local variable goes out of scope. It would be interesting to know if it can detect such cases reliably.

AOT and JVM Wednesday, Oct 8 2008 

There have been a few AOT compilers for Java for some time. Two of the better known examples are GCJ and Excelsior JET. Even though Excelsior JET also has a JIT, the main focus was on the AOT aspect.

There have been many suggestions that HotSpot should cache the JIT-generated code to improve start-up performance, avoid the need for a warm-up phase on every invocation of the application and possibly share more data between applications. This is particularly relevant for desktop applications. HotSpot engineers claim that this is a complex task and that the benefits might not be as great as expected, because AOT-generated code would be slower than JIT-generated code, so they implemented a simpler solution in JDK 5: Class Data Sharing. It’s pretty limited because it only works with the Client VM and the serial garbage collector, and because it only loads a set of classes from the system jar.
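
For reference, Class Data Sharing is controlled through the -Xshare family of flags (Sun JDK, Client VM only):

java -Xshare:dump   # (re)generate the shared archive from the system jar
java -Xshare:on     # require the shared archive, failing if it is unavailable
java -Xshare:off    # disable class data sharing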

Interestingly, IBM did some work in this area in the IBM JRE for Java 5 and improved it further in the one for Java 6. This is described in some detail in a developerWorks article. It’s worth reading if you’re interested in this sort of thing, but I’ll list some points that I found interesting:

  • AOT code can be stored into a shared class cache allowing another JVM to use it to reduce startup time.
  • Class compression is used to increase the amount of classes that are stored in the cache.
  • AOT code is also subject to JIT compilation if it’s invoked often in order to optimise it further.
  • AOT code executed by a JVM is copied from the shared class cache, so there is no direct footprint benefit, but there are memory and CPU savings from reusing this code rather than repeating the compilation.
  • AOT compilation is based on heuristics that select methods that are likely to improve startup time.
  • Eclipse without any additional plugins took 3.313 seconds to start with AOT and shared classes versus 4.204 seconds without. Larger improvements would be expected with more plugins installed.
  • Tomcat startup time improved from 1138ms to 851ms. Using shared classes without AOT caused the time to be 950ms, which means that both AOT and shared classes contributed to the improvement.

Downloading the IBM JRE/JDK requires registration and in case you need the Windows version, you must download a bundle that includes Eclipse. Links to the various downloads can be found in the developerWorks article.

These are promising results, and it would be interesting to find out whether HotSpot would also benefit from similar enhancements.

Scala in the JVM Languages Summit Tuesday, Sep 30 2008 

Since I’ve seen no mention of this in Planet Scala (although it was discussed in #scala), here are some Scala-related links from the JVM Languages summit:

  • Optimizing Higher-Order Functions in Scala by Iulian Dragos. I haven’t tried the various optimisation flags in scalac yet, but I am curious if they will have any effect in the primitive collections with scala benchmark from a previous blog entry. These flags have been a bit buggy in the past, but hopefully recent fixes have solved the problems.
  • Scalify: Automatic translation from Java to Scala by Paul Phillips. This tool looks very interesting because the approach used is a lot more sophisticated than jatran. I have been meaning to take a look at it, but all of my code uses generics and it doesn’t support that yet.
  • Cliff Click’s post-Summit blog entry. It includes a link to the slides for his talk, “Fast Bytecodes for Funny Languages”. Scala is one of the languages covered (although very briefly) with the results coming from the translation of a simple Java micro-benchmark to Scala available here (in the comments). As expected, Scala can generate bytecode that is pretty much the same as Java if low-level constructs are used, but the performance suffers quite a bit if idiomatic Scala is used while dealing with primitives (at least with the default compilation flags).

For more general coverage of the summit:

Ricky Clarkson mentioned on IRC that videos of the talks will be made available on InfoQ and Google Video.

Update: John Rose’s post-summit blog entry is a nice round-up.

HotSpot JIT “Optimization” Saturday, Sep 27 2008 

I noticed Scala ticket #1377 the other day. Even though I think the bug is valid, it’s for different reasons from the ones supplied by the reporter. I had my doubts that the casts would have any measurable cost, and the informal benchmark provided looked suspect. This is all described in the ticket, so I won’t go over it again here. The point of this post is some weird behaviour I noticed in my version of the benchmark.

The benchmark looks like:

object PerfTest {
  def f(i: Iterator[_]) = i.hasNext

  def main(args: Array[String]) {
    test()
  }
    
  def test() {
    var i = 0
    while (i < 10) {
      val time = System.currentTimeMillis
      val result = inner()
      i += 1
      println("Time: " + (System.currentTimeMillis - time))
      println("Value: " + result)
    }
  }
    
  def inner() = {
    //val empty = Iterator.empty // checkcast occurs within loop
    val empty: Iterator[_] = Iterator.empty // checkcast outside loop
    var i = 0L
    while (i < 10000000000L) {
      f(empty)
      i += 1
    }
    i
  }
}

According to this version of the benchmark, the extra casts don't cause any performance difference, so I will just focus on the one without the extra casts. The results with JDK 6 Update 10 RC1 64-bit were:

Time: 4903
Time: 4883
Time: 7213
Time: 7197
Time: 7203
Time: 7212
Time: 7185
Time: 7190
Time: 7210
Time: 7188

That is odd. Instead of getting faster, the benchmark gets slower from the third iteration onwards. With JITs like these, we're better off with interpreters. ;) Ok, that's an exaggeration, but let's investigate this further.

Before we do so, I should clarify two things. The ones paying attention might wonder where the "Value" output went. The purpose of that is just to make sure HotSpot does not optimize the inner loop away, so I trimmed it from the output. The second point is that the usage of a 64-bit JVM is important because the problem does not occur on the 32-bit version of HotSpot. I initially used the 64-bit version because it's much faster when performing operations on longs and I had to use a long index in the loop to allow the benchmark to take a reasonable amount of time.

Ok, so the first step is to re-run the benchmark with -Xbatch and -XX:+PrintCompilation. The results of that were:

  1   b   test.PerfTest$::f (7 bytes)
  2   b   scala.Iterator$$anon$5::hasNext (2 bytes)
  1%  b   test.PerfTest$::inner @ 12 (35 bytes)
Time: 4938
  3   b   test.PerfTest$::inner (35 bytes)
Time: 4861
Time: 7197
Time: 7199
Time: 7241


Ok, so it seems like the inner loop got JIT'd a second time and that made it slower than the previous version, which sounds like a bug. I converted the code into Java and it turns out that we don't need much more than a loop in the inner method to reproduce the problem:


static long inner() {
  long i = 0L;
  for (; i < 10000000000L; ++i);
  return i;
}

Before reporting the bug to Sun, I was curious whether -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation would give any interesting information. I pasted some parts that someone who is not a HotSpot engineer might understand with the help of the LogCompilation overview (http://wikis.sun.com/display/HotSpotInternals/LogCompilation+overview).


<nmethod compile_id='1' compile_kind='osr' compiler='C2' entry='0x000000002ee0d000' size='520' address='0x000000002ee0ced0' relocation_offset='264' code_offset='304' stub_offset='400' consts_offset='420' scopes_data_offset='424' scopes_pcs_offset='456' dependencies_offset='504' oops_offset='512' method='test/PerfTest inner ()J' bytes='19' count='1' backedge_count='14563' iicount='1' stamp='0.054'/>
<writer thread='12933456'/>
<task_queued compile_id='1' method='test/PerfTest inner ()J' bytes='19' count='2' backedge_count='5000' iicount='2' blocking='1' stamp='4.965' comment='count' hot_count='2'/>
<writer thread='43477328'/>
  1   b   test.PerfTest::inner (19 bytes)
<nmethod compile_id='1' compiler='C2' entry='0x000000002ee0d240' size='488' address='0x000000002ee0d110' relocation_offset='264' code_offset='304' stub_offset='368' consts_offset='388' scopes_data_offset='392' scopes_pcs_offset='424' dependencies_offset='472' oops_offset='480' method='test/PerfTest inner ()J' bytes='19' count='2' backedge_count='5000' iicount='2' stamp='4.966'/>
<...>

<task compile_id='1' compile_kind='osr' method='test/PerfTest inner ()J' bytes='19' count='1' backedge_count='14563' iicount='1' osr_bci='5' blocking='1' stamp='0.050'>
<...>
<task_done success='1' nmsize='120' count='1' backedge_count='14563' stamp='0.054'/>

<task compile_id='1' method='test/PerfTest inner ()J' bytes='19' count='2' backedge_count='5000' iicount='2' blocking='1' stamp='4.965'>
<...>
<task_done success='1' nmsize='88' count='2' backedge_count='5000' stamp='4.966'/>

So, it seems like the inner method was first JIT’d through on-stack-replacement (i.e. OSR). Usually, on-stack-replacement does not produce the best code, so HotSpot recompiles the method again once it gets the chance. Unfortunately, it generates slower code for some reason even though the compiled method size is smaller (88 instead of 120).

We could go deeper in this investigation by using a debug JVM like Kohsuke Kawaguchi did here, but I decided to just file a bug and let the HotSpot engineers handle it. :) I will update the blog once a bug number is assigned (I wonder when Sun is going to fix their bug database so that the bug id becomes available after submission of a bug…).

Update: The bug is now available in the Sun database.

Array variance and method overloading Sunday, Sep 21 2008 

One of these days I used jatran to translate the code for a simple Swing application from Java to Scala. Jatran’s translation had a few issues (some of which have been fixed since I tried it) so there were quite a few compiler errors after it finished. I fixed them and launched the application.

It was at this point that I noticed a change in behaviour and it was not clear why it was happening. After some investigation, it turned out that it was caused by the fact that variance of arrays in Scala is different from Java combined with poor usage of method overloading.

TreePath has two constructors that do different things, but one of them takes a supertype of the other:

public TreePath(Object[] path)
public TreePath(Object singlePath)

Because arrays are nonvariant in Scala, scalac chooses the constructor that takes an Object for the following code:

def pathToRoot: Array[TreeNode] = ...
val treePath = new TreePath(pathToRoot)

On the other hand, arrays are covariant in Java so the constructor that takes an Object[] is chosen by javac for the equivalent Java code. One way to force scalac to choose the correct constructor is to cast the result of pathToRoot:

def pathToRoot: Array[TreeNode] = ...
val treePath = new TreePath(pathToRoot.asInstanceOf[Array[AnyRef]])

It’s a subtle issue that may affect other people automatically converting Java code to Scala, so I thought I’d bring it up in case it saves them some time. :)
