There was a race between these two functions such that thread A could
run dispose on thread B just before thread B finished exiting, with
the result that Thread::lock and/or Thread::systemThread would be
disposed twice, causing a crash.
Due to encoding limitations, the immediate operand of conditional
branches can be no more than 32KB forward or backward. Since the
JIT-compiled form of some methods can be larger than 32KB, and we also
do conditional jumps to code outside the current method in some cases,
we must work around this limitation.
The strategy of this commit is to provide inline, intermediate jump
tables where necessary. A conditional branch whose target is too far
for a direct jump will instead point to an unconditional branch in the
nearest jump table, which in turn branches to the actual target.
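To make the scheme concrete, here is a rough, self-contained sketch of
the decision each branch goes through; the Branch struct and resolve
helper are purely illustrative and are not the compiler's actual data
structures:

    #include <stddef.h>
    #include <stdio.h>

    // Conditional branches on PowerPC carry a signed displacement of
    // roughly +/-32KB, so targets farther away cannot be reached directly.
    const ptrdiff_t MaxConditionalOffset = 0x7FFF;

    struct Branch {
      ptrdiff_t site;    // address of the conditional branch instruction
      ptrdiff_t target;  // address it logically wants to reach
    };

    // Decide where the conditional branch should actually point: straight
    // at its target if that fits in the displacement, otherwise at a slot
    // in the nearest jump table, where an unconditional branch to the real
    // target is emitted.
    ptrdiff_t resolve(const Branch& b, ptrdiff_t nearestTableSlot) {
      ptrdiff_t offset = b.target - b.site;
      if (offset >= -MaxConditionalOffset && offset <= MaxConditionalOffset) {
        return b.target;
      }
      return nearestTableSlot;
    }

    int main() {
      Branch nearBranch = { 0x100, 0x1100 };   // 4KB away: reachable directly
      Branch farBranch = { 0x100, 0x20100 };   // 128KB away: out of range
      printf("near branch points to 0x%lx\n", (long) resolve(nearBranch, 0x4100));
      printf("far branch points to 0x%lx\n", (long) resolve(farBranch, 0x4100));
      return 0;
    }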
Unconditional immediate branches are also limited on PowerPC, but this
limit is 32MB, which is not an impediment in practice. If it does
become a problem, we'll need to encode such branches using multiple
instructions.
Thread.yield is not enough to ensure that the tracing thread does not
starve the test thread on some QEMU VMs, so we use wait/notifyAll to
make sure both threads have opportunities to run and the test actually
finishes.
The VM uses Integer and Long instances internally to wrap the results
of dynamic method invocations, but Method.invoke should use the
correct, specific type for the primitive (e.g. Character for char).
While we can use Linux's jni.h to cross compile the i386 Mac OS build,
that doesn't work for the PowerPC one. Now we use the proper
/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Headers/jni.h
from the sysroot instead.
On PowerPC and ARM, we can't rely on the return address having already
been saved on the stack on entry to a thunk, so we must look for it in
the link register instead.
My previous attempt at this was incomplete; it did not address
Java->native->Java->native call sequences, nor did it address
continuations. This commit takes care of both.
This requires reducing HeapCapacity and CodeCapacity back to 128MB and
30MB respectively. I had set them to larger values to test
non-ProGuard'ed OpenJDK bootimage builds, which naturally needed a lot
more space. However, such builds aren't really useful in the real
world, and the compiler currently can't handle jumps or calls spanning
more than the maximum size of an immediate branch offset on ARM or
PowerPC, so I'm lowering them back down to more realistic values.
We can't rely on the C++ compiler to save the return address in a
known location on entry to each function we might call from Java
(although GCC 4.5 seems to do so consistently, which is why I hadn't
realized the unwinding code was relying on that assumption), so we
must store it explicitly in MyThread::ip in each thunk. For PowerPC
and x86, we continue saving it on the stack as always, since the
calling convention guarantees its location relative to the stack
pointer.
On Mac OS, signals sent using pthread_kill are never delivered if the
target thread is blocked (e.g. acquiring a lock or waiting on a
condition), so we can't rely on it and must use the Mach-specific
thread execution API instead to implement Thread.getStackTrace.
For Thread.interrupt, we must not only use pthread_kill but also
pthread_cond_signal to ensure the thread is woken up.
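As an illustration only (the Waiter struct, signal choice, and
interrupt helper below are made up for this sketch and are not the
VM's actual code), the wake-up pattern looks roughly like this in
plain pthreads:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    // Shared state for the sketch; the "interrupted" flag plays the role of
    // Thread.interrupt's interrupted status.
    struct Waiter {
      pthread_mutex_t mutex;
      pthread_cond_t condition;
      bool interrupted;
    };

    void* waitLoop(void* arg) {
      Waiter* w = static_cast<Waiter*>(arg);
      pthread_mutex_lock(&w->mutex);
      while (! w->interrupted) {
        pthread_cond_wait(&w->condition, &w->mutex);  // may block indefinitely
      }
      pthread_mutex_unlock(&w->mutex);
      return 0;
    }

    void interrupt(pthread_t target, Waiter* w) {
      pthread_mutex_lock(&w->mutex);
      w->interrupted = true;
      pthread_cond_signal(&w->condition);  // wake it if it waits on the condition
      pthread_mutex_unlock(&w->mutex);
      pthread_kill(target, SIGUSR1);       // and poke it in case it blocks elsewhere
    }

    void ignoreSignal(int) { }

    int main() {
      // install a no-op handler so the signal interrupts blocking calls
      // instead of killing the process
      struct sigaction action = {};
      action.sa_handler = ignoreSignal;
      sigaction(SIGUSR1, &action, 0);

      Waiter w = {};
      pthread_mutex_init(&w.mutex, 0);
      pthread_cond_init(&w.condition, 0);

      pthread_t thread;
      pthread_create(&thread, 0, waitLoop, &w);
      interrupt(thread, &w);
      pthread_join(thread, 0);
      printf("interrupted thread woke up and exited\n");
      return 0;
    }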
The stack mapping code was broken for cases of stack slots being
reused to hold primitives or addresses within subroutines after
previously being used to hold object references. We now bitwise "and"
the stack map upon return from the subroutine with the map as it
existed prior to calling the subroutine, which has the effect of
clearing map locations previously marked as GC roots where
appropriate.
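Roughly, the merge rule looks like this if we picture a stack map as a
bitset with one bit per slot (the names below are illustrative, not
the VM's actual types):

    #include <bitset>
    #include <stdio.h>

    // One bit per stack slot; a set bit means "this slot holds an object
    // reference and must be treated as a GC root".
    typedef std::bitset<8> StackMap;

    // On return from a subroutine, intersect the current map with the map
    // captured before the jsr.  Any slot the subroutine reused for a
    // primitive or a return address loses its GC-root bit.
    StackMap mergeAfterSubroutine(const StackMap& beforeCall,
                                  const StackMap& afterReturn)
    {
      return beforeCall & afterReturn;
    }

    int main() {
      StackMap beforeCall;
      beforeCall.set(2);      // slot 2 held an object reference before the jsr

      StackMap afterReturn;   // the subroutine reused slot 2 for a return
                              // address, so it is no longer a root there

      StackMap merged = mergeAfterSubroutine(beforeCall, afterReturn);
      printf("slot 2 is a GC root after merging: %d\n", (int) merged.test(2));
      return 0;
    }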
This test covers the case where a local stack slot is first used to
store an object reference and later to store a subroutine return
address. Unfortunately, this confuses the VM's stack mapping code;
I'll be working on a fix for that next.
The new test requires generating bytecode from scratch, since there's
no reliable way to get javac to generate the code we want. Since we
already had primitive bytecode construction code in Proxy.java, I
factored it out so we can reuse it in Subroutine.java.
It seems that older versions of GCC (4.0 and older, at least) generate
assembly files with duplicate symbols for function templates which
differ only by the attributes of the templated types. Newer versions
have no such problem, but we need to support both, hence the
workaround in this commit of using a dedicated, non-template "alias"
function where we previously used "cast<alias_t>".
We use a template function called "cast" to get raw access to fields
in the VM. In particular, we use this function in util.cpp to
treat reference fields as intptr_t fields so we can use the least
significant bit as the red/black flag in red/black tree nodes.
Unfortunately, this runs afoul of the type aliasing rules in C/C++,
and the compiler is permitted to optimize in a way that assumes such
aliasing cannot occur. Such optimization caused all the nodes in the
tree to be black, leading to extremely unbalanced trees and thus slow
performance.
The fix in this case is to use the __may_alias__ attribute to tell the
compiler we're doing something devious. I've also used this technique
to avoid other potential aliasing problems. There may be others
lurking, so a complete audit of the VM might be a good idea.
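For reference, the pattern looks roughly like this; the Node struct
and alias helper below are invented for the sketch and are not the
VM's actual cast machinery:

    #include <stdint.h>
    #include <stdio.h>

    // An intptr_t that is allowed to alias anything; reads and writes
    // through this type are exempt from the strict-aliasing assumptions
    // described above.
    typedef intptr_t __attribute__((__may_alias__)) alias_t;

    // Illustrative node type: the low bit of "left" doubles as the
    // red/black flag.
    struct Node {
      Node* left;
      Node* right;
    };

    // Non-template helper in the spirit of the workaround: expose a field
    // as a raw alias_t so we can manipulate its least significant bit.
    alias_t& alias(void* p) {
      return *static_cast<alias_t*>(p);
    }

    int main() {
      Node child = { 0, 0 };
      Node node = { &child, 0 };

      alias(&node.left) |= 1;    // mark the node red
      printf("red? %d\n", static_cast<int>(alias(&node.left) & 1));

      alias(&node.left) &= ~static_cast<intptr_t>(1);  // back to black
      printf("red? %d\n", static_cast<int>(alias(&node.left) & 1));
      return 0;
    }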