Per a recent request for help on the Google group, I spent some time
this past weekend trying to debug the MSYS build. The conclusion I
came to is that MSYS is not worth supporting anymore. The most recent
release is almost three years old, and I've been unable to find any
variant with a 64-bit-target compiler that will actually install. If
someone else cares enough about building Avian on MSYS to maintain it
themselves, patches are welcome.
Meanwhile, let's all just use Cygwin. Perhaps someone will build a
cool package manager on top of WSL and suddenly make both Cygwin and
MSYS obsolete.
Lambda expressions are tricky because they rely on invokedynamic, which
normally implies runtime code generation. However, since lambdas
don't actually use the "dynamicness" of invokedynamic, we can convert
them into static calls to synthetic classes at compile time.
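As a concrete example of the kind of expression involved, here is a
minimal sketch (the comments describe javac's standard translation,
not anything Avian-specific):

    public class LambdaExample {
      public static void main(String[] args) {
        // javac compiles this lambda capture site into an invokedynamic
        // instruction bootstrapped by java.lang.invoke.LambdaMetafactory;
        // at runtime the metafactory would normally generate a class
        // implementing Runnable on the fly.
        Runnable r = () -> System.out.println("hello from a lambda");
        r.run();
      }
    }

At image-build time we can instead resolve that call site once, bake a
synthetic Runnable implementation into the image, and rewrite the call
as an ordinary static invocation.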
Since I had already written code to synthesize such classes in Java
and I didn't want to rewrite it in C++, I needed to add support for
running Java code to the bootimage generator. And since the primary
VM used by the generator is purpose-built to generate AOT-compiled
code for a specific target architecture and is not capable of
generating or running JIT-compiled code for the host architecture, I
added support for loading a second, independent, host-specific VM for
running Java code.
The rest of the patch handles the fact that each method compilation
might cause new, synthetic classes to be created, so we need to make
sure those classes and their methods are included in the final heap
and code images. This required breaking some giant code blocks out of
makeCodeImage into their own methods, which makes the diff look
scarier than it really is.
This would theoretically break compatibility with apps using embedded
classpaths on big-endian architectures, because of the size type
extension. However, we don't currently support any big-endian
architectures, so it shouldn't be a problem.
Lots has changed since we forked Android's libcore, so merging the
latest upstream code has required extensive changes to the
Avian/Android port.
One big change is that we now use Avian's versions of
java.lang.Object, java.lang.Class, java.lang.ClassLoader, some
java.lang.reflect.* classes, etc. instead of the Android versions.
The main reason is that the Android versions have become very
Dex/Dalvik-specific, and since Avian is based on Java class files, not
dex archives, that code doesn't make sense here. This has the side
benefit that we can share more native code with classpath-avian.cpp
and reduce the amount of Java/C++ code duplication.
Posix_getaddrinfo needs to access fields in libcore.io.StructAddrinfo
via JNI, so we tell ProGuard to preserve them.
This commit also includes a minor indentation tweak in README.md and
removes -fno-rtti from lzma-build-cflags to avoid a warning from GCC.
So there I was, planning to just fix one little bug: Thread.holdsLock
and Thread.yield were missing for the Android class library. Easy
enough, right? So, I added a test, got it passing, and figured I'd go
ahead and run ci.sh with all three class libraries. Big mistake.
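(The new test lives in Avian's suite; the snippet below is only a
rough sketch of the kind of assertions involved, and the expect helper
is a stand-in for whatever assertion mechanism the suite actually
uses.)

    public class HoldsLockSketch {
      public static void main(String[] args) {
        Object lock = new Object();

        expect(! Thread.holdsLock(lock));
        synchronized (lock) {
          expect(Thread.holdsLock(lock));
        }
        expect(! Thread.holdsLock(lock));

        // Thread.yield has no observable contract beyond hinting to the
        // scheduler, so the most we can check is that it doesn't throw:
        Thread.yield();
      }

      private static void expect(boolean v) {
        if (! v) throw new RuntimeException("test failed");
      }
    }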
Here's the stuff I found:
* minor inconsistency in README.md about OpenSSL version
* untested, broken Class.getEnclosingMethod (reported by Josh)
* JNI test failed for tails=true Android build
* Runtime.nativeExit missing for Android build
* obsolete assertion in CallEvent broke tails=true Android build
* obsolete superclass field offset padding broke bootimage=true Android build
* runtime annotation parsing broke bootimage=true Android build
(because we couldn't modify Addendum.annotationTable for classes in
the heap image)
* ci.sh tried building with both android=... and openjdk=..., which
the makefile rightfully balked at
Sorry this is all in a single commit; I didn't expect so many
unrelated issues, and I'm too lazy to break them apart.
Previously, we used "lzma:", which worked fine on Windows (where the
path separator is ";") but not on Unix-style OSes (where the path
separator is ":"). In the latter case, the VM would parse
"[lzma:bootJar]" as a path containing two elements, "[lzma" and
"bootJar]", which is not what was intended. So now we use "lzma." as
the prefix, which works on all OSes.
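A quick way to see the old failure mode (a sketch only; this just
mimics the path splitting the VM performs, it is not VM code):

    import java.io.File;
    import java.util.Arrays;

    public class PrefixSplitDemo {
      public static void main(String[] args) {
        // The VM splits the boot classpath entry list on the platform
        // path separator.  With the old "lzma:" prefix, a Unix
        // separator of ":" cuts the entry in half:
        String old = "[lzma:bootJar]";
        System.out.println(Arrays.toString(old.split(File.pathSeparator)));
        // Unix output:    [[lzma, bootJar]]  -- two elements
        // Windows output: [[lzma:bootJar]]   -- one element, as intended

        // The new "lzma." prefix contains no separator on either platform:
        String fixed = "[lzma.bootJar]";
        System.out.println(Arrays.toString(fixed.split(File.pathSeparator)));
      }
    }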
ARMv7 and later provide weaker cache coherency models than ARMv6 and
earlier, so we cannot just implement memory barriers as no-ops. This
patch uses the DMB instruction (or the equivalent OS-provided barrier
function) to implement barriers. This should fix concurrency issues
on newer chips such as the Apple A6 and A7.
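To make the impact concrete, here is a hedged illustration (not Avian
code) of a common Java idiom whose correctness depends on the VM
emitting real barriers rather than no-ops:

    class Config {
      int port;
      Config(int port) { this.port = port; }
    }

    class ConfigHolder {
      private static volatile Config instance;

      // Double-checked locking: the JMM makes this safe only because the
      // volatile write below acts as a release barrier (a DMB on ARMv7),
      // ensuring the Config fields are visible before the reference is.
      // If barriers compile to no-ops, another core can observe a
      // non-null reference to a partially constructed object.
      static Config get() {
        Config c = instance;
        if (c == null) {
          synchronized (ConfigHolder.class) {
            c = instance;
            if (c == null) {
              c = new Config(8080);
              instance = c;
            }
          }
        }
        return c;
      }
    }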
If you still need to support ARMv6 devices, you should pass
"armv6=true" to make when building Avian. Ideally, the VM would
detect what kind of CPU it was executing on at runtime and direct the
JIT compiler accordingly, but I don't know how to do that on ARM.
Patches are welcome, though!
Apparently we have readytalk.github.io set up to 301 redirect all URLs
to oss.readytalk.com, which confuses curl. Rather than figure out how
to make curl do the right thing, I've just changed the URL to point to
oss.readytalk.com.
Per https://github.com/ReadyTalk/avian/issues/53, Avian should build
against a standard AOSP checkout, which means we should look for
subprojects in the directories where the repo utility would place them.
With corresponding changes to libcore, all the tests are passing
except Datagrams, which fails with an NPE in
NetworkInterface.getNetworkInterfacesList due to OS X not having
/sys/class/net. Porting that class to OS X looks like a non-trivial
task.