fix a couple of subtle Thread.getStackTrace bugs

The first problem was that, on x86, we failed to keep track of whether
to expect the return address to be on the stack when unwinding through
a frame.  We were relying on a "stackLimit" pointer to tell us whether
we were looking at the most recently called frame, by comparing it
with that frame's stack pointer.  That comparison breaks down when a
thread is executing at the beginning of a method, before a new frame
has been allocated: at that point the two most recent frames share a
stack pointer, which confused the unwinder.  The solution is to keep
track of how many frames we've visited while walking the stack, since
only the first frame visited can be the most recent one.
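
As a rough illustration, here is a minimal sketch of the
frame-counting idea, using hypothetical types and names rather than
Avian's real unwinder API; a counter distinguishes the newest frame
even when it shares a stack pointer with its caller:

    #include <cstdio>

    // Hypothetical frame representation, not Avian's real data structures.
    struct Frame {
      const char* method;
      char* sp;       // stack pointer associated with this frame
      Frame* caller;  // next-older frame, or null
    };

    void walk(Frame* newest) {
      unsigned count = 0;
      for (Frame* f = newest; f != nullptr; f = f->caller, ++count) {
        // Only the first frame visited can be the most recently called
        // one; on x86 that tells the unwinder whether a return address has
        // been pushed for this frame yet.  A stack-pointer comparison
        // cannot, because two frames may share a stack pointer (see main).
        bool mostRecent = (count == 0);
        std::printf("%s%s\n", f->method, mostRecent ? "  <- most recent" : "");
      }
    }

    int main() {
      char stack[64];
      Frame outer = {"Outer.run", stack + 32, nullptr};
      // A thread stopped at the very start of Inner.call, before it has
      // allocated its own frame, shares Outer.run's stack pointer --
      // exactly the case that defeated the old stackLimit comparison.
      Frame inner = {"Inner.call", stack + 32, &outer};
      walk(&inner);
      return 0;
    }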

The other problem was that compareIpToMethodBounds assumed every
method was followed by at least one byte of padding before the next
method started.  That assumption was usually valid because we store
each method's code size in a header immediately preceding the code,
so one method's code is normally separated from the next method's
code by that header.  However, the last method of an AOT-compiled
code image is not followed by any such header and may instead be
followed directly by native code with no intervening padding.  In
that case, we risked interpreting that native code as part of the
preceding method, with potentially bizarre results.
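
A simplified sketch of that assumption (not the real
compareIpToMethodBounds signature) shows why the padding mattered:
treating an ip one byte past the method as belonging to the method is
only safe if nothing else can start there.

    #include <cstdint>

    // Simplified sketch, assuming a (start, size) pair per method; not the
    // real compareIpToMethodBounds signature.
    int compareIpToBounds(uintptr_t ip, uintptr_t start, uintptr_t size) {
      if (ip < start) return -1;        // before this method
      if (ip > start + size) return 1;  // after this method
      return 0;  // within -- note that ip == start + size, one byte past
                 // the last instruction, still matches this method
    }

    // If unrelated native code begins exactly at start + size (as it can
    // after the last AOT-compiled method, where no method header
    // intervenes), an ip inside that code is misattributed to the
    // preceding method.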

The reason for the compareIpToMethodBounds assumption was that a
method which throws an exception as its last instruction ends in a
non-returning call, which nonetheless pushes a return address
pointing one past the end of the method, and the unwinder needs to
attribute that return address to the method.  A better solution is to
append an extra trap instruction to the end of such methods, which is
what this patch does: the return address then points at the trap,
which lies within the method's bounds, so no padding is required.
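
To make the effect concrete, here is a hedged sketch using
hypothetical emitCall/emitTrap helpers standing in for Avian's real
assembler interface; the actual patch wires a Trap operation to the
ARM bkpt encoding (0xe1200070), as the diff below shows.

    // Hypothetical assembler interface, not Avian's real one.
    struct Context;
    void emitCall(Context* c, void* target);  // pushes a return address
                                              // pointing just past the call
    void emitTrap(Context* c);                // e.g. bkpt on ARM

    void emitThrow(Context* c, void* throwHelper) {
      // The throw compiles to a call that never returns, yet still pushes
      // a return address pointing at the next instruction.
      emitCall(c, throwHelper);
      // Without this trap, that return address would be start + size, one
      // byte past the method; with it, the address points at the trap,
      // safely inside the method's bounds.
      emitTrap(c);
    }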
Author: Joel Dice
Date:   2012-05-04 18:35:13 -06:00
Commit: ea4e0a2f5d
Parent: 58691a7fdb

10 changed files with 114 additions and 64 deletions


@@ -136,6 +136,7 @@ inline int ble(int offset) { return SETCOND(b(offset), LE); }
 inline int bge(int offset) { return SETCOND(b(offset), GE); }
 inline int blo(int offset) { return SETCOND(b(offset), CC); }
 inline int bhs(int offset) { return SETCOND(b(offset), CS); }
+inline int bkpt() { return 0xe1200070; } // todo: macro-ify
 }
 
 const uint64_t MASK_LO32 = 0xffffffff;
@@ -1616,6 +1617,12 @@ return_(Context* c)
   emit(c, bx(LinkRegister));
 }
 
+void
+trap(Context* c)
+{
+  emit(c, bkpt());
+}
+
 void
 memoryBarrier(Context*) {}
 
@@ -1629,7 +1636,7 @@ argumentFootprint(unsigned footprint)
 
 void
 nextFrame(ArchitectureContext* c, uint32_t* start, unsigned size UNUSED,
-          unsigned footprint, void* link, void*,
+          unsigned footprint, void* link, bool,
           unsigned targetParameterFootprint UNUSED, void** ip, void** stack)
 {
   assert(c, *ip >= start);
@@ -1703,6 +1710,7 @@ populateTables(ArchitectureContext* c)
   zo[LoadBarrier] = memoryBarrier;
   zo[StoreStoreBarrier] = memoryBarrier;
   zo[StoreLoadBarrier] = memoryBarrier;
+  zo[Trap] = trap;
 
   uo[index(c, LongCall, C)] = CAST1(longCallC);
@@ -1922,12 +1930,12 @@ class MyArchitecture: public Assembler::Architecture {
   }
 
   virtual void nextFrame(void* start, unsigned size, unsigned footprint,
-                         void* link, void* stackLimit,
+                         void* link, bool mostRecent,
                          unsigned targetParameterFootprint, void** ip,
                          void** stack)
   {
     ::nextFrame(&c, static_cast<uint32_t*>(start), size, footprint, link,
-                stackLimit, targetParameterFootprint, ip, stack);
+                mostRecent, targetParameterFootprint, ip, stack);
   }
 
   virtual void* frameIp(void* stack) {