fix concurrency bugs on ios+arm64 in enter
At first, it might look like the atomicIncrement operations here, since they resolve to OSAtomicCompareAndSwap32Barrier, ought to provide all the memory barrier guarantees we need; it turns out they are not quite sufficient.

Here's why: Apple's OSAtomic*Barrier operations guarantee that memory operations *before* the atomic op won't be reordered with memory operations *after* the atomic op, but they make no guarantee that the atomic op itself won't be reordered with operations before or after it. The key is that the "atomic" operation is not really atomic: it consists of separate read, check, and write steps, in a loop, no less. Here, presumably, the read of t->m->exclusive is hoisted by the out-of-order processor to between the read and write steps of the "atomic" increment of t->m->activeCount.

Longer term, it probably makes sense to replace this with the C11 <stdatomic.h> operations or the C++11 <atomic> types, and to actually use the atomic increment operations they provide. As it stands, our atomicIncrement looks substantially less efficient than those, since on arm64 it is implemented as two nested loops (one in atomicIncrement and one in the compare-and-swap) instead of one. We should also evaluate whether the various atomic operations actually need barriers as strong as the ones we're giving them. FWIW, the GCC __sync_* builtins seem to have the same problem, at least on arm64.
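To make the suggested replacement concrete, here is a minimal sketch, not Avian's actual code, of how the increment-then-check sequence might look with C++11 <atomic>. The Machine struct and field names are assumed from the diff below. A seq_cst fetch_add is a single atomic read-modify-write, so the subsequent load of exclusive cannot be hoisted into the middle of it the way it can with a hand-rolled compare-and-swap loop.

#include <atomic>

// Hypothetical stand-ins for the relevant Machine fields; the real
// struct lives elsewhere in the tree.
struct Machine {
  std::atomic<int> activeCount{0};
  std::atomic<bool> exclusive{false};
};

void becomeActive(Machine* m) {
  // A seq_cst fetch_add is one indivisible read-modify-write; the
  // load of m->exclusive below cannot be reordered into the middle
  // of it.
  m->activeCount.fetch_add(1, std::memory_order_seq_cst);

  if (m->exclusive.load(std::memory_order_seq_cst)) {
    // another thread wants exclusive access; back off and take the
    // slow path (elided)
  }
}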
parent 7be8d8551a
commit ba9b85f7c3
@@ -4040,6 +4040,8 @@ void enter(Thread* t, Thread::State s)
     t->state = s;
 
+    STORE_LOAD_MEMORY_BARRIER;
+
     if (t->m->exclusive) {
       ACQUIRE_LOCK;
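For context, STORE_LOAD_MEMORY_BARRIER is a project macro whose definition is not shown in this diff. On arm64 a full StoreLoad barrier is typically a dmb ish instruction; a plausible definition, assumed here for illustration rather than quoted from the tree, would be:

// Assumed illustration only; the real macro is defined elsewhere in
// the source tree.
#define STORE_LOAD_MEMORY_BARRIER \
  __asm__ __volatile__("dmb ish" ::: "memory")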
@@ -4091,6 +4093,8 @@ void enter(Thread* t, Thread::State s)
     t->state = s;
 
+    STORE_LOAD_MEMORY_BARRIER;
+
     if (t->m->exclusive) {
       // another thread has entered the exclusive state, so we
      // return to idle and use the slow path to become active
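The store-then-load sequence being patched here is one half of a Dekker-style handshake: the entering thread publishes its state and then checks exclusive, while the thread requesting exclusivity publishes exclusive and then checks the other threads' states. Without a StoreLoad barrier on each side, either load can complete before the corresponding store becomes visible, and both threads can conclude the other is inactive. A minimal standalone sketch with C++11 atomics, using hypothetical names rather than Avian's:

#include <atomic>

std::atomic<int> state{0};      // stands in for t->state
std::atomic<int> exclusive{0};  // stands in for t->m->exclusive

// With seq_cst ordering, the two loads below cannot both return 0,
// so at least one side is guaranteed to notice the other.

void enterActive() {
  state.store(1, std::memory_order_seq_cst);
  if (exclusive.load(std::memory_order_seq_cst)) {
    // another thread has entered the exclusive state, so we
    // return to idle and use the slow path to become active (elided)
  }
}

void requestExclusive() {
  exclusive.store(1, std::memory_order_seq_cst);
  if (state.load(std::memory_order_seq_cst)) {
    // an active thread exists; wait for it to notice us (elided)
  }
}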