`KillFlowTest` is failing quite often, probably because of ordering issues when taking and releasing locks. Using `CountDownLatch` in places instead of `Semaphore`s should reduce the likelihood of the tests failing.
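A minimal sketch of the latch pattern; the latch name, the stand-in hook, and the timeout are chosen for illustration:

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit
import kotlin.concurrent.thread

fun main() {
    val flowKilled = CountDownLatch(1)
    // Stand-in for the hook under test: signal once the awaited event happens.
    thread { flowKilled.countDown() }
    // The test thread continues as soon as the latch opens, or fails after the
    // timeout; there is no acquire/release ordering to get wrong.
    check(flowKilled.await(30, TimeUnit.SECONDS)) { "Flow was not killed within 30 seconds" }
}
```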
* CORDA-3936: Add a fallback mechanism for Enums incorrectly serialised using their toString() method.
* Backport missing piece of Enum serializer from Corda 4.6.
If an error occurs when creating a transition (i.e. anything inside
`TopLevelTransition`), resume the flow with the error that occurred.
This is needed because the current code swallows all errors thrown at
this point, causing the flow to hang.
This change allows better debugging of errors, since the real error is
thrown back to the flow and gets handled and logged by the normal error
code path.
Extra logging has been added to `processEventsUntilFlowIsResumed`, just
in case an exception gets thrown out of the normal code path. We do not
want this exception to be swallowed as it can make it impossible to
debug the original error.
Save the exception for flows that fail during session init when they are
kept for observation.
Change the exception tidy up logic to only update the flow's status if
the exception was removed.
Add `CordaRPCOps.reattachFlowWithClientId` to allow clients to reattach
to an existing flow by providing only a client id. This behaviour is the
same as calling `startFlowDynamicWithClientId` for an existing
`clientId`. Where it differs is that `reattachFlowWithClientId` returns
`null` if there is no running or finished flow on the node with the same
client id.
Return `null` if the record was deleted due to a race condition.
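A usage sketch, assuming an existing `CordaRPCOps` proxy and a handle type with a `returnValue` future; the client id and result type are illustrative:

```kotlin
import net.corda.core.messaging.CordaRPCOps

fun reattachExample(rpc: CordaRPCOps) {
    // Reattach by client id alone; no flow class or arguments are needed.
    val handle = rpc.reattachFlowWithClientId<String>("my-client-id")
    if (handle == null) {
        // No running or finished flow with this client id exists on the node.
        println("Nothing to reattach to")
    } else {
        // Same semantics as startFlowDynamicWithClientId for an existing id.
        println("Flow result: ${handle.returnValue.get()}")
    }
}
```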
Update the compatible flag in the DB if the flow state cannot be deserialised.
The most common cause of this problem is if a CorDapp has been upgraded
without draining flows from the node.
`RUNNABLE` and `HOSPITALISED` flows are restored on node startup, so
the flag is set for them at that point. The flag can also be set when a
flow retries for some reason (see `retryFlowFromSafePoint`); in this
case the problem has a different cause.
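A hedged sketch of the pattern; the helper and the callback names are hypothetical, not the node's actual checkpoint machinery:

```kotlin
// The deserialiser and the "mark incompatible" callback are injected here to
// keep the sketch self-contained; in the node this sits in checkpoint code.
fun <T> tryDeserializeFlowState(
    bytes: ByteArray,
    deserialize: (ByteArray) -> T,
    markIncompatible: () -> Unit
): T? = try {
    deserialize(bytes)
} catch (e: Exception) {
    // Most likely cause: a CorDapp upgrade without draining flows, leaving a
    // checkpoint that no longer matches the installed classes.
    markIncompatible()
    null
}
```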
Added a new pause event to the state machine which returns an `Abort`
continuation and causes the flow to be moved into the paused flows map.
Flows can receive session messages whilst paused.
Add a lock to `StateMachineState`, allowing every flow to lock itself
when performing a transition and preventing an external thread (such as
one calling `killFlow`) from interacting with the flow at the same
time.
Doing this prevents race conditions where an external thread mutates
the database or the flow's state, causing an in-flight transition to
fail.
A `Semaphore` is used to acquire and release the lock. A `ReentrantLock`
is not used as it is possible for a flow to suspend while locked, and
resume on a different thread. This causes a `ReentrantLock` to fail when
releasing the lock because the thread doing so is not the thread holding
the lock. `Semaphore`s can be used across threads, therefore bypassing
this issue.
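A minimal demonstration of the difference; the thread hand-off stands in for a fiber resuming on another thread:

```kotlin
import java.util.concurrent.Semaphore
import kotlin.concurrent.thread

fun main() {
    val lock = Semaphore(1)
    lock.acquire() // acquired on the fiber's original thread
    thread {
        // A different thread can release the permit without error. Calling
        // ReentrantLock.unlock() here instead would throw
        // IllegalMonitorStateException, as this thread does not own the lock.
        lock.release()
    }.join()
    println("Permits after cross-thread release: ${lock.availablePermits()}")
}
```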
The lock is copied across when a flow is retried. This is to prevent
another thread from interacting with a flow just after it has been
retried. Without copying the lock, the external thread would acquire the
old lock and execute, while the fiber thread acquires the new lock and
also executes.
* Remove use of `Thread.sleep()` from `FlowReloadAfterCheckpointTest`, instead relying on a `CountDownLatch` to wait until the target count has been hit or a timeout occurs, so the thread can continue as soon as the target is hit.
* Replace the use of hash maps with a concurrent queue, to mitigate the risk of complex threading issues.
Integrate YAML profile support, and the eagle-eyed will notice that the plugin no longer needs to be applied at the very bottom of the `build.gradle` file!
Other features include:
* Implicit upgrade to docker-remote-api plugin v5.3.0
* Fix a ClassGraph-related memory leak by closing the `ScanResult` objects after use.
* More logging of any exceptions from Kubernetes.
* The gradlecache volume is now created with a hostPath of "/gradle/$podName/$podIdx-$taskForExecuteName", which should allow multiple pods on a single node.
Enhance the RPC acknowledgement method (`removeClientId`) to remove the
checkpoint from all checkpoint database tables.
Optimize `CheckpointStorage.removeCheckpoint` so that it does not
delete from all checkpoint tables when not needed. This includes
excluding the results (`DBFlowResult`) and exceptions
(`DBFlowException`) tables.
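One plausible shape for such an optimisation; the flag name and default are assumptions, not necessarily the actual signature:

```kotlin
import net.corda.core.flows.StateMachineRunId

interface CheckpointStorage {
    // Callers that know no result or exception was persisted can pass false,
    // skipping deletes against the DBFlowResult and DBFlowException tables.
    fun removeCheckpoint(id: StateMachineRunId, mayHavePersistedResults: Boolean = true): Boolean
}
```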
* Increase timeout to provide more of an error margin, after seeing a test failure in Jenkins.
* Move shared strings to constants.
* Extract chain building code into recursive function.