Minor updates following RGB feedback.
commit f7e475b8a1 (parent f43906b973)
@@ -617,12 +617,12 @@ which is described below.
 Once a transaction has been notarised and its input states consumed by the flow initiator (eg. sender), should the participant(s) receiving the
 transaction fail to verify it, or the receiving flow (the finality handler) fails due to some other error, we then have a scenario where not
-all parties have the correct up to date view of the ledger (a condition defined as `eventual consistency <https://en.wikipedia.org/wiki/Eventual_consistency>`_
-in distributed systems terminology). To recover from this scenario, the receivers finality handler will automatically be sent to the
-:doc:`node-flow-hospital` where it's suspended and retried from its last checkpoint upon node restart, or according to other conditional retry rules
-explained in :ref:`flow hospital runtime behaviour <flow-hospital-runtime>`. This gives the node operator the opportunity to recover from the error.
-Until the issue is resolved the node will continue to retry the flow on each startup. Upon successful completion by the receivers finality flow,
-the ledger will become fully consistent once again.
+all parties have the correct up to date view of the ledger (a condition where eventual consistency between participants takes longer than is
+normally the case under Corda's `eventual consistency model <https://en.wikipedia.org/wiki/Eventual_consistency>`_). To recover from this scenario,
+the receiver's finality handler will automatically be sent to the :doc:`node-flow-hospital` where it's suspended and retried from its last checkpoint
+upon node restart, or according to other conditional retry rules explained in :ref:`flow hospital runtime behaviour <flow-hospital-runtime>`.
+This gives the node operator the opportunity to recover from the error. Until the issue is resolved the node will continue to retry the flow
+on each startup. Upon successful completion by the receiver's finality flow, the ledger will become fully consistent once again.

 .. warning:: It's possible to forcibly terminate the erroring finality handler using the ``killFlow`` RPC but at the risk of an inconsistent view of the ledger.

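For readers of this change, the finality handler referred to in the paragraph above is the responder-side flow that verifies and records the notarised transaction. Below is a minimal Kotlin sketch of that initiator/responder pattern using the standard ``FinalityFlow`` and ``ReceiveFinalityFlow`` subflows; the flow names and the payment scenario are hypothetical and not part of this commit.

.. sourcecode:: kotlin

    import co.paralleluniverse.fibers.Suspendable
    import net.corda.core.flows.FinalityFlow
    import net.corda.core.flows.FlowLogic
    import net.corda.core.flows.FlowSession
    import net.corda.core.flows.InitiatedBy
    import net.corda.core.flows.InitiatingFlow
    import net.corda.core.flows.ReceiveFinalityFlow
    import net.corda.core.identity.Party
    import net.corda.core.transactions.SignedTransaction

    // Initiator (sender): notarises the transaction, consuming its input states, then
    // distributes it to the counterparty's finality handler for verification and recording.
    @InitiatingFlow
    class PaymentFlow(private val stx: SignedTransaction, private val counterparty: Party) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            val session = initiateFlow(counterparty)
            return subFlow(FinalityFlow(stx, listOf(session)))
        }
    }

    // Responder: the "finality handler". If verification or recording fails here, the flow is
    // kept by the flow hospital and retried from its last checkpoint, e.g. on node restart.
    @InitiatedBy(PaymentFlow::class)
    class PaymentResponder(private val otherSide: FlowSession) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            return subFlow(ReceiveFinalityFlow(otherSide))
        }
    }

Passing the session to ``FinalityFlow`` is what causes the counterparty's ``ReceiveFinalityFlow`` to run, so an error in that responder is exactly the scenario the revised paragraph describes: the initiator's inputs are already consumed while the receiver has not yet recorded the transaction.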
@@ -6,12 +6,13 @@ Overview

 The **flow hospital** refers to a built-in node service that manages flows that have encountered an error.

-This service is responsible for recording, tracking, diagnosis, recovery and retry. It determines whether errored flows should be retried
+This service is responsible for recording, tracking, diagnosing, recovering and retrying. It determines whether errored flows should be retried
 from their previous checkpoints or have their errors propagate. Flows may be recoverable under certain scenarios (eg. manual intervention
 may be required to install a missing contract JAR version). For a given errored flow, the flow hospital service determines the next course of
 action towards recovery and retry.

-.. note:: The flow hospital will never terminate a flow, but will propagate its error back to the state machine, and ultimately, end user code to handle.
+.. note:: The flow hospital will never terminate a flow, but will propagate its error back to the state machine, and ultimately, end user code to handle
+          if it ultimately proves impossible to resolve automatically.

 This concept is analogous to *exception management handling* associated with enterprise workflow software, or
 *retry queues/stores* in enterprise messaging middleware for recovering from failure to deliver a message.
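To illustrate the operator actions referred to above and in the ``killFlow`` warning, here is a hedged Kotlin sketch against the standard RPC client: it lists the node's checkpointed flows (for example a finality handler being held for retry) and, as a last resort, kills one by its run ID. The RPC address, credentials and command-line run ID are placeholders, not values taken from this commit.

.. sourcecode:: kotlin

    import net.corda.client.rpc.CordaRPCClient
    import net.corda.core.flows.StateMachineRunId
    import net.corda.core.utilities.NetworkHostAndPort
    import java.util.UUID

    fun main(args: Array<String>) {
        // Placeholder RPC address and credentials for the node being operated on.
        val connection = CordaRPCClient(NetworkHostAndPort("localhost", 10006)).start("rpcUser", "rpcPassword")
        try {
            val proxy = connection.proxy

            // Inspect the flows currently checkpointed on the node, e.g. a finality
            // handler that the flow hospital is suspending and retrying.
            proxy.stateMachinesSnapshot().forEach { info ->
                println("Flow ${info.id}: ${info.flowLogicClassName}")
            }

            // Last resort: forcibly terminate a flow by run ID (passed on the command line),
            // accepting the risk of an inconsistent ledger view described in the warning.
            if (args.isNotEmpty()) {
                val runId = StateMachineRunId(UUID.fromString(args[0]))
                println("killFlow returned ${proxy.killFlow(runId)}")
            }
        } finally {
            connection.notifyServerAndClose()
        }
    }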