From f7e475b8a1f9cb0c23c76aff0f712073c772eb32 Mon Sep 17 00:00:00 2001
From: josecoll
Date: Wed, 27 Mar 2019 14:39:20 +0000
Subject: [PATCH] Minor updates following RGB feedback.

---
 docs/source/api-flows.rst          | 12 ++++++------
 docs/source/node-flow-hospital.rst |  5 +++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/docs/source/api-flows.rst b/docs/source/api-flows.rst
index 5338b921f8..79c406bbf7 100644
--- a/docs/source/api-flows.rst
+++ b/docs/source/api-flows.rst
@@ -617,12 +617,12 @@ which is described below.
 Once a transaction has been notarised and its input states consumed by the flow initiator (eg. sender), should the participant(s)
 receiving the transaction fail to verify it, or the receiving flow (the finality handler) fails due to some other error, we then
 have a scenario where not
-all parties have the correct up to date view of the ledger (a condition defined as `eventual consistency `_
-in distributed systems terminology). To recover from this scenario, the receivers finality handler will automatically be sent to the
-:doc:`node-flow-hospital` where it's suspended and retried from its last checkpoint upon node restart, or according to other conditional retry rules
-explained in :ref:`flow hospital runtime behaviour `. This gives the node operator the opportunity to recover from the error.
-Until the issue is resolved the node will continue to retry the flow on each startup. Upon successful completion by the receivers finality flow,
-the ledger will become fully consistent once again.
+all parties have the correct up-to-date view of the ledger (a condition where eventual consistency between participants takes longer than is
+normally the case under Corda's `eventual consistency model `_). To recover from this scenario,
+the receiver's finality handler will automatically be sent to the :doc:`node-flow-hospital`, where it's suspended and retried from its last checkpoint
+upon node restart, or according to other conditional retry rules explained in :ref:`flow hospital runtime behaviour `.
+This gives the node operator the opportunity to recover from the error. Until the issue is resolved, the node will continue to retry the flow
+on each startup. Upon successful completion of the receiver's finality flow, the ledger will become fully consistent once again.
 
 .. warning:: It's possible to forcibly terminate the erroring finality handler using the ``killFlow`` RPC but at the risk of an inconsistent view
    of the ledger.
diff --git a/docs/source/node-flow-hospital.rst b/docs/source/node-flow-hospital.rst
index 1fae2f769d..02a86ce92f 100644
--- a/docs/source/node-flow-hospital.rst
+++ b/docs/source/node-flow-hospital.rst
@@ -6,12 +6,13 @@ Overview
 
 The **flow hospital** refers to a built-in node service that manages flows that have encountered an error.
-This service is responsible for recording, tracking, diagnosis, recovery and retry. It determines whether errored flows should be retried
+This service is responsible for recording, tracking, diagnosing, recovering and retrying. It determines whether errored flows should be retried
 from their previous checkpoints or have their errors propagate.
 
 Flows may be recoverable under certain scenarios (eg. manual intervention may be required to install a missing contract JAR version).
 For a given errored flow, the flow hospital service determines the next course of action towards recovery and retry.
 
-.. note:: The flow hospital will never terminate a flow, but will propagate its error back to the state machine, and ultimately, end user code to handle.
+.. note:: The flow hospital will never terminate a flow, but will propagate its error back to the state machine and, ultimately, to end user code
+   to handle if it proves impossible to resolve automatically.
 
 This concept is analogous to *exception management handling* associated with enterprise workflow software, or *retry queues/stores*
 in enterprise messaging middleware for recovering from failure to deliver a message.
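The behaviour both hunks document (an errored flow is suspended with its last checkpoint, never terminated, and retried on each node restart until the operator resolves the underlying issue) can be sketched as a toy model. All names below, such as `ToyFlowHospital` and `onNodeRestart`, are hypothetical illustrations for this sketch only, not Corda's actual flow-hospital implementation or API:

```kotlin
// Toy model of the "suspend and retry from checkpoint" behaviour described above.
// All names are illustrative; this is NOT Corda's internal flow-hospital implementation.

sealed class Outcome {
    object Success : Outcome()
    data class Failed(val reason: String) : Outcome()
}

class ToyFlowHospital {
    // Errored flows keyed by flow id, each holding the checkpoint to resume from.
    private val suspended = mutableMapOf<String, Int>()

    // Admit an errored flow; it is kept (never terminated) until a retry succeeds.
    fun admit(flowId: String, checkpoint: Int) {
        suspended[flowId] = checkpoint
    }

    // On each "node restart", retry every suspended flow from its last checkpoint.
    // Flows that now succeed are discharged; the rest stay suspended for the next restart.
    fun onNodeRestart(run: (flowId: String, checkpoint: Int) -> Outcome): List<String> {
        val discharged = mutableListOf<String>()
        for ((id, checkpoint) in suspended.toMap()) {
            if (run(id, checkpoint) is Outcome.Success) {
                suspended.remove(id)
                discharged.add(id)
            }
        }
        return discharged
    }

    fun suspendedFlows(): Set<String> = suspended.keys.toSet()
}

fun main() {
    val hospital = ToyFlowHospital()
    hospital.admit("finality-handler-1", checkpoint = 3)

    // The flow keeps failing until the operator installs the missing contract JAR.
    var contractJarInstalled = false
    val run = { _: String, _: Int ->
        if (contractJarInstalled) Outcome.Success else Outcome.Failed("missing contract JAR")
    }

    println(hospital.onNodeRestart(run))  // first restart, still failing: []
    contractJarInstalled = true           // operator resolves the issue
    println(hospital.onNodeRestart(run))  // next restart: [finality-handler-1]
}
```

The real service is richer (conditional retry rules, diagnosis by error type, eventual error propagation), but the shape of the loop is what the patched text describes: keep the flow, retry it from its checkpoint on each restart, and discharge it once a retry completes successfully.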