Update note at the top of the Kafka notary design doc

We don't currently plan to ship this notary.
Mike Hearn 2018-12-11 11:16:06 +01:00 committed by GitHub
parent 7172048735
commit b14f3d61b3

@@ -1,6 +1,6 @@
# High Performance CFT Notary Service
-.. important:: This design document describes a feature of Corda Enterprise.
+.. important:: This design document describes a prototyped but not shipped feature of Corda Enterprise. There are presently no plans to ship this notary.
## Overview
@@ -27,10 +27,6 @@ Out-of-scope:
- Validating notary service.
- Byzantine fault-tolerance.
-## Timeline
-No strict delivery timeline requirements, depends on client throughput needs. Estimated delivery by end of Q3 2018.
## Requirements
The notary service should be able to:
@@ -237,4 +233,4 @@ We have to use a single partition for global transaction ordering guarantees, bu
* Have a single-partition `transactions` topic where all worker nodes send only the transaction id.
* Have a separate _partitioned_ `payload` topic where workers send the entire notarisation request content: transaction id, input states, request signature (a single request can be around 1KB in size).
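A minimal write-path sketch of the scheme above, assuming the standard Kafka Java client called from Kotlin. The topic names `transactions` and `payload` come from the list; the broker address, serialiser choices, `acks` setting and the `publishRequest` helper are illustrative assumptions, not part of the design:

```kotlin
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
import java.util.Properties

// Hypothetical helper: publish one notarisation request the two-topic way.
fun publishRequest(txId: String, requestBytes: ByteArray) {
    val props = Properties().apply {
        put("bootstrap.servers", "kafka-1:9092") // assumed broker address
        put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
        put("acks", "all") // wait for the full in-sync replica set to acknowledge
    }
    KafkaProducer<String, ByteArray>(props).use { producer ->
        // Single-partition topic: establishes the global order, carries only the id.
        producer.send(ProducerRecord("transactions", txId, txId.toByteArray()))
        // Partitioned topic: carries the ~1KB request body; keying by transaction id
        // keeps the payload for a given transaction in a single partition.
        producer.send(ProducerRecord("payload", txId, requestBytes))
        producer.flush()
    }
}
```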
Workers would need to consume from the `transactions` partition to obtain the ordering, and from all `payload` partitions for the actual notarisation requests. A request will not be processed until its global order is known. Since Kafka tries to distribute leaders for different partitions evenly across the cluster, we would avoid a single Kafka broker handling all of the traffic. Load-wise, nothing changes from the worker node's perspective: it still has to process all requests, but a larger number of worker nodes could be supported.
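On the read side, a worker has to merge the two streams. The following consumer sketch is likewise a hypothetical illustration under the same assumptions: it tails the single `transactions` partition for ordering and every `payload` partition for request bodies, buffering each payload until its global position is known, then processing strictly in that order:

```kotlin
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import java.time.Duration
import java.util.Properties

fun runWorker() {
    val props = Properties().apply {
        put("bootstrap.servers", "kafka-1:9092") // assumed broker address
        put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
        put("enable.auto.commit", "false") // offsets would be managed explicitly
    }
    val consumer = KafkaConsumer<String, ByteArray>(props)
    // Every worker reads the single ordering partition plus all payload partitions.
    val payloadPartitions = consumer.partitionsFor("payload")
        .map { TopicPartition("payload", it.partition()) }
    consumer.assign(payloadPartitions + TopicPartition("transactions", 0))

    val pendingPayloads = HashMap<String, ByteArray>() // bodies whose order is not yet known
    val orderedIds = ArrayDeque<String>()              // ids in global order, awaiting bodies

    while (true) {
        for (record in consumer.poll(Duration.ofMillis(100))) {
            when (record.topic()) {
                "transactions" -> orderedIds.add(record.key())
                "payload" -> pendingPayloads[record.key()] = record.value()
            }
        }
        // Process strictly in global order; stall until the head-of-line payload arrives.
        while (orderedIds.isNotEmpty() && pendingPayloads.containsKey(orderedIds.first())) {
            val txId = orderedIds.removeFirst()
            val request = pendingPayloads.remove(txId)!!
            // processNotarisationRequest(txId, request) // hypothetical handler
        }
    }
}
```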