Wording updates

commit 5c62f9b243 (parent 4e5a9e924e)
### Background
#### Current state of peer-to-peer messaging in Corda
The diagram below illustrates the current mechanism for peer-to-peer messaging between Corda nodes.

![Current P2P State](./current-p2p-state.png)

When a flow running on a Corda node triggers a requirement to send a message to a peer node, it first checks for the pre-existence of an applicable message queue for that peer.

**If the relevant queue exists:**

1. The node submits the message to the queue and continues after receiving acknowledgement.
2. The Core Bridge picks up the message and transfers it via a TLS socket to the inbox of the destination node.
3. A flow on the recipient node receives the message from the peer and acknowledges consumption on the bus once the flow has checkpointed this progress.

**If the queue does not exist (messaging a new peer):**

1. The flow triggers creation of a new queue with a name encoding the identity of the intended recipient (a sketch of one possible encoding follows this list).
2. When queue creation has completed, the node sends the message to the queue.
3. The hosted Artemis server within the node has a queue creation hook, which is called.
4. The queue name is used to look up the remote connection details, and a new bridge is registered.
5. The client certificate of the peer is compared to the expected legal identity X500 Name. If this is OK, message flow proceeds as for a pre-existing queue (above).
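
As a concrete illustration of the name encoding in step 1, here is a minimal Kotlin sketch. The `PEER_QUEUE_PREFIX` constant and the function name are hypothetical; the exact encoding Corda uses differs in detail.

```kotlin
import java.security.MessageDigest
import java.security.PublicKey

// Hypothetical prefix for peer queues; Corda's real naming scheme differs.
const val PEER_QUEUE_PREFIX = "internal.peers."

// Derive a queue name that encodes the intended recipient's identity.
// Hashing the public key keeps the name within broker queue-name length
// limits (the implementation plan below suggests a sha256 of the PublicKey).
fun peerQueueName(recipientKey: PublicKey): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(recipientKey.encoded)
    return PEER_QUEUE_PREFIX + digest.joinToString("") { "%02x".format(it) }
}
```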
## Scope

* Goals:
  * Allow connection to a Corda node without requiring direct incoming connections from external participants.
  * Allow connections to a Corda node without requiring the node itself to have a public IP address. Separate TLS connection handling from the MQ broker.
* Non-goals (out of scope):
  * Support for MQ brokers other than Apache Artemis
## Timeline

For delivery by end Q1 2018.

Allow connectivity in compliance with DMZ constraints commonly imposed by modern financial institutions; namely:

1. Firewalls required between the internet and any device in the DMZ, and between the DMZ and the internal network.
2. Only identified IPs and ports are permitted to access devices in the DMZ; this includes communications between devices colocated in the DMZ.
3. Any DMZ machine is typically multi-homed, with separate network cards handling traffic through the institutional firewall vs. to the Internet. (There is usually a further hidden management interface card accessed via a jump box for managing the box and shipping audit trail information.) This requires that our software can bind listening ports to the correct network card, not just to 0.0.0.0 (see the sketch after this list).
4. No connections to be initiated by DMZ devices towards the internal network. Communications should be initiated from the internal network to form a bidirectional channel with the proxy process.
5. No business data should be persisted on the DMZ box.
6. An audit log of all connection events is required to track breaches. Latency information should also be tracked to facilitate management of connectivity issues.
7. Processes on DMZ devices run as local accounts with no relationship to internal permission systems, or ability to enumerate devices on the internal network.
8. Communications in the DMZ should use modern TLS, often with local-only certificates/keys that hold no value outside of use in predefined links.
9. TLS is commonly terminated on the firewall, which has an associated HSM for the private keys. This means that we do not necessarily have the certificates of the connection, but hopefully for now we can insist on receiving the connection directly onto the float proxy, although we have to ask how we might access an HSM.
10. It is usually assumed that there is an HA/load balancing pair (or more) of proxies for resilience. Often the firewalls are also combined with hardware load balancer functionality.
11. Any business data passing through the proxy should be separately encrypted, so that no data is in the clear in the program memory if the DMZ box is compromised.
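
Requirement 3 has a direct implication for listener code: the accept socket must be bound to an explicit interface address rather than the wildcard. A minimal sketch, assuming an example address for the internet-facing card:

```kotlin
import java.net.InetSocketAddress
import java.nio.channels.ServerSocketChannel

// Bind the DMZ listener to a specific NIC address, not 0.0.0.0.
// "10.10.1.5" and port 10005 are illustrative values only.
fun openDmzListener(bindAddress: String = "10.10.1.5", port: Int = 10005): ServerSocketChannel =
    ServerSocketChannel.open().apply {
        bind(InetSocketAddress(bindAddress, port))
    }
```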
## Design Decisions

1. AMQP vs. custom P2P - see Alternatives section below
## Target Solution

The proposed solution introduces a reverse proxy component ("**float**") which may be sited in the DMZ, as illustrated in the diagram below.

![Full Float Implementation](./full-float.png)

The main role of the float is to forward incoming AMQP link packets from authenticated TLS links to the AMQP Bridge Manager, then echo back final delivery acknowledgements once the Bridge Manager has successfully inserted the messages. The Bridge Manager is responsible for rejecting inbound packets on queues that are not local inboxes, to prevent e.g. 'cheating' messages onto management topics, faking outgoing messages etc.
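
A sketch of the inbound gate just described, with hypothetical names rather than Corda's actual classes:

```kotlin
// Traffic arriving via the float may only be delivered to known local
// inboxes; anything else (management topics, outgoing queues, etc.) is
// rejected before it reaches the broker.
fun acceptInbound(queueName: String, localInboxes: Set<String>): Boolean =
    queueName in localInboxes
```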
The float is linked to the internal AMQP Bridge Manager via a single AMQP/TLS connection, which can contain multiple logical AMQP links. This link is initiated at the socket level by the Bridge Manager towards the float.

The float is a **listener only** and does not enable outgoing bridges (see Design Decisions, above). Outgoing bridge formation and message sending come directly from the internal Bridge Manager, possibly via a SOCKS 4/5 proxy (which is easy enough to enable in Netty) or directly through the corporate firewall; initiating connections from the float would give rise to security concerns.

The float is **not mandatory**; interoperability with older nodes, even those using direct AMQP from bridges in the node, is supported.

**No state will be serialized on the float**, although suitably protected logs will be recorded of all float activities.

**End-to-end encryption** of the payload is not delivered through this design (see Design Decisions, above). For current purposes, a header field indicating plaintext/encrypted payload is employed as a placeholder.
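
As an illustration of such a placeholder, the sketch below tags a message's AMQP application properties using Apache Qpid Proton-J; the `payload-format` key is an invented name, not a field defined by this design:

```kotlin
import org.apache.qpid.proton.amqp.messaging.ApplicationProperties
import org.apache.qpid.proton.message.Message

// Mark whether the payload is plaintext or separately encrypted.
// The header key and its values are hypothetical placeholders.
fun tagPayloadFormat(message: Message, encrypted: Boolean) {
    message.applicationProperties = ApplicationProperties(
        mapOf("payload-format" to if (encrypted) "encrypted" else "plaintext")
    )
}
```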
**HA** is enabled (this should be easy, as the Bridge Manager can choose which float to make active). Only fully connected DMZ floats should activate their listening port.

Implementation of the float is expected to be based on existing AMQP Bridge Manager code - see Implementation Plan, below, for expected work stages.

### Bridge control protocol

The bridge control protocol is designed to be as stateless as possible. Thus, nodes and bridges restarting must re-request/broadcast information to each other. Messages are sent to a 'bridge.control' address in Artemis as non-persistent messages with a non-durable queue. Each message should contain a duplicate message ID, which is also re-used as the correlation ID in replies. Relevant scenarios are described below:
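
A sketch of what publishing such a control message might look like with the Artemis core client API; the payload encoding and helper name are illustrative, not the node's actual code:

```kotlin
import org.apache.activemq.artemis.api.core.client.ActiveMQClient

// Publish a non-persistent message to the 'bridge.control' address,
// carrying a duplicate message ID that replies echo back as their
// correlation ID.
fun publishBridgeControl(brokerUrl: String, payload: ByteArray, duplicateId: String) {
    val locator = ActiveMQClient.createServerLocator(brokerUrl)
    val sessionFactory = locator.createSessionFactory()
    val session = sessionFactory.createSession()
    try {
        val message = session.createMessage(false) // non-durable, per the protocol
        message.putStringProperty("_AMQ_DUPL_ID", duplicateId)
        message.bodyBuffer.writeBytes(payload)
        session.createProducer("bridge.control").send(message)
    } finally {
        session.close()
        sessionFactory.close()
        locator.close()
    }
}
```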
#### On bridge start-up, or reconnection to Artemis

1. The bridge process should subscribe to the 'bridge.control' address.
## Final recommendation

Implement the Target Solution described above according to the implementation plan described below.
# IMPLEMENTATION PLAN
3. We need to define a bridge control protocol, so that we can have an out-of-process float/bridge. The current process is that on message send the node checks the target address to see if the target queue already exists. If the queue doesn't exist, it creates a new queue which includes an encoding of the PublicKey in its name. This is picked up by a wrapper around the Artemis server, which is also hosted inside the node and can ask the network map cache for a translation to a target host and port. This in turn allows a new bridge to be provisioned (a sketch of this hook follows the list below). At node restart, re-population of the network map cache is used to re-create the bridges for any queues with unsent messages.
4. My proposal for a bridge control protocol is partly influenced by the fact that AMQP does not have a built-in mechanism for queue creation/deletion/enumeration. Also, the flows cannot progress until they are sure that there is an accepting queue. Finally, if one runs a local broker it should be fine to run multiple nodes without any bridge processes. Therefore, I will leave queue creation as the node's responsibility. Initially we can continue to use the existing CORE protocol for this. The requirement to initiate a bridge will change from being implicit signalling via server queue detection to being an explicit pub-sub message that requests bridge formation. This doesn't need durability, or acknowledgements, because when a bridge process starts it should request a refresh of the required bridge list. The typical create-bridge message should contain (see the sketch after this list):
    1. The queue name (ideally with the sha256 of the PublicKey, not the whole PublicKey, as that may not work on brokers with queue name length constraints).
    2. The expected X500Name for the remote TLS certificate.
    3. The list of hosts and ports to attempt connection to. See separate section for more info.
5. Once we have the bridge protocol in place and a bridge out of process, the broker can move out of process too, which is a requirement for clustering anyway. We can then start work on floating the bridge and making our broker pluggable.
    1. At this point the bridge connection to the local queues should be upgraded to also be an AMQP client, rather than using the CORE protocol, which will give the P2P bridges the ability to work with other broker products.
    2. An independent task is to look at making the Bridge process HA, probably using a similar hot-warm mastering solution to the node's, or atomix.io. The inactive instance should track the control messages, but obviously not initiate any bridges.
    3. Another potentially parallel piece of development is to start to build a float, which is essentially just splitting the bridge in two and putting in an intermediate-hop AMQP/TLS link. The thin proxy in the DMZ should be as stateless as possible in this.
    4. Finally, the node should use AMQP to talk to its local broker cluster, but this will have to remain partly tied to Artemis, as queue creation will require sending management messages to the Artemis core; we should, however, be able to abstract this behind the bridge management protocol.
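
The sketches below illustrate points 3 and 4 above: the shape of a create-bridge control message carrying the three listed fields, and a queue-creation hook that provisions a bridge from the key hash encoded in the queue name. All names are hypothetical stand-ins, not Corda's actual classes.

```kotlin
// Hypothetical types throughout; not Corda's internal API.
data class HostAndPort(val host: String, val port: Int)

// Point 4: a create-bridge control message with the three fields above.
data class CreateBridgeRequest(
    val queueName: String,          // encodes the sha256 of the peer's PublicKey
    val expectedLegalName: String,  // expected X500Name on the remote TLS certificate
    val targets: List<HostAndPort>  // candidate endpoints, tried in order
)

// Point 3: the queue-creation hook in the wrapper around the embedded
// Artemis server. It translates the key hash in the queue name into
// connection details via the network map cache, then requests a bridge.
interface NetworkMapCache {
    // Returns the peer's expected X500 name and candidate endpoints, if known.
    fun lookupByKeyHash(keyHash: String): Pair<String, List<HostAndPort>>?
}

class BridgeProvisioner(
    private val networkMap: NetworkMapCache,
    private val publish: (CreateBridgeRequest) -> Unit // e.g. to 'bridge.control'
) {
    fun onQueueCreated(queueName: String) {
        val keyHash = queueName.removePrefix("internal.peers.") // assumed prefix
        if (keyHash == queueName) return // not a peer queue; ignore
        val (legalName, targets) = networkMap.lookupByKeyHash(keyHash) ?: return
        publish(CreateBridgeRequest(queueName, legalName, targets))
    }
}
```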
## Float evolution
### In-Process AMQP Bridging