ENT-2168: Node internal docs tweaks (#3624)

Andrius Dagys 2018-07-16 17:13:43 +01:00 committed by Michele Sollecito
parent d262c88c0b
commit a1cb184a1d
3 changed files with 46 additions and 99 deletions


@@ -31,10 +31,10 @@ regular time interval for network map and applies any related changes locally.
Nodes do not automatically deregister themselves, so (for example) nodes going offline briefly for maintenance are retained
in the network map, and messages for them will be queued, minimising disruption.
-Additionally, on every restart and on daily basis nodes submit signed `NodeInfo`s to the map service. When network map gets
-signed, these changes are distributed as new network data. `NodeInfo` republishing is treated as a heartbeat from the node,
+Additionally, on every restart and on a daily basis nodes submit signed ``NodeInfo`` s to the map service. When the network map gets
+signed, these changes are distributed as new network data. ``NodeInfo`` republishing is treated as a heartbeat from the node,
based on which the network map service is able to figure out which nodes can be considered stale and removed from the network
-map document after `eventHorizon` time.
+map document after ``eventHorizon`` time.
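
As an illustration of the ``eventHorizon`` mechanism, a minimal sketch (hypothetical names, not the network map service's actual code) of how stale nodes could be identified from their last heartbeat:

.. sourcecode:: kotlin

    import java.time.Duration
    import java.time.Instant

    // Hypothetical sketch: `lastPublished` maps a node's name to the time of its
    // most recent signed NodeInfo submission (its heartbeat). Nodes whose last
    // heartbeat is older than eventHorizon would be dropped from the next map document.
    fun staleNodes(lastPublished: Map<String, Instant>, eventHorizon: Duration, now: Instant = Instant.now()): Set<String> =
        lastPublished.filterValues { Duration.between(it, now) > eventHorizon }.keys
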
Message queues
--------------
@@ -61,10 +61,10 @@ for maintenance and other minor purposes.
corresponding bridge is used to forward the message to an advertising peer's p2p queue. Once a peer is picked the
session continues on as normal.
-:``rpc.requests``:
+:``rpc.server``:
   RPC clients send their requests here, and it's only open for sending by clients authenticated as RPC users.
-:``clients.$user.rpc.$random``:
+:``rpc.client.$user.$random``:
   RPC clients are given permission to create a temporary queue incorporating their username (``$user``) and sole
   permission to receive messages from it. RPC requests are required to include a random number (``$random``) from
   which the node is able to construct the queue the user is listening on and send the response to that. This mechanism
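
To make the naming scheme concrete, a small hedged sketch of how the node could derive the response queue name from the authenticated username and the request's random number:

.. sourcecode:: kotlin

    // Illustrative only: derive the client's temporary response queue name from
    // the authenticated username and the random number carried in the request.
    fun clientResponseQueue(user: String, random: Long): String = "rpc.client.$user.$random"

    // e.g. clientResponseQueue("app", 123456) == "rpc.client.app.123456"
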
@@ -80,7 +80,7 @@ Clients attempting to connect to the node's broker fall in one of four groups:
   they are given full access to all valid queues, otherwise they are rejected.
#. Anyone connecting with the username ``SystemUsers/Peer`` is treated as a peer on the same Corda network as the node. Their
-   TLS root CA must be the same as the node's root CA - the root CA is the doorman of the network and having the same root CA
+   TLS root CA must be the same as the node's root CA -- the root CA is the doorman of the network and having the same root CA
   implies we've been let in by the same doorman. If they are part of the same network then they are only given permission
   to send to our ``p2p.inbound.$identity`` queue, otherwise they are rejected.
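
A sketch of the root CA comparison described above (illustrative, not the node's actual authentication code):

.. sourcecode:: kotlin

    import java.security.cert.X509Certificate

    // Illustrative sketch: a client presenting the SystemUsers/Peer username is
    // accepted only if its TLS chain terminates in the same root CA as ours,
    // i.e. the same doorman admitted both of us to the network.
    fun isSameNetworkPeer(peerChain: List<X509Certificate>, ourRootCa: X509Certificate): Boolean =
        peerChain.isNotEmpty() && peerChain.last() == ourRootCa
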
@@ -98,20 +98,19 @@ this to determine what permissions the user has.
The broker also does host verification when connecting to another peer. It checks that the TLS certificate subject matches
with the advertised X.500 legal name from the network map service.
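
A hypothetical illustration of that host verification step, using the public ``CordaX500Name`` type (the actual broker code may differ):

.. sourcecode:: kotlin

    import net.corda.core.identity.CordaX500Name

    // The X.500 subject of the peer's TLS certificate must match the legal
    // name the peer advertises in the network map.
    fun subjectMatchesLegalName(certSubject: String, advertisedName: CordaX500Name): Boolean =
        CordaX500Name.parse(certSubject) == advertisedName
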
Implementation details
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
The components of the system that need to communicate and authenticate each other are:
-- The Artemis P2P broker (Currently runs inside the Nodes JVM process, but in the future it will be able to run as a separate server)
-  * opens Acceptor configured with the doorman's certificate in the truststore and the node's ssl certificate in the keystore
-- The Artemis RPC broker (Currently runs inside the Nodes JVM process, but in the future it will be able to run as a separate server)
-  * opens "Admin" Acceptor configured with the doorman's certificate in the truststore and the node's ssl certificate in the keystore
-  * opens "Client" Acceptor with the ssl settings configurable. This acceptor does not require ssl client-auth.
-- The current node hosting the brokers
-  * connects to the P2P broker using the ``SystemUsers/Node`` user and the node's keystore and trustore
-  * connects to the "Admin" Acceptor of the RPC broker using the ``SystemUsers/NodeRPC`` user and the node's keystore and trustore
-- RPC clients ( Third party applications that need to communicate with the Node. )
-  * connect to the "Client" Acceptor of the RPC broker using the username/password provided by the node's admin. The client verifies the node's certificate using a truststore provided by the node's admin.
-- Peer nodes (Other nodes on the network)
-  * connect to the P2P broker using the ``SystemUsers/Peer`` user and a doorman signed certificate. The authentication is performed based on the root CA.
+- The Artemis P2P broker (currently runs inside the node's JVM process, but in the future it will be able to run as a separate server):
+  * Opens Acceptor configured with the doorman's certificate in the trustStore and the node's SSL certificate in the keyStore.
+- The Artemis RPC broker (currently runs inside the node's JVM process, but in the future it will be able to run as a separate server):
+  * Opens "Admin" Acceptor configured with the doorman's certificate in the trustStore and the node's SSL certificate in the keyStore.
+  * Opens "Client" Acceptor with the SSL settings configurable. This acceptor does not require SSL client-auth.
+- The current node hosting the brokers:
+  * Connects to the P2P broker using the ``SystemUsers/Node`` user and the node's keyStore and trustStore.
+  * Connects to the "Admin" Acceptor of the RPC broker using the ``SystemUsers/NodeRPC`` user and the node's keyStore and trustStore.
+- RPC clients (third-party applications that need to communicate with the node):
+  * Connect to the "Client" Acceptor of the RPC broker using the username/password provided by the node's admin. The client verifies the node's certificate using a trustStore provided by the node's admin.
+- Peer nodes (other nodes on the network):
+  * Connect to the P2P broker using the ``SystemUsers/Peer`` user and a doorman-signed certificate. The authentication is performed based on the root CA.
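
As an example of the RPC-client role in the list above, a minimal hedged sketch using the public ``CordaRPCClient`` API (host, port and credentials are placeholders; SSL trustStore configuration is omitted):

.. sourcecode:: kotlin

    import net.corda.client.rpc.CordaRPCClient
    import net.corda.core.utilities.NetworkHostAndPort

    fun main() {
        // Placeholder host/port and credentials supplied by the node's admin.
        val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
        val connection = client.start("rpcUser", "rpcPassword")
        try {
            // Query the node over the authenticated RPC session.
            println(connection.proxy.nodeInfo().legalIdentities)
        } finally {
            connection.notifyServerAndClose()
        }
    }
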


@@ -14,27 +14,15 @@ The node services represent the various sub functions of the Corda node.
Some are directly accessible to contracts and flows through the
``ServiceHub``, whilst others are the framework internals used to host
the node functions. Any public service interfaces are defined in the
-``:core`` gradle project in the
-``src/main/kotlin/net/corda/core/node/services`` folder. The
-``ServiceHub`` interface exposes functionality suitable for flows.
-The implementation code for all standard services lives in the gradle
-``:node`` project under the ``src/main/kotlin/net/corda/node/services``
-folder. The ``src/main/kotlin/net/corda/node/services/api`` folder
-contains declarations for internal only services and for interoperation
-between services.
+``net.corda.core.node.services`` package. The ``ServiceHub`` interface exposes
+functionality suitable for flows.
+The implementation code for all standard services lives in the ``net.corda.node.services`` package.
All the services are constructed in the ``AbstractNode`` ``start``
-method (and the extension in ``Node``). They may also register a
-shutdown handler during initialisation, which will be called in reverse
-order to the start registration sequence when the ``Node.stop``
-is called.
+method. They may also register a shutdown handler during initialisation,
+which will be called in reverse order to the start registration sequence when ``Node.stop`` is called.
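
A toy sketch (assumed names, not ``AbstractNode``'s actual code) of that reverse-order shutdown pattern:

.. sourcecode:: kotlin

    import java.util.ArrayDeque

    // Handlers registered during start-up run in reverse order on shutdown:
    // each new handler is pushed to the front, so iteration visits newest first.
    class ShutdownRegistry {
        private val handlers = ArrayDeque<() -> Unit>()

        fun onShutdown(handler: () -> Unit) = handlers.addFirst(handler)

        fun shutdown() = handlers.forEach { it() }
    }
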
-For unit testing a number of non-persistent, memory only services are
-defined in the ``:node`` and ``:test-utils`` projects. The
-``:test-utils`` project also provides an in-memory networking simulation
-to allow unit testing of flows and service functions.
-The roles of the individual services are described below.
+The roles of the individual services are described below.
Key management and identity services
------------------------------------
@@ -43,15 +31,15 @@ InMemoryIdentityService
~~~~~~~~~~~~~~~~~~~~~~~
The ``InMemoryIdentityService`` implements the ``IdentityService``
-interface and provides a store of remote mappings between ``CompositeKey``
+interface and provides a store of remote mappings between ``PublicKey``
and remote ``Parties``. It is automatically populated from the
-``NetworkMapCache`` updates and is used when translating ``CompositeKey``
+``NetworkMapCache`` updates and is used when translating ``PublicKey``
exposed in transactions into fully populated ``Party`` identities. This
service is also used in the default JSON mapping of parties in the web
server, thus allowing the party names to be used to refer to other nodes'
legal identities. In the future the Identity service will be made
persistent and extended to allow anonymised session keys to be used in
-flows where the well-known ``CompositeKey`` of nodes need to be hidden
+flows where the well-known ``PublicKey`` of nodes needs to be hidden
from non-involved parties.
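
A simplified sketch (illustrative, not the real ``InMemoryIdentityService``) of the key-to-party mapping it maintains:

.. sourcecode:: kotlin

    import java.security.PublicKey
    import net.corda.core.identity.Party

    // Illustrative mapping from a PublicKey seen in a transaction to the
    // well-known Party that owns it, populated from network map updates.
    class SimpleIdentityLookup {
        private val byKey = mutableMapOf<PublicKey, Party>()

        fun register(party: Party) { byKey[party.owningKey] = party }

        fun partyFromKey(key: PublicKey): Party? = byKey[key]
    }
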
PersistentKeyManagementService and E2ETestKeyManagementService
@@ -95,7 +83,7 @@ of this component, because the ``ArtemisMessagingServer`` is responsible
for configuring the network ports (based upon settings in ``node.conf``)
and the service configures the security settings of the ``ArtemisMQ``
middleware and acts to form bridges between node mailbox queues based
-upon connection details advertised by the ``NetworkMapService``. The
+upon connection details advertised by the ``NetworkMapCache``. The
``ArtemisMQ`` broker is configured to use TLS1.2 with a custom
``TrustStore`` containing a Corda root certificate and a ``KeyStore``
with a certificate and key signed by a chain back to this root
@@ -105,13 +93,13 @@ each other it is essential that the entire set of nodes are able to
authenticate against each other and thus typically that they share a
common root certificate. Also note that the address configuration
defined for the server is the basis for the address advertised in the
-NetworkMapService and thus must be externally connectable by all nodes
+``NetworkMapCache`` and thus must be externally connectable by all nodes
in the network.
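
For context, a generic JSSE sketch (placeholder paths and passwords; not the broker's actual configuration code) of building a TLS1.2 context from such a ``KeyStore`` and ``TrustStore``:

.. sourcecode:: kotlin

    import java.io.FileInputStream
    import java.security.KeyStore
    import javax.net.ssl.KeyManagerFactory
    import javax.net.ssl.SSLContext
    import javax.net.ssl.TrustManagerFactory

    // The trustStore holds the Corda root certificate; the keyStore holds the
    // node's certificate and key, signed by a chain back to that root.
    fun tlsContext(keyStorePath: String, trustStorePath: String, password: CharArray): SSLContext {
        val keyStore = KeyStore.getInstance("JKS").apply { load(FileInputStream(keyStorePath), password) }
        val trustStore = KeyStore.getInstance("JKS").apply { load(FileInputStream(trustStorePath), password) }
        val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()).apply { init(keyStore, password) }
        val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()).apply { init(trustStore) }
        return SSLContext.getInstance("TLSv1.2").apply { init(kmf.keyManagers, tmf.trustManagers, null) }
    }
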
-NodeMessagingClient
-~~~~~~~~~~~~~~~~~~~
+P2PMessagingClient
+~~~~~~~~~~~~~~~~~~
-The ``NodeMessagingClient`` is the implementation of the
+The ``P2PMessagingClient`` is the implementation of the
``MessagingService`` interface operating across the ``ArtemisMQ``
middleware layer. It typically connects to the local ``ArtemisMQ``
hosted within the ``ArtemisMessagingServer`` service. However, the
@@ -133,7 +121,7 @@ services of authorised nodes provided by the remote
specific advertised services e.g. a Notary service, or an Oracle
service. Also, this service allows mapping of friendly names, or
``Party`` identities to the full ``NodeInfo`` which is used in the
-``StateMachineManager`` to convert between the ``CompositeKey``, or
+``StateMachineManager`` to convert between the ``PublicKey``, or
``Party`` based addressing used in the flows/contracts and the
physical host and port information required for the physical
``ArtemisMQ`` messaging layer.
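
For example, resolving a legal name to its advertised physical addresses via the ``NetworkMapCache`` exposed on the ``ServiceHub`` (function name is illustrative; the API calls are public):

.. sourcecode:: kotlin

    import net.corda.core.identity.CordaX500Name
    import net.corda.core.node.ServiceHub
    import net.corda.core.utilities.NetworkHostAndPort

    // Look up the advertised host/port addresses for a legal name, or null
    // if the name is not present in the network map cache.
    fun addressesFor(serviceHub: ServiceHub, name: CordaX500Name): List<NetworkHostAndPort>? =
        serviceHub.networkMapCache.getNodeByLegalName(name)?.addresses
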
@@ -141,13 +129,6 @@ physical host and port information required for the physical
Storage and persistence related services
----------------------------------------
-StorageServiceImpl
-~~~~~~~~~~~~~~~~~~
-The ``StorageServiceImpl`` service simply hold references to the various
-persistence related services and provides a single grouped interface on
-the ``ServiceHub``.
DBCheckpointStorage
~~~~~~~~~~~~~~~~~~~
@@ -247,40 +228,7 @@ a reference to the state that triggered the event. The flow can then
begin whatever action is required. Note that the scheduled activity
occurs in all nodes holding the state in their Vault, it may therefore
be required for the flow to exit early if the current node is not
-the intended initiator.
-Notary flow implementation services
------------------------------------
-PersistentUniquenessProvider, InMemoryUniquenessProvider and RaftUniquenessProvider
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-These variants of ``UniquenessProvider`` service are used by the notary
-flows to track consumed states and thus reject double-spend
-scenarios. The ``InMemoryUniquenessProvider`` is for unit testing only,
-the default being the ``PersistentUniquenessProvider`` which records the
-changes to the DB. When the Raft based notary is active the states are
-tracked by the whole cluster using a ``RaftUniquenessProvider``. Outside
-of the notary flows themselves this service should not be accessed
-by any CorDapp components.
-NotaryService (SimpleNotaryService, ValidatingNotaryService, RaftValidatingNotaryService)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The ``NotaryService`` is an abstract base class for the various concrete
-implementations of the Notary server flow. By default, a node does
-not run any ``NotaryService`` server component. For that you need to specify the ``notary`` config.
-The node may then participate in controlling state uniqueness when contacted by nodes
-using the ``NotaryFlow.Client`` ``subFlow``. The
-``SimpleNotaryService`` only offers protection against double spend, but
-does no further verification. The ``ValidatingNotaryService`` checks
-that proposed transactions are correctly signed by all keys listed in
-the commands and runs the contract verify to ensure that the rules of
-the state transition are being followed. The
-``RaftValidatingNotaryService`` further extends the flow to operate
-against a cluster of nodes running shared consensus state across the
-RAFT protocol (note this requires the additional configuration of the
-``notaryClusterAddresses`` property).
+the intended initiator.
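
To illustrate the early-exit guard described in the scheduled-activity paragraph above, a hypothetical flow sketch (the ``initiator`` field is an assumption for illustration):

.. sourcecode:: kotlin

    import co.paralleluniverse.fibers.Suspendable
    import net.corda.core.flows.FlowLogic
    import net.corda.core.identity.Party

    // Hypothetical scheduled flow: the activity fires on every node holding
    // the state, so nodes other than the intended initiator exit early.
    class ScheduledActionFlow(private val initiator: Party) : FlowLogic<Unit>() {
        @Suspendable
        override fun call() {
            if (ourIdentity != initiator) return
            // ... the initiator performs the scheduled action here
        }
    }
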
Vault related services
----------------------


@@ -6,9 +6,9 @@ that can be easily queried and worked with.
The vault keeps track of both unconsumed and consumed states:
-* Unconsumed (or unspent) states represent fungible states available for spending (including spend-to-self transactions)
+* **Unconsumed** (or unspent) states represent fungible states available for spending (including spend-to-self transactions)
  and linear states available for evolution (eg. in response to a lifecycle event on a deal) or transfer to another party.
-* Consumed (or spent) states represent ledger immutable state for the purpose of transaction reporting, audit and archival, including the ability to perform joins with app-private data (like customer notes)
+* **Consumed** (or spent) states represent ledger immutable state for the purpose of transaction reporting, audit and archival, including the ability to perform joins with app-private data (like customer notes).
By fungible we refer to assets of measurable quantity (eg. a cash currency, units of stock) which can be combined
together to represent a single ledger state.
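
Both categories can be retrieved through the Vault Query API; a small hedged sketch using the public criteria types (function name is illustrative):

.. sourcecode:: kotlin

    import net.corda.core.contracts.ContractState
    import net.corda.core.node.ServiceHub
    import net.corda.core.node.services.Vault
    import net.corda.core.node.services.vault.QueryCriteria

    // Query unconsumed (unspent) and consumed (spent) states of a given type.
    fun <T : ContractState> spentAndUnspent(serviceHub: ServiceHub, type: Class<T>) {
        val unspent = serviceHub.vaultService.queryBy(type, QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED))
        val spent = serviceHub.vaultService.queryBy(type, QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED))
        println("${unspent.states.size} unconsumed, ${spent.states.size} consumed")
    }
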
@@ -40,15 +40,15 @@ The following diagram illustrates the breakdown of the vault into sub-system components:
Note the following:
-* the vault "On Ledger" store tracks unconsumed state and is updated internally by the node upon recording of a transaction on the ledger
-  (following successful smart contract verification and signature by all participants)
-* the vault "Off Ledger" store refers to additional data added by the node owner subsequent to transaction recording
-* the vault performs fungible state spending (and in future, fungible state optimisation management including merging, splitting and re-issuance)
-* vault extensions represent additional custom plugin code a developer may write to query specific custom contract state attributes.
-* customer "Off Ledger" (private store) represents internal organisational data that may be joined with the vault data to perform additional reporting or processing
-* a :doc:`Vault Query API </api-vault-query>` is exposed to developers using standard Corda RPC and CorDapp plugin mechanisms
-* a vault update API is internally used by transaction recording flows.
-* the vault database schemas are directly accessible via JDBC for customer joins and queries
+* The vault "On Ledger" store tracks unconsumed state and is updated internally by the node upon recording of a transaction on the ledger
+  (following successful smart contract verification and signature by all participants).
+* The vault "Off Ledger" store refers to additional data added by the node owner subsequent to transaction recording.
+* The vault performs fungible state spending (and in future, fungible state optimisation management including merging, splitting and re-issuance).
+* Vault extensions represent additional custom plugin code a developer may write to query specific custom contract state attributes.
+* Customer "Off Ledger" (private store) represents internal organisational data that may be joined with the vault data to perform additional reporting or processing.
+* A :doc:`Vault Query API </api-vault-query>` is exposed to developers using standard Corda RPC and CorDapp plugin mechanisms.
+* A vault update API is internally used by transaction recording flows.
+* The vault database schemas are directly accessible via JDBC for customer joins and queries.
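
As an example of the JDBC access mentioned in the final bullet above, a hedged sketch using the node's JDBC session (``VAULT_STATES`` is the vault's default table name, but treat it as illustrative):

.. sourcecode:: kotlin

    import net.corda.core.node.ServiceHub

    // Count rows in the vault states table via the node's JDBC connection.
    fun countVaultStates(serviceHub: ServiceHub): Int {
        val rs = serviceHub.jdbcSession().createStatement().executeQuery("SELECT COUNT(*) FROM VAULT_STATES")
        rs.next()
        return rs.getInt(1)
    }
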
Section 8 of the `Technical white paper`_ describes features of the vault yet to be implemented including private key management, state splitting and merging, asset re-issuance and node event scheduling.