Mirror of https://github.com/corda/corda.git, synced 2025-01-31 16:35:43 +00:00

[ENT-1955] Documentation fixes (#3417)

parent 66b67b231a
commit 3125ec9f73
@@ -81,7 +81,7 @@ command to accept the new parameters file and then restarting the node. Node own
 time effectively stop being a part of the network.
 
 **Signature constraints.** These are not yet supported, but once implemented they will allow a state to require a JAR
-signed by a specified identity, via the regular Java jarsigner tool. This will be the most flexible type
+signed by a specified identity, via the regular Java ``jarsigner`` tool. This will be the most flexible type
 and the smoothest to deploy: no restarts or contract upgrade transactions are needed.
 
 **Defaults.** The default constraint type is either a zone constraint, if the network parameters in effect when the
@@ -135,18 +135,18 @@ constraint placeholder is useful.
 FinalityFlow
 ------------
 
-It's possible to encounter contract contraint issues when notarising transactions with the ``FinalityFlow`` on a network
-containing multiple versions of the same CorDapp. This will happen when using hash contraints or with zone contraints
+It's possible to encounter contract constraint issues when notarising transactions with the ``FinalityFlow`` on a network
+containing multiple versions of the same CorDapp. This will happen when using hash constraints or with zone constraints
 if the zone whitelist has missing CorDapp versions. If a participating party fails to validate the **notarised** transaction
-then we have a scenerio where the members of the network do not have a consistent view of the ledger.
+then we have a scenario where the members of the network do not have a consistent view of the ledger.
 
-Therfore, if the finality handler flow (which is run on the counterparty) errors for any reason it will always be sent to
+Therefore, if the finality handler flow (which is run on the counter-party) errors for any reason it will always be sent to
 the flow hospital. From there it's suspended waiting to be retried on node restart. This gives the node operator the opportunity
-to recover from those errors, which in the case of contract constraint voilations means either updating the CorDapp or
+to recover from those errors, which in the case of contract constraint violations means either updating the CorDapp or
 adding its hash to the zone whitelist.
 
 .. note:: This is a temporary issue in the current version of Corda, until we implement some missing features which will
-   enable a seemless handling of differences in CorDapp versions.
+   enable a seamless handling of differences in CorDapp versions.
 
 CorDapps as attachments
 -----------------------
@@ -583,7 +583,7 @@ Only one party has to call ``FinalityFlow`` for a given transaction to be record
 
 Because the transaction has already been notarised and the input states consumed, if the participants when receiving the
 transaction fail to verify it, or the receiving flow (the finality handler) fails due to some other error, we then have
-the scenerio where not all parties have the correct up to date view of the ledger. To recover from this the finality handler
+the scenario where not all parties have the correct up to date view of the ledger. To recover from this the finality handler
 is automatically sent to the flow hospital where it's suspended and retried from its last checkpoint on node restart.
 This gives the node operator the opportunity to recover from the error. Until the issue is resolved the node will continue
 to retry the flow on each startup.
@@ -25,7 +25,7 @@ There are three kinds of breaking change:
 * Removal or modification of existing API, i.e. an existing class, method or field has been either deleted or renamed, or
   its signature somehow altered.
 * Addition of a new method to an interface or abstract class. Types that have been annotated as ``@DoNotImplement`` are
-  excluded from this check. (This annotation is also inherited across subclasses and subinterfaces.)
+  excluded from this check. (This annotation is also inherited across subclasses and sub-interfaces.)
 * Exposure of an internal type via a public API. Internal types are considered to be anything in a ``*.internal.`` package
   or anything in a module that isn't in the stable modules list :ref:`here <internal-apis-and-stability-guarantees>`.
 
@@ -49,7 +49,7 @@ Updating the API
 As a rule, ``api-current.txt`` should only be updated by the release manager for each Corda release.
 
 We do not expect modifications to ``api-current.txt`` as part of normal development. However, we may sometimes need to adjust
-the public API in ways that would not break developers' CorDapps but which would be blocked by the API Stabilty check.
+the public API in ways that would not break developers' CorDapps but which would be blocked by the API Stability check.
 For example, migrating a method from an interface into a superinterface. Any changes to the API summary file should be
 included in the PR, which would then need explicit approval from either `Mike Hearn <https://github.com/mikehearn>`_, `Rick Parker <https://github.com/rick-r3>`_ or `Matthew Nesbit <https://github.com/mnesbit>`_.
 
@@ -52,7 +52,7 @@ Configuring the ``MockNetwork``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The ``MockNetwork`` is configured automatically. You can tweak its configuration using a ``MockNetworkParameters``
-object, or by using named paramters in Kotlin:
+object, or by using named parameters in Kotlin:
 
 .. container:: codeset
 
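The hunk above concerns configuring the ``MockNetwork`` with a parameters object. Since Java lacks Kotlin's named parameters, an immutable object copied via ``with…`` methods is the usual Java-side equivalent of this idiom. The sketch below illustrates that pattern with a hypothetical, simplified ``MockNetParams`` class — the field names are illustrative and this is not the real ``MockNetworkParameters`` API.

```java
// Hypothetical stand-in for a parameters object configured via "with..."
// copy methods -- the Java equivalent of Kotlin named parameters with defaults.
final class MockNetParams {
    final int minimumPlatformVersion;
    final boolean threadPerNode;

    // Defaults, analogous to Kotlin default parameter values.
    MockNetParams() { this(1, false); }

    private MockNetParams(int minimumPlatformVersion, boolean threadPerNode) {
        this.minimumPlatformVersion = minimumPlatformVersion;
        this.threadPerNode = threadPerNode;
    }

    // Each "with" method returns a new copy, leaving the original untouched.
    MockNetParams withMinimumPlatformVersion(int v) {
        return new MockNetParams(v, threadPerNode);
    }

    MockNetParams withThreadPerNode(boolean t) {
        return new MockNetParams(minimumPlatformVersion, t);
    }
}
```

A caller then tweaks only the settings it cares about, e.g. `new MockNetParams().withThreadPerNode(true)`, leaving the rest at their defaults.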
@@ -118,7 +118,7 @@ All ``QueryCriteria`` implementations are composable using ``and`` and ``or`` op
 All ``QueryCriteria`` implementations provide an explicitly specifiable set of common attributes:
 
 1. State status attribute (``Vault.StateStatus``), which defaults to filtering on UNCONSUMED states.
-   When chaining several criterias using AND / OR, the last value of this attribute will override any previous
+   When chaining several criteria using AND / OR, the last value of this attribute will override any previous
 2. Contract state types (``<Set<Class<out ContractState>>``), which will contain at minimum one type (by default this
    will be ``ContractState`` which resolves to all state types). When chaining several criteria using ``and`` and
    ``or`` operators, all specified contract state types are combined into a single set
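The two composition rules in the hunk above — last-specified state status wins, contract state types are unioned — can be sketched in miniature. This is a toy model of the described behaviour, not the real Corda ``QueryCriteria`` API; the class and field names are assumptions for illustration.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the chaining rules the docs describe: combining two criteria
// takes the right-hand side's state status (last value overrides) and the
// union of both sides' contract state types.
final class Criteria {
    enum StateStatus { UNCONSUMED, CONSUMED, ALL }

    final StateStatus status;
    final Set<String> contractStateTypes; // type names stand in for Class objects

    Criteria(StateStatus status, Set<String> contractStateTypes) {
        this.status = status;
        this.contractStateTypes = contractStateTypes;
    }

    Criteria and(Criteria other) {
        Set<String> union = new HashSet<>(contractStateTypes);
        union.addAll(other.contractStateTypes);
        // The last-specified status overrides any previous value.
        return new Criteria(other.status, union);
    }
}
```

So chaining an UNCONSUMED criterion with an ALL criterion yields ALL, while both sides' state types survive in the combined set.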
@@ -81,7 +81,7 @@ The deployment process will start and typically takes 8-10 minutes to complete.
 
 Once deployed click 'Resources Groups', select the resource group you defined in Step 1 above and click 'Overview' to see the virtual machine details. The names of your VMs will be pre-fixed with the resource prefix value you defined in Step 1 above.
 
-The Newtork Map Service node is suffixed nm0. The Notary node is suffixed not0. Your Corda participant nodes are suffixed node0, node1, node2 etc. Note down the **Public IP address** for your Corda nodes. You will need these to connect to UI screens via your web browser:
+The Network Map Service node is suffixed nm0. The Notary node is suffixed not0. Your Corda participant nodes are suffixed node0, node1, node2 etc. Note down the **Public IP address** for your Corda nodes. You will need these to connect to UI screens via your web browser:
 
 .. image:: resources/azure_ip.png
    :width: 300px
@@ -11,8 +11,10 @@ CorDapps
    cordapp-build-systems
    building-against-master
    corda-api
    serialization
    serialization-index
    secure-coding-guidelines
    flow-cookbook
    vault
    soft-locking
    cheat-sheet
    building-a-cordapp-samples
@@ -5,7 +5,7 @@ Here's a summary of what's changed in each Corda release. For guidance on how to
 release, see :doc:`upgrade-notes`.
 
 Unreleased
-==========
+----------
 
 * Introduced a grace period before the initial node registration fails if the node cannot connect to the Doorman.
   It retries 10 times with a 1 minute interval in between each try. At the moment this is not configurable.
@@ -15,7 +15,7 @@ Unreleased
 * H2 database changes:
   * The node's H2 database now listens on ``localhost`` by default.
   * The database server address must also be enabled in the node configuration.
-  * A new ``h2Settings`` configuration block supercedes the ``h2Port`` option.
+  * A new ``h2Settings`` configuration block supersedes the ``h2Port`` option.
 
 * Improved documentation PDF quality. Building the documentation now requires ``LaTex`` to be installed on the OS.
 
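The ``h2Settings`` change above replaces the flat ``h2Port`` option with a configuration block. As a rough sketch of what this looks like in ``node.conf`` — the exact key set should be checked against the node configuration reference; ``address`` is the commonly documented one:

```hocon
// Old style, superseded:
// h2Port = 12345

// New style: an h2Settings block. The server address must be given
// explicitly; the database listens on localhost by default.
h2Settings {
    address: "localhost:12345"
}
```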
@@ -26,13 +26,13 @@ Unreleased
 
 * Introducing the flow hospital - a component of the node that manages flows that have errored and whether they should
   be retried from their previous checkpoints or have their errors propagate. Currently it will respond to any error that
-  occurs during the resolution of a received transaction as part of ``FinalityFlow``. In such a scenerio the receiving
+  occurs during the resolution of a received transaction as part of ``FinalityFlow``. In such a scenario the receiving
   flow will be parked and retried on node restart. This is to allow the node operator to rectify the situation as otherwise
   the node will have an incomplete view of the ledger.
 
 * Fixed an issue preventing out of process nodes started by the ``Driver`` from logging to file.
 
-* Fixed an issue with ``CashException`` not being able to deserialise after the introduction of AMQP for RPC.
+* Fixed an issue with ``CashException`` not being able to deserialize after the introduction of AMQP for RPC.
 
 * Removed -Xmx VM argument from Explorer's Capsule setup. This helps avoiding out of memory errors.
 
@@ -86,13 +86,13 @@ Unreleased
 * ``SerializedBytes`` is serialised by materialising the bytes into the object it represents, and then serialising that
   object into YAML/JSON
 * ``X509Certificate`` is serialised as an object with key fields such as ``issuer``, ``publicKey``, ``serialNumber``, etc.
-  The encoded bytes are also serialised into the ``encoded`` field. This can be used to deserialise an ``X509Certificate``
+  The encoded bytes are also serialised into the ``encoded`` field. This can be used to deserialize an ``X509Certificate``
   back.
 * ``CertPath`` objects are serialised as a list of ``X509Certificate`` objects.
 * ``WireTransaction`` now nicely outputs into its components: ``id``, ``notary``, ``inputs``, ``attachments``, ``outputs``,
-  ``commands``, ``timeWindow`` and ``privacySalt``. This can be deserialised back.
+  ``commands``, ``timeWindow`` and ``privacySalt``. This can be deserialized back.
 * ``SignedTransaction`` is serialised into ``wire`` (i.e. currently only ``WireTransaction`` tested) and ``signatures``,
-  and can be deserialised back.
+  and can be deserialized back.
 
 * ``fullParties`` boolean parameter added to ``JacksonSupport.createDefaultMapper`` and ``createNonRpcMapper``. If ``true``
   then ``Party`` objects are serialised as JSON objects with the ``name`` and ``owningKey`` fields. For ``PartyAndCertificate``
@@ -250,7 +250,7 @@ Version 3.0
 :doc:`corda-configuration-file` for more details.
 
 * Introducing the concept of network parameters which are a set of constants which all nodes on a network must agree on
-  to correctly interop. These can be retrieved from ``ServiceHub.networkParameters``.
+  to correctly interoperate. These can be retrieved from ``ServiceHub.networkParameters``.
 
 * One of these parameters, ``maxTransactionSize``, limits the size of a transaction, including its attachments, so that
   all nodes have sufficient memory to validate transactions.
@@ -336,7 +336,7 @@ Version 3.0
 * A new function ``checkCommandVisibility(publicKey: PublicKey)`` has been added to ``FilteredTransaction`` to check
   if every command that a signer should receive (e.g. an Oracle) is indeed visible.
 
-* Changed the AMQP serialiser to use the oficially assigned R3 identifier rather than a placeholder.
+* Changed the AMQP serializer to use the officially assigned R3 identifier rather than a placeholder.
 
 * The ``ReceiveTransactionFlow`` can now be told to record the transaction at the same time as receiving it. Using this
   feature, better support for observer/regulator nodes has been added. See :doc:`tutorial-observer-nodes`.
@@ -459,7 +459,7 @@ Release 1.0
   the only functionality left. You also need to rename your services resource file to the new class name.
   An associated property on ``MockNode`` was renamed from ``testPluginRegistries`` to ``testSerializationWhitelists``.
 
-* Contract Upgrades: deprecated RPC authorisation / deauthorisation API calls in favour of equivalent flows in ContractUpgradeFlow.
+* Contract Upgrades: deprecated RPC authorization / deauthorization API calls in favour of equivalent flows in ContractUpgradeFlow.
   Implemented contract upgrade persistence using JDBC backed persistent map.
 
 * Vault query common attributes (state status and contract state types) are now handled correctly when using composite
@@ -467,7 +467,7 @@ Release 1.0
 
 * Cash selection algorithm is now pluggable (with H2 being the default implementation)
 
-* Removed usage of Requery ORM library (repalced with JPA/Hibernate)
+* Removed usage of Requery ORM library (replaced with JPA/Hibernate)
 
 * Vault Query performance improvement (replaced expensive per query SQL statement to obtain concrete state types
   with single query on start-up followed by dynamic updates using vault state observable))
@@ -659,7 +659,7 @@ Milestone 14
   in the ``test-utils`` modules.
 
 * In Java, ``QueryCriteriaUtilsKt`` has moved to ``QueryCriteriaUtils``. Also ``and`` and ``or`` are now instance methods
-  of ``QueryCrtieria``.
+  of ``QueryCriteria``.
 
 * ``random63BitValue()`` has moved to ``CryptoUtils``
 
@@ -670,12 +670,12 @@ Milestone 14
   in the ``core`` module begin with ``net.corda.core``.
 
 * ``FinalityFlow`` can now be subclassed, and the ``broadcastTransaction`` and ``lookupParties`` function can be
-  overriden in order to handle cases where no single transaction participant is aware of all parties, and therefore
+  overridden in order to handle cases where no single transaction participant is aware of all parties, and therefore
   the transaction must be relayed between participants rather than sent from a single node.
 
 * ``TransactionForContract`` has been removed and all usages of this class have been replaced with usage of
   ``LedgerTransaction``. In particular ``Contract.verify`` and the ``Clauses`` API have been changed and now take a
-  ``LedgerTransaction`` as passed in parameter. The prinicpal consequence of this is that the types of the input and output
+  ``LedgerTransaction`` as passed in parameter. The principal consequence of this is that the types of the input and output
   collections on the transaction object have changed, so it may be necessary to ``map`` down to the ``ContractState``
   sub-properties in existing code.
 
@@ -938,7 +938,7 @@ Milestone 11.0
   such as ``Certificate``.
 
 * There is no longer a need to transform single keys into composite - ``composite`` extension was removed, it is
-  imposible to create ``CompositeKey`` with only one leaf.
+  impossible to create ``CompositeKey`` with only one leaf.
 
 * Constructor of ``CompositeKey`` class is now private. Use ``CompositeKey.Builder`` to create a composite key.
   Keys emitted by the builder are normalised so that it's impossible to create a composite key with only one node.
@@ -1343,7 +1343,7 @@ New features in this release:
 * Compound keys have been added as preparation for merging a distributed RAFT based notary. Compound keys
   are trees of public keys in which interior nodes can have validity thresholds attached, thus allowing
   boolean formulas of keys to be created. This is similar to Bitcoin's multi-sig support and the data model
-  is the same as the InterLedger Crypto-Conditions spec, which should aid interop in future. Read more about
+  is the same as the InterLedger Crypto-Conditions spec, which should aid interoperate in future. Read more about
   key trees in the ":doc:`api-core-types`" article.
 * A new tutorial has been added showing how to use transaction attachments in more detail.
 
@@ -1501,7 +1501,7 @@ Highlights of this release:
 * Upgrades to the notary/consensus service support:
 
   * There is now a way to change the notary controlling a state.
-  * You can pick between validating and non-validating notaries, these let you select your privacy/robustness tradeoff.
+  * You can pick between validating and non-validating notaries, these let you select your privacy/robustness trade-off.
 
 * A new obligation contract that supports bilateral and multilateral netting of obligations, default tracking and
   more.
@@ -382,7 +382,7 @@ the error handler upon subscription to an ``Observable``. The call to this ``onE
 happens then the code will terminate existing subscription, closes RPC connection and recursively calls ``performRpcReconnect``
 which will re-subscribe once RPC connection comes back online.
 
-Client code if fed with instances of ``StateMachineInfo`` using call ``clientCode(it)``. Upon re-connec, this code receives
+Client code if fed with instances of ``StateMachineInfo`` using call ``clientCode(it)``. Upon re-connecting, this code receives
 all the items. Some of these items might have already been delivered to client code prior to failover occurred.
 It is down to client code in this case handle those duplicate items as appropriate.
 
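The passage above says the observable replays items after a failover, so some items reach client code twice and the client must de-duplicate them itself. A minimal sketch of that client-side de-duplication — names are illustrative, not the Corda RPC API:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// After a reconnect the stream is at-least-once: items already delivered
// before the failover may arrive again. The consumer remembers the keys it
// has processed and skips repeats.
final class DedupingConsumer {
    private final Set<String> seen = new LinkedHashSet<>();
    private final List<String> processed = new ArrayList<>();

    // Called for every item delivered, including replays after reconnect.
    void onItem(String stateMachineId) {
        if (seen.add(stateMachineId)) {    // add() returns false for duplicates
            processed.add(stateMachineId); // the real clientCode(it) would run here
        }
    }

    List<String> processed() { return processed; }
}
```

In a real deployment the "seen" set would need to survive client restarts (or be bounded), but the skip-on-duplicate shape is the same.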
@@ -34,7 +34,7 @@ that doesn't mean it's always better. In particular:
   bugs, but over-used it can make code that has to adjust fields of an immutable object (in a clone) hard to read and
   stress the garbage collector. When such code becomes a widespread pattern it can lead to code that is just generically
   slow but without hotspots.
-* The tradeoffs between various thread safety techniques are complex, subtle, and no technique is always superior to
+* The trade-offs between various thread safety techniques are complex, subtle, and no technique is always superior to
   the others. Our code uses a mix of locks, worker threads and messaging depending on the situation.
 
 1.1 Line Length and Spacing
@@ -68,7 +68,7 @@ told by the code are best deleted. Comments should:
 
 * Explain what the code is doing at a higher level than is obtainable from just examining the statement and
   surrounding code.
-* Explain why certain choices were made and the tradeoffs considered.
+* Explain why certain choices were made and the trade-offs considered.
 * Explain how things can go wrong, which is a detail often not easily seen just by reading the code.
 * Use good grammar with capital letters and full stops. This gets us in the right frame of mind for writing real
   explanations of things.
@@ -108,5 +108,5 @@ as ``@DoNotImplement``. While we undertake not to remove or modify any of these
 functionality, the annotation is a warning that we may need to extend them in future versions of Corda.
 Cordapp developers should therefore just use these classes "as is", and *not* attempt to extend or implement any of them themselves.
 
-This annotation is inherited by subclasses and subinterfaces.
+This annotation is inherited by subclasses and sub-interfaces.
 
@@ -19,13 +19,13 @@ If you specify both command line arguments at the same time, the node will fail
 
 Format
 ------
-The Corda configuration file uses the HOCON format which is superset of JSON. Please visit
+The Corda configuration file uses the HOCON format which is a superset of JSON. Please visit
 `<https://github.com/typesafehub/config/blob/master/HOCON.md>`_ for further details.
 
 Please do NOT use double quotes (``"``) in configuration keys.
 
-Node setup will log `Config files should not contain \" in property names. Please fix: [key]` as error
-when it founds double quotes around keys.
+Node setup will log `Config files should not contain \" in property names. Please fix: [key]` as an error
+when it finds double quotes around keys.
 This prevents configuration errors when mixing keys containing ``.`` wrapped with double quotes and without them
 e.g.:
 The property `"dataSourceProperties.dataSourceClassName" = "val"` in ``reference.conf``
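The quoting rule in the hunk above exists because HOCON treats a quoted dotted key as one literal key rather than a nested path, so the two spellings silently refer to different things. A small illustrative fragment (the class name value is just an example):

```hocon
// Unquoted: "." creates a nested path, equivalent to a block.
dataSourceProperties {
    dataSourceClassName = "org.h2.jdbcx.JdbcDataSource"
}

// Avoid this form: the quoted key is a single literal key named
// "dataSourceProperties.dataSourceClassName", NOT a nested path, and it
// clashes with the block above.
// "dataSourceProperties.dataSourceClassName" = "val"
```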
@@ -74,7 +74,7 @@ Each of the above certificates will specify a CRL allowing the certificate to be
 (primarily R3) will be required to maintain this CRL for the lifetime of the process.
 
 TLS certificates will remain issued under Node CA certificates (see [decision: TLS trust
-root](./decisions/tls-trust-root.html)).
+root](./decisions/tls-trust-root.md)).
 
 Nodes will be able to specify CRL(s) for TLS certificates they issue; in general, they will be required to such CRLs for
 the lifecycle of the TLS certificates.
@@ -37,7 +37,7 @@ MN presented a high level summary of the options:
 
 - Zookeeper (recommended option): industry standard widely used and trusted. May be able to leverage clients' incumbent Zookeeper infrastructure
   - Positive: has flexibility for storage and a potential for future proofing; good permissioning capabilities; standalone cluster of Zookeeper servers allows 2 nodes solution rather than 3
-  - Negative: adds deployment complexity due to need for Zookeeper cluster split across datacentres
+  - Negative: adds deployment complexity due to need for Zookeeper cluster split across data centers
   Wrapper library choice for Zookeeper requires some analysis
 
 
@@ -87,7 +87,7 @@ MH: how does failover work with HSMs?
 MN: can replicate realm so failover is trivial
 
 JC: how do we document Enterprise features? Publish design docs? Enterprise fact sheets? R3 Corda marketing material?
-Clear seperation of documentation is required. GT: this is already achieved by having docs.corda.net for open source
+Clear separation of documentation is required. GT: this is already achieved by having docs.corda.net for open source
 Corda and docs.corda.r3.com for enterprise R3 Corda
 
 
@@ -26,7 +26,7 @@ MO set out ground rules for the meeting. RGB asked everyone to confirm they had
 
 MN outlined the motivation for a Float as responding to organisation’s expectation for a‘fire break’ protocol termination in the DMZ where manipulation and operation can be checked and monitored.
 
-The meetingwas briefly interrupted by technical difficulties with the GoToMeetingconferencing system.
+The meeting was briefly interrupted by technical difficulties with the GoToMeeting conferencing system.
 
 MN continued to outline how the design was constrained by expected DMZ rules and influenced by currently perceived client expectations – e.g. making the float unidirectional. He gave a prelude to certain design decisions e.g. the use ofAMQP from the outset.
 
@@ -38,7 +38,7 @@ JC questioned where the TLS connection would terminate. MN outlined the pros and
 
 MH contended that the need to propagate TLS headers etc. through to the node (for reinforcing identity checks etc.) implied a need to terminate on the float. MN agreed but noted that in practice the current node design did not make much use of that feature.
 
-JCquestioned how users would provision a TLS cert on a firewall – MN confirmedusers would be able to do this themselves and were typically familiar withdoing so.
+JC questioned how users would provision a TLS cert on a firewall – MN confirmed users would be able to do this themselves and were typically familiar with doing so.
 
 RGB highlighted the distinction between the signing key for the TLS vs. identity certificates, and that this needed to be made clear to users. MN agreed that TLS private keys could be argued to be less critical from a security perspective, particularly when revocation was enabled.
 
@@ -52,11 +52,11 @@ RGB concluded the bridge must effectively trust the firewall or bridge on the or
 
 JC queried whether SASL would allow passing of identity and hence termination at the firewall;MN confirmed this.
 
-MH contented that the TLS implementation was specific to Corda in several ways which may challenge implementation using firewalls, and that typical firewalls(using old OpenSSL etc.) were probably not more secure than R3’s own solutions. RGB pointed out that the design was ultimately driven by client perception ofsecurity (MN: “security theatre”) rather than objective assessment. MH added that implementations would be firewall-specific and not all devices would support forwarding, support for AMQP etc.
+MH contented that the TLS implementation was specific to Corda in several ways which may challenge implementation using firewalls, and that typical firewalls(using old OpenSSL etc.) were probably not more secure than R3’s own solutions. RGB pointed out that the design was ultimately driven by client perception of security (MN: “security theatre”) rather than objective assessment. MH added that implementations would be firewall-specific and not all devices would support forwarding, support for AMQP etc.
 
 RGB proposed messaging to clients that the option existed to terminate on the firewall if it supported the relevant requirements.
 
-MN re-raised the question of key management. RGB asked about the risk implied from the threat of a compromised float. MN said an attacker who compromised a float could establish TLS connections in the name of the compromised party, and couldinspect and alter packets including readable busness data (assuming AMQP serialisation). MH gave an example of a MITM attack where an attacker could swap in their own single-use key allowing them to gain control of (e.g.) a cash asset; the TLS layer is the only current protection against that.
+MN re-raised the question of key management. RGB asked about the risk implied from the threat of a compromised float. MN said an attacker who compromised a float could establish TLS connections in the name of the compromised party, and could inspect and alter packets including readable business data (assuming AMQP serialisation). MH gave an example of a MITM attack where an attacker could swap in their own single-use key allowing them to gain control of (e.g.) a cash asset; the TLS layer is the only current protection against that.
 
 RGB queried whether messages could be signed by senders. MN raised potential threat of traffic analysis, and stated E2E encryption was definitely possible but not for March-April.
 
@@ -88,7 +88,7 @@ MN described alternative options involving onion-routing etc.
 
 JoC questioned whether this would also allow support for load balancing; MN advised this would be too much change in direction in practice.
 
-MH outlined his original reasoning for AMQP (lots of e.g. manageability features, not allof which would be needed at the outset but possibly in future) vs. other options e.g. MQTT.
+MH outlined his original reasoning for AMQP (lots of e.g. manageability features, not all of which would be needed at the outset but possibly in future) vs. other options e.g. MQTT.
 
 MO questioned whether the broker would imply performance limitations.
 
@@ -110,7 +110,7 @@ JC queried whether broker providers could be asked to deliver the feature. AB me
 
 JoC noted a distinction in scope for P2P and/or RPC.
 
-There was discussion of replacing the core protocol with JMS + plugins. RGB drew focus tothe question of when to do so, rather than how.
+There was discussion of replacing the core protocol with JMS + plugins. RGB drew focus to the question of when to do so, rather than how.
 
 AB noted Solace have functionality with conceptual similarities to the float, and questioned to what degree the float could be considered non-core technology. MH argued the nature of Corda as a P2P network made the float pretty core to avoiding dedicated network infrastructure.
 
@@ -124,7 +124,7 @@ DL sought confirmation that the group was happy with the float to act as a Liste
 
 MH requested more detailed proposals going forward on:
 
-1) To what degree logs from different components need to be integrated (consensus wasno requirement at this stage)
+1) To what degree logs from different components need to be integrated (consensus was no requirement at this stage)
 
 2) Bridge control protocols.
 
@@ -28,7 +28,7 @@ End-to-end encryption is a desirable potential design feature for the [float](..
 
 #### Disadvantages
 
-1. Doesn’t actually provide E2E, or define what an encrypted payloadlooks like.
+1. Doesn’t actually provide E2E, or define what an encrypted payload looks like.
 2. Doesn’t address any crypto features that target protecting the AMQP headers.
 
 ### 3. Implement end-to-end encryption
@@ -16,7 +16,7 @@ Under this option, P2P messaging will follow the [Advanced Message Queuing Proto
 
 1. As we have described in our marketing materials.
 2. Well-defined standard.
-3. Supportfor packet level flow control and explicit delivery acknowledgement.
+3. Support for packet level flow control and explicit delivery acknowledgement.
 4. Will allow eventual swap out of Artemis for other brokers.
 
 #### Disadvantages
@@ -25,7 +25,7 @@ Under this option, P2P messaging will follow the [Advanced Message Queuing Proto
 2. No support for secure MAC in packets frames.
 3. No defined encryption mode beyond creating custom payload encryption and custom headers.
 4. No standardised support for queue creation/enumeration, or deletion.
-5. Use of broker durable queues and autonomousbridge transfers does not align with checkpoint timing, so that independent replication of the DB and Artemis data risks causing problems. (Writing to the DB doesn’t work currently and is probably also slow).
+5. Use of broker durable queues and autonomous bridge transfers does not align with checkpoint timing, so that independent replication of the DB and Artemis data risks causing problems. (Writing to the DB doesn’t work currently and is probably also slow).
 
 ### 2. Develop a custom protocol
 
@ -38,7 +38,7 @@ pre-existence of an applicable message queue for that peer.

## Scope

* Goals:
  * Allow connection to a Corda node without requiring direct incoming connections from external participants.
  * Allow connections to a Corda node without requiring the node itself to have a public IP address. Separate TLS connection handling from the MQ broker.
* Non-goals (out of scope):
  * Support for MQ brokers other than Apache Artemis
@ -50,7 +50,7 @@ For delivery by end Q1 2018.

Allow connectivity in compliance with DMZ constraints commonly imposed by modern financial institutions; namely:

1. Firewalls required between the internet and any device in the DMZ, and between the DMZ and the internal network
2. Data passing between the internet and the internal network via the DMZ should pass through a clear protocol break in the DMZ.
3. Only identified IPs and ports are permitted to access devices in the DMZ; this includes communications between devices co-located in the DMZ.
4. Only a limited number of ports are opened in the firewall (<5) to make firewall operation manageable. These ports must change slowly.
5. Any DMZ machine is typically multi-homed, with separate network cards handling traffic through the institutional
   firewall vs. to the Internet. (There is usually a further hidden management interface card accessed via a jump box for
@ -16,9 +16,9 @@ The potential use of a crash shell is relevant to high availability capabilities

#### Disadvantages

1. Won’t reliably work if the node is in an unstable state
2. Not practical for running hundreds of nodes as our customers are already trying to do.
3. Doesn’t mesh with the user access controls of the organisation.
4. Doesn’t interface to the existing monitoring and control systems, e.g. Nagios, Geneos ITRS, Docker Swarm, etc.

### 2. Delegate to external tools
@ -27,7 +27,7 @@ The potential use of a crash shell is relevant to high availability capabilities

1. Doesn’t require change from our customers
2. Will work even if node is completely stuck
3. Allows scripted node restart schedules
4. Doesn’t raise questions about access control lists and audit

#### Disadvantages
@ -30,7 +30,7 @@ Storage of messages by the message broker has implications for replication techn

#### Disadvantages

1. Doesn’t work on H2, or SQL Server. From my own testing LargeObject support is broken. The current Artemis code base does allow some pluggability, but not of the large object implementation, only of the SQL statements. We should lobby for someone to fix the implementations for SQL Server and H2.
2. Probably much slower, although this needs measuring.

## Recommendation and justification
@ -2,7 +2,7 @@

## Background / Context

End-to-end encryption is a desirable potential design feature for the [high availability support](../design.md).

## Options Analysis
@ -30,7 +30,7 @@ End-to-end encryption is a desirable potential design feature for the [high avai

#### Disadvantages

1. Have to write code to support it.
2. Configuration more complicated and now the nodes are non-equivalent, so you can’t just copy the config to the backup.
3. Artemis has round robin and automatic failover, so we may have to expose a vendor specific config flag in the network map.

## Recommendation and justification
@ -66,8 +66,8 @@ the node following failure.

### Non-goals (out of scope for this design document)

* Be able to distribute a node over more than two data centers.
* Be able to distribute a node between data centers that are very far apart latency-wise (unless you don't care about performance).
* Be able to tolerate arbitrary byzantine failures within a node cluster.
* DR, specifically in the case of the complete failure of a site/datacentre/cluster or region will require a different
  solution to that specified here. For now DR is only supported where performant synchronous replication is feasible
@ -38,7 +38,7 @@ The notary service should be able to:

- Notarise more than 1,000 transactions per second, with average 4 inputs per transaction.
- Notarise a single transaction within 1s (from the service perspective).
- Tolerate single node crash without affecting service availability.
- Tolerate single data center failure.
- Tolerate single disk failure/corruption.
@ -86,7 +86,7 @@ the specific network environment (e.g. the bottleneck could be the replication s

One advantage of hosting the request log in a separate cluster is that it makes it easier to independently scale the
number of worker nodes. If, for example, transaction validation and resolution is required when receiving a
notarisation request, we might find that a significant number of receivers is required to generate enough incoming
traffic to the request log. On the flip side, increasing the number of workers adds additional consumers and load on the
request log, so a balance needs to be found.

## Design Decisions
@ -97,7 +97,7 @@ state index.

| Heading | Recommendation |
| ---------------------------------------- | -------------- |
| [Replication framework](decisions/replicated-storage.md) | Option C |
| [Index storage engine](decisions/index-storage.md) | Option A |

TECHNICAL DESIGN
@ -183,7 +183,7 @@ Kafka provides various configuration parameters allowing to control producer and

RocksDB is highly tunable as well, providing different table format implementations, compression, bloom filters, compaction styles, and others.

Initial prototype tests showed up to *15,000* TPS for single-input state transactions, or *40,000* IPS (inputs/sec) for 1,000 input transactions. No performance drop observed even after 1.2m transactions were notarised. The tests were run on three 8 core, 28 GB RAM Azure VMs in separate data centers.

With the recent introduction of notarisation request signatures the figures are likely to be much lower, as the request payload size is increased significantly. More tuning and testing required.
@ -210,8 +210,8 @@ Kafka exports a wide range of metrics via JMX. Datadog integration available.

### Disaster recovery

Failure modes:
1. **Single machine or data center failure**. No backup/restore procedures are needed – nodes can catch up with the cluster on start. The RocksDB-backed committed state index keeps a pointer to the position of the last applied Kafka record, and it can resume where it left off after restart.
2. **Multi-data center disaster leading to data loss**. Out of scope.
3. **User error**. It is possible for an admin to accidentally delete a topic – Kafka provides tools for that. However, topic deletion has to be explicitly enabled in the configuration (disabled by default). Keeping that option disabled should be a sufficient safeguard.
4. **Protocol-level corruption**. This covers scenarios when data stored in Kafka gets corrupted and the corruption is replicated to healthy replicas. In general, this is extremely unlikely to happen since Kafka records are immutable. The only such corruption in a practical sense could happen due to record deletion during compaction, which would occur if the broker is misconfigured to not retain records indefinitely. However, compaction is performed asynchronously and local to the broker. In order for all data to be lost, _all_ brokers have to be misconfigured.
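The topic-deletion and retention safeguards above map onto standard Kafka broker and topic settings along these lines (a sketch only; values are illustrative and would need tuning for a real deployment):

```properties
# Broker-level: keep topic deletion disabled, the safeguard against accidental
# topic removal (the "user error" failure mode)
delete.topic.enable=false

# Topic-level: retain the notarisation request log indefinitely so that
# cleanup never removes committed records (guards against the
# "protocol-level corruption" failure mode)
log.retention.ms=-1
log.retention.bytes=-1
log.cleanup.policy=delete
```

With retention disabled on every broker, losing committed records would require all brokers in the cluster to be misconfigured simultaneously.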
@ -223,7 +223,7 @@ In both scenarios the most recent requests will be lost. If data loss only occur

## Security

* **Communication**. Kafka supports SSL for both client-to-server and server-to-server communication. However, Zookeeper only supports SSL in client-to-server, which means that running Zookeeper across data centers will require setting up a VPN. For simplicity, we can reuse the same VPN for the Kafka cluster as well. The notary worker nodes can talk to Kafka either via SSL or the VPN.

* **Data privacy**. No transaction contents or PII is revealed or stored.
@ -15,9 +15,9 @@ CorDapp).

![MonitoringLoggingOverview](./MonitoringLoggingOverview.png)

In the above diagram, the left hand side dotted box represents the components within scope for this design. It is
anticipated that 3rd party enterprise-wide system management solutions will closely follow the architectural component
breakdown in the right hand side box, and thus seamlessly integrate with the proposed Corda event generation and logging
design. The interface between the two is de-coupled and based on textual log file parsing and adoption of industry
standard JMX MBean events.
@ -34,7 +34,7 @@ Corda currently exposes several forms of monitorable content:

* Industry standard exposed JMX-based metrics, both standard JVM and custom application metrics are exposed directly
using the [Dropwizard.io](http://metrics.dropwizard.io/3.2.3/) *JmxReporter* facility. In addition Corda also uses the
[Jolokia](https://jolokia.org/) framework to make these accessible over an HTTP endpoint. Typically, these metrics are
also collated by 3rd party tools to provide pro-active monitoring, visualisation and re-active management.

A full list of currently exposed metrics can be found in Appendix A.
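For illustration, a Jolokia read of a standard JVM MBean over the HTTP endpoint looks roughly like the following (the port and the response values are examples, not actual Corda defaults):

```text
GET http://localhost:7005/jolokia/read/java.lang:type=Memory/HeapMemoryUsage

{
  "request": { "mbean": "java.lang:type=Memory", "attribute": "HeapMemoryUsage", "type": "read" },
  "value": { "init": 268435456, "committed": 528482304, "max": 3817865216, "used": 154713784 },
  "status": 200
}
```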
@ -48,7 +48,7 @@ The `ProgressTracker` component is used to report the progress of a flow through

typically configured to report the start of a specific business workflow step (often before and after message send and
receipt where other participants form part of a multi-staged business workflow). The progress tracking framework was
designed to become a vital part of how exceptions, errors, and other faults are surfaced to human operators for
investigation and resolution. It provides a means of exporting progress as a hierarchy of steps in a way that’s both
human readable and machine readable.

In addition, in-house Corda networks at R3 use the following tools:
@ -129,7 +129,7 @@ design, either directly or through an integrated enterprise-wide systems managem

The following design decisions are to be confirmed:

1. JMX for metric eventing and SLF4J for logging

   Both of the above are widely adopted mechanisms that enable pluggability and seamless interoperability with other 3rd party
   enterprise-wide system management solutions.
2. Continue or discontinue usage of Jolokia? (TBC - most likely yes, subject to read-only security lock-down)
3. Separation of Corda Node and CorDapp log outputs (TBC)
@ -140,7 +140,7 @@ There are a number of activities and parts to the solution proposal:

1. Extend JMX metric reporting through the Corda Monitoring Service and associated jolokia conversion to REST/JSON
   coverage (see implementation details) to include all Corda services (vault, key management, transaction storage,
   network map, attachment storage, identity, cordapp provision) & subsystem components (state machine)

2. Review and extend Corda log4j2 coverage (see implementation details) to ensure
@ -251,7 +251,7 @@ The Health checker is a CorDapp which verifies the health and liveliness of the

3. Flow framework verification

   Implement a simple flow that performs a simple "in-node" (no external messaging to 3rd party processes) round trip, and by doing so, exercises:

   - flow checkpointing (including persistence to relational data store)
   - message subsystem verification (creation of a send-to-self queue for purpose of routing)
@ -264,16 +264,16 @@ The Health checker is a CorDapp which verifies the health and liveliness of the

Auto-triggering of the above flow using RPC to exercise the following:

- messaging subsystem verification (RPC queuing)
- authentication and permissions checking (against underlying configuration)

The Health checker may be deployed as part of a Corda distribution and automatically invoked upon start-up and/or manually triggered via JMX or the node's associated Crash shell (using the startFlow command)

Please note that the Health checker application is not responsible for determining the healthiness of a Corda Network. This is the responsibility of the network operator, and may include verification checks such as:

- correct functioning of Network Map Service (registration, discovery)
- correct functioning of configured Notary
- remote messaging subsystem (including bridge creation)

#### Metrics augmentation within Corda Subsystems and Components
@ -281,7 +281,7 @@ Please note that the Health checker application is not responsible for determini

- Gauge: is an instantaneous measurement of a value.
- Counter: is a gauge for a numeric value (specifically of type `AtomicLong`) which can be incremented or decremented.
- Meter: measures mean throughput (the rate of events over time, e.g. “requests per second”). Also measures one-, five-, and fifteen-minute exponentially-weighted moving average throughputs.
- Histogram: measures the statistical distribution of values in a stream of data (minimum, maximum, mean, median, 75th, 90th, 95th, 98th, 99th, and 99.9th percentiles).
- Timer: measures both the rate that a particular piece of code is called and the distribution of its duration (e.g. rate of requests in requests per second).
- Health checks: provides a means of centralizing service health checks (e.g. database, message broker).
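To make the meter concrete: the one-minute rate it reports is an exponentially-weighted moving average over periodic count samples. The following is a simplified, self-contained sketch of that calculation in the spirit of the Dropwizard `Meter` (not Dropwizard's actual implementation; the tick interval and decay constant mirror its documented defaults):

```java
import java.util.concurrent.atomic.LongAdder;

// Simplified sketch of a metrics "meter": raw event counts are sampled
// every TICK_SECONDS and folded into a one-minute exponentially-weighted
// moving average (EWMA) rate, expressed in events per second.
public class SimpleMeter {
    private static final double TICK_SECONDS = 5.0;          // sampling interval
    private static final double ONE_MINUTE_ALPHA =
            1 - Math.exp(-TICK_SECONDS / 60.0);              // EWMA decay factor

    private final LongAdder uncounted = new LongAdder();     // events since last tick
    private long totalCount = 0;
    private double oneMinuteRate = 0.0;
    private boolean initialised = false;

    public void mark(long events) {
        uncounted.add(events);
        totalCount += events;
    }

    // In a real meter this is invoked every TICK_SECONDS by a scheduler.
    public void tick() {
        double instantRate = uncounted.sumThenReset() / TICK_SECONDS;
        if (!initialised) {
            oneMinuteRate = instantRate;                     // seed with first sample
            initialised = true;
        } else {
            oneMinuteRate += ONE_MINUTE_ALPHA * (instantRate - oneMinuteRate);
        }
    }

    public long getCount() { return totalCount; }
    public double getOneMinuteRate() { return oneMinuteRate; }

    public static void main(String[] args) {
        SimpleMeter requests = new SimpleMeter();
        // Simulate a steady 50 requests per 5-second tick (10 req/s) for 2 minutes.
        for (int i = 0; i < 24; i++) {
            requests.mark(50);
            requests.tick();
        }
        System.out.println("count=" + requests.getCount());            // count=1200
        System.out.printf("one-minute rate ~ %.2f req/s%n",
                requests.getOneMinuteRate());                          // ~ 10.00 req/s
    }
}
```

The histogram and timer types build on the same sampling idea, adding reservoir sampling of durations on top of the rate calculation.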
@ -293,17 +293,17 @@ The following table identifies additional metrics to report for a Corda node:

| Component / Subsystem | Proposed Metric(s) |
| ---------------------------------------- | ---------------------------------------- |
| Database | Connectivity (health check) |
| Corda Persistence | Database configuration details: <br />Data source properties: JDBC driver, JDBC driver class name, URL<br />Database properties: isolation level, schema name, init database flag<br />Run-time metrics: total & in flight connection, session, transaction counts; committed / rolled back transaction (counter); transaction durations (metric) |
| Message Broker | Connectivity (health check) |
| Corda Messaging Client | |
| State Machine | Fiber thread pool queue size (counter), Live fibers (counter), Fibers waiting for ledger commit (counter)<br />Flow Session Messages (counters): init, confirm, received, reject, normal end, error end, total received messages (for a given flow session, Id and state)<br />(in addition to existing metrics captured)<br />Flow error (count) |
| Flow State Machine | Initiated flows (counter)<br />For a given flow session (counters): initiated flows, send, sendAndReceive, receive, receiveAll, retries upon send<br />For flow messaging (timers) to determine round trip latencies between send/receive interactions with counterparties.<br />Flow suspension metrics (count, age, wait reason, cordapp) |
| RPC | For each RPC operation we should export metrics to report: calling user, round trip latency (timer), calling frequency (meter). Metric reporting should include the Corda RPC protocol version (should be the same as the node's Platform Version) in play. <br />Failed requests would be of particular interest for alerting. |
| Vault | Round trip latency of Vault Queries (timer)<br />Soft locking counters for reserve, release (counter), elapsed times soft locks are held for per flow id (timer, histogram), list of soft locked flow ids and associated stateRefs.<br />Attempt to soft lock fungible states for spending (timer) |
| Transaction Verification<br />(InMemoryTransactionVerifierService) | worker pool size (counter), verify duration (timer), verify throughput (meter), success (counter), failure (counter), in flight (counter) |
| Notarisation | Notary details (type, members in cluster)<br />Counters for success, failures, failure types (conflict, invalid time window, invalid transaction, wrong notary), elapsed time (timer)<br />Ideally provide breakdown of latency across notarisation steps: state ref notary validation, signature checking, from sending to remote notary to receiving response |
| RAFT Notary Service<br />(awaiting choice of new RAFT implementation) | should include similar metrics to previous RAFT (see appendix). |
| SimpleNotaryService | success/failure uniqueness checking<br />success/failure time-window checking |
| ValidatingNotaryService | as above plus success/failure of transaction validation |
| RaftNonValidatingNotaryService | as `SimpleNotaryService`, plus timer for algorithmic execution latency |
| RaftValidatingNotaryService | as `ValidatingNotaryService`, plus timer for algorithmic execution latency |
@ -423,7 +423,7 @@ Additionally, JMX metrics are also generated within the Corda *node-driver* perf

## Appendix B - Corda Logging and Reporting coverage

Primary node services exposed publicly via ServiceHub (SH) or internally by ServiceHubInternal (SHI):

| Service | Type | Implementation | Logging summary |
| ---------------------------------------- | ---- | ---------------------------------- | ---------------------------------------- |
@ -537,7 +537,7 @@ The following table summarised the types of metrics associated with Message Queu

| count | total number of messages added to a queue since the server started |
| countDelta | number of messages added to the queue *since the last message counter update* |
| messageCount | *current* number of messages in the queue |
| messageCountDelta | *overall* number of messages added/removed from the queue *since the last message counter update*. Positive value indicates more messages were added, negative vice versa. |
| lastAddTimestamp | timestamp of the last time a message was added to the queue |
| updateTimestamp | timestamp of the last message counter update |
@ -19,8 +19,8 @@ This design question concerns the way we can manage a certification key. A more

### A. Use Intel's recommended protocol

This involves using ``aesmd`` and the Intel SDK to establish an opaque attestation key that transparently signs quotes.
Then for each enclave we need to do several round trips to IAS to get a revocation list (which we don't need) and request
a direct Intel signature over the quote (which we shouldn't need as the trust has been established already during EPID
join)
@ -30,7 +30,7 @@ join)

#### Disadvantages

1. Frequent round trips to Intel infrastructure
2. Intel can reproduce the certifying private key
3. Involves unnecessary protocol steps and features we don't need (EPID)
@ -43,7 +43,7 @@ certificate that derives its certification key using the sealing fuse values.

1. Certifying key not reproducible by Intel
2. Allows for our own CPU enrollment process, should we need one
3. Infrequent round trips to Intel infrastructure (only needed once per microcode update)

#### Disadvantages
@ -51,7 +51,7 @@ certificate that derives its certification key using the sealing fuse values.

### C. Intercept Intel's recommended protocol

This involves using Intel's current protocol as is but instead of doing round trips to IAS to get signatures over quotes
we try to establish the chain of trust during EPID provisioning and reuse it later.

#### Advantages
@ -19,7 +19,7 @@ This is a simple choice of technology.

1. Clunky API
2. No HTTP API
3. Hand-rolled protocol

### B. etcd
@ -38,7 +38,7 @@ to enforce epochs as mentioned [here](../details/time.md).

#### Advantages

1. We would solve long term secret persistence early, allowing for a longer time frame for testing upgrades and
   reprovisioning before we integrate Corda
2. Allows "fixed stateful provable computations as a service" product, e.g. HA encryption
@ -75,13 +75,13 @@ This may be done in the following steps:

4. Find an alive host that has the channel in its active set for the measurement

1 may be done by maintaining a channel -> measurements map in etcd. This mapping would effectively define the enclave
deployment and would be the central place to control incremental roll-out or rollbacks.

2 requires storing of additional metadata per advertised channel, namely a data structure describing the enclave's trust
predicate. A similar data structure is provided by the discovering entity - these two predicates can then be used to
filter measurements based on trust.

3 is where we may want to introduce more control if we want to support incremental roll-out/canary deployments.

4 is where various (non-MVP) optimisation considerations come to mind. We could add a load balancer, do autoscaling based
on load (although Kubernetes already provides support for this), could have a preference for looping back to the same
@ -2,7 +2,7 @@

The Intel Attestation Service proxy's responsibility is simply to forward requests to and from the IAS.

The reason we need this proxy is because Intel requires us to do Mutual TLS with them for each attestation round trip.
For this we need an R3 maintained private key, and as we want third parties to be able to do attestation we need to
store this private key in these proxies.
@ -5,7 +5,7 @@ layer which hides the infrastructure details. Users provide "lambdas", which are

other lambdas, access other AWS services etc. Because Lambdas are inherently stateless (any state they need must be
accessed through a service) they may be loaded and executed on demand. This is in contrast with microservices, which
are inherently stateful. Internally AWS caches the lambda images and even caches JIT compiled/warmed up code in order
to reduce latency. Furthermore the lambda invocation interface provides a convenient way to scale these lambdas: as the
functions are stateless AWS can spin up new VMs to push lambda functions to. The user simply pays for CPU usage, all
the infrastructure pain is hidden by Amazon.
@ -20,11 +20,11 @@ may delay the delivery of the signature indefinitely, even until after the certi

DH happened before the nonce was generated, which means even if an attacker can crack the expired key they would not be
able to steal the DH session, only try creating new ones, which will fail at the timestamp check.

This seems to be working, however note that this would impose a full round trip to an oracle *per DH exchange*.

### Timestamp-encrypted channels

In order to reduce the round trips required for timestamp checking we can invert the responsibility of checking the
timestamp. We can do this by encrypting the channel traffic with an additional key generated by the enclave but that can
only be revealed by the time oracle. The enclave encrypts the encryption key with the oracle's public key so the peer
trying to communicate with the enclave must forward the encrypted key to the oracle. The oracle in turn will check the
@ -48,7 +48,7 @@ Let's assume that both A and B are happy with T2, except Node A hasn't establish

to prove the validity of T2 to A without revealing the details of T1.

The following diagram shows an overview of how this can be achieved. Note that the diagram is highly oversimplified
and is meant to communicate the high-level data flow relevant to Corda.

![SGX Provisioning](SgxProvisioning.png "SGX Provisioning")
@@ -51,7 +51,7 @@ as part of a separate design effort. Figuring out what you will *not* do is freq

 List of design decisions identified in defining the target solution.

-For each item, please complete the attached [Design Decision template](decisions/decision.html)
+For each item, please complete the attached [Design Decision template](decisions/decision.md)

 Use the ``.. toctree::`` feature to list out the design decision docs here (see the source of this file for an example).
@@ -37,7 +37,7 @@ Corda Modules
 ``core-deterministic`` and ``serialization-deterministic`` are generated from Corda's ``core`` and ``serialization``
 modules respectively using both `ProGuard <https://www.guardsquare.com/en/proguard>`_ and Corda's ``JarFilter`` Gradle
 plugin. Corda developers configure these tools by applying Corda's ``@KeepForDJVM`` and ``@DeleteForDJVM``
-annotations to elements of ``core`` and ``serialization`` as described `here <deterministic_annotations_>`_.
+annotations to elements of ``core`` and ``serialization`` as described :ref:`here <deterministic_annotations>`.

 The build generates each of Corda's deterministic JARs in six steps:
@@ -115,7 +115,7 @@ object TopupIssuerFlow {
         val txns: List<SignedTransaction> = reserveLimits.map { amount ->
             // request asset issue
             logger.info("Requesting currency issue $amount")
-            val txn = issueCashTo(amount, topupRequest.issueToParty, topupRequest.issuerPartyRef)
+            val txn = issueCashTo(amount, topupRequest.issueToParty, topupRequest.issuerPartyRef, topupRequest.notaryParty)
             progressTracker.currentStep = SENDING_TOP_UP_ISSUE_REQUEST
             return@map txn.stx
         }
@@ -128,10 +128,8 @@ object TopupIssuerFlow {
         @Suspendable
         private fun issueCashTo(amount: Amount<Currency>,
                                 issueTo: Party,
-                                issuerPartyRef: OpaqueBytes): AbstractCashFlow.Result {
-            // TODO: pass notary in as request parameter
-            val notaryParty = serviceHub.networkMapCache.notaryIdentities.firstOrNull()
-                    ?: throw IllegalArgumentException("Couldn't find any notary in NetworkMapCache")
+                                issuerPartyRef: OpaqueBytes,
+                                notaryParty: Party): AbstractCashFlow.Result {
             // invoke Cash subflow to issue Asset
             progressTracker.currentStep = ISSUING
             val issueCashFlow = CashIssueFlow(amount, issuerPartyRef, notaryParty)
@@ -69,15 +69,15 @@ private fun prepareOurInputsAndOutputs(serviceHub: ServiceHub, lockId: UUID, req
     val (inputs, residual) = gatherOurInputs(serviceHub, lockId, sellAmount, request.notary)

     // Build and an output state for the counterparty
-    val transferedFundsOutput = Cash.State(sellAmount, request.counterparty)
+    val transferredFundsOutput = Cash.State(sellAmount, request.counterparty)

     val outputs = if (residual > 0L) {
         // Build an output state for the residual change back to us
         val residualAmount = Amount(residual, sellAmount.token)
         val residualOutput = Cash.State(residualAmount, serviceHub.myInfo.singleIdentity())
-        listOf(transferedFundsOutput, residualOutput)
+        listOf(transferredFundsOutput, residualOutput)
     } else {
-        listOf(transferedFundsOutput)
+        listOf(transferredFundsOutput)
     }
     return Pair(inputs, outputs)
     // DOCEND 2
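The change-output logic in this hunk generalizes: given the total gathered from inputs and the amount to transfer, emit one output for the counterparty and, if anything is left over, a second output returning the residual to ourselves. A sketch in Java, where the ``Output`` record is a hypothetical stand-in for ``Cash.State``:

```java
import java.util.ArrayList;
import java.util.List;

class ChangeOutputs {
    // Hypothetical stand-in for Cash.State: an amount owned by a named party.
    record Output(long amount, String owner) {}

    static List<Output> buildOutputs(long gathered, long sellAmount,
                                     String counterparty, String us) {
        if (gathered < sellAmount) {
            throw new IllegalArgumentException("insufficient funds gathered");
        }
        long residual = gathered - sellAmount;
        List<Output> outputs = new ArrayList<>();
        outputs.add(new Output(sellAmount, counterparty)); // transferred funds
        if (residual > 0L) {
            outputs.add(new Output(residual, us));         // change back to us
        }
        return outputs;
    }

    public static void main(String[] args) {
        // Gathered 150 from inputs, selling 100: one 100 output plus 50 change.
        List<Output> outputs = buildOutputs(150, 100, "CounterpartyB", "NodeA");
        System.out.println(outputs.size());          // prints 2
        System.out.println(outputs.get(1).amount()); // prints 50
    }
}
```

When the inputs match the sale exactly, no residual output is created, mirroring the ``else`` branch above.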
@@ -1,96 +0,0 @@
-package net.corda.docs.tutorial.mocknetwork
-
-import co.paralleluniverse.fibers.Suspendable
-import com.google.common.collect.ImmutableList
-import net.corda.core.contracts.requireThat
-import net.corda.core.flows.FlowLogic
-import net.corda.core.flows.FlowSession
-import net.corda.core.flows.InitiatedBy
-import net.corda.core.flows.InitiatingFlow
-import net.corda.core.identity.Party
-import net.corda.core.utilities.unwrap
-import net.corda.testing.node.MockNetwork
-import net.corda.testing.node.StartedMockNode
-import org.junit.After
-import org.junit.Before
-import org.junit.Rule
-import org.junit.rules.ExpectedException
-
-class TutorialMockNetwork {
-
-    @InitiatingFlow
-    class FlowA(private val otherParty: Party) : FlowLogic<Unit>() {
-
-        @Suspendable
-        override fun call() {
-            val session = initiateFlow(otherParty)
-
-            session.receive<Int>().unwrap {
-                requireThat { "Expected to receive 1" using (it == 1) }
-            }
-
-            session.receive<Int>().unwrap {
-                requireThat { "Expected to receive 2" using (it == 2) }
-            }
-        }
-    }
-
-    @InitiatedBy(FlowA::class)
-    class FlowB(private val session: FlowSession) : FlowLogic<Unit>() {
-
-        @Suspendable
-        override fun call() {
-            session.send(1)
-            session.send(2)
-        }
-    }
-
-    private lateinit var mockNet: MockNetwork
-    private lateinit var nodeA: StartedMockNode
-    private lateinit var nodeB: StartedMockNode
-
-    @Rule
-    @JvmField
-    val expectedEx: ExpectedException = ExpectedException.none()
-
-    @Before
-    fun setUp() {
-        mockNet = MockNetwork(ImmutableList.of("net.corda.docs.tutorial.mocknetwork"))
-        nodeA = mockNet.createPartyNode()
-        nodeB = mockNet.createPartyNode()
-    }
-
-    @After
-    fun tearDown() {
-        mockNet.stopNodes()
-    }
-
-    // @Test
-    // fun `fail if initiated doesn't send back 1 on first result`() {
-
-    // DOCSTART 1
-    // TODO: Fix this test - accessing the MessagingService directly exposes internal interfaces
-    // nodeB.setMessagingServiceSpy(object : MessagingServiceSpy(nodeB.network) {
-    //     override fun send(message: Message, target: MessageRecipients, retryId: Long?, sequenceKey: Any, additionalHeaders: Map<String, String>) {
-    //         val messageData = message.data.deserialize<Any>() as? ExistingSessionMessage
-    //         val payload = messageData?.payload
-    //
-    //         if (payload is DataSessionMessage && payload.payload.deserialize() == 1) {
-    //             val alteredMessageData = messageData.copy(payload = payload.copy(99.serialize())).serialize().bytes
-    //             messagingService.send(InMemoryMessagingNetwork.InMemoryMessage(message.topic, OpaqueBytes(alteredMessageData), message.uniqueMessageId), target, retryId)
-    //         } else {
-    //             messagingService.send(message, target, retryId)
-    //         }
-    //     }
-    // })
-    // DOCEND 1
-
-    // val initiatingReceiveFlow = nodeA.startFlow(FlowA(nodeB.info.legalIdentities.first()))
-    //
-    // mockNet.runNetwork()
-    //
-    // expectedEx.expect(IllegalArgumentException::class.java)
-    // expectedEx.expectMessage("Expected to receive 1")
-    // initiatingReceiveFlow.getOrThrow()
-    // }
-}
@@ -12,7 +12,7 @@ import net.corda.finance.contracts.Fix
 import java.util.function.Predicate

 fun main(args: Array<String>) {
-    // Typealias to make the example coherent.
+    // Type alias to make the example coherent.
     val oracle = Any() as AbstractParty
     val stx = Any() as SignedTransaction
@@ -59,17 +59,4 @@ it re-assigns ownership instead. The chain of two transactions is finally commit
 directly to the ``megaCorpNode.services.recordTransaction`` method (note that this method doesn't check the
 transactions are valid) inside a ``database.transaction``. All node flows run within a database transaction in the
 nodes themselves, but any time we need to use the database directly from a unit test, you need to provide a database
 transaction as shown here.
-
-.. MockNetwork message manipulation
-.. --------------------------------
-.. The MockNetwork has the ability to manipulate message streams. You can use this to test your flows behaviour on corrupted,
-   or malicious data received.
-
-.. Message modification example in ``TutorialMockNetwork.kt``:
-
-.. .. literalinclude:: ../../docs/source/example-code/src/main/kotlin/net/corda/docs/tutorial/mocknetwork/TutorialMockNetwork.kt
-     :language: kotlin
-     :start-after: DOCSTART 1
-     :end-before: DOCEND 1
-     :dedent: 8
@@ -139,7 +139,7 @@ To copy the same file to all nodes `ext.drivers` can be defined in the top level

 Specifying a custom webserver
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-By default, any node listing a webport will use the default development webserver, which is not production-ready. You
+By default, any node listing a web port will use the default development webserver, which is not production-ready. You
 can use your own webserver JAR instead by using the ``webserverJar`` argument in a ``Cordform`` ``node`` configuration
 block:
@@ -183,8 +183,8 @@ Next steps
 ----------
 There are a number of improvements we could make to this CorDapp:

-* We chould add unit tests, using the contract-test and flow-test frameworks
-* We chould change ``IOUState.value`` from an integer to a proper amount of a given currency
+* We could add unit tests, using the contract-test and flow-test frameworks
+* We could change ``IOUState.value`` from an integer to a proper amount of a given currency
 * We could add an API, to make it easier to interact with the CorDapp

 But for now, the biggest priority is to add an ``IOUContract`` imposing constraints on the evolution of each
@ -40,8 +40,9 @@ We look forward to seeing what you can do with Corda!
|
||||
tools-index.rst
|
||||
node-internals-index.rst
|
||||
component-library-index.rst
|
||||
troubleshooting.rst
|
||||
serialization-index.rst
|
||||
json.rst
|
||||
troubleshooting.rst
|
||||
|
||||
.. toctree::
|
||||
:caption: Operations
|
||||
@@ -53,6 +54,16 @@ We look forward to seeing what you can do with Corda!
    aws-vm.rst
    loadtesting.rst

+.. Documentation is not included in the pdf unless it is included in a toctree somewhere
+.. only:: pdfmode
+
+   .. toctree::
+      :caption: Other documentation
+
+      deterministic-modules.rst
+      release-notes.rst
+      changelog.rst
+
 .. only:: htmlmode

 .. toctree::
@@ -119,7 +119,7 @@ These include:
 * When a node would prefer to use a different notary cluster for a given transaction due to privacy or efficiency
   concerns

-Before these transactions can be created, the states must first all be repointed to the same notary cluster. This is
+Before these transactions can be created, the states must first all be re-pointed to the same notary cluster. This is
 achieved using a special notary-change transaction that takes:

 * A single input state
@@ -1,5 +1,5 @@
-Tradeoffs
-=========
+Trade-offs
+==========

 .. topic:: Summary
@@ -40,7 +40,7 @@ Corda also uses several other techniques to maximize privacy on the network:
 * **Transaction tear-offs**: Transactions are structured in a way that allows them to be digitally signed without
   disclosing the transaction's contents. This is achieved using a data structure called a Merkle tree. You can read
   more about this technique in :doc:`tutorial-tear-offs`.
-* **Key randomisation**: The parties to a transaction are identified only by their public keys, and fresh keypairs are
+* **Key randomisation**: The parties to a transaction are identified only by their public keys, and fresh key pairs are
   generated for each transaction. As a result, an onlooker cannot identify which parties were involved in a given
   transaction.
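The tear-off idea mentioned in this hunk can be illustrated with a two-leaf Merkle tree: a verifier who knows only the root, one revealed leaf, and the sibling hash can confirm inclusion without ever seeing the hidden leaf's contents. A simplified sketch (real Corda Merkle trees differ in hashing, nonces, and padding details):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

class TearOffSketch {
    // Hash the concatenation of the given byte arrays with SHA-256.
    static byte[] sha256(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] visibleLeaf = "commands: Move".getBytes(StandardCharsets.UTF_8);
        byte[] hiddenLeaf  = "outputs: secret state".getBytes(StandardCharsets.UTF_8);

        // Full tree: root = H(H(visible) || H(hidden)).
        byte[] root = sha256(sha256(visibleLeaf), sha256(hiddenLeaf));

        // A "torn-off" view ships only the visible leaf plus the sibling hash;
        // the verifier recomputes the root without seeing the hidden leaf.
        byte[] siblingHash = sha256(hiddenLeaf); // received as an opaque hash
        byte[] recomputed = sha256(sha256(visibleLeaf), siblingHash);

        if (!Arrays.equals(root, recomputed)) {
            throw new AssertionError("torn-off verification failed");
        }
    }
}
```

Because the hidden leaf travels only as a hash, a notary or oracle can sign over the root while the state contents stay private.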
@@ -124,7 +124,7 @@ The current set of network parameters:

 More parameters will be added in future releases to regulate things like allowed port numbers, how long a node can be
 offline before it is evicted from the zone, whether or not IPv6 connectivity is required for zone members, required
-cryptographic algorithms and rollout schedules (e.g. for moving to post quantum cryptography), parameters related to
+cryptographic algorithms and roll-out schedules (e.g. for moving to post quantum cryptography), parameters related to
 SGX and so on.

 Network parameters update process
@@ -134,7 +134,7 @@ In case of the need to change network parameters Corda zone operator will start
 that may lead to this decision: adding a notary, setting new fields that were added to enable smooth network interoperability,
 or a change of the existing compatibility constants is required, for example.

-.. note:: A future release may support the notion of phased rollout of network parameter changes.
+.. note:: A future release may support the notion of phased roll-out of network parameter changes.

 To synchronize all nodes in the compatibility zone to use the new set of the network parameters two RPC methods are
 provided. The process requires human interaction and approval of the change, so node operators can review the
@@ -230,7 +230,7 @@ It can be thought of as a DNS equivalent. If you want to de-list a user, you wou
 It is very likely that your map server won't be entirely standalone, but rather, integrated with whatever your master
 user database is.

-The network map server also distributes signed network parameter files and controls the rollout schedule for when they
+The network map server also distributes signed network parameter files and controls the roll-out schedule for when they
 become available for download and opt-in, and when they become enforced. This is again a policy decision you will
 probably choose to place some simple UI or workflow tooling around, in particular to enforce restrictions on who can
 edit the map or the parameters.
@@ -322,7 +322,7 @@ Selecting parameter values
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

 How to choose the parameters? This is the most complex question facing you as a new zone operator. Some settings may seem
-straightforward and others may involve cost/benefit tradeoffs specific to your business. For example, you could choose
+straightforward and others may involve cost/benefit trade-offs specific to your business. For example, you could choose
 to run a validating notary yourself, in which case you would (in the absence of SGX) see all the users' data. Or you could
 run a non-validating notary, with BFT fault tolerance, which implies recruiting others to take part in the cluster.
@@ -1,6 +1,15 @@
 Quickstart
 ==========

+.. only:: pdfmode
+
+   .. toctree::
+      :caption: Other docs
+      :maxdepth: 1
+
+      getting-set-up.rst
+      tutorial-cordapp.rst
+
 * :doc:`Set up your machine for CorDapp development <getting-set-up>`
 * :doc:`Run the Example CorDapp <tutorial-cordapp>`
 * `View CorDapps in Corda Explore <http://explore.corda.zone/>`_
@@ -376,7 +376,7 @@ and via the versioning APIs.
 * **Observer Nodes**

 Adds the facility for transparent forwarding of transactions to some third party observer, such as a regulator. By having
-that entity simply run an Observer node they can simply recieve a stream of digitally signed, de-duplicated reports that
+that entity simply run an Observer node they can simply receive a stream of digitally signed, de-duplicated reports that
 can be used for reporting.

 .. _release_notes_v1_0:
docs/source/serialization-index.rst (new file, 12 lines)
@@ -0,0 +1,12 @@
+Serialization
+=============
+
+.. toctree::
+   :caption: Other docs
+   :maxdepth: 1
+
+   serialization.rst
+   cordapp-custom-serializers
+   serialization-default-evolution.rst
+   serialization-enum-evolution.rst
+
@@ -17,7 +17,7 @@ the `CRaSH`_ shell and supports many of the same features. These features includ
 * Uploading and downloading attachments
 * Issuing SQL queries to the underlying database
 * Viewing JMX metrics and monitoring exports
-* UNIX style pipes for both text and objects, an ``egrep`` command and a command for working with columnular data
+* UNIX style pipes for both text and objects, an ``egrep`` command and a command for working with columnar data
 * Shutting the node down.

 Permissions
@@ -1,12 +1,6 @@
 Hello, World! Pt.2 - Contract constraints
 =========================================

-.. toctree::
-   :maxdepth: 1
-
-   tut-two-party-contract
-   tut-two-party-flow
-
 .. note:: This tutorial extends the CorDapp built during the :doc:`Hello, World tutorial <hello-world-introduction>`.

 In the Hello, World tutorial, we built a CorDapp allowing us to model IOUs on ledger. Our CorDapp was made up of two
@@ -21,4 +15,10 @@ to create IOUs of any value, between any party.
 In this tutorial, we'll write a contract to imposes rules on how an ``IOUState`` can change over time. In turn, this
 will require some small changes to the flow we defined in the previous tutorial.

-We'll start by writing the contract.
+We'll start by writing the contract.
+
+.. toctree::
+   :maxdepth: 1
+
+   tut-two-party-contract
+   tut-two-party-flow
@@ -48,7 +48,7 @@ Observables are described in further detail in :doc:`clientrpc`
 The graph will be defined as follows:

 * Each transaction is a vertex, represented by printing ``NODE <txhash>``
-* Each input-output relationship is an edge, represented by prining ``EDGE <txhash> <txhash>``
+* Each input-output relationship is an edge, represented by printing ``EDGE <txhash> <txhash>``

 .. literalinclude:: example-code/src/main/kotlin/net/corda/docs/ClientRpcTutorial.kt
    :language: kotlin
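The ``NODE``/``EDGE`` output format in this hunk can be sketched independently of the RPC API: walk each transaction, print a ``NODE`` line for its hash and an ``EDGE`` line per input that references another transaction. The ledger model here is a hypothetical simplification, not Corda's ``SignedTransaction``:

```java
import java.util.List;
import java.util.Map;

class GraphPrinter {
    public static void main(String[] args) {
        // Hypothetical ledger: txhash -> list of input txhashes it consumes.
        Map<String, List<String>> ledger = Map.of(
                "TX1", List.of(),            // issuance, no inputs
                "TX2", List.of("TX1"),       // spends an output of TX1
                "TX3", List.of("TX1", "TX2") // spends outputs of TX1 and TX2
        );
        // One NODE line per vertex, one EDGE line per input-output relationship.
        ledger.keySet().stream().sorted().forEach(tx -> {
            System.out.println("NODE " + tx);
            for (String input : ledger.get(tx)) {
                System.out.println("EDGE " + input + " " + tx);
            }
        });
    }
}
```

Fed into a graph tool such as Graphviz (after trivial translation), this edge list reproduces the transaction graph described above.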
@@ -303,7 +303,7 @@ The first line simply gets the time-window out of the transaction. Setting a tim
 may be missing here. We check for it being null later.

 .. warning:: In the Kotlin version as long as we write a comparison with the transaction time first the compiler will
-   verify we didn't forget to check if it's missing. Unfortunately due to the need for smooth Java interop, this
+   verify we didn't forget to check if it's missing. Unfortunately due to the need for smooth interoperability with Java, this
    check won't happen if we write e.g. ``someDate > time``, it has to be ``time < someDate``. So it's good practice to
    always write the transaction time-window first.
@@ -317,7 +317,7 @@ this group. We do not allow multiple units of CP to be split or merged even if t
 exception if the list size is not 1, otherwise it returns the single item in that list. In Java, this appears as a
 regular static method of the type familiar from many FooUtils type singleton classes and we have statically imported it
 here. In Kotlin, it appears as a method that can be called on any JDK list. The syntax is slightly different but
-behind the scenes, the code compiles to the same bytecodes.
+behind the scenes, the code compiles to the same bytecode.

 Next, we check that the transaction was signed by the public key that's marked as the current owner of the commercial
 paper. Because the platform has already verified all the digital signatures before the contract begins execution,
@@ -273,7 +273,7 @@ Testing
 * Starting a flow can now be done directly from a node object. Change calls of the form ``node.getServices().startFlow(...)``
   to ``node.startFlow(...)``

-* Similarly a tranaction can be executed directly from a node object. Change calls of the form ``node.getDatabase().transaction({ it -> ... })``
+* Similarly a transaction can be executed directly from a node object. Change calls of the form ``node.getDatabase().transaction({ it -> ... })``
   to ``node.transaction({() -> ... })``

 * ``startFlow`` now returns a ``CordaFuture``, there is no need to call ``startFlow(...).getResultantFuture()``
@@ -391,7 +391,7 @@ Flow framework

 * ``FlowLogic.send``/``FlowLogic.receive``/``FlowLogic.sendAndReceive`` has been replaced by ``FlowSession.send``/
   ``FlowSession.receive``/``FlowSession.sendAndReceive``. The replacement functions do not take a destination
-  parameter, as this is defined implictly by the session used
+  parameter, as this is defined implicitly by the session used

 * Initiated flows now take in a ``FlowSession`` instead of ``Party`` in their constructor. If you need to access the
   counterparty identity, it is in the ``counterparty`` property of the flow session
@@ -71,7 +71,7 @@ The ``InitiatedBy`` flow does the opposite:
 * Receives a ``String``
 * Sends a ``CustomType``

-As long as both the ``IntiatingFlow`` and the ``InitiatedBy`` flows conform to the sequence of actions, the flows can
+As long as both the ``InitiatingFlow`` and the ``InitiatedBy`` flows conform to the sequence of actions, the flows can
 be implemented in any way you see fit (including adding proprietary business logic that is not shared with other
 parties).
@@ -81,7 +81,7 @@ A flow can become backwards-incompatible in two main ways:

 * The sequence of ``send`` and ``receive`` calls changes:

-  * A ``send`` or ``receive`` is added or removed from either the ``InitatingFlow`` or ``InitiatedBy`` flow
+  * A ``send`` or ``receive`` is added or removed from either the ``InitiatingFlow`` or ``InitiatedBy`` flow
   * The sequence of ``send`` and ``receive`` calls changes

 * The types of the ``send`` and ``receive`` calls changes
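The mirrored send/receive requirement in this hunk can be illustrated with an in-memory queue standing in for the session: every ``send`` on one side must line up, in position and type, with a ``receive`` on the other, and a mismatch surfaces as an error. The ``Session`` class is a hypothetical stand-in for Corda's ``FlowSession``, not its real API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

class SequenceSketch {
    // Hypothetical one-way session: one flow sends, its counterpart receives in order.
    static class Session {
        private final Queue<Object> inbox = new ArrayDeque<>();
        void send(Object payload) { inbox.add(payload); }
        <T> T receive(Class<T> type) {
            Object msg = inbox.remove(); // throws if the counterpart sent too few messages
            if (!type.isInstance(msg)) {
                throw new IllegalStateException("expected " + type.getSimpleName()
                        + " but got " + msg.getClass().getSimpleName());
            }
            return type.cast(msg);
        }
    }

    public static void main(String[] args) {
        Session session = new Session();
        // The initiated flow's side: two sends.
        session.send(1);
        session.send(2);
        // The initiating flow's side: two receives, same order, same types.
        System.out.println(session.receive(Integer.class)); // prints 1
        System.out.println(session.receive(Integer.class)); // prints 2
    }
}
```

Adding, removing, or retyping any ``send`` breaks the pairing, which is exactly why such changes are backwards-incompatible.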
@@ -112,7 +112,7 @@ If you shut down all nodes and upgrade them all at the same time, any incompatib

 In situations where some nodes may still be using previous versions of a flow and thus new versions of your flow may
 talk to old versions, the updated flows need to be backwards-compatible. This will be the case for almost any real
-deployment in which you cannot easily coordinate the rollout of new code across the network.
+deployment in which you cannot easily coordinate the roll-out of new code across the network.

 How do I ensure flow backwards-compatibility?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -50,7 +50,7 @@ Note the following:
 * a vault update API is internally used by transaction recording flows.
 * the vault database schemas are directly accessible via JDBC for customer joins and queries

-Section 8 of the `Technical white paper`_ describes features of the vault yet to be implemented including private key managament, state splitting and merging, asset re-issuance and node event scheduling.
+Section 8 of the `Technical white paper`_ describes features of the vault yet to be implemented including private key management, state splitting and merging, asset re-issuance and node event scheduling.

 .. _`Technical white paper`: _static/corda-technical-whitepaper.pdf