Merge remote-tracking branch 'open/master' into aslemmer-merge-19-Feb

This commit is contained in:
Andras Slemmer 2018-02-21 18:10:37 +00:00
commit 6a2217ace6
9 changed files with 323 additions and 137 deletions

View File

@ -15,6 +15,14 @@ configurations {
smokeTestRuntime.extendsFrom runtime
}
compileKotlin {
kotlinOptions.jvmTarget = "1.8"
}
compileTestKotlin {
kotlinOptions.jvmTarget = "1.8"
}
sourceSets {
integrationTest {
kotlin {

View File

@ -7,55 +7,135 @@ API: Contract Constraints
Contract constraints
--------------------
Corda separates verification of states from their definition. Whilst you might have expected the ``ContractState``
interface to define a verify method, or perhaps to do verification logic in the constructor, instead it is primarily
done by a method on a ``Contract`` class. This is because what we're actually checking is the
validity of a *transaction*, which is more than just whether the individual states are internally consistent.
The transition between two valid states may be invalid, if the rules of the application are not being respected.
For instance, two cash states of $100 and $200 may both be internally valid, but replacing the first with the second
isn't allowed unless you're a cash issuer - otherwise you could print money for free.
For a transaction to be valid, the ``verify`` function associated with each state must run successfully. However,
for this to be secure, it is not sufficient to specify the ``verify`` function by name as there may exist multiple
different implementations with the same method signature and enclosing class. This normally will happen as applications
evolve, but could also happen maliciously.
Contract constraints solve this problem by allowing a contract developer to constrain which ``verify`` functions out of
the universe of implementations can be used (i.e. the universe is everything that matches the signature and contract
constraints restrict this universe to a subset). Constraints are satisfied by attachments (JARs). You are not allowed to
attach two JARs that both define the same application due to the *no overlap rule*. This rule specifies that two
attachment JARs may not provide the same file path. If they do, the transaction is considered invalid. Because each
state specifies both a constraint over attachments *and* a Contract class name to use, the specified class must appear
in only one attachment.
So who picks the attachment to use? It is chosen by the creator of the transaction that has to satisfy input constraints.
The transaction creator also gets to pick the constraints used by any output states, but the contract logic itself may
have opinions about what those constraints are - a typical contract would require that the constraints are propagated,
that is, the contract will not just enforce the validity of the next transaction that uses a state, but *all successive
transactions as well*. The constraints mechanism creates a balance of power between the creator of data on
the ledger and the user who is trying to edit it, which can be very useful when managing upgrades to Corda applications.
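To illustrate constraint propagation, below is a minimal sketch of a ``verify`` method that insists every output
carries the same attachment constraint as the inputs, so the constraint keeps applying to successive transactions.
The contract name is hypothetical and the check is deliberately simplified; it is not a built-in platform rule:

.. sourcecode:: java

    import net.corda.core.contracts.AttachmentConstraint;
    import net.corda.core.contracts.Contract;
    import net.corda.core.contracts.ContractState;
    import net.corda.core.contracts.TransactionState;
    import net.corda.core.transactions.LedgerTransaction;

    // Hypothetical contract: propagates the input states' constraint onto every output state.
    public class ConstraintPropagatingContract implements Contract {
        @Override
        public void verify(LedgerTransaction tx) {
            if (tx.getInputs().isEmpty()) return; // an issuance has nothing to propagate yet
            AttachmentConstraint required = tx.getInputs().get(0).getState().getConstraint();
            for (TransactionState<ContractState> output : tx.getOutputs()) {
                if (!output.getConstraint().equals(required)) {
                    throw new IllegalArgumentException("Output states must carry the same constraint as the inputs");
                }
            }
        }
    }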
There are two ways of handling upgrades to a smart contract in Corda:
1. *Implicit:* By allowing multiple implementations of the contract ahead of time, using constraints.
2. *Explicit:* By creating a special *contract upgrade transaction* and getting all participants of a state to sign it using the
contract upgrade flows.
This article focuses on the first approach. To learn about the second please see :doc:`upgrading-cordapps`.
The advantage of pre-authorising upgrades using constraints is that you don't need the heavyweight process of creating
upgrade transactions for every state on the ledger. The disadvantage is that you place more faith in third parties,
who could potentially change the app in ways you did not expect or agree with. The advantage of using the explicit
upgrade approach is that you can upgrade states regardless of their constraint, including in cases where you didn't
anticipate a need to do so. But it requires everyone to sign, requires everyone to manually authorise the upgrade,
consumes notary and ledger resources, and is just in general more complex.
How constraints work
--------------------
Starting from Corda 3 there are two types of constraint that can be used: hash and zone whitelist. In future
releases a third type will be added, the signature constraint.
**Hash constraints.** The behaviour provided by public blockchain systems like Bitcoin and Ethereum is that once data is placed on the ledger,
the program that controls it is fixed and cannot be changed. There is no support for upgrades at all. This implements a
form of "code is law", assuming you trust the community of that blockchain to not release a new version of the platform
that invalidates or changes the meaning of your program.
This is supported by Corda using a hash constraint. This specifies exactly one hash of a CorDapp JAR that contains the
contract and states any consuming transaction is allowed to use. Once such a state is created, other nodes will only
accept a transaction if it uses that exact JAR file as an attachment. By implication, any bugs in the contract code
or state definitions cannot be fixed except by using an explicit upgrade process via ``ContractUpgradeFlow``.
.. note:: Corda does not support any way to create states that can never be upgraded at all, but the same effect can be
obtained by using a hash constraint and then simply refusing to agree to any explicit upgrades. Hash
constraints put you in control by requiring an explicit agreement to any upgrade.
**Zone constraints.** Often a hash constraint will be too restrictive. You do want the ability to upgrade an app,
and you don't mind the upgrade taking effect "just in time" when a transaction happens to be required for other business
reasons. In this case you can use a zone constraint. This specifies that the network parameters of a compatibility zone
(see :doc:`network-map`) are expected to contain a map of class name to hashes of JARs that are allowed to provide that
class. The process for upgrading an app then involves asking the zone operator to add the hash of your new JAR to the
parameters file, and trigger the network parameters upgrade process. This involves each node operator running a shell
command to accept the new parameters file and then restarting the node. Node owners who do not restart their node in
time effectively stop being a part of the network.
**Signature constraints.** These are not yet supported, but once implemented they will allow a state to require a JAR
signed by a specified identity, via the regular Java jarsigner tool. This will be the most flexible type
and the smoothest to deploy: no restarts or contract upgrade transactions are needed.
**Defaults.** The default constraint type is either a zone constraint, if the network parameters in effect when the
transaction is built contain an entry for that contract class, or a hash constraint if not.
A ``TransactionState`` has a ``constraint`` field that represents that state's attachment constraint. When a party
constructs a ``TransactionState``, or adds a state using ``TransactionBuilder.addOutput(ContractState)`` without
specifying the constraint parameter, a default value (``AutomaticHashConstraint``) is used. This default will be
automatically resolved to a specific ``HashAttachmentConstraint`` or a ``WhitelistedByZoneAttachmentConstraint``.
This automatic resolution occurs when a ``TransactionBuilder`` is converted to a ``WireTransaction``. This reduces
the boilerplate that would otherwise be involved.
It is possible to specify the constraint explicitly with any other class that implements the ``AttachmentConstraint``
interface. To specify a hash manually the ``HashAttachmentConstraint`` can be used. Finally, an
``AlwaysAcceptAttachmentConstraint`` can be used which accepts anything, though this is intended for
testing only.
Please note that the ``AttachmentConstraint`` interface is marked as ``@DoNotImplement``. You are not allowed to write
new constraint types. Only the platform may implement this interface. If you tried, other nodes would not understand
your constraint type and your transaction would not verify.
.. warning:: An AlwaysAccept constraint is effectively the same as disabling security for those states entirely.
Nothing stops you using this constraint in production, but that degrades Corda to being effectively a form
of distributed messaging with optional contract logic being useful only to catch mistakes, rather than potentially
malicious action. If you are deploying an app for which malicious actors aren't in your threat model, using an
AlwaysAccept constraint might simplify things operationally.
An example below shows how to construct a ``TransactionState`` with an explicitly specified hash constraint from within
a flow:
.. sourcecode:: java
    // Constructing a transaction with a custom hash constraint on a state
    TransactionBuilder tx = new TransactionBuilder();

    Party notaryParty = ... // a notary party
    DummyState contractState = new DummyState();

    SecureHash myAttachmentHash = SecureHash.parse("2b4042aed7e0e39d312c4c477dca1d96ec5a878ddcfd5583251a8367edbd4a5f");
    TransactionState transactionState = new TransactionState(contractState, DummyContract.Companion.getPROGRAMID(), notaryParty, new HashAttachmentConstraint(myAttachmentHash));

    tx.addOutputState(transactionState);
    WireTransaction wtx = tx.toWireTransaction(serviceHub); // This is where an automatic constraint would be resolved.
    LedgerTransaction ltx = wtx.toLedgerTransaction(serviceHub);
    ltx.verify(); // Verifies both the attachment constraints and contracts

This mechanism exists both for integrity and security reasons. It is important not to verify against the wrong contract,
which could happen if the wrong version of the contract is attached. More importantly, when resolving transaction chains
there will, in a future release, be attachments loaded from the network into the attachment sandbox that are used
to verify the transaction chain. Ensuring the attachment used is the correct one means the verification cannot be
tampered with by an attacker providing a fake contract.
Hard-coding the hash of your app in the code itself can be pretty awkward, so the API also offers the ``AutomaticHashConstraint``.
This isn't a real constraint that will appear in a transaction: it acts as a marker to the ``TransactionBuilder`` that
you require the hash of the node's installed app which supplies the specified contract to be used. In practice, when using
hash constraints, you almost always want "whatever the current code is" and not a hard-coded hash. So this automatic
constraint placeholder is useful.
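If a flow does need the hash of the app that is actually installed on the node, rather than a hard-coded value, it can
ask the node's ``CordappProvider``. A minimal sketch, assuming the same flow context and ``DummyContract`` as the
example above:

.. sourcecode:: java

    // Look up the attachment ID of the installed CorDapp that supplies the contract.
    // May be null if no installed CorDapp provides this contract class.
    SecureHash installedAppHash = getServiceHub().getCordappProvider().getContractAttachmentID(DummyContract.Companion.getPROGRAMID());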
CorDapps as attachments
-----------------------
CorDapp JARs (see :doc:`cordapp-overview`) that are installed to the node and contain classes implementing the ``Contract``
interface are automatically loaded into the ``AttachmentStorage`` of a node at startup.
After CorDapps are loaded into the attachment store the node creates a link between contract classes and the attachment
@ -63,28 +143,12 @@ that they were loaded from. This makes it possible to find the attachment for an
automatic resolution of attachments is done by the ``TransactionBuilder`` and how, when verifying the constraints and
contracts, attachments are associated with their respective contracts.
Implementations of AttachmentConstraint
---------------------------------------
There are currently four implementations of ``AttachmentConstraint``, with more planned for the future.

``AlwaysAcceptAttachmentConstraint``: Any attachment (except a missing one) will satisfy this constraint.

``AutomaticHashConstraint``: A placeholder that is resolved to a concrete constraint when a ``TransactionBuilder`` is
converted to a ``WireTransaction``, as described above.

``HashAttachmentConstraint``: Will require that the hash of the attachment containing the contract matches the hash
stored in the constraint.

``WhitelistedByZoneAttachmentConstraint``: Will require that the attachment containing the contract is whitelisted for
that contract's class name in the network parameters of the compatibility zone.

A further ``AttachmentConstraint`` that is satisfied only by the presence of specified signatures on the attachment JAR
is planned, as described in the section on signature constraints above.
Limitations
-----------
An ``AttachmentConstraint`` is verified by running the ``AttachmentConstraint.isSatisfiedBy`` method. When this is
called, the transaction being verified provides it with only the relevant attachment.
.. note:: The obvious way to write a CorDapp is to put all your states, contracts, flows and support code into a single
Java module. This will work but it will effectively publish your entire app onto the ledger. That has two problems:
(1) it is inefficient, and (2) it means changes to your flows or other parts of the app will be seen by the ledger
as a "new app", which may end up requiring essentially unnecessary upgrade procedures. It's better to split your
app into multiple modules: one which contains just states, contracts and core data types, and another which contains
the rest of the app. See :ref:`cordapp-structure`.
Testing
-------
@ -93,7 +157,8 @@ Since all tests involving transactions now require attachments it is also requir
for tests. Unit test environments in JVM ecosystems tend to use class directories rather than JARs, and so CorDapp JARs
typically aren't built for testing. Requiring this would add significant complexity to the build systems of Corda
and CorDapps, so the test suite has a set of convenient functions to generate CorDapps from package names or
to specify JAR URLs in the case that the CorDapp(s) involved in testing already exist. You can also just use
``AlwaysAcceptAttachmentConstraint`` in your tests to disable the constraints mechanism.
MockNetwork/MockNode
********************
@ -102,12 +167,14 @@ The simplest way to ensure that a vanilla instance of a MockNode generates the c
``cordappPackages`` constructor parameter (Kotlin) or the ``setCordappPackages`` method on ``MockNetworkParameters`` (Java)
when creating the MockNetwork. This will cause the ``AbstractNode`` to use the named packages as sources for CorDapps. All files
within those packages will be zipped into a JAR and added to the attachment store and loaded as CorDapps by the
``CordappLoader``.
An example of this usage would be:
.. sourcecode:: java
class SomeTestClass {
MockNetwork network = null;
@Before
void setup() {
@ -117,6 +184,7 @@ within those packages will be zipped into a JAR and added to the attachment stor
... // Your tests go here
}
MockServices
************
@ -127,6 +195,10 @@ to use as CorDapps using the ``cordappPackages`` parameter.
MockServices mockServices = new MockServices(Arrays.asList("com.domain.cordapp"));
However - there is an easier way! If your unit tests are in the same package as the contract code itself, then you
can use the no-args constructor of ``MockServices``. The package to be scanned for CorDapps will be the same as the
package of the class that constructed the object. This is a convenient default.
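For example, a minimal sketch, assuming the test class lives in the same package as the contract under test:

.. sourcecode:: java

    // Scans the package of the calling test class for CorDapps.
    MockServices mockServices = new MockServices();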
Driver
******

View File

@ -1,14 +1,25 @@
Network Map
===========
The network map is a collection of signed ``NodeInfo`` objects. Each NodeInfo is signed by the node it represents and
thus cannot be tampered with. It forms the set of reachable nodes in a compatibility zone. A node can receive these
objects from two sources:
1. A network map server that speaks a simple HTTP based protocol.
2. The ``additional-node-infos`` directory within the node's directory.
The network map server also distributes the parameters file that defines values for various settings that all nodes need
to agree on to remain in sync.
.. note:: In Corda 3 no implementation of the HTTP network map server is provided. This is because the details of how
a compatibility zone manages its membership (the databases, ticketing workflows, HSM hardware etc) is expected to vary
between operators, so we provide a simple REST based protocol for uploading/downloading NodeInfos and managing
network parameters. A future version of Corda may provide a simple "stub" implementation for running test zones.
In Corda 3 the right way to run a test network is through distribution of the relevant files via your own mechanisms.
We provide a tool to automate the bulk of this task (see below).
HTTP network map protocol
-------------------------
If the node is configured with the ``compatibilityZoneURL`` config then it first uploads its own signed ``NodeInfo``
to the server (and each time it changes on startup) and then proceeds to download the entire network map. The network map
@ -31,6 +42,12 @@ The set of REST end-points for the network map service are as follows.
| GET | /network-map/network-parameters/{hash} | Retrieve the signed network parameters (see below). The entire object is signed with the network map certificate which is also attached. |
+----------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------+
HTTP is used for the network map service instead of Corda's own AMQP based peer to peer messaging protocol to
enable the server to be placed behind caching content delivery networks like Cloudflare, Akamai, Amazon Cloudfront and so on.
By using industrial HTTP cache networks the map server can be shielded from DoS attacks more effectively. Additionally,
for the case of distributing small files that rarely change, HTTP is a well understood and optimised protocol. Corda's
own protocol is designed for complex multi-way conversations between authenticated identities using signed binary
messages separated into parallel and nested flows, which isn't necessary for network map distribution.
The ``additional-node-infos`` directory
---------------------------------------
@ -43,24 +60,39 @@ latest one is taken.
On startup the node generates its own signed node info file, filename of the format ``nodeInfo-${hash}``. To create a simple
network without the HTTP network map service, simply place this file in the ``additional-node-infos`` directory
of every node that's part of this network. For example, a simple way to do this is to use rsync.
Usually, test networks have a structure that is known ahead of time. For the creation of such networks we provide a
``network-bootstrapper`` tool. This tool pre-generates node configuration directories if given the IP addresses/domain
names of each machine in the network. The generated node directories contain the NodeInfos for every other node on
the network, along with the network parameters file and identity certificates. Generated nodes do not need to all be
online at once - an offline node that isn't being interacted with doesn't impact the network in any way. So a test
cluster generated like this can be sized for the maximum size you may need, and then scaled up and down as necessary.
More information can be found in :doc:`setting-up-a-corda-network`.
Network parameters
------------------
Network parameters are a set of values that every node participating in the zone needs to agree on and use to
correctly interoperate with each other. They can be thought of as an encapsulation of all aspects of a Corda deployment
on which reasonable people may disagree. Whilst other blockchain/DLT systems typically require a source code fork to
alter various constants (like the total number of coins in a cryptocurrency, port numbers to use etc), in Corda we
have refactored these sorts of decisions out into a separate file and allow "zone operators" to make decisions about
them. The operator signs a data structure that contains the values and they are distributed along with the network map.
Tools are provided to gain user opt-in consent to a new version of the parameters and ensure everyone switches to them
at the same time.
If the node is using the HTTP network map service then on first startup it will download the signed network parameters,
cache it in a ``network-parameters`` file and apply them on the node.
.. warning:: If the ``network-parameters`` file is changed and no longer matches what the network map service is advertising
then the node will automatically shut down. The resolution is to delete the incorrect file and restart the node so
that the parameters can be downloaded again.
If the node isn't using a HTTP network map service then it's expected the signed file is provided by some other means.
For such a scenario there is the network bootstrapper tool which in addition to generating the network parameters file
also distributes the node info files to the node directories.
The current set of network parameters:
@ -85,16 +117,23 @@ Network parameters update process
---------------------------------
If it is necessary to change the network parameters, the Corda zone operator will start the update process. There are many
reasons that may lead to this decision: adding a notary, setting new fields that were added to enable smooth network
interoperability, or changing an existing compatibility constant, for example.
.. note:: A future release may support the notion of phased rollout of network parameter changes.
To synchronize all nodes in the compatibility zone to use the new set of the network parameters two RPC methods are
provided. The process requires human interaction and approval of the change, so node operators can review the
differences before agreeing to them.
When the update is about to happen the network map service starts to advertise the additional information with the usual network map
data. It includes the new network parameters hash, a description of the change and the update deadline. Nodes query the network map server
for the new set of parameters.
The fact a new set of parameters is being advertised shows up in the node logs with the message
"Downloaded new network parameters", and programs connected via RPC can receive ``ParametersUpdateInfo`` by using
the ``CordaRPCOps.networkParametersFeed`` method. Typically a zone operator would also email node operators to let them
know about the details of the impending change, along with the justification, how to object, deadlines and so on.
.. container:: codeset
@ -103,12 +142,22 @@ node operator about the event.
:start-after: DOCSTART 1
:end-before: DOCEND 1
The node administrator can review the change and decide if they are going to accept it. The approval should be done
before the ``updateDeadline``. Nodes that don't approve before the deadline will likely be removed from the network map by
the zone operator, but that is a decision that is left to the operator's discretion. For example the operator might also
choose to change the deadline instead.
If the network operator starts advertising a different set of new parameters then that new set overrides the previous set.
Only the latest update can be accepted.
To send back parameters approval to the zone operator, the RPC method ``fun acceptNewNetworkParameters(parametersHash: SecureHash)``
has to be called with ``parametersHash`` from the update. Note that approval cannot be undone. You can do this via the Corda
shell (see :doc:`shell`):
``run acceptNewNetworkParameters parametersHash: "ba19fc1b9e9c1c7cbea712efda5f78b53ae4e5d123c89d02c9da44ec50e9c17d"``
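The same can be done programmatically over RPC. Below is a minimal sketch in Java; it assumes an existing ``CordaRPCOps``
connection named ``rpcOps`` obtained elsewhere (for example via ``CordaRPCClient``), and accepts every advertised update
without review, which you would not do in production:

.. sourcecode:: java

    // Watch for advertised parameter updates and send back an acceptance for each one.
    rpcOps.networkParametersFeed().getUpdates().subscribe(update -> {
        System.out.println("New network parameters proposed: " + update.getDescription()
                + ", deadline: " + update.getUpdateDeadline());
        rpcOps.acceptNewNetworkParameters(update.getHash());
    });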
If the administrator does not accept the update then the next time the node polls the network map after the deadline, the
advertised network parameters will be the updated ones. The previous set of parameters will no longer be valid.
At this point the node will automatically shut down and will require the node operator to bring it back again.

View File

@ -48,9 +48,6 @@ The ``version`` property, which defaults to 1, specifies the flow's version. Thi
whenever there is a release of a flow which has changes that are not backwards-compatible. A non-backwards compatible
change is one that changes the interface of the flow.
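For example, a minimal sketch of bumping the version after a non-backwards-compatible change; the flow name and body
are purely illustrative:

.. sourcecode:: java

    // Version 2 of a hypothetical initiating flow. Counterparties can use this number to
    // decide how to talk to us.
    @InitiatingFlow(version = 2)
    public class ExampleFlow extends FlowLogic<Void> {
        private final Party counterparty;

        public ExampleFlow(Party counterparty) {
            this.counterparty = counterparty;
        }

        @Suspendable
        @Override
        public Void call() throws FlowException {
            FlowSession session = initiateFlow(counterparty);
            session.send("ping"); // the sequence of sends/receives forms this flow's interface
            return null;
        }
    }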
Currently, CorDapp developers have to explicitly write logic to handle these flow version numbers. In the future,
however, the platform will use prescribed rules for handling versions.
What defines the interface of a flow?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The flow interface is defined by the sequence of ``send`` and ``receive`` calls between an ``InitiatingFlow`` and an
@ -103,28 +100,19 @@ following behaviour:
How do I upgrade my flows?
~~~~~~~~~~~~~~~~~~~~~~~~~~
For flag-day upgrades, the process is simple.
Assumptions
^^^^^^^^^^^
* All nodes in the business network can be shut down for a period of time
* All nodes retire the old flows and adopt the new flows at the same time
Process
^^^^^^^
1. Update the flow and test the changes. Increment the flow version number in the ``InitiatingFlow`` annotation.
2. Ensure that all versions of the existing flow have finished running and there are no pending ``SchedulableFlows`` on
any of the nodes on the business network. This can be done by *draining the node* (see below).
3. Shut down the node.
4. Replace the existing CorDapp JAR with the CorDapp JAR containing the new flow.
5. Start the node.
From this point onwards, all the nodes will be using the updated flows.
If you shut down all nodes and upgrade them all at the same time, any incompatible change can be made.
In situations where some nodes may still be using previous versions of a flow and thus new versions of your flow may
talk to old versions, the updated flows need to be backwards-compatible. This will be the case for almost any real
deployment in which you cannot easily coordinate the rollout of new code across the network.
How do I ensure flow backwards-compatibility?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -262,37 +250,57 @@ In code, inlined subflows appear as regular ``FlowLogic`` instances without eith
Inlined flows are not versioned, as they inherit the version of their parent ``InitiatingFlow`` or ``InitiatedBy``
flow.
Are there any other considerations?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suspended flows
^^^^^^^^^^^^^^^
Currently, serialised flow state machines persisted in the node's database cannot be updated. All flows must finish
before the updated flow classes are added to the node's plugins folder.
Flows that don't create sessions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Flows which are not an ``InitiatingFlow`` or ``InitiatedBy`` flow, or inlined subflows that are not called from an
``InitiatingFlow`` or ``InitiatedBy`` flow, can be updated without consideration of backwards-compatibility. Flows of
this type include utility flows for querying the vault and flows for reaching out to external systems.
Flow drains
~~~~~~~~~~~
A flow *checkpoint* is a serialised snapshot of the flow's stack frames and any objects reachable from the stack.
Checkpoints are saved to the database automatically when a flow suspends or resumes, which typically happens when
sending or receiving messages. A flow may be replayed from the last checkpoint if the node restarts. Automatic
checkpointing is an unusual feature of Corda and significantly helps developers write reliable code that can survive
node restarts and crashes. It also assists with scaling up, as flows that are waiting for a response can be flushed
from memory.
However, this means that restoring an old checkpoint to a new version of a flow may cause resume failures. For example
if you remove a local variable from a method that previously had one, then the flow engine won't be able to figure out
where to put the stored value of the variable.
For this reason, in currently released versions of Corda you must *drain the node* before doing an app upgrade that
changes ``@Suspendable`` code. A drain blocks new flows from starting but allows existing flows to finish. Thus once
a drain is complete there should be no outstanding checkpoints or running flows. Upgrading the app will then succeed.
A node can be drained or undrained via RPC using the ``setFlowsDrainingModeEnabled`` method, and via the shell using
the standard ``run`` command to invoke the RPC. See :doc:`shell` to learn more.
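For example, a minimal sketch in Java, assuming an existing ``CordaRPCOps`` connection named ``rpcOps``; from the shell
the equivalent would be the ``run`` command invoking the same RPC (the ``enabled`` parameter name shown in the comment
is an assumption):

.. sourcecode:: java

    // Stop new flows from starting so that existing checkpoints can finish
    // (shell equivalent, assumed syntax: run setFlowsDrainingModeEnabled enabled: true).
    rpcOps.setFlowsDrainingModeEnabled(true);

    // ... wait for in-flight flows to complete, upgrade the CorDapp, restart the node ...

    // Allow flows to start again once the upgrade is done.
    rpcOps.setFlowsDrainingModeEnabled(false);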
Contract and state versioning
-----------------------------
There are two types of contract/state upgrade:
1. *Implicit:* By allowing multiple implementations of the contract ahead of time, using constraints. See :doc:`api-contract-constraints` to learn more.
2. *Explicit:* By creating a special *contract upgrade transaction* and getting all participants of a state to sign it using the
contract upgrade flows.
This section of the documentation focuses only on the *explicit* type of upgrade.
In an explicit upgrade contracts and states can be changed in arbitrary ways, if and only if all of the state's
participants agree to the proposed upgrade. The following combinations of upgrades are possible:
* A contract is upgraded while the state definition remains the same.
* A state is upgraded while the contract stays the same.
* The state and the contract are updated simultaneously.
The procedure for updating a state or a contract using a flag-day approach is quite simple:
* Update and test the state or contract.
* Produce a new CorDapp JAR file and distribute it to all the relevant parties.
* Each node operator stops their node, replaces the existing JAR with the new one, and restarts. They may wish to do
a node drain first to avoid the definition of states or contracts changing whilst a flow is in progress.
* Run the contract upgrade authorisation flow for each state that requires updating on every node.
* For each state, one node should run the contract upgrade initiation flow, which will contact the rest.
Update Process
~~~~~~~~~~~~~~

View File

@ -26,6 +26,8 @@ For testing purposes, CorDapps may also include:
In production, a production-ready webserver should be used, and these files should be moved into a different module or
project so that they do not bloat the CorDapp at build time.
.. _cordapp-structure:
Structure
---------
You should base the structure of your project on the Java or Kotlin templates:

View File

@ -228,7 +228,7 @@ internal fun <T : Any> propertiesForSerializationFromConstructor(
// Check that the method has a getter in java.
val getter = matchingProperty.getter ?: throw NotSerializableException(
"Property has no getter method for $name of $clazz. If using Java and the parameter name"
"Property has no getter method for - \"$name\" - of \"$clazz\". If using Java and the parameter name"
+ "looks anonymous, check that you have the -parameters option specified in the "
+ "Java compiler. Alternately, provide a proxy serializer "
+ "(SerializationCustomSerializer) if recompiling isn't an option.")
@ -236,15 +236,15 @@ internal fun <T : Any> propertiesForSerializationFromConstructor(
val returnType = resolveTypeVariables(getter.genericReturnType, type)
if (!constructorParamTakesReturnTypeOfGetter(returnType, getter.genericReturnType, param.value)) {
throw NotSerializableException(
"Property '$name' has type '$returnType' on class '$clazz' but differs from constructor " +
"parameter type '${param.value.type.javaType}'")
"Property - \"$name\" - has type \"$returnType\" on \"$clazz\" but differs from constructor " +
"parameter type \"${param.value.type.javaType}\"")
}
Pair(PublicPropertyReader(getter), returnType)
} else {
val field = classProperties[name]!!.field ?:
throw NotSerializableException("No property matching constructor parameter named '$name' " +
"of '$clazz'. If using Java, check that you have the -parameters option specified " +
throw NotSerializableException("No property matching constructor parameter named - \"$name\" - " +
"of \"$clazz\". If using Java, check that you have the -parameters option specified " +
"in the Java compiler. Alternately, provide a proxy serializer " +
"(SerializationCustomSerializer) if recompiling isn't an option")
@ -252,7 +252,7 @@ internal fun <T : Any> propertiesForSerializationFromConstructor(
}
} else {
throw NotSerializableException(
"Constructor parameter $name doesn't refer to a property of class '$clazz'")
"Constructor parameter - \"$name\" - doesn't refer to a property of \"$clazz\"")
}
this += PropertyAccessorConstructor(

View File

@ -317,7 +317,6 @@ class RPCServer(
context.invocation.pushToLoggingContext()
when (arguments) {
is Try.Success -> {
log.info("SUBMITTING")
rpcExecutor!!.submit {
val result = invokeRpc(context, clientToServer.methodName, arguments.value)
sendReply(clientToServer.replyId, clientToServer.clientAddress, result)

View File

@ -3,6 +3,7 @@ package net.corda.testing.driver
import net.corda.core.concurrent.CordaFuture
import net.corda.core.identity.CordaX500Name
import net.corda.core.internal.CertRole
import net.corda.core.internal.concurrent.transpose
import net.corda.core.internal.div
import net.corda.core.internal.list
import net.corda.core.internal.readLines
@ -11,6 +12,7 @@ import net.corda.core.utilities.getOrThrow
import net.corda.node.internal.NodeStartup
import net.corda.testing.common.internal.ProjectStructure.projectRootDir
import net.corda.testing.core.DUMMY_BANK_A_NAME
import net.corda.testing.core.DUMMY_BANK_B_NAME
import net.corda.testing.core.DUMMY_NOTARY_NAME
import net.corda.testing.http.HttpApi
import net.corda.testing.internal.IntegrationTest
@ -20,12 +22,13 @@ import net.corda.testing.node.NotarySpec
import net.corda.testing.node.internal.addressMustBeBound
import net.corda.testing.node.internal.addressMustNotBeBound
import net.corda.testing.node.internal.internalDriver
import org.assertj.core.api.Assertions.*
import org.json.simple.JSONObject
import org.junit.ClassRule
import org.junit.Test
import java.util.concurrent.Executors
import java.util.concurrent.ScheduledExecutorService
import kotlin.streams.toList
class DriverTests : IntegrationTest() {
companion object {
@ -127,4 +130,39 @@ class DriverTests : IntegrationTest() {
}
assertThat(baseDirectory / "process-id").doesNotExist()
}
@Test
fun `driver rejects multiple nodes with the same name`() {
driver(DriverParameters(startNodesInProcess = true)) {
assertThatThrownBy { listOf(newNode(DUMMY_BANK_A_NAME)(), newNode(DUMMY_BANK_B_NAME)(), newNode(DUMMY_BANK_A_NAME)()).transpose().getOrThrow() }.isInstanceOf(IllegalArgumentException::class.java)
}
}
@Test
fun `driver rejects multiple nodes with the same name parallel`() {
driver(DriverParameters(startNodesInProcess = true)) {
val nodes = listOf(newNode(DUMMY_BANK_A_NAME), newNode(DUMMY_BANK_B_NAME), newNode(DUMMY_BANK_A_NAME))
assertThatThrownBy { nodes.parallelStream().map { it.invoke() }.toList().transpose().getOrThrow() }.isInstanceOf(IllegalArgumentException::class.java)
}
}
@Test
fun `driver allows reusing names of nodes that have been stopped`() {
driver(DriverParameters(startNodesInProcess = true)) {
val nodeA = newNode(DUMMY_BANK_A_NAME)().getOrThrow()
nodeA.stop()
assertThatCode { newNode(DUMMY_BANK_A_NAME)().getOrThrow() }.doesNotThrowAnyException()
}
}
private fun DriverDSL.newNode(name: CordaX500Name) = { startNode(NodeParameters(providedName = name)) }
}

View File

@ -101,6 +101,7 @@ class DriverDSLImpl(
private val cordappPackages = extraCordappPackagesToScan + getCallerPackage()
// Map from a nodes legal name to an observable emitting the number of nodes in its network map.
private val countObservables = mutableMapOf<CordaX500Name, Observable<Int>>()
private val nodeNames = mutableSetOf<CordaX500Name>()
/**
* Future which completes when the network map is available, whether a local one or one from the CZ. This future acts
* as a gate to prevent nodes from starting too early. The value of the future is a [LocalNetworkMap] object, which
@ -186,6 +187,12 @@ class DriverDSLImpl(
val p2pAddress = portAllocation.nextHostAndPort()
// TODO: Derive name from the full picked name, don't just wrap the common name
val name = providedName ?: CordaX500Name("${oneOf(names).organisation}-${p2pAddress.port}", "London", "GB")
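// Node names must be unique within a single driver run; fail fast if this name is already started or starting.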
synchronized(nodeNames) {
val wasANewNode = nodeNames.add(name)
if (!wasANewNode) {
throw IllegalArgumentException("Node with name $name is already started or starting.")
}
}
val registrationFuture = if (compatibilityZone?.rootCert != null) {
// We don't need the network map to be available to be able to register the node
startNodeRegistration(name, compatibilityZone.rootCert, compatibilityZone.url)
@ -642,6 +649,9 @@ class DriverDSLImpl(
val onNodeExit: () -> Unit = {
localNetworkMap?.nodeInfosCopier?.removeConfig(baseDirectory)
countObservables.remove(config.corda.myLegalName)
synchronized(nodeNames) {
nodeNames.remove(config.corda.myLegalName)
}
}
val useHTTPS = config.typesafe.run { hasPath("useHTTPS") && getBoolean("useHTTPS") }