<li class="toctree-l3"><a class="reference internal" href="#persistentkeymanagementservice-and-e2etestkeymanagementservice">PersistentKeyManagementService and E2ETestKeyManagementService</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#messaging-and-network-management-services">Messaging and Network Management Services</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#persistentnetworkmapservice-and-networkmapservice">PersistentNetworkMapService and NetworkMapService</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#storage-and-persistence-related-services">Storage and Persistence Related Services</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#dbtransactionmappingstorage-and-inmemorystatemachinerecordedtransactionmappingstorage">DBTransactionMappingStorage and InMemoryStateMachineRecordedTransactionMappingStorage</a></li>
<li class="toctree-l3"><a class="reference internal" href="#persistentuniquenessprovider-inmemoryuniquenessprovider-and-raftuniquenessprovider">PersistentUniquenessProvider, InMemoryUniquenessProvider and RaftUniquenessProvider</a></li>
</ul>
</li>
<h1>A Brief Introduction To The Node Services<a class="headerlink" href="#a-brief-introduction-to-the-node-services" title="Permalink to this headline">¶</a></h1>
<p>This document is intended as a very brief introduction to the current
service components inside the node. Whilst not at all exhaustive it is
hoped that this will give some context when writing applications and
code that use these services, or which are operated upon by the internal
components of Corda.</p>
<div class="section" id="services-within-the-node">
<h2>Services Within The Node<a class="headerlink" href="#services-within-the-node" title="Permalink to this headline">¶</a></h2>
<p>The node services represent the various sub functions of the Corda node.
Some are directly accessible to contracts and flows through the
<code class="docutils literal"><span class="pre">ServiceHub</span></code>, whilst others are the framework internals used to host
the node functions. Any public service interfaces are defined in the
<code class="docutils literal"><span class="pre">:core</span></code> gradle project in the
<code class="docutils literal"><span class="pre">src/main/kotlin/net/corda/core/node/services</span></code> folder. The
<code class="docutils literal"><span class="pre">ServiceHub</span></code> interface exposes functionality suitable for flows.
The implementation code for all standard services lives in the gradle
<code class="docutils literal"><span class="pre">:node</span></code> project under the <code class="docutils literal"><span class="pre">src/main/kotlin/net/corda/node/services</span></code>
folder. The <code class="docutils literal"><span class="pre">src/main/kotlin/net/corda/node/services/api</span></code> folder
contains declarations for internal-only services and for interoperation
between services.</p>
<p>All the services are constructed in the <code class="docutils literal"><span class="pre">AbstractNode</span></code> <code class="docutils literal"><span class="pre">start</span></code>
method (and the extension in <code class="docutils literal"><span class="pre">Node</span></code>). They may also register a
shutdown handler during initialisation, which will be called in reverse
order to the start registration sequence when <code class="docutils literal"><span class="pre">Node.stop</span></code>
is called.</p>
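<p>The start/stop ordering described above can be sketched with a small model (an illustration only, not the Corda code): handlers pushed onto a stack during start-up are popped in reverse registration order on shutdown.</p>

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of start/stop ordering: handlers registered during start()
// are invoked in reverse order on stop(), mirroring the behaviour
// described for AbstractNode.
public class ServiceLifecycle {
    private final Deque<Runnable> shutdownHandlers = new ArrayDeque<>();
    private final StringBuilder log = new StringBuilder();

    public void start(String name) {
        log.append("start:").append(name).append(';');
        // Push the matching shutdown handler as the service starts.
        shutdownHandlers.push(() -> log.append("stop:").append(name).append(';'));
    }

    public void stop() {
        // Pop handlers so the last service started is the first stopped.
        while (!shutdownHandlers.isEmpty()) shutdownHandlers.pop().run();
    }

    public String log() { return log.toString(); }

    public static String demo() {
        ServiceLifecycle node = new ServiceLifecycle();
        node.start("storage");
        node.start("network");
        node.stop(); // network stops before storage
        return node.log();
    }
}
```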
<p>As well as the standard services, trusted CorDapp plugins may register
custom services. These plugin services are passed a reference to the
<code class="docutils literal"><span class="pre">PluginServiceHub</span></code>, which allows some more powerful functions, e.g.
starting flows.</p>
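<p>One way to picture what a plugin-service registry does is the following hypothetical model (the class and method names here are illustrative, not the Corda API): a service registers a factory keyed by an initiating flow's name, and the node uses it to start the responding flow when a session request arrives.</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical model (not the Corda API): a registry that maps an
// initiating flow's name to a factory producing the responding flow.
public class FlowRegistry {
    private final Map<String, Function<String, String>> factories = new HashMap<>();

    // A trusted plugin registers a factory keyed by the initiator's name.
    public void registerFlowInitiator(String initiatorName, Function<String, String> factory) {
        factories.put(initiatorName, factory);
    }

    // When a session request arrives, the node looks up and starts the flow.
    public String startFlowFor(String initiatorName, String counterparty) {
        Function<String, String> factory = factories.get(initiatorName);
        if (factory == null) throw new IllegalArgumentException("No flow registered for " + initiatorName);
        return factory.apply(counterparty);
    }
}
```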
<p>For unit testing a number of non-persistent, memory only services are
defined in the <code class="docutils literal"><span class="pre">:node</span></code> and <code class="docutils literal"><span class="pre">:test-utils</span></code> projects. The
<code class="docutils literal"><span class="pre">:test-utils</span></code> project also provides an in-memory networking simulation
to allow unit testing of flows and service functions.</p>
<p>The roles of the individual services are described below.</p>
<h2>Key Management and Identity Services<a class="headerlink" href="#key-management-and-identity-services" title="Permalink to this headline">¶</a></h2>
<div class="section" id="inmemoryidentityservice">
<h3>InMemoryIdentityService<a class="headerlink" href="#inmemoryidentityservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">InMemoryIdentityService</span></code> implements the <code class="docutils literal"><span class="pre">IdentityService</span></code>
interface and provides a store of mappings between <code class="docutils literal"><span class="pre">CompositeKey</span></code>s
and remote <code class="docutils literal"><span class="pre">Parties</span></code>. It is automatically populated from
<code class="docutils literal"><span class="pre">NetworkMapCache</span></code> updates and is used when translating the <code class="docutils literal"><span class="pre">CompositeKey</span></code>s
exposed in transactions into fully populated <code class="docutils literal"><span class="pre">Party</span></code> identities. This
service is also used in the default JSON mapping of parties in the web
server, thus allowing the party names to be used to refer to other node
legal identities. In the future the identity service will be made
persistent and extended to allow anonymised session keys to be used in
flows where the well-known <code class="docutils literal"><span class="pre">CompositeKey</span></code> of a node needs to be hidden.</p>
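<p>The key-to-party translation is essentially a pair of maps kept in sync. A minimal sketch (an illustration, not the Corda implementation — keys and parties are simplified to strings):</p>

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the identity lookup: two maps kept in sync, so a key seen
// in a transaction can be resolved to a party name and back again.
public class IdentityLookup {
    private final Map<String, String> keyToParty = new HashMap<>();
    private final Map<String, String> partyToKey = new HashMap<>();

    // Called as network-map-style updates arrive.
    public void registerIdentity(String compositeKey, String partyName) {
        keyToParty.put(compositeKey, partyName);
        partyToKey.put(partyName, compositeKey);
    }

    public String partyFromKey(String compositeKey) { return keyToParty.get(compositeKey); }
    public String keyFromParty(String partyName) { return partyToKey.get(partyName); }
}
```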
<h3>PersistentKeyManagementService and E2ETestKeyManagementService<a class="headerlink" href="#persistentkeymanagementservice-and-e2etestkeymanagementservice" title="Permalink to this headline">¶</a></h3>
<p>Typical usage of these services is to locate an appropriate
<code class="docutils literal"><span class="pre">PrivateKey</span></code> to complete and sign a verified transaction as part of a
flow. The normal node legal identifier keys are typically accessed via
helper extension methods on the <code class="docutils literal"><span class="pre">ServiceHub</span></code>, but these ultimately
fetch the keys from the <code class="docutils literal"><span class="pre">KeyManagementService</span></code>. The
<code class="docutils literal"><span class="pre">KeyManagementService</span></code> interface also allows other keys to be
generated if anonymous keys are needed in a flow. Note that this
interface works at the level of individual <code class="docutils literal"><span class="pre">PublicKey</span></code>/<code class="docutils literal"><span class="pre">PrivateKey</span></code>
pairs, but the signing authority will be represented by a
<code class="docutils literal"><span class="pre">CompositeKey</span></code> on the <code class="docutils literal"><span class="pre">NodeInfo</span></code> to allow key clustering and
threshold schemes.</p>
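<p>The pattern of generating fresh pairs and later locating the private half by its public key can be sketched with plain JDK cryptography (a simplified illustration, not the Corda implementation):</p>

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a key-management store: fresh key pairs are generated
// on demand, and the private key is located by its public half when
// something needs signing.
public class SimpleKeyStore {
    private final Map<PublicKey, PrivateKey> keys = new HashMap<>();

    // Generate and retain a new pair, returning only the public part.
    public PublicKey freshKey() {
        try {
            KeyPair pair = KeyPairGenerator.getInstance("EC").generateKeyPair();
            keys.put(pair.getPublic(), pair.getPrivate());
            return pair.getPublic();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Sign bytes with the private key matching the given public key.
    public byte[] sign(byte[] bytes, PublicKey by) {
        PrivateKey priv = keys.get(by);
        if (priv == null) throw new IllegalArgumentException("Unknown public key");
        try {
            Signature sig = Signature.getInstance("SHA256withECDSA");
            sig.initSign(priv);
            sig.update(bytes);
            return sig.sign();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static boolean verify(byte[] bytes, byte[] signature, PublicKey by) {
        try {
            Signature sig = Signature.getInstance("SHA256withECDSA");
            sig.initVerify(by);
            sig.update(bytes);
            return sig.verify(signature);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```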
<p>The <code class="docutils literal"><span class="pre">PersistentKeyManagementService</span></code> is a persistent implementation of
the <code class="docutils literal"><span class="pre">KeyManagementService</span></code> interface that records the key pairs to a
key-value storage table in the database. <code class="docutils literal"><span class="pre">E2ETestKeyManagementService</span></code>
is a simple implementation of the <code class="docutils literal"><span class="pre">KeyManagementService</span></code> that is used
to track our <code class="docutils literal"><span class="pre">KeyPairs</span></code> for use in unit testing when no database is available.</p>
<h2>Messaging and Network Management Services<a class="headerlink" href="#messaging-and-network-management-services" title="Permalink to this headline">¶</a></h2>
<div class="section" id="artemismessagingserver">
<h3>ArtemisMessagingServer<a class="headerlink" href="#artemismessagingserver" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">ArtemisMessagingServer</span></code> service is run internally by the Corda
node to host the <code class="docutils literal"><span class="pre">ArtemisMQ</span></code> messaging broker that is used for
reliable node communications. The node can, however, be configured to
disable this and connect to a remote broker instead by setting the
<code class="docutils literal"><span class="pre">messagingServerAddress</span></code> configuration to the remote broker
address. (The <code class="docutils literal"><span class="pre">MockNode</span></code> used during testing does not use this
service, and has a simplified in-memory network layer instead.) This
service is not exposed to any CorDapp code as it is an entirely internal
infrastructural component. However, the developer may need to be aware
of this component, because the <code class="docutils literal"><span class="pre">ArtemisMessagingServer</span></code> is responsible
for configuring the network ports (based upon settings in <code class="docutils literal"><span class="pre">node.conf</span></code>).
The service also configures the security settings of the <code class="docutils literal"><span class="pre">ArtemisMQ</span></code>
middleware and acts to form bridges between node mailbox queues based
upon connection details advertised by the <code class="docutils literal"><span class="pre">NetworkMapService</span></code>. The
<code class="docutils literal"><span class="pre">ArtemisMQ</span></code> broker is configured to use TLS 1.2 with a custom
<code class="docutils literal"><span class="pre">TrustStore</span></code> containing a Corda root certificate and a <code class="docutils literal"><span class="pre">KeyStore</span></code>
with a certificate and key signed by a chain back to this root
certificate. These keystores typically reside in the <code class="docutils literal"><span class="pre">certificates</span></code>
sub folder of the node workspace. For the nodes to be able to connect to
each other it is essential that the entire set of nodes are able to
authenticate against each other, and thus typically that they share a
common root certificate. Also note that the address configuration
defined for the server is the basis for the address advertised in the
<code class="docutils literal"><span class="pre">NetworkMapService</span></code> and thus must be externally connectable by all nodes
in the network.</p>
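<p>For orientation, a configuration fragment pointing the node at an external broker might look like the following (the key name comes from this document; the host and port value is purely illustrative):</p>

```
// Illustrative node.conf fragment: use a remote ArtemisMQ broker instead
// of the built-in one. The address value here is hypothetical.
messagingServerAddress : "broker.example.com:11000"
```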
</div>
<div class="section" id="nodemessagingclient">
<h3>NodeMessagingClient<a class="headerlink" href="#nodemessagingclient" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NodeMessagingClient</span></code> is the implementation of the
<code class="docutils literal"><span class="pre">MessagingService</span></code> interface operating across the <code class="docutils literal"><span class="pre">ArtemisMQ</span></code>
middleware layer. It typically connects to the local <code class="docutils literal"><span class="pre">ArtemisMQ</span></code>
hosted within the <code class="docutils literal"><span class="pre">ArtemisMessagingServer</span></code> service. However, the
<code class="docutils literal"><span class="pre">messagingServerAddress</span></code> configuration can be set to a remote broker
address if required. The responsibilities of this service include
managing the node’s persistent mailbox, sending messages to remote peer
nodes, acknowledging properly consumed messages and deduplicating any
resent messages. The service also handles the incoming requests from new
RPC client sessions and hands them to the <code class="docutils literal"><span class="pre">CordaRPCOpsImpl</span></code> to carry
out the requests.</p>
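<p>The deduplication responsibility mentioned above amounts to tracking message ids and dropping redeliveries. A minimal sketch of the idea (an illustration, not the Corda implementation):</p>

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of message deduplication: each message carries a unique id,
// and redelivered ids are dropped before the message is handed on.
public class DedupingReceiver {
    private final Set<String> seenIds = new HashSet<>();
    private final List<String> delivered = new ArrayList<>();

    // Returns true if the message was delivered, false if it was a duplicate.
    public boolean receive(String messageId, String payload) {
        if (!seenIds.add(messageId)) return false; // resent message, drop it
        delivered.add(payload);
        return true;
    }

    public List<String> delivered() { return delivered; }
}
```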
</div>
<div class="section" id="inmemorynetworkmapcache">
<h3>InMemoryNetworkMapCache<a class="headerlink" href="#inmemorynetworkmapcache" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">InMemoryNetworkMapCache</span></code> implements the <code class="docutils literal"><span class="pre">NetworkMapCache</span></code>
interface and is responsible for tracking the identities and advertised
services of authorised nodes provided by the remote
<code class="docutils literal"><span class="pre">NetworkMapService</span></code>. Typical use is to search for nodes hosting
specific advertised services, e.g. a Notary service, or an Oracle
service. Also, this service allows mapping of friendly names, or
<code class="docutils literal"><span class="pre">Party</span></code> identities, to the full <code class="docutils literal"><span class="pre">NodeInfo</span></code>, which is used in the
<code class="docutils literal"><span class="pre">StateMachineManager</span></code> to convert between the <code class="docutils literal"><span class="pre">CompositeKey</span></code>, or
<code class="docutils literal"><span class="pre">Party</span></code> based addressing used in the flows/contracts and the
physical host and port information required for the physical messaging layer.</p>
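<p>Both lookups described above — resolving a party name to a physical address, and searching for nodes advertising a service — can be modelled with a simple directory (an illustration, not the Corda API; the service id string is hypothetical):</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the network map cache: NodeInfo-like records are indexed so
// a flow-level party name can be resolved to a physical address, and nodes
// advertising a given service (e.g. a notary) can be searched for.
public class NetworkDirectory {
    static class NodeEntry {
        final String party;
        final String hostAndPort;
        final List<String> advertisedServices;
        NodeEntry(String party, String hostAndPort, String... services) {
            this.party = party;
            this.hostAndPort = hostAndPort;
            this.advertisedServices = Arrays.asList(services);
        }
    }

    private final Map<String, NodeEntry> byParty = new HashMap<>();

    public void addNode(String party, String hostAndPort, String... services) {
        byParty.put(party, new NodeEntry(party, hostAndPort, services));
    }

    public String addressOf(String party) { return byParty.get(party).hostAndPort; }

    public List<String> partiesAdvertising(String service) {
        List<String> result = new ArrayList<>();
        for (NodeEntry entry : byParty.values())
            if (entry.advertisedServices.contains(service)) result.add(entry.party);
        return result;
    }
}
```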
<h3>PersistentNetworkMapService and NetworkMapService<a class="headerlink" href="#persistentnetworkmapservice-and-networkmapservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NetworkMapService</span></code> is a node internal component responsible for
managing and communicating the directory of authenticated registered
nodes and advertised services in the Corda network. Only a single node
in the network (in future this will be a clustered service) should host
the <code class="docutils literal"><span class="pre">NetworkMapService</span></code> implementation. All other Corda nodes initiate
their remote connection to the <code class="docutils literal"><span class="pre">NetworkMapService</span></code> early in the
start-up sequence and wait to synchronise their local
<code class="docutils literal"><span class="pre">NetworkMapCache</span></code> before activating any flows. For the
<code class="docutils literal"><span class="pre">PersistentNetworkMapService</span></code>, registered <code class="docutils literal"><span class="pre">NodeInfo</span></code> data is
persisted and will include nodes that are not currently active. The
networking layer will persist any messages directed at such inactive
nodes with the expectation that they will be delivered eventually, or
else that the source flow will be terminated by admin intervention.
An <code class="docutils literal"><span class="pre">InMemoryNetworkMapService</span></code> is also available for unit tests
without a database.</p>
<p>The <code class="docutils literal"><span class="pre">NetworkMapService</span></code> should not be used by any flows, or
contracts. Instead they should access the <code class="docutils literal"><span class="pre">NetworkMapCache</span></code> service to obtain this information.</p>
<h2>Storage and Persistence Related Services<a class="headerlink" href="#storage-and-persistence-related-services" title="Permalink to this headline">¶</a></h2>
<div class="section" id="storageserviceimpl">
<h3>StorageServiceImpl<a class="headerlink" href="#storageserviceimpl" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">StorageServiceImpl</span></code> service simply holds references to the various
persistence related services and provides a single grouped interface on
the <code class="docutils literal"><span class="pre">ServiceHub</span></code>.</p>
</div>
<div class="section" id="dbcheckpointstorage">
<h3>DBCheckpointStorage<a class="headerlink" href="#dbcheckpointstorage" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">DBCheckpointStorage</span></code> service is used from within the
<code class="docutils literal"><span class="pre">StateMachineManager</span></code> code to persist the progress of flows, thus
ensuring that if the program terminates the flow can be restarted
from the same point and completed. This service should not be used directly by any CorDapp components.</p>
<h3>DBTransactionMappingStorage and InMemoryStateMachineRecordedTransactionMappingStorage<a class="headerlink" href="#dbtransactionmappingstorage-and-inmemorystatemachinerecordedtransactionmappingstorage" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">DBTransactionMappingStorage</span></code> is used within the
<code class="docutils literal"><span class="pre">StateMachineManager</span></code> code to relate transactions and flows. This
relationship is exposed in the eventing interface to the RPC clients,
thus allowing them to track the end result of a flow and map to the
actual transactions/states completed. Otherwise this service is unlikely
to be accessed by any CorDapps. The
<code class="docutils literal"><span class="pre">InMemoryStateMachineRecordedTransactionMappingStorage</span></code> service is
available as a non-persistent implementation for unit tests with no database.</p>
</div>
<div class="section" id="dbtransactionstorage">
<h3>DBTransactionStorage<a class="headerlink" href="#dbtransactionstorage" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">DBTransactionStorage</span></code> service is a persistent implementation of
the <code class="docutils literal"><span class="pre">TransactionStorage</span></code> interface and allows flows read-only
access to full transactions, plus transaction level event callbacks.
Storage of new transactions must be made via the <code class="docutils literal"><span class="pre">recordTransactions</span></code>
method on the <code class="docutils literal"><span class="pre">ServiceHub</span></code>, not via a direct call to this service, so
that the various event notifications can occur.</p>
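<p>The reason writes must go through the recording method is that recording both stores the transaction and fires the registered callbacks. A minimal sketch of that pattern (an illustration, not the Corda code):</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of transaction storage with event callbacks: recording a
// transaction both stores it and notifies registered listeners, which is
// why writes must go through recordTransaction rather than the raw map.
public class TxStore {
    private final Map<String, String> transactions = new HashMap<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void onNewTransaction(Consumer<String> listener) { listeners.add(listener); }

    public void recordTransaction(String txId, String txData) {
        transactions.put(txId, txData);
        for (Consumer<String> listener : listeners) listener.accept(txId);
    }

    public String lookup(String txId) { return transactions.get(txId); }
}
```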
</div>
<div class="section" id="nodeattachmentservice">
<h3>NodeAttachmentService<a class="headerlink" href="#nodeattachmentservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NodeAttachmentService</span></code> provides an implementation of the
<code class="docutils literal"><span class="pre">AttachmentStorage</span></code> interface exposed on the <code class="docutils literal"><span class="pre">ServiceHub</span></code>, allowing
transactions to add documents, copies of the contract code and binary
data to transactions. The data is persisted to the local file system
inside the attachments subfolder of the node workspace. The service is
also interfaced to by the web server, which allows files to be uploaded.</p>
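<p>Attachment storage is content-addressed: a blob is identified by the hash of its bytes, so identical content always yields the same id. A simplified sketch of the idea (not the Corda implementation):</p>

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of content-addressed attachment storage: each uploaded blob is
// stored under the hex SHA-256 hash of its bytes, so the same content
// always maps to the same attachment id.
public class AttachmentStore {
    private final Map<String, byte[]> blobs = new HashMap<>();

    public String importAttachment(byte[] bytes) {
        String id = sha256Hex(bytes);
        blobs.putIfAbsent(id, bytes.clone()); // re-uploads are idempotent
        return id;
    }

    public byte[] open(String id) { return blobs.get(id); }

    static String sha256Hex(byte[] bytes) {
        try {
            StringBuilder hex = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-256").digest(bytes))
                hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```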
<h2>Flow Framework And Event Scheduling Services<a class="headerlink" href="#flow-framework-and-event-scheduling-services" title="Permalink to this headline">¶</a></h2>
<div class="section" id="statemachinemanager">
<h3>StateMachineManager<a class="headerlink" href="#statemachinemanager" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">StateMachineManager</span></code> is the service that runs the active
flows of the node, whether initiated by an RPC client, the web
interface, a scheduled state activity, or triggered by receipt of a
message from another node. The <code class="docutils literal"><span class="pre">StateMachineManager</span></code> wraps the
flow code (extensions of the <code class="docutils literal"><span class="pre">FlowLogic</span></code> class) inside an
instance of the <code class="docutils literal"><span class="pre">FlowStateMachineImpl</span></code> class, which is a
<code class="docutils literal"><span class="pre">Quasar</span></code> <code class="docutils literal"><span class="pre">Fiber</span></code>. This allows the <code class="docutils literal"><span class="pre">StateMachineManager</span></code> to suspend
flows at all key lifecycle points and persist their serialized state
to the database via the <code class="docutils literal"><span class="pre">DBCheckpointStorage</span></code> service. This relies on
the facilities of the <code class="docutils literal"><span class="pre">Quasar</span></code> <code class="docutils literal"><span class="pre">Fibers</span></code> library, and hence the
requirement for the node to run the <code class="docutils literal"><span class="pre">Quasar</span></code>
java instrumentation agent in its JVM.</p>
<p>In operation the <code class="docutils literal"><span class="pre">StateMachineManager</span></code> typically runs an active
flow on its server thread until it encounters a blocking, or
externally visible operation, such as sending a message, waiting for a
message, or initiating a <code class="docutils literal"><span class="pre">subFlow</span></code>. The fiber is then suspended
and its stack frames serialized to the database, thus ensuring that if
the node is stopped, or crashes at this point, the flow will restart
with exactly the same action again. To further ensure consistency, every
event which resumes a flow opens a database transaction, which is
committed during this suspension process, ensuring that database
modifications (e.g. state commits) stay in sync with the mutating changes
of the flow. Having recorded the fiber state, the
<code class="docutils literal"><span class="pre">StateMachineManager</span></code> then carries out the network actions as required
(internally one flow message exchanged may actually involve several
physical session messages to authenticate and invoke registered
flows on the remote nodes). The flow will stay suspended until
the required message is returned, and the scheduler will resume
processing of other activated flows. On receipt of the expected
response message from the network layer the <code class="docutils literal"><span class="pre">StateMachineManager</span></code>
locates the appropriate flow, resuming it immediately after the
blocking step with the received message. Thus from the perspective of
the flow the code executes as a simple linear progression of
processing, even if there were node restarts and possibly message
resends (the messaging layer deduplicates messages based on an id that
is part of the checkpoint).</p>
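<p>The suspend-at-a-blocking-step, resume-after-it behaviour can be reduced to a very small model (a drastic simplification: the real node serializes whole Quasar fiber stacks, whereas here the "checkpoint" is just the index of the step the flow blocked on):</p>

```java
import java.util.List;

// Toy model of checkpointed flow execution: run steps until the first
// blocking "receive" step, return the checkpoint to persist, and resume
// immediately after the blocking step once the message has arrived.
public class CheckpointedFlow {
    // Returns the index of the step we suspended on, or steps.size() if the
    // flow ran to completion. Completed side effects are appended to log.
    public static int runFrom(int checkpoint, List<String> steps, List<String> log) {
        for (int i = checkpoint; i < steps.size(); i++) {
            if (steps.get(i).startsWith("receive")) return i; // suspend here
            log.add(steps.get(i));
        }
        return steps.size();
    }
}
```

Resuming from the saved checkpoint replays nothing: earlier steps are not re-executed, which is what makes execution look like a simple linear progression to the flow even across restarts.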
<p>The <code class="docutils literal"><span class="pre">StateMachineManager</span></code> service is not directly exposed to the
flows, or contracts themselves.</p>
</div>
<div class="section" id="nodeschedulerservice">
<h3>NodeSchedulerService<a class="headerlink" href="#nodeschedulerservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NodeSchedulerService</span></code> implements the <code class="docutils literal"><span class="pre">SchedulerService</span></code>
interface and monitors the Vault updates to track any new states that
implement the <code class="docutils literal"><span class="pre">SchedulableState</span></code> interface and require automatic
scheduled flow initiation. At the scheduled due time the
<code class="docutils literal"><span class="pre">NodeSchedulerService</span></code> will create a new flow instance, passing it
a reference to the state that triggered the event. The flow can then
begin whatever action is required. Note that the scheduled activity
occurs in all nodes holding the state in their Vault; it may therefore
be required for the flow to exit early if the current node is not the intended initiator.</p>
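<p>At its core the scheduler tracks a due time per schedulable state and always wakes for the earliest one. A minimal sketch (an illustration, not the Corda API; the state reference names are hypothetical):</p>

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Toy model of scheduled-state tracking: each schedulable state carries a
// due time, and the scheduler wakes for the earliest one.
public class StateScheduler {
    private final Map<String, Instant> dueTimes = new HashMap<>();

    public void track(String stateRef, Instant dueAt) { dueTimes.put(stateRef, dueAt); }

    // The state the scheduler should fire for next, or null if none tracked.
    public String nextDue() {
        String earliest = null;
        for (Map.Entry<String, Instant> e : dueTimes.entrySet())
            if (earliest == null || e.getValue().isBefore(dueTimes.get(earliest)))
                earliest = e.getKey();
        return earliest;
    }
}
```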
<h3>PersistentUniquenessProvider, InMemoryUniquenessProvider and RaftUniquenessProvider<a class="headerlink" href="#persistentuniquenessprovider-inmemoryuniquenessprovider-and-raftuniquenessprovider" title="Permalink to this headline">¶</a></h3>
<p>These variants of the <code class="docutils literal"><span class="pre">UniquenessProvider</span></code> service are used by the notary
flows to track consumed states and thus reject double-spend
scenarios. The <code class="docutils literal"><span class="pre">InMemoryUniquenessProvider</span></code> is for unit testing only,
the default being the <code class="docutils literal"><span class="pre">PersistentUniquenessProvider</span></code>, which records the
changes to the database. When the Raft based notary is active the states are
tracked by the whole cluster using a <code class="docutils literal"><span class="pre">RaftUniquenessProvider</span></code>. Outside
of the notary flows themselves this service should not be accessed by any CorDapp components.</p>
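<p>The double-spend check itself is a first-committer-wins rule over consumed state references. A minimal sketch of that rule (an illustration, not the Corda implementation):</p>

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a uniqueness (notary) check: the first transaction to
// consume a state wins; any later, different transaction is reported as a
// conflict (a double spend).
public class UniquenessProvider {
    private final Map<String, String> consumedBy = new HashMap<>();

    // Returns null on success, or the id of the conflicting transaction.
    public String commit(String stateRef, String txId) {
        String previous = consumedBy.putIfAbsent(stateRef, txId);
        return (previous == null || previous.equals(txId)) ? null : previous;
    }
}
```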
<h3>NotaryService (SimpleNotaryService, ValidatingNotaryService, RaftValidatingNotaryService)<a class="headerlink" href="#notaryservice-simplenotaryservice-validatingnotaryservice-raftvalidatingnotaryservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NotaryService</span></code> is an abstract base class for the various concrete
implementations of the Notary server flow. By default, a node does
not run any <code class="docutils literal"><span class="pre">NotaryService</span></code> server component. However, the appropriate
implementation service is automatically started if the relevant
<code class="docutils literal"><span class="pre">ServiceType</span></code> id is included in the node’s
<code class="docutils literal"><span class="pre">extraAdvertisedServiceIds</span></code> configuration property. The node will then
advertise itself as a Notary via the <code class="docutils literal"><span class="pre">NetworkMapService</span></code> and may then
participate in controlling state uniqueness when contacted by nodes
using the <code class="docutils literal"><span class="pre">NotaryFlow.Client</span></code> <code class="docutils literal"><span class="pre">subFlow</span></code>. The
<code class="docutils literal"><span class="pre">SimpleNotaryService</span></code> only offers protection against double spend, but
does no further verification. The <code class="docutils literal"><span class="pre">ValidatingNotaryService</span></code> checks
that proposed transactions are correctly signed by all keys listed in
the commands and runs the contract verify to ensure that the rules of
the state transition are being followed. The
<code class="docutils literal"><span class="pre">RaftValidatingNotaryService</span></code> further extends the flow to operate
against a cluster of nodes running shared consensus state across the
Raft protocol (note this requires additional cluster configuration).</p>
<h2>Vault Related Services<a class="headerlink" href="#vault-related-services" title="Permalink to this headline">¶</a></h2>
<div class="section" id="nodevaultservice">
<h3>NodeVaultService<a class="headerlink" href="#nodevaultservice" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">NodeVaultService</span></code> implements the <code class="docutils literal"><span class="pre">VaultService</span></code> interface to
allow access to the node’s own set of unconsumed states. The service
does this by tracking update notifications from the
<code class="docutils literal"><span class="pre">TransactionStorage</span></code> service and processing relevant updates to delete
consumed states and insert new states. The resulting update is then
persisted to the database. The <code class="docutils literal"><span class="pre">VaultService</span></code> then exposes query and
event notification APIs to flows and CorDapp plugins to allow them
to respond to updates, or query for states meeting various conditions to
begin the formation of new transactions consuming them. The equivalent
services are also forwarded to RPC clients, so that they may track and display the vault state too.</p>
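<p>The consume-and-insert processing described above reduces to a simple set operation per transaction notification. A minimal sketch (an illustration, not the Corda implementation):</p>

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of vault update processing: each transaction notification
// removes the consumed states and adds the produced ones, leaving the set
// of currently unconsumed states.
public class Vault {
    private final Set<String> unconsumed = new HashSet<>();

    public void notifyUpdate(Set<String> consumed, Set<String> produced) {
        unconsumed.removeAll(consumed);
        unconsumed.addAll(produced);
    }

    public Set<String> unconsumedStates() { return unconsumed; }
}
```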
<h3>NodeSchemaService and HibernateObserver<a class="headerlink" href="#nodeschemaservice-and-hibernateobserver" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span class="pre">HibernateObserver</span></code> runs within the node framework and listens for
vault state updates; it then uses the mapping
services of the <code class="docutils literal"><span class="pre">NodeSchemaService</span></code> to record the states in auxiliary
database tables. This allows Corda state updates to be exposed to
external legacy systems by insertion of unpacked data into existing
tables. To enable these features the contract state must implement the
<code class="docutils literal"><span class="pre">QueryableState</span></code> interface to define the mappings.</p>