Mirror of https://github.com/corda/corda.git (synced 2025-06-19 07:38:22 +00:00)
Merge Open Source to Enterprise (#79)
* Check array size before accessing
* Review fixes
* CORDA-540: Make Verifier work in AMQP mode (#1870)
* Reference the finance module via a non-hardcoded group ID (#1515): a generic way to reference the group ID when loading finance.jar via a CorDapp
* Fixed the node shell to work with the DataFeed class
* Attempt to make NodeStatePersistenceTests more stable (#1895) by ensuring that the nodes are properly started and aware of each other before firing any flows through them. Also minor refactoring.
* Disable unstable test on Windows (#1899)
* CORDA-530 Don't soft-lock non-fungible states (#1794); don't run the unlock query if nothing was locked
* Constructors should not have side-effects
* [CORDA-442] Let Driver run without a network map (#1890):
  - Nodes started by the driver run without a network map node.
  - The driver no longer takes a networkMapStartStrategy.
  - A new configuration parameter, "noNetworkMapServiceMode", allows a node to neither be a network map node nor connect to one.
  - The driver now waits for each node to write its own NodeInfo file to disk and then copies it into every other node.
  - When the driver starts a node N, it waits for every node to have N nodes in its network map.
  Note: the code to copy NodeInfo files around was already in DemoBench; the NodeInfoFilesCopier class was just moved from DemoBench into core (I'm very open to core not being the best place, please advise).
* Added missing cordappPackage dependencies. (#1894)
* Eliminate circular dependency of NodeSchedulerService on ServiceHub. (#1891)
* Update customSchemas documentation. (#1902)
* [CORDA-694] Commands visibility for Oracles, without sacrificing privacy (#1835): new checkCommandVisibility feature for Oracles
* CORDA-599 PersistentNetworkMapCache no longer circularly depends on SH (#1652)
* CORDA-725 Change the AMQP identifier to the officially assigned value. This changes our header format, so pre-cached test files need regenerating; changelog updated.
* CORDA-680 Update cordapp packages documentation (#1901)
* Introduce MockNetworkParameters
* Cordformation rewritten in Kotlin (#1873), plus review comments
* CORDA-704: Implement the `@DoNotImplement` annotation (#1903): enhance the API Scanner plugin to monitor class annotations, implement and apply `@DoNotImplement`, update the API definition and the API change detection to handle it, and document the annotation.
* Experimental support for PostgreSQL (#1525):
  - Cash selection refactored so that third-party DB providers only need to implement the coin-selection SQL logic; the PostgreSQL CashSelection is done using window functions and a PreparedStatement in CashSelectionPostgreSQLImpl, on top of the new refactored AbstractCashSelection.
  - Use the JDBC ResultSet getBlob() and a custom serializer to address a concern raised by tomtau in the PR.
  - Moved the postgresql version information into corda/build.gradle.
  - Includes PR review feedback from VK, fixes for JUnit tests broken by the rebase from master, and a re-added debug logging statement.
* Retire MockServiceHubInternal (#1909); introduce rigorousMock; add test-utils and node-driver to the generated documentation
* Fix-up: Bank Of Corda sample (#1912). In the previous version, running with `--role ISSUER` failed to start: although `quantity` and `currency` were optional, an unnecessary `requestParams` was constructed regardless.
* Move the SMM; interface changes for multi-threading
* CORDA-351: Added the dependency-check plugin to the Gradle build script (#1911), with a suppression stub file example and a suppresionFile property
* CORDA-435 Ensure Kryo-only tests use a Kryo serialization context; also correct lambda typos (from "lamba")
* Network map service REST API wrapper (#1907): network map client (WIP), Javadoc and docsite documentation, removed the javax.ws dependency, renamed NetworkParameter to NetworkParameters, moved the network map client into the node, fixed the Jetty test dependencies, and addressed PR issues and a unit test fix
* Fixing the Bank-Of-Corda demo in `master` (#1922): use the correct CorDapp packages to scan (cherry picked from commit 2caa134)
* Set adequate permissions for the nodes so that Node Explorer can connect (cherry picked from commit ae88242)
* Set adequate permissions for the nodes so that Node Explorer can connect (cherry picked from commit ae88242)
* Correct run configuration; fix up port numbers
* CORDA-435 AMQP serialisation cannot work with private vals: they won't be reported as properties by the introspector, so we will fail to find a constructor for them. This makes sense, as we would be unable to serialise an object whose members we cannot read.
* CORDA-435 AMQP enablement fixes: AMQP has different serialization rules from Kryo in the way we introspect objects to work out how to construct them
* [CORDA-442] Make MockNetwork not start a network map node (#1908). MockNetwork now puts the appropriate NodeInfos into each running node's networkMapCache. Tests around network map node startup and interaction have been removed since they were relying on MockNetwork.
* Minor fix for the API checker script to support macOS
* Retrofit changes from Enterprise PR #61 (#1934)
* Introduce MockNodeParameters/Args (#1923)
* CORDA-736 Add some new features to corda.jar via node.conf for testing (#1926)
* CORDA-699 Add injection or modification of in-memory network messages (#1920)
* Updated the API stability changeset to reflect the new schema attribute name.
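To make the driver change above concrete, here is a minimal Kotlin sketch of what a driver-based test looks like after this merge, based on the test diffs further below. ALICE and DUMMY_BANK_A are identities from Corda's test utilities; the exact imports and parameters are illustrative and may differ between versions.

    import net.corda.core.utilities.getOrThrow
    import net.corda.testing.ALICE
    import net.corda.testing.DUMMY_BANK_A
    import net.corda.testing.driver.driver

    fun main() {
        // No networkMapStartStrategy and no dedicated network map node any more:
        // the driver copies each node's NodeInfo file to the other nodes and waits
        // until they all see each other before the node handles become available.
        driver(startNodesInProcess = true) {
            val (alice, bank) = listOf(
                    startNode(providedName = ALICE.name),
                    startNode(providedName = DUMMY_BANK_A.name)
            ).map { it.getOrThrow() }
            // Both handles are usable immediately; flows can be fired through either node.
        }
    }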
@@ -137,6 +137,9 @@ dependencies {
// For H2 database support in persistence
compile "com.h2database:h2:$h2_version"

// For Postgres database support in persistence
compile "org.postgresql:postgresql:$postgresql_version"

// SQL connection pooling library
compile "com.zaxxer:HikariCP:2.5.1"

@@ -182,6 +185,17 @@ dependencies {
smokeTestCompile project(':smoke-test-utils')
smokeTestCompile "org.assertj:assertj-core:${assertj_version}"
smokeTestCompile "junit:junit:$junit_version"

// Jetty dependencies for NetworkMapClient test.
// Web stuff: for HTTP[S] servlets
testCompile "org.eclipse.jetty:jetty-servlet:${jetty_version}"
testCompile "org.eclipse.jetty:jetty-webapp:${jetty_version}"
testCompile "javax.servlet:javax.servlet-api:3.1.0"

// Jersey for JAX-RS implementation for use in Jetty
testCompile "org.glassfish.jersey.core:jersey-server:${jersey_version}"
testCompile "org.glassfish.jersey.containers:jersey-container-servlet-core:${jersey_version}"
testCompile "org.glassfish.jersey.containers:jersey-container-jetty-http:${jersey_version}"
}

task integrationTest(type: Test) {
@@ -31,7 +31,10 @@ task buildCordaJAR(type: FatCapsule, dependsOn: project(':node').compileJava) {
"$rootDir/config/dev/log4j2.xml"
)
from 'NOTICE' // Copy CDDL notice

from { project(':node').configurations.runtime.allDependencies.matching { // Include config library JAR.
it.group.equals("com.typesafe") && it.name.equals("config")
}.collect { zipTree(project(':node').configurations.runtime.files(it).first()) } }
from { "$rootDir/node/build/resources/main/reference.conf" }

capsuleManifest {
applicationVersion = corda_release_version
@@ -47,6 +50,7 @@ task buildCordaJAR(type: FatCapsule, dependsOn: project(':node').compileJava) {
// JVM configuration:
// - Constrain to small heap sizes to ease development on low end devices.
// - Switch to the G1 GC which is going to be the default in Java 9 and gives low pause times/string dedup.
// NOTE: these can be overridden in node.conf.
//
// If you change these flags, please also update Driver.kt
jvmArgs = ['-Xmx200m', '-XX:+UseG1GC']
@@ -14,7 +14,6 @@ import net.corda.nodeapi.internal.ServiceType
import net.corda.testing.ALICE
import net.corda.testing.ProjectStructure.projectRootDir
import net.corda.testing.driver.ListenProcessDeathException
import net.corda.testing.driver.NetworkMapStartStrategy
import net.corda.testing.driver.driver
import org.assertj.core.api.Assertions.assertThat
import org.assertj.core.api.Assertions.assertThatThrownBy
@@ -59,7 +58,7 @@ class BootTests {
@Test
fun `node quits on failure to register with network map`() {
val tooManyAdvertisedServices = (1..100).map { ServiceInfo(ServiceType.notary.getSubType("$it")) }.toSet()
driver(networkMapStartStrategy = NetworkMapStartStrategy.Nominated(ALICE.name)) {
driver {
val future = startNode(providedName = ALICE.name)
assertFailsWith(ListenProcessDeathException::class) { future.getOrThrow() }
}
@@ -1,7 +1,6 @@
package net.corda.node

import com.google.common.base.Stopwatch
import net.corda.testing.driver.NetworkMapStartStrategy
import net.corda.testing.driver.driver
import org.junit.Ignore
import org.junit.Test
@@ -14,8 +13,7 @@ class NodeStartupPerformanceTests {
// Measure the startup time of nodes. Note that this includes an RPC roundtrip, which causes e.g. Kryo initialisation.
@Test
fun `single node startup time`() {
driver(networkMapStartStrategy = NetworkMapStartStrategy.Dedicated(startAutomatically = false)) {
startDedicatedNetworkMapService().get()
driver {
val times = ArrayList<Long>()
for (i in 1..10) {
val time = Stopwatch.createStarted().apply {
@@ -1,5 +1,6 @@
package net.corda.node.services

import com.nhaarman.mockito_kotlin.doReturn
import com.nhaarman.mockito_kotlin.whenever
import net.corda.core.contracts.AlwaysAcceptAttachmentConstraint
import net.corda.core.contracts.ContractState
@@ -29,6 +30,7 @@ import net.corda.testing.contracts.DummyContract
import net.corda.testing.dummyCommand
import net.corda.testing.getDefaultNotary
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNodeParameters
import org.junit.After
import org.junit.Test
import java.nio.file.Paths
@@ -56,10 +58,10 @@ class BFTNotaryServiceTests {
clusterName)
val clusterAddresses = replicaIds.map { NetworkHostAndPort("localhost", 11000 + it * 10) }
replicaIds.forEach { replicaId ->
mockNet.createNode(configOverrides = {
mockNet.createNode(MockNodeParameters(configOverrides = {
val notary = NotaryConfig(validating = false, bftSMaRt = BFTSMaRtConfiguration(replicaId, clusterAddresses, exposeRaces = exposeRaces))
whenever(it.notary).thenReturn(notary)
})
doReturn(notary).whenever(it).notary
}))
}
mockNet.runNetwork() // Exchange initial network map registration messages.
}
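The change from whenever(it.notary).thenReturn(notary) to doReturn(notary).whenever(it).notary is the usual Mockito idiom for stubbing without first invoking the method being stubbed; that matters once the stricter rigorousMock style mentioned in the commit message is in play, since calling an unstubbed getter while setting up the stub is exactly what a strict mock objects to. A small self-contained sketch of the idiom follows (ExampleConfig is a hypothetical stand-in, not a Corda type):

    import com.nhaarman.mockito_kotlin.doReturn
    import com.nhaarman.mockito_kotlin.mock
    import com.nhaarman.mockito_kotlin.whenever

    interface ExampleConfig {
        val notaryName: String
    }

    fun stubbedConfig(): ExampleConfig {
        val config = mock<ExampleConfig>()
        // doReturn(...).whenever(mock).property stubs the getter without calling it,
        // unlike whenever(mock.property).thenReturn(...), which evaluates the getter first.
        doReturn("ExampleNotary").whenever(config).notaryName
        return config
    }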
@@ -8,6 +8,7 @@ import net.corda.core.internal.div
import net.corda.core.node.NodeInfo
import net.corda.core.node.services.KeyManagementService
import net.corda.node.services.identity.InMemoryIdentityService
import net.corda.nodeapi.NodeInfoFilesCopier
import net.corda.testing.ALICE
import net.corda.testing.ALICE_KEY
import net.corda.testing.DEV_TRUST_ROOT
@@ -42,7 +43,6 @@ class NodeInfoWatcherTest : NodeBasedTest() {
lateinit var nodeInfoWatcher: NodeInfoWatcher

companion object {
val nodeInfoFileRegex = Regex("nodeInfo\\-.*")
val nodeInfo = NodeInfo(listOf(), listOf(getTestPartyAndCertificate(ALICE)), 0, 0)
}

@@ -56,13 +56,14 @@ class NodeInfoWatcherTest : NodeBasedTest() {

@Test
fun `save a NodeInfo`() {
assertEquals(0, folder.root.list().filter { it.matches(nodeInfoFileRegex) }.size)
assertEquals(0,
folder.root.list().filter { it.startsWith(NodeInfoFilesCopier.NODE_INFO_FILE_NAME_PREFIX) }.size)
NodeInfoWatcher.saveToFile(folder.root.toPath(), nodeInfo, keyManagementService)

val nodeInfoFiles = folder.root.list().filter { it.matches(nodeInfoFileRegex) }
val nodeInfoFiles = folder.root.list().filter { it.startsWith(NodeInfoFilesCopier.NODE_INFO_FILE_NAME_PREFIX) }
assertEquals(1, nodeInfoFiles.size)
val fileName = nodeInfoFiles.first()
assertTrue(fileName.matches(nodeInfoFileRegex))
assertTrue(fileName.startsWith(NodeInfoFilesCopier.NODE_INFO_FILE_NAME_PREFIX))
val file = (folder.root.path / fileName).toFile()
// Just check that something is written, another tests verifies that the written value can be read back.
assertThat(contentOf(file)).isNotEmpty()
@@ -46,7 +46,7 @@ class PersistentNetworkMapCacheTest : NodeBasedTest() {
@Test
fun `get nodes by owning key and by name, no network map service`() {
val alice = startNodesWithPort(listOf(ALICE), noNetworkMap = true)[0]
val netCache = alice.services.networkMapCache as PersistentNetworkMapCache
val netCache = alice.services.networkMapCache
alice.database.transaction {
val res = netCache.getNodeByLegalIdentity(alice.info.chooseIdentity())
assertEquals(alice.info, res)
@@ -58,7 +58,7 @@ class PersistentNetworkMapCacheTest : NodeBasedTest() {
@Test
fun `get nodes by address no network map service`() {
val alice = startNodesWithPort(listOf(ALICE), noNetworkMap = true)[0]
val netCache = alice.services.networkMapCache as PersistentNetworkMapCache
val netCache = alice.services.networkMapCache
alice.database.transaction {
val res = netCache.getNodeByAddress(alice.info.addresses[0])
assertEquals(alice.info, res)
@@ -1,5 +1,6 @@
package net.corda.services.messaging

import com.nhaarman.mockito_kotlin.doReturn
import com.nhaarman.mockito_kotlin.whenever
import net.corda.core.concurrent.CordaFuture
import net.corda.core.crypto.random63BitValue
@@ -61,8 +62,8 @@ class P2PSecurityTest : NodeBasedTest() {
val config = testNodeConfiguration(
baseDirectory = baseDirectory(legalName),
myLegalName = legalName).also {
whenever(it.networkMapService).thenReturn(NetworkMapInfo(networkMapNode.internals.configuration.p2pAddress, networkMapNode.info.chooseIdentity().name))
whenever(it.activeMQServer).thenReturn(ActiveMqServerConfiguration(BridgeConfiguration(1001, 2, 3.4)))
doReturn(NetworkMapInfo(networkMapNode.internals.configuration.p2pAddress, networkMapNode.info.chooseIdentity().name)).whenever(it).networkMapService
doReturn(ActiveMqServerConfiguration(BridgeConfiguration(1001, 2, 3.4))).whenever(it).activeMQServer
}
config.configureWithDevSSLCertificate() // This creates the node's TLS cert with the CN as the legal name
return SimpleNode(config, trustRoot = trustRoot).apply { start() }
@@ -73,6 +74,6 @@ class P2PSecurityTest : NodeBasedTest() {
val nodeInfo = NodeInfo(listOf(MOCK_HOST_AND_PORT), listOf(legalIdentity), 1, serial = 1)
val registration = NodeRegistration(nodeInfo, System.currentTimeMillis(), AddOrRemove.ADD, Instant.MAX)
val request = RegistrationRequest(registration.toWire(keyService, identity.public), network.myAddress)
return network.sendRequest<NetworkMapService.RegistrationResponse>(NetworkMapService.REGISTER_TOPIC, request, networkMapNode.network.myAddress)
return network.sendRequest(NetworkMapService.REGISTER_TOPIC, request, networkMapNode.network.myAddress)
}
}
@@ -7,6 +7,7 @@ import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party
import net.corda.core.internal.concurrent.transpose
import net.corda.core.messaging.startFlow
import net.corda.core.schemas.MappedSchema
import net.corda.core.schemas.PersistentState
@@ -21,37 +22,56 @@ import net.corda.node.services.FlowPermissions
import net.corda.nodeapi.User
import net.corda.testing.DUMMY_NOTARY
import net.corda.testing.chooseIdentity
import net.corda.testing.driver.DriverDSLExposedInterface
import net.corda.testing.driver.NodeHandle
import net.corda.testing.driver.driver
import org.junit.Assume
import org.junit.Test
import java.lang.management.ManagementFactory
import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.Table
import kotlin.test.assertEquals
import kotlin.test.assertNotNull

class NodeStatePersistenceTests {

@Test
fun `persistent state survives node restart`() {
// Temporary disable this test when executed on Windows. It is known to be sporadically failing.
// More investigation is needed to establish why.
Assume.assumeFalse(System.getProperty("os.name").toLowerCase().startsWith("win"))

val user = User("mark", "dadada", setOf(FlowPermissions.startFlowPermission<SendMessageFlow>()))
val message = Message("Hello world!")
driver(isDebug = true, startNodesInProcess = isQuasarAgentSpecified()) {
startNotaryNode(DUMMY_NOTARY.name, validating = false).getOrThrow()
var nodeHandle = startNode(rpcUsers = listOf(user)).getOrThrow()
val nodeName = nodeHandle.nodeInfo.chooseIdentity().name
nodeHandle.rpcClientToNode().start(user.username, user.password).use {
it.proxy.startFlow(::SendMessageFlow, message).returnValue.getOrThrow()
}
nodeHandle.stop().getOrThrow()
val (nodeName, notaryNodeHandle) = {
val notaryNodeHandle = startNotaryNode(DUMMY_NOTARY.name, validating = false).getOrThrow()
val nodeHandle = startNode(rpcUsers = listOf(user)).getOrThrow()
ensureAcquainted(notaryNodeHandle, nodeHandle)
val nodeName = nodeHandle.nodeInfo.chooseIdentity().name
nodeHandle.rpcClientToNode().start(user.username, user.password).use {
it.proxy.startFlow(::SendMessageFlow, message).returnValue.getOrThrow()
}
nodeHandle.stop().getOrThrow()
nodeName to notaryNodeHandle
}()

nodeHandle = startNode(providedName = nodeName, rpcUsers = listOf(user)).getOrThrow()
val nodeHandle = startNode(providedName = nodeName, rpcUsers = listOf(user)).getOrThrow()
ensureAcquainted(notaryNodeHandle, nodeHandle)
nodeHandle.rpcClientToNode().start(user.username, user.password).use {
val page = it.proxy.vaultQuery(MessageState::class.java)
val retrievedMessage = page.states.singleOrNull()?.state?.data?.message
val stateAndRef = page.states.singleOrNull()
assertNotNull(stateAndRef)
val retrievedMessage = stateAndRef!!.state.data.message
assertEquals(message, retrievedMessage)
}
}
}

private fun DriverDSLExposedInterface.ensureAcquainted(one: NodeHandle, another: NodeHandle) {
listOf(one.pollUntilKnowsAbout(another), another.pollUntilKnowsAbout(one)).transpose().getOrThrow()
}
}

fun isQuasarAgentSpecified(): Boolean {
@@ -95,7 +115,7 @@ object MessageSchemaV1 : MappedSchema(
) : PersistentState()
}

val MESSAGE_CONTRACT_PROGRAM_ID = "net.corda.test.node.MessageContract"
const val MESSAGE_CONTRACT_PROGRAM_ID = "net.corda.test.node.MessageContract"

open class MessageContract : Contract {
override fun verify(tx: LedgerTransaction) {
@@ -2,18 +2,73 @@
// must also be in the default package. When using Kotlin there are a whole host of exceptions
// trying to construct this from Capsule, so it is written in Java.

import sun.misc.*;
import com.typesafe.config.*;
import sun.misc.Signal;
import sun.misc.SignalHandler;

import java.io.*;
import java.nio.file.*;
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;

public class CordaCaplet extends Capsule {

private Config nodeConfig = null;
private String baseDir = null;

protected CordaCaplet(Capsule pred) {
super(pred);
}

private Config parseConfigFile(List<String> args) {
String baseDirOption = getOption(args, "--base-directory");
this.baseDir = Paths.get((baseDirOption == null) ? "." : baseDirOption).toAbsolutePath().normalize().toString();
String config = getOption(args, "--config-file");
File configFile = (config == null) ? new File(baseDir, "node.conf") : new File(config);
try {
ConfigParseOptions parseOptions = ConfigParseOptions.defaults().setAllowMissing(false);
Config defaultConfig = ConfigFactory.parseResources("reference.conf", parseOptions);
Config baseDirectoryConfig = ConfigFactory.parseMap(Collections.singletonMap("baseDirectory", baseDir));
Config nodeConfig = ConfigFactory.parseFile(configFile, parseOptions);
return baseDirectoryConfig.withFallback(nodeConfig).withFallback(defaultConfig).resolve();
} catch (ConfigException e) {
log(LOG_QUIET, e);
return ConfigFactory.empty();
}
}

private String getOption(List<String> args, String option) {
final String lowerCaseOption = option.toLowerCase();
int index = 0;
for (String arg : args) {
if (arg.toLowerCase().equals(lowerCaseOption)) {
if (index < args.size() - 1) {
return args.get(index + 1);
} else {
return null;
}
}
index++;
}
return null;
}

@Override
protected ProcessBuilder prelaunch(List<String> jvmArgs, List<String> args) {
nodeConfig = parseConfigFile(args);
return super.prelaunch(jvmArgs, args);
}

// Add working directory variable to capsules string replacement variables.
@Override
protected String getVarValue(String var) {
if (var.equals("baseDirectory")) {
return baseDir;
} else {
return super.getVarValue(var);
}
}

/**
* Overriding the Caplet classpath generation via the intended interface in Capsule.
*/
@@ -25,18 +80,55 @@ public class CordaCaplet extends Capsule {
if (ATTR_APP_CLASS_PATH == attr) {
T cp = super.attribute(attr);

(new File("cordapps")).mkdir();
augmentClasspath((List<Path>) cp, "cordapps");
augmentClasspath((List<Path>) cp, "plugins");
(new File(baseDir, "cordapps")).mkdir();
augmentClasspath((List<Path>) cp, new File(baseDir, "cordapps"));
augmentClasspath((List<Path>) cp, new File(baseDir, "plugins"));
// Add additional directories of JARs to the classpath (at the end). e.g. for JDBC drivers
try {
List<String> jarDirs = nodeConfig.getStringList("jarDirs");
log(LOG_VERBOSE, "Configured JAR directories = " + jarDirs);
for (String jarDir : jarDirs) {
augmentClasspath((List<Path>) cp, new File(jarDir));
}
} catch (ConfigException.Missing e) {
// Ignore since it's ok to be Missing. Other errors would be unexpected.
} catch (ConfigException e) {
log(LOG_QUIET, e);
}
return cp;
}
return super.attribute(attr);
} else if (ATTR_JVM_ARGS == attr) {
// Read JVM args from the config if specified, else leave alone.
List<String> jvmArgs = new ArrayList<>((List<String>) super.attribute(attr));
try {
List<String> configJvmArgs = nodeConfig.getStringList("jvmArgs");
jvmArgs.clear();
jvmArgs.addAll(configJvmArgs);
log(LOG_VERBOSE, "Configured JVM args = " + jvmArgs);
} catch (ConfigException.Missing e) {
// Ignore since it's ok to be Missing. Other errors would be unexpected.
} catch (ConfigException e) {
log(LOG_QUIET, e);
}
return (T) jvmArgs;
} else if (ATTR_SYSTEM_PROPERTIES == attr) {
// Add system properties, if specified, from the config.
Map<String, String> systemProps = new LinkedHashMap<>((Map<String, String>) super.attribute(attr));
try {
Config overrideSystemProps = nodeConfig.getConfig("systemProperties");
log(LOG_VERBOSE, "Configured system properties = " + overrideSystemProps);
for (Map.Entry<String, ConfigValue> entry : overrideSystemProps.entrySet()) {
systemProps.put(entry.getKey(), entry.getValue().unwrapped().toString());
}
} catch (ConfigException.Missing e) {
// Ignore since it's ok to be Missing. Other errors would be unexpected.
} catch (ConfigException e) {
log(LOG_QUIET, e);
}
return (T) systemProps;
} else return super.attribute(attr);
}

// TODO: Make directory configurable via the capsule manifest.
// TODO: Add working directory variable to capsules string replacement variables.
private void augmentClasspath(List<Path> classpath, String dirName) {
File dir = new File(dirName);
private void augmentClasspath(List<Path> classpath, File dir) {
if (dir.exists()) {
File[] files = dir.listFiles();
for (File file : files) {
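For reference, the caplet code above layers three configuration sources: a synthetic map carrying baseDirectory, the node.conf file, and the reference.conf defaults bundled with the node, before resolving substitutions and reading optional keys such as jarDirs, jvmArgs and systemProperties. The following Kotlin sketch of the same Typesafe Config layering is illustrative only; the authoritative logic is the Java caplet code above.

    import com.typesafe.config.Config
    import com.typesafe.config.ConfigException
    import com.typesafe.config.ConfigFactory
    import com.typesafe.config.ConfigParseOptions
    import java.io.File

    // The base-directory setting beats node.conf, which beats the bundled reference.conf defaults.
    fun loadNodeConfig(baseDir: String, configFile: File): Config {
        val parseOptions = ConfigParseOptions.defaults().setAllowMissing(false)
        val defaults = ConfigFactory.parseResources("reference.conf", parseOptions)
        val baseDirConfig = ConfigFactory.parseMap(mapOf("baseDirectory" to baseDir))
        val nodeConf = ConfigFactory.parseFile(configFile, parseOptions)
        return baseDirConfig.withFallback(nodeConf).withFallback(defaults).resolve()
    }

    // Optional key read the same way the caplet reads "jarDirs": absence is not an error.
    fun extraJarDirs(config: Config): List<String> = try {
        config.getStringList("jarDirs")
    } catch (e: ConfigException.Missing) {
        emptyList()
    }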
@@ -11,26 +11,23 @@ import net.corda.core.flows.*
import net.corda.core.identity.CordaX500Name
import net.corda.core.identity.Party
import net.corda.core.identity.PartyAndCertificate
import net.corda.core.internal.VisibleForTesting
import net.corda.core.internal.cert
import net.corda.core.internal.*
import net.corda.core.internal.concurrent.doneFuture
import net.corda.core.internal.concurrent.flatMap
import net.corda.core.internal.concurrent.openFuture
import net.corda.core.internal.toX509CertHolder
import net.corda.core.internal.uncheckedCast
import net.corda.core.messaging.*
import net.corda.core.node.AppServiceHub
import net.corda.core.node.NodeInfo
import net.corda.core.node.ServiceHub
import net.corda.core.node.StateLoader
import net.corda.core.node.services.*
import net.corda.core.node.services.NetworkMapCache.MapChange
import net.corda.core.serialization.SerializationWhitelist
import net.corda.core.serialization.SerializeAsToken
import net.corda.core.serialization.SingletonSerializeAsToken
import net.corda.core.transactions.SignedTransaction
import net.corda.core.utilities.NetworkHostAndPort
import net.corda.core.utilities.debug
import net.corda.core.utilities.getOrThrow
import net.corda.node.VersionInfo
import net.corda.node.internal.classloading.requireAnnotation
import net.corda.node.internal.cordapp.CordappLoader
@@ -78,6 +75,7 @@ import java.security.cert.CertificateFactory
import java.security.cert.X509Certificate
import java.sql.Connection
import java.time.Clock
import java.util.*
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.ExecutorService
import java.util.concurrent.TimeUnit.SECONDS
@@ -108,7 +106,7 @@ abstract class AbstractNode(config: NodeConfiguration,

private class StartedNodeImpl<out N : AbstractNode>(
override val internals: N,
override val services: ServiceHubInternalImpl,
services: ServiceHubInternalImpl,
override val info: NodeInfo,
override val checkpointStorage: CheckpointStorage,
override val smm: StateMachineManager,
@@ -116,8 +114,11 @@ abstract class AbstractNode(config: NodeConfiguration,
override val inNodeNetworkMapService: NetworkMapService,
override val network: MessagingService,
override val database: CordaPersistence,
override val rpcOps: CordaRPCOps) : StartedNode<N>

override val rpcOps: CordaRPCOps,
flowStarter: FlowStarter,
internal val schedulerService: NodeSchedulerService) : StartedNode<N> {
override val services: StartedNodeServices = object : StartedNodeServices, ServiceHubInternal by services, FlowStarter by flowStarter {}
}
// TODO: Persist this, as well as whether the node is registered.
/**
* Sequence number of changes sent to the network map service, when registering/de-registering this node.
@@ -138,10 +139,12 @@ abstract class AbstractNode(config: NodeConfiguration,
protected val services: ServiceHubInternal get() = _services
private lateinit var _services: ServiceHubInternalImpl
protected lateinit var legalIdentity: PartyAndCertificate
private lateinit var allIdentities: List<PartyAndCertificate>
protected lateinit var info: NodeInfo
protected var myNotaryIdentity: PartyAndCertificate? = null
protected lateinit var checkpointStorage: CheckpointStorage
protected lateinit var smm: StateMachineManager
private lateinit var tokenizableServices: List<Any>
protected lateinit var attachments: NodeAttachmentService
protected lateinit var inNodeNetworkMapService: NetworkMapService
protected lateinit var network: MessagingService
@@ -167,8 +170,8 @@ abstract class AbstractNode(config: NodeConfiguration,
@Volatile private var _started: StartedNode<AbstractNode>? = null

/** The implementation of the [CordaRPCOps] interface used by this node. */
open fun makeRPCOps(): CordaRPCOps {
return CordaRPCOpsImpl(services, smm, database)
open fun makeRPCOps(flowStarter: FlowStarter): CordaRPCOps {
return CordaRPCOpsImpl(services, smm, database, flowStarter)
}

private fun saveOwnNodeInfo() {
@@ -190,7 +193,8 @@ abstract class AbstractNode(config: NodeConfiguration,
log.info("Generating nodeInfo ...")
val schemaService = makeSchemaService()
initialiseDatabasePersistence(schemaService) {
makeServices(schemaService)
val transactionStorage = makeTransactionStorage()
makeServices(schemaService, transactionStorage, StateLoaderImpl(transactionStorage))
saveOwnNodeInfo()
}
}
@@ -202,17 +206,13 @@ abstract class AbstractNode(config: NodeConfiguration,
val schemaService = makeSchemaService()
// Do all of this in a database transaction so anything that might need a connection has one.
val startedImpl = initialiseDatabasePersistence(schemaService) {
val tokenizableServices = makeServices(schemaService)
val transactionStorage = makeTransactionStorage()
val stateLoader = StateLoaderImpl(transactionStorage)
val services = makeServices(schemaService, transactionStorage, stateLoader)
saveOwnNodeInfo()
smm = StateMachineManager(services,
checkpointStorage,
serverThread,
database,
busyNodeLatch,
cordappLoader.appClassLoader)

smm.tokenizableServices.addAll(tokenizableServices)

smm = makeStateMachineManager()
val flowStarter = FlowStarterImpl(serverThread, smm)
val schedulerService = NodeSchedulerService(platformClock, this@AbstractNode.database, flowStarter, stateLoader, unfinishedSchedules = busyNodeLatch, serverThread = serverThread)
if (serverThread is ExecutorService) {
runOnStop += {
// We wait here, even though any in-flight messages should have been drained away because the
@@ -221,48 +221,60 @@ abstract class AbstractNode(config: NodeConfiguration,
MoreExecutors.shutdownAndAwaitTermination(serverThread as ExecutorService, 50, SECONDS)
}
}

makeVaultObservers()

val rpcOps = makeRPCOps()
makeVaultObservers(schedulerService)
val rpcOps = makeRPCOps(flowStarter)
startMessagingService(rpcOps)
installCoreFlows()

installCordaServices()
val cordaServices = installCordaServices(flowStarter)
tokenizableServices = services + cordaServices + schedulerService
registerCordappFlows()
_services.rpcFlows += cordappLoader.cordapps.flatMap { it.rpcFlows }
FlowLogicRefFactoryImpl.classloader = cordappLoader.appClassLoader

runOnStop += network::stop
StartedNodeImpl(this, _services, info, checkpointStorage, smm, attachments, inNodeNetworkMapService, network, database, rpcOps)
StartedNodeImpl(this, _services, info, checkpointStorage, smm, attachments, inNodeNetworkMapService, network, database, rpcOps, flowStarter, schedulerService)
}
// If we successfully loaded network data from database, we set this future to Unit.
_nodeReadyFuture.captureLater(registerWithNetworkMapIfConfigured())
return startedImpl.apply {
database.transaction {
smm.start()
smm.start(tokenizableServices)
// Shut down the SMM so no Fibers are scheduled.
runOnStop += { smm.stop(acceptableLiveFiberCountOnStop()) }
services.schedulerService.start()
schedulerService.start()
}
_started = this
}
}

protected open fun makeStateMachineManager(): StateMachineManager {
return StateMachineManagerImpl(
services,
checkpointStorage,
serverThread,
database,
busyNodeLatch,
cordappLoader.appClassLoader
)
}

private class ServiceInstantiationException(cause: Throwable?) : CordaException("Service Instantiation Error", cause)

private fun installCordaServices() {
private fun installCordaServices(flowStarter: FlowStarter): List<SerializeAsToken> {
val loadedServices = cordappLoader.cordapps.flatMap { it.services }
filterServicesToInstall(loadedServices).forEach {
return filterServicesToInstall(loadedServices).mapNotNull {
try {
installCordaService(it)
installCordaService(flowStarter, it)
} catch (e: NoSuchMethodException) {
log.error("${it.name}, as a Corda service, must have a constructor with a single parameter of type " +
ServiceHub::class.java.name)
null
} catch (e: ServiceInstantiationException) {
log.error("Corda service ${it.name} failed to instantiate", e.cause)
null
} catch (e: Exception) {
log.error("Unable to install Corda service ${it.name}", e)
null
}
}
}
@@ -274,8 +286,7 @@ abstract class AbstractNode(config: NodeConfiguration,
require(customNotaryServiceList.size == 1) {
"Attempting to install more than one notary service: ${customNotaryServiceList.joinToString()}"
}
}
else return loadedServices - customNotaryServiceList
} else return loadedServices - customNotaryServiceList
}
return loadedServices
}
@@ -289,7 +300,7 @@ abstract class AbstractNode(config: NodeConfiguration,
/**
* This customizes the ServiceHub for each CordaService that is initiating flows
*/
private class AppServiceHubImpl<T : SerializeAsToken>(val serviceHub: ServiceHubInternal) : AppServiceHub, ServiceHub by serviceHub {
private class AppServiceHubImpl<T : SerializeAsToken>(private val serviceHub: ServiceHub, private val flowStarter: FlowStarter) : AppServiceHub, ServiceHub by serviceHub {
lateinit var serviceInstance: T
override fun <T> startTrackedFlow(flow: FlowLogic<T>): FlowProgressHandle<T> {
val stateMachine = startFlowChecked(flow)
@@ -305,38 +316,28 @@ abstract class AbstractNode(config: NodeConfiguration,
return FlowHandleImpl(id = stateMachine.id, returnValue = stateMachine.resultFuture)
}

private fun <T> startFlowChecked(flow: FlowLogic<T>): FlowStateMachineImpl<T> {
private fun <T> startFlowChecked(flow: FlowLogic<T>): FlowStateMachine<T> {
val logicType = flow.javaClass
require(logicType.isAnnotationPresent(StartableByService::class.java)) { "${logicType.name} was not designed for starting by a CordaService" }
val currentUser = FlowInitiator.Service(serviceInstance.javaClass.name)
return serviceHub.startFlow(flow, currentUser)
return flowStarter.startFlow(flow, currentUser).getOrThrow()
}

override fun equals(other: Any?): Boolean {
if (this === other) return true
if (other !is AppServiceHubImpl<*>) return false

if (serviceHub != other.serviceHub) return false
if (serviceInstance != other.serviceInstance) return false

return true
return serviceHub == other.serviceHub
&& flowStarter == other.flowStarter
&& serviceInstance == other.serviceInstance
}

override fun hashCode(): Int {
var result = serviceHub.hashCode()
result = 31 * result + serviceInstance.hashCode()
return result
}
override fun hashCode() = Objects.hash(serviceHub, flowStarter, serviceInstance)
}

/**
* Use this method to install your Corda services in your tests. This is automatically done by the node when it
* starts up for all classes it finds which are annotated with [CordaService].
*/
fun <T : SerializeAsToken> installCordaService(serviceClass: Class<T>): T {
private fun <T : SerializeAsToken> installCordaService(flowStarter: FlowStarter, serviceClass: Class<T>): T {
serviceClass.requireAnnotation<CordaService>()
val service = try {
val serviceContext = AppServiceHubImpl<T>(services)
val serviceContext = AppServiceHubImpl<T>(services, flowStarter)
if (isNotaryService(serviceClass)) {
check(myNotaryIdentity != null) { "Trying to install a notary service but no notary identity specified" }
val constructor = serviceClass.getDeclaredConstructor(AppServiceHub::class.java, PublicKey::class.java).apply { isAccessible = true }
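The net effect of the installCordaService hunks is that a CordaService now receives an AppServiceHub whose flow-starting calls are routed through the node's FlowStarter, and only flows annotated with @StartableByService may be started that way. A hedged sketch of what that looks like from the CorDapp side (PingService and PingFlow are invented names for illustration):

    import co.paralleluniverse.fibers.Suspendable
    import net.corda.core.flows.FlowLogic
    import net.corda.core.flows.StartableByService
    import net.corda.core.node.AppServiceHub
    import net.corda.core.node.services.CordaService
    import net.corda.core.serialization.SingletonSerializeAsToken
    import net.corda.core.utilities.getOrThrow

    @StartableByService
    class PingFlow : FlowLogic<String>() {          // illustrative flow
        @Suspendable
        override fun call(): String = "pong"
    }

    @CordaService
    class PingService(private val services: AppServiceHub) : SingletonSerializeAsToken() {
        // AppServiceHub.startFlow goes through the node's FlowStarter; without the
        // @StartableByService annotation on the flow, the call is rejected.
        fun ping(): String = services.startFlow(PingFlow()).returnValue.getOrThrow()
    }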
@@ -357,7 +358,6 @@ abstract class AbstractNode(config: NodeConfiguration,
throw ServiceInstantiationException(e.cause)
}
cordappServices.putInstance(serviceClass, service)
smm.tokenizableServices += service

if (service is NotaryService) handleCustomNotaryService(service)

@@ -365,6 +365,12 @@ abstract class AbstractNode(config: NodeConfiguration,
return service
}

fun <T : Any> findTokenizableService(clazz: Class<T>): T? {
return tokenizableServices.firstOrNull { clazz.isAssignableFrom(it.javaClass) }?.let { uncheckedCast(it) }
}

inline fun <reified T : Any> findTokenizableService() = findTokenizableService(T::class.java)

private fun handleCustomNotaryService(service: NotaryService) {
runOnStop += service::stop
service.start()
@@ -467,19 +473,22 @@ abstract class AbstractNode(config: NodeConfiguration,
* Builds node internal, advertised, and plugin services.
* Returns a list of tokenizable services to be added to the serialisation context.
*/
private fun makeServices(schemaService: SchemaService): MutableList<Any> {
private fun makeServices(schemaService: SchemaService, transactionStorage: WritableTransactionStorage, stateLoader: StateLoader): MutableList<Any> {
checkpointStorage = DBCheckpointStorage()
val transactionStorage = makeTransactionStorage()
val metrics = MetricRegistry()
attachments = NodeAttachmentService(metrics)
val cordappProvider = CordappProviderImpl(cordappLoader, attachments)
_services = ServiceHubInternalImpl(schemaService, transactionStorage, StateLoaderImpl(transactionStorage), MonitoringService(metrics), cordappProvider)
_services = ServiceHubInternalImpl(schemaService, transactionStorage, stateLoader, MonitoringService(metrics), cordappProvider)
legalIdentity = obtainIdentity(notaryConfig = null)
// TODO We keep only notary identity as additional legalIdentity if we run it on a node . Multiple identities need more design thinking.
myNotaryIdentity = getNotaryIdentity()
allIdentities = listOf(legalIdentity, myNotaryIdentity).filterNotNull()
network = makeMessagingService(legalIdentity)
info = makeInfo(legalIdentity)
val addresses = myAddresses() // TODO There is no support for multiple IP addresses yet.
info = NodeInfo(addresses, allIdentities, versionInfo.platformVersion, platformClock.instant().toEpochMilli())
val networkMapCache = services.networkMapCache
val tokenizableServices = mutableListOf(attachments, network, services.vaultService,
services.keyManagementService, services.identityService, platformClock, services.schedulerService,
services.keyManagementService, services.identityService, platformClock,
services.auditService, services.monitoringService, networkMapCache, services.schemaService,
services.transactionVerifierService, services.validatedTransactions, services.contractUpgradeService,
services, cordappProvider, this)
@@ -489,19 +498,10 @@ abstract class AbstractNode(config: NodeConfiguration,

protected open fun makeTransactionStorage(): WritableTransactionStorage = DBTransactionStorage()

private fun makeVaultObservers() {
VaultSoftLockManager(services.vaultService, smm)
ScheduledActivityObserver(services)
HibernateObserver(services.vaultService.rawUpdates, services.database.hibernateConfig)
}

private fun makeInfo(legalIdentity: PartyAndCertificate): NodeInfo {
// TODO We keep only notary identity as additional legalIdentity if we run it on a node . Multiple identities need more design thinking.
myNotaryIdentity = getNotaryIdentity()
val allIdentitiesList = mutableListOf(legalIdentity)
myNotaryIdentity?.let { allIdentitiesList.add(it) }
val addresses = myAddresses() // TODO There is no support for multiple IP addresses yet.
return NodeInfo(addresses, allIdentitiesList, versionInfo.platformVersion, platformClock.instant().toEpochMilli())
private fun makeVaultObservers(schedulerService: SchedulerService) {
VaultSoftLockManager.install(services.vaultService, smm)
ScheduledActivityObserver.install(services.vaultService, schedulerService)
HibernateObserver.install(services.vaultService.rawUpdates, database.hibernateConfig)
}

/**
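This hunk also illustrates the "Constructors should not have side-effects" item from the commit message: VaultSoftLockManager, ScheduledActivityObserver and HibernateObserver are no longer wired up simply by constructing them, but through install(...) factories that perform the subscription explicitly. A minimal sketch of that pattern, using an invented UpdateLogger class rather than the real observers:

    import rx.Observable

    // The constructor only stores state; the subscription side effect lives in a named factory.
    class UpdateLogger private constructor() {
        companion object {
            fun install(updates: Observable<String>): UpdateLogger {
                val logger = UpdateLogger()
                updates.subscribe { logger.record(it) }   // explicit side effect, not hidden in a constructor
                return logger
            }
        }

        private val seen = mutableListOf<String>()
        fun record(update: String) { seen += update }
    }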
@@ -553,8 +553,16 @@ abstract class AbstractNode(config: NodeConfiguration,
}
}

private fun setupInNodeNetworkMapService(networkMapCache: NetworkMapCacheInternal) {
inNodeNetworkMapService =
if (configuration.networkMapService == null && !configuration.noNetworkMapServiceMode)
makeNetworkMapService(network, networkMapCache)
else
NullNetworkMapService
}

private fun makeNetworkServices(network: MessagingService, networkMapCache: NetworkMapCacheInternal, tokenizableServices: MutableList<Any>) {
inNodeNetworkMapService = if (configuration.networkMapService == null) makeNetworkMapService(network, networkMapCache) else NullNetworkMapService
setupInNodeNetworkMapService(networkMapCache)
configuration.notary?.let {
val notaryService = makeCoreNotaryService(it)
tokenizableServices.add(notaryService)
@@ -613,7 +621,7 @@ abstract class AbstractNode(config: NodeConfiguration,

/** This is overriden by the mock node implementation to enable operation without any network map service */
protected open fun noNetworkMapConfigured(): CordaFuture<Unit> {
if (services.networkMapCache.loadDBSuccess) {
if (services.networkMapCache.loadDBSuccess || configuration.noNetworkMapServiceMode) {
return doneFuture(Unit)
} else {
// TODO: There should be a consistent approach to configuration error exceptions.
@@ -664,17 +672,7 @@ abstract class AbstractNode(config: NodeConfiguration,
val caCertificates: Array<X509Certificate> = listOf(legalIdentity.certificate, clientCa?.certificate?.cert)
.filterNotNull()
.toTypedArray()
val service = PersistentIdentityService(info.legalIdentitiesAndCerts, trustRoot = trustRoot, caCertificates = *caCertificates)
services.networkMapCache.allNodes.forEach { it.legalIdentitiesAndCerts.forEach { service.verifyAndRegisterIdentity(it) } }
services.networkMapCache.changed.subscribe { mapChange ->
// TODO how should we handle network map removal
if (mapChange is MapChange.Added) {
mapChange.node.legalIdentitiesAndCerts.forEach {
service.verifyAndRegisterIdentity(it)
}
}
}
return service
return PersistentIdentityService(allIdentities, trustRoot = trustRoot, caCertificates = *caCertificates)
}

protected abstract fun makeTransactionVerifierService(): TransactionVerifierService
@@ -691,6 +689,7 @@ abstract class AbstractNode(config: NodeConfiguration,
toRun()
}
runOnStop.clear()
_started = null
}

protected abstract fun makeMessagingService(legalIdentity: PartyAndCertificate): MessagingService
@@ -758,6 +757,9 @@ abstract class AbstractNode(config: NodeConfiguration,
}

protected open fun generateKeyPair() = cryptoGenerateKeyPair()
protected open fun makeVaultService(keyManagementService: KeyManagementService, stateLoader: StateLoader): VaultServiceInternal {
return NodeVaultService(platformClock, keyManagementService, stateLoader, database.hibernateConfig)
}

private inner class ServiceHubInternalImpl(
override val schemaService: SchemaService,
@@ -770,15 +772,14 @@ abstract class AbstractNode(config: NodeConfiguration,
override val stateMachineRecordedTransactionMapping = DBTransactionMappingStorage()
override val auditService = DummyAuditService()
override val transactionVerifierService by lazy { makeTransactionVerifierService() }
override val networkMapCache by lazy { PersistentNetworkMapCache(this) }
override val vaultService by lazy { NodeVaultService(platformClock, keyManagementService, stateLoader, this@AbstractNode.database.hibernateConfig) }
override val networkMapCache by lazy { NetworkMapCacheImpl(PersistentNetworkMapCache(this@AbstractNode.database, this@AbstractNode.configuration), identityService) }
override val vaultService by lazy { makeVaultService(keyManagementService, stateLoader) }
override val contractUpgradeService by lazy { ContractUpgradeServiceImpl() }

// Place the long term identity key in the KMS. Eventually, this is likely going to be separated again because
// the KMS is meant for derived temporary keys used in transactions, and we're not supposed to sign things with
// the identity key. But the infrastructure to make that easy isn't here yet.
override val keyManagementService by lazy { makeKeyManagementService(identityService) }
override val schedulerService by lazy { NodeSchedulerService(this, unfinishedSchedules = busyNodeLatch, serverThread = serverThread) }
override val identityService by lazy {
val trustStore = KeyStoreWrapper(configuration.trustStoreFile, configuration.trustStorePassword)
val caKeyStore = KeyStoreWrapper(configuration.nodeKeystore, configuration.keyStorePassword)
@@ -798,10 +799,6 @@ abstract class AbstractNode(config: NodeConfiguration,
return cordappServices.getInstance(type) ?: throw IllegalArgumentException("Corda service ${type.name} does not exist")
}

override fun <T> startFlow(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party?): FlowStateMachineImpl<T> {
return serverThread.fetchFrom { smm.add(logic, flowInitiator, ourIdentity) }
}

override fun getFlowFactory(initiatingFlowClass: Class<out FlowLogic<*>>): InitiatedFlowFactory<*>? {
return flowFactories[initiatingFlowClass]
}
@@ -815,3 +812,9 @@ abstract class AbstractNode(config: NodeConfiguration,
override fun jdbcSession(): Connection = database.createSession()
}
}

internal class FlowStarterImpl(private val serverThread: AffinityExecutor, private val smm: StateMachineManager) : FlowStarter {
override fun <T> startFlow(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party?): CordaFuture<FlowStateMachine<T>> {
return serverThread.fetchFrom { smm.startFlow(logic, flowInitiator, ourIdentity) }
}
}
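Because FlowStarter.startFlow now returns a CordaFuture<FlowStateMachine<T>> rather than the state machine itself, callers unwrap twice: once for the started machine and once for its result, as AppServiceHubImpl and CordaRPCOpsImpl do above with getOrThrow(). A small sketch of such a call site (runFlowAndWait is an invented helper, not part of this change):

    import net.corda.core.flows.FlowInitiator
    import net.corda.core.flows.FlowLogic
    import net.corda.core.utilities.getOrThrow
    import net.corda.node.services.api.FlowStarter

    fun <T> runFlowAndWait(flowStarter: FlowStarter, flow: FlowLogic<T>): T {
        // First future: the flow has been started and we have its state machine.
        val stateMachine = flowStarter.startFlow(flow, FlowInitiator.RPC("example-user")).getOrThrow()
        // Second future: the flow has completed and produced its result.
        return stateMachine.resultFuture.getOrThrow()
    }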
@@ -10,6 +10,7 @@ import net.corda.core.flows.StartableByRPC
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.CordaX500Name
import net.corda.core.identity.Party
import net.corda.core.internal.FlowStateMachine
import net.corda.core.messaging.*
import net.corda.core.node.NodeInfo
import net.corda.core.node.services.NetworkMapCache
@@ -18,11 +19,12 @@ import net.corda.core.node.services.vault.PageSpecification
import net.corda.core.node.services.vault.QueryCriteria
import net.corda.core.node.services.vault.Sort
import net.corda.core.transactions.SignedTransaction
import net.corda.core.utilities.getOrThrow
import net.corda.node.services.FlowPermissions.Companion.startFlowPermission
import net.corda.node.services.api.FlowStarter
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.messaging.getRpcContext
import net.corda.node.services.messaging.requirePermission
import net.corda.node.services.statemachine.FlowStateMachineImpl
import net.corda.node.services.statemachine.StateMachineManager
import net.corda.node.utilities.CordaPersistence
import rx.Observable
@@ -37,7 +39,8 @@ import java.time.Instant
class CordaRPCOpsImpl(
private val services: ServiceHubInternal,
private val smm: StateMachineManager,
private val database: CordaPersistence
private val database: CordaPersistence,
private val flowStarter: FlowStarter
) : CordaRPCOps {
override fun networkMapSnapshot(): List<NodeInfo> {
val (snapshot, updates) = networkMapFeed()
@@ -92,7 +95,7 @@ class CordaRPCOpsImpl(
return database.transaction {
val (allStateMachines, changes) = smm.track()
DataFeed(
allStateMachines.map { stateMachineInfoFromFlowLogic(it.logic) },
allStateMachines.map { stateMachineInfoFromFlowLogic(it) },
changes.map { stateMachineUpdateFromStateMachineChange(it) }
)
}
@@ -144,13 +147,13 @@ class CordaRPCOpsImpl(
return FlowHandleImpl(id = stateMachine.id, returnValue = stateMachine.resultFuture)
}

private fun <T> startFlow(logicType: Class<out FlowLogic<T>>, args: Array<out Any?>): FlowStateMachineImpl<T> {
private fun <T> startFlow(logicType: Class<out FlowLogic<T>>, args: Array<out Any?>): FlowStateMachine<T> {
require(logicType.isAnnotationPresent(StartableByRPC::class.java)) { "${logicType.name} was not designed for RPC" }
val rpcContext = getRpcContext()
rpcContext.requirePermission(startFlowPermission(logicType))
val currentUser = FlowInitiator.RPC(rpcContext.currentUser.username)
// TODO RPC flows should have mapping user -> identity that should be resolved automatically on starting flow.
return services.invokeFlowAsync(logicType, currentUser, *args)
return flowStarter.invokeFlowAsync(logicType, currentUser, *args).getOrThrow()
}

override fun attachmentExists(id: SecureHash): Boolean {
@@ -84,6 +84,7 @@ open class NodeStartup(val args: Array<String>) {
exitProcess(1)
}

logger.info("Node exiting successfully")
exitProcess(0)
}

@@ -7,9 +7,11 @@ import net.corda.core.flows.FlowLogic
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.node.NodeInfo
import net.corda.core.node.StateLoader
import net.corda.core.node.services.CordaService
import net.corda.core.node.services.TransactionStorage
import net.corda.core.serialization.SerializeAsToken
import net.corda.node.services.api.CheckpointStorage
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.api.StartedNodeServices
import net.corda.node.services.messaging.MessagingService
import net.corda.node.services.network.NetworkMapService
import net.corda.node.services.persistence.NodeAttachmentService
@@ -18,7 +20,7 @@ import net.corda.node.utilities.CordaPersistence

interface StartedNode<out N : AbstractNode> {
val internals: N
val services: ServiceHubInternal
val services: StartedNodeServices
val info: NodeInfo
val checkpointStorage: CheckpointStorage
val smm: StateMachineManager
@ -16,9 +16,11 @@ import net.corda.core.messaging.StateMachineTransactionMapping
|
||||
import net.corda.core.node.NodeInfo
|
||||
import net.corda.core.node.ServiceHub
|
||||
import net.corda.core.node.services.NetworkMapCache
|
||||
import net.corda.core.node.services.NetworkMapCacheBase
|
||||
import net.corda.core.node.services.TransactionStorage
|
||||
import net.corda.core.serialization.CordaSerializable
|
||||
import net.corda.core.transactions.SignedTransaction
|
||||
import net.corda.core.utilities.getOrThrow
|
||||
import net.corda.core.utilities.loggerFor
|
||||
import net.corda.node.internal.InitiatedFlowFactory
|
||||
import net.corda.node.internal.cordapp.CordappProviderInternal
|
||||
@ -28,7 +30,8 @@ import net.corda.node.services.statemachine.FlowLogicRefFactoryImpl
|
||||
import net.corda.node.services.statemachine.FlowStateMachineImpl
|
||||
import net.corda.node.utilities.CordaPersistence
|
||||
|
||||
interface NetworkMapCacheInternal : NetworkMapCache {
|
||||
interface NetworkMapCacheInternal : NetworkMapCache, NetworkMapCacheBaseInternal
|
||||
interface NetworkMapCacheBaseInternal : NetworkMapCacheBase {
|
||||
/**
|
||||
* Deregister from updates from the given map service.
|
||||
* @param network the network messaging service.
|
||||
@ -84,7 +87,6 @@ interface ServiceHubInternal : ServiceHub {
|
||||
val monitoringService: MonitoringService
|
||||
val schemaService: SchemaService
|
||||
override val networkMapCache: NetworkMapCacheInternal
|
||||
val schedulerService: SchedulerService
|
||||
val auditService: AuditService
|
||||
val rpcFlows: List<Class<out FlowLogic<*>>>
|
||||
val networkService: MessagingService
|
||||
@ -109,18 +111,22 @@ interface ServiceHubInternal : ServiceHub {
|
||||
}
|
||||
}
|
||||
|
||||
fun getFlowFactory(initiatingFlowClass: Class<out FlowLogic<*>>): InitiatedFlowFactory<*>?
|
||||
}
|
||||
|
||||
interface FlowStarter {
|
||||
/**
|
||||
* Starts an already constructed flow. Note that you must be on the server thread to call this method. [FlowInitiator]
|
||||
* defaults to [FlowInitiator.RPC] with username "Only For Testing".
|
||||
*/
|
||||
@VisibleForTesting
|
||||
fun <T> startFlow(logic: FlowLogic<T>): FlowStateMachine<T> = startFlow(logic, FlowInitiator.RPC("Only For Testing"))
|
||||
fun <T> startFlow(logic: FlowLogic<T>): FlowStateMachine<T> = startFlow(logic, FlowInitiator.RPC("Only For Testing")).getOrThrow()
|
||||
|
||||
/**
|
||||
* Starts an already constructed flow. Note that you must be on the server thread to call this method.
|
||||
* @param flowInitiator indicates who started the flow, see: [FlowInitiator].
|
||||
*/
|
||||
fun <T> startFlow(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): FlowStateMachineImpl<T>
|
||||
fun <T> startFlow(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): CordaFuture<FlowStateMachine<T>>
|
||||
|
||||
/**
|
||||
* Will check [logicType] and [args] against a whitelist and if acceptable then construct and initiate the flow.
|
||||
@ -133,15 +139,14 @@ interface ServiceHubInternal : ServiceHub {
|
||||
fun <T> invokeFlowAsync(
|
||||
logicType: Class<out FlowLogic<T>>,
|
||||
flowInitiator: FlowInitiator,
|
||||
vararg args: Any?): FlowStateMachineImpl<T> {
|
||||
vararg args: Any?): CordaFuture<FlowStateMachine<T>> {
|
||||
val logicRef = FlowLogicRefFactoryImpl.createForRPC(logicType, *args)
|
||||
val logic: FlowLogic<T> = uncheckedCast(FlowLogicRefFactoryImpl.toFlowLogic(logicRef))
|
||||
return startFlow(logic, flowInitiator, ourIdentity = null)
|
||||
}
|
||||
|
||||
fun getFlowFactory(initiatingFlowClass: Class<out FlowLogic<*>>): InitiatedFlowFactory<*>?
|
||||
}
|
||||
|
||||
interface StartedNodeServices : ServiceHubInternal, FlowStarter
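A minimal sketch of driving the reworked FlowStarter API, in which startFlow now returns a CordaFuture for the state machine rather than the machine itself; MyFlow and the RPC username below are placeholders:

// Sketch only: MyFlow is assumed to be a FlowLogic<SignedTransaction> defined elsewhere.
fun runMyFlow(flowStarter: FlowStarter): SignedTransaction {
    // Wait for the state machine to be created, then for the flow's own result future.
    val stateMachine = flowStarter.startFlow(MyFlow(), FlowInitiator.RPC("demo-user")).getOrThrow()
    return stateMachine.resultFuture.getOrThrow()
}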
/**
 * Thread-safe storage of transactions.
 */

@@ -20,6 +20,7 @@ interface NodeConfiguration : NodeSSLConfiguration {
 * service.
 */
val networkMapService: NetworkMapInfo?
val noNetworkMapServiceMode: Boolean
val minimumPlatformVersion: Int
val emailAddress: String
val exportJMXto: String
@@ -78,6 +79,7 @@ data class FullNodeConfiguration(
override val database: Properties?,
override val certificateSigningService: URL,
override val networkMapService: NetworkMapInfo?,
override val noNetworkMapServiceMode: Boolean = false,
override val minimumPlatformVersion: Int = 1,
override val rpcUsers: List<User>,
override val verifierType: VerifierType,

@@ -1,9 +1,8 @@
package net.corda.node.services.events

import co.paralleluniverse.fibers.Suspendable
import co.paralleluniverse.strands.SettableFuture as QuasarSettableFuture
import com.google.common.util.concurrent.ListenableFuture
import com.google.common.util.concurrent.SettableFuture as GuavaSettableFuture
import com.google.common.util.concurrent.SettableFuture
import net.corda.core.contracts.SchedulableState
import net.corda.core.contracts.ScheduledActivity
import net.corda.core.contracts.ScheduledStateRef
@@ -13,16 +12,19 @@ import net.corda.core.flows.FlowInitiator
import net.corda.core.flows.FlowLogic
import net.corda.core.internal.ThreadBox
import net.corda.core.internal.VisibleForTesting
import net.corda.core.internal.concurrent.flatMap
import net.corda.core.internal.until
import net.corda.core.node.StateLoader
import net.corda.core.schemas.PersistentStateRef
import net.corda.core.serialization.SingletonSerializeAsToken
import net.corda.core.utilities.loggerFor
import net.corda.core.utilities.trace
import net.corda.node.internal.MutableClock
import net.corda.node.services.api.FlowStarter
import net.corda.node.services.api.SchedulerService
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.statemachine.FlowLogicRefFactoryImpl
import net.corda.node.utilities.AffinityExecutor
import net.corda.node.utilities.CordaPersistence
import net.corda.node.utilities.NODE_DATABASE_PREFIX
import net.corda.node.utilities.PersistentMap
import org.apache.activemq.artemis.utils.ReusableLatch
@@ -34,6 +36,8 @@ import javax.annotation.concurrent.ThreadSafe
import javax.persistence.Column
import javax.persistence.EmbeddedId
import javax.persistence.Entity
import co.paralleluniverse.strands.SettableFuture as QuasarSettableFuture
import com.google.common.util.concurrent.SettableFuture as GuavaSettableFuture

/**
 * A first pass of a simple [SchedulerService] that works with [MutableClock]s for testing, demonstrations and simulations
@@ -47,12 +51,14 @@ import javax.persistence.Entity
 * in the nodes, maybe we can consider multiple activities and whether the activities have been completed or not,
 * but that starts to sound a lot like off-ledger state.
 *
 * @param services Core node services.
 * @param schedulerTimerExecutor The executor the scheduler blocks on waiting for the clock to advance to the next
 * activity. Only replace this for unit testing purposes. This is not the executor the [FlowLogic] is launched on.
 */
@ThreadSafe
class NodeSchedulerService(private val services: ServiceHubInternal,
class NodeSchedulerService(private val clock: Clock,
private val database: CordaPersistence,
private val flowStarter: FlowStarter,
private val stateLoader: StateLoader,
private val schedulerTimerExecutor: Executor = Executors.newSingleThreadExecutor(),
private val unfinishedSchedules: ReusableLatch = ReusableLatch(),
private val serverThread: AffinityExecutor)
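Construction of the refactored scheduler, sketched under the assumption that the node has already built its clock, persistence layer, FlowStarter, StateLoader and server thread (all names below are placeholders):

// Sketch only: the scheduler now takes narrow dependencies instead of the whole ServiceHubInternal.
val schedulerService = NodeSchedulerService(
        clock = nodeClock,
        database = persistence,
        flowStarter = flowStarter,
        stateLoader = stateLoader,
        serverThread = serverThread)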
@@ -108,8 +114,8 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
toPersistentEntityKey = { PersistentStateRef(it.txhash.toString(), it.index) },
fromPersistentEntity = {
//TODO null check will become obsolete after making DB/JPA columns not nullable
var txId = it.output.txId ?: throw IllegalStateException("DB returned null SecureHash transactionId")
var index = it.output.index ?: throw IllegalStateException("DB returned null SecureHash index")
val txId = it.output.txId ?: throw IllegalStateException("DB returned null SecureHash transactionId")
val index = it.output.index ?: throw IllegalStateException("DB returned null SecureHash index")
Pair(StateRef(SecureHash.parse(txId), index),
ScheduledStateRef(StateRef(SecureHash.parse(txId), index), it.scheduledAt))
},
@@ -172,7 +178,7 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
mutex.locked {
val previousState = scheduledStates[action.ref]
scheduledStates[action.ref] = action
var previousEarliest = scheduledStatesQueue.peek()
val previousEarliest = scheduledStatesQueue.peek()
scheduledStatesQueue.remove(previousState)
scheduledStatesQueue.add(action)
if (previousState == null) {
@@ -211,7 +217,7 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
 * cancelled then we run the scheduled action. Finally we remove that action from the scheduled actions and
 * recompute the next scheduled action.
 */
internal fun rescheduleWakeUp() {
private fun rescheduleWakeUp() {
// Note, we already have the mutex but we need the scope again here
val (scheduledState, ourRescheduledFuture) = mutex.alreadyLocked {
rescheduled?.cancel(false)
@@ -223,7 +229,7 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
log.trace { "Scheduling as next $scheduledState" }
// This will block the scheduler single thread until the scheduled time (returns false) OR
// the Future is cancelled due to rescheduling (returns true).
if (!awaitWithDeadline(services.clock, scheduledState.scheduledAt, ourRescheduledFuture)) {
if (!awaitWithDeadline(clock, scheduledState.scheduledAt, ourRescheduledFuture)) {
log.trace { "Invoking as next $scheduledState" }
onTimeReached(scheduledState)
} else {
@@ -237,11 +243,11 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
serverThread.execute {
var flowName: String? = "(unknown)"
try {
services.database.transaction {
database.transaction {
val scheduledFlow = getScheduledFlow(scheduledState)
if (scheduledFlow != null) {
flowName = scheduledFlow.javaClass.name
val future = services.startFlow(scheduledFlow, FlowInitiator.Scheduled(scheduledState)).resultFuture
val future = flowStarter.startFlow(scheduledFlow, FlowInitiator.Scheduled(scheduledState)).flatMap { it.resultFuture }
future.then {
unfinishedSchedules.countDown()
}
@@ -265,9 +271,9 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
unfinishedSchedules.countDown()
scheduledStates.remove(scheduledState.ref)
scheduledStatesQueue.remove(scheduledState)
} else if (scheduledActivity.scheduledAt.isAfter(services.clock.instant())) {
} else if (scheduledActivity.scheduledAt.isAfter(clock.instant())) {
log.info("Scheduled state $scheduledState has rescheduled to ${scheduledActivity.scheduledAt}.")
var newState = ScheduledStateRef(scheduledState.ref, scheduledActivity.scheduledAt)
val newState = ScheduledStateRef(scheduledState.ref, scheduledActivity.scheduledAt)
scheduledStates[scheduledState.ref] = newState
scheduledStatesQueue.remove(scheduledState)
scheduledStatesQueue.add(newState)
@@ -286,7 +292,7 @@ class NodeSchedulerService(private val services: ServiceHubInternal,
}

private fun getScheduledActivity(scheduledState: ScheduledStateRef): ScheduledActivity? {
val txState = services.loadState(scheduledState.ref)
val txState = stateLoader.loadState(scheduledState.ref)
val state = txState.data as SchedulableState
return try {
// This can throw as running contract code.

@@ -4,18 +4,28 @@ import net.corda.core.contracts.ContractState
import net.corda.core.contracts.SchedulableState
import net.corda.core.contracts.ScheduledStateRef
import net.corda.core.contracts.StateAndRef
import net.corda.node.services.api.ServiceHubInternal
import net.corda.core.node.services.VaultService
import net.corda.node.services.api.SchedulerService
import net.corda.node.services.statemachine.FlowLogicRefFactoryImpl

/**
 * This observes the vault and schedules and unschedules activities appropriately based on state production and
 * consumption.
 */
class ScheduledActivityObserver(val services: ServiceHubInternal) {
init {
services.vaultService.rawUpdates.subscribe { (consumed, produced) ->
consumed.forEach { services.schedulerService.unscheduleStateActivity(it.ref) }
produced.forEach { scheduleStateActivity(it) }
class ScheduledActivityObserver private constructor(private val schedulerService: SchedulerService) {
companion object {
@JvmStatic
fun install(vaultService: VaultService, schedulerService: SchedulerService) {
val observer = ScheduledActivityObserver(schedulerService)
vaultService.rawUpdates.subscribe { (consumed, produced) ->
consumed.forEach { schedulerService.unscheduleStateActivity(it.ref) }
produced.forEach { observer.scheduleStateActivity(it) }
}
}

// TODO: Beware we are calling dynamically loaded contract code inside here.
private inline fun <T : Any> sandbox(code: () -> T?): T? {
return code()
}
}

@@ -23,12 +33,7 @@ class ScheduledActivityObserver(val services: ServiceHubInternal) {
val producedState = produced.state.data
if (producedState is SchedulableState) {
val scheduledAt = sandbox { producedState.nextScheduledActivity(produced.ref, FlowLogicRefFactoryImpl)?.scheduledAt } ?: return
services.schedulerService.scheduleStateActivity(ScheduledStateRef(produced.ref, scheduledAt))
schedulerService.scheduleStateActivity(ScheduledStateRef(produced.ref, scheduledAt))
}
}

// TODO: Beware we are calling dynamically loaded contract code inside here.
private inline fun <T : Any> sandbox(code: () -> T?): T? {
return code()
}
}
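Wiring the observer now goes through the static install method rather than a side-effecting constructor; a sketch, assuming vaultService and schedulerService are the node's already-built services:

ScheduledActivityObserver.install(vaultService, schedulerService)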

@@ -11,8 +11,8 @@ import net.corda.core.node.services.UnknownAnonymousPartyException
import net.corda.core.serialization.SingletonSerializeAsToken
import net.corda.core.utilities.debug
import net.corda.core.utilities.loggerFor
import net.corda.core.utilities.MAX_HASH_HEX_SIZE
import net.corda.node.utilities.AppendOnlyPersistentMap
import net.corda.node.utilities.MAX_HASH_HEX_SIZE
import net.corda.node.utilities.NODE_DATABASE_PREFIX
import org.bouncycastle.cert.X509CertificateHolder
import java.io.ByteArrayInputStream

@@ -4,12 +4,9 @@ import net.corda.core.crypto.*
import net.corda.core.identity.PartyAndCertificate
import net.corda.core.node.services.IdentityService
import net.corda.core.node.services.KeyManagementService
import net.corda.core.serialization.SerializationDefaults
import net.corda.core.serialization.SingletonSerializeAsToken
import net.corda.core.serialization.deserialize
import net.corda.core.serialization.serialize
import net.corda.core.utilities.MAX_HASH_HEX_SIZE
import net.corda.node.utilities.AppendOnlyPersistentMap
import net.corda.node.utilities.MAX_HASH_HEX_SIZE
import net.corda.node.utilities.NODE_DATABASE_PREFIX
import org.bouncycastle.operator.ContentSigner
import java.security.KeyPair

@@ -83,9 +83,40 @@ interface MessagingService {
 * to send an ACK message back.
 *
 * @param retryId if provided the message will be scheduled for redelivery until [cancelRedelivery] is called for this id.
 * Note that this feature should only be used when the target is an idempotent distributed service, e.g. a notary.
 * Note that this feature should only be used when the target is an idempotent distributed service, e.g. a notary.
 * @param sequenceKey an object that may be used to enable a parallel [MessagingService] implementation. Two
 * subsequent send()s with the same [sequenceKey] (up to equality) are guaranteed to be delivered in the same
 * sequence the send()s were called. By default this is chosen conservatively to be [target].
 * @param acknowledgementHandler if non-null this handler will be called once the sent message has been committed by
 * the broker. Note that if specified [send] itself may return earlier than the commit.
 */
fun send(message: Message, target: MessageRecipients, retryId: Long? = null)
fun send(
message: Message,
target: MessageRecipients,
retryId: Long? = null,
sequenceKey: Any = target,
acknowledgementHandler: (() -> Unit)? = null
)

/** A message with a target and sequenceKey specified. */
data class AddressedMessage(
val message: Message,
val target: MessageRecipients,
val retryId: Long? = null,
val sequenceKey: Any = target
)

/**
 * Sends a list of messages to the specified recipients. This function allows for an efficient batching
 * implementation.
 *
 * @param addressedMessages The list of messages together with the recipients, retry ids and sequence keys.
 * @param retryId if provided the message will be scheduled for redelivery until [cancelRedelivery] is called for this id.
 * Note that this feature should only be used when the target is an idempotent distributed service, e.g. a notary.
 * @param acknowledgementHandler if non-null this handler will be called once all sent messages have been committed
 * by the broker. Note that if specified [send] itself may return earlier than the commit.
 */
fun send(addressedMessages: List<AddressedMessage>, acknowledgementHandler: (() -> Unit)? = null)

/** Cancels the scheduled message redelivery for the specified [retryId] */
fun cancelRedelivery(retryId: Long)
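A sketch of the batched send, assuming messaging is a MessagingService and msg1, msg2 and recipient already exist; both messages default their sequenceKey to the target so ordering between them is preserved, and the trailing lambda fires once the broker has committed them:

messaging.send(listOf(
        MessagingService.AddressedMessage(msg1, recipient),
        MessagingService.AddressedMessage(msg2, recipient))) {
    println("Both messages committed by the broker")
}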

@@ -22,7 +22,7 @@ import net.corda.node.services.RPCUserService
import net.corda.node.services.api.MonitoringService
import net.corda.node.services.config.NodeConfiguration
import net.corda.node.services.config.VerifierType
import net.corda.node.services.statemachine.StateMachineManager
import net.corda.node.services.statemachine.StateMachineManagerImpl
import net.corda.node.services.transactions.InMemoryTransactionVerifierService
import net.corda.node.services.transactions.OutOfProcessTransactionVerifierService
import net.corda.node.utilities.*
@@ -485,7 +485,7 @@ class NodeMessagingClient(override val config: NodeConfiguration,
}
}

override fun send(message: Message, target: MessageRecipients, retryId: Long?) {
override fun send(message: Message, target: MessageRecipients, retryId: Long?, sequenceKey: Any, acknowledgementHandler: (() -> Unit)?) {
// We have to perform sending on a different thread pool, since using the same pool for messaging and
// fibers leads to Netty buffer memory leaks, caused by both Netty and Quasar fiddling with thread-locals.
messagingExecutor.fetchFrom {
@@ -502,7 +502,7 @@ class NodeMessagingClient(override val config: NodeConfiguration,
putStringProperty(HDR_DUPLICATE_DETECTION_ID, SimpleString(message.uniqueMessageId.toString()))

// For demo purposes - if set then add a delay to messages in order to demonstrate that the flows are doing as intended
if (amqDelayMillis > 0 && message.topicSession.topic == StateMachineManager.sessionTopic.topic) {
if (amqDelayMillis > 0 && message.topicSession.topic == StateMachineManagerImpl.sessionTopic.topic) {
putLongProperty(HDR_SCHEDULED_DELIVERY_TIME, System.currentTimeMillis() + amqDelayMillis)
}
}
@@ -523,6 +523,14 @@ class NodeMessagingClient(override val config: NodeConfiguration,
}
}
}
acknowledgementHandler?.invoke()
}

override fun send(addressedMessages: List<MessagingService.AddressedMessage>, acknowledgementHandler: (() -> Unit)?) {
for ((message, target, retryId, sequenceKey) in addressedMessages) {
send(message, target, retryId, sequenceKey, null)
}
acknowledgementHandler?.invoke()
}

private fun sendWithRetry(retryCount: Int, address: String, message: ClientMessage, retryId: Long) {

@@ -0,0 +1,70 @@
package net.corda.node.services.network

import com.fasterxml.jackson.databind.ObjectMapper
import net.corda.core.crypto.SecureHash
import net.corda.core.crypto.SignedData
import net.corda.core.node.NodeInfo
import net.corda.core.serialization.deserialize
import net.corda.core.serialization.serialize
import java.net.HttpURLConnection
import java.net.URL

interface NetworkMapClient {
/**
 * Publish node info to network map service.
 */
fun publish(signedNodeInfo: SignedData<NodeInfo>)

/**
 * Retrieve [NetworkMap] from the network map service containing list of node info hashes and network parameter hash.
 */
// TODO: Use NetworkMap object when available.
fun getNetworkMap(): List<SecureHash>

/**
 * Retrieve [NodeInfo] from network map service using the node info hash.
 */
fun getNodeInfo(nodeInfoHash: SecureHash): NodeInfo?

// TODO: Implement getNetworkParameter when its available.
//fun getNetworkParameter(networkParameterHash: SecureHash): NetworkParameter
}

class HTTPNetworkMapClient(private val networkMapUrl: String) : NetworkMapClient {
override fun publish(signedNodeInfo: SignedData<NodeInfo>) {
val publishURL = URL("$networkMapUrl/publish")
val conn = publishURL.openConnection() as HttpURLConnection
conn.doOutput = true
conn.requestMethod = "POST"
conn.setRequestProperty("Content-Type", "application/octet-stream")
conn.outputStream.write(signedNodeInfo.serialize().bytes)
when (conn.responseCode) {
HttpURLConnection.HTTP_OK -> return
HttpURLConnection.HTTP_UNAUTHORIZED -> throw IllegalArgumentException(conn.errorStream.bufferedReader().readLine())
else -> throw IllegalArgumentException("Unexpected response code ${conn.responseCode}, response error message: '${conn.errorStream.bufferedReader().readLines()}'")
}
}

override fun getNetworkMap(): List<SecureHash> {
val conn = URL(networkMapUrl).openConnection() as HttpURLConnection

return when (conn.responseCode) {
HttpURLConnection.HTTP_OK -> {
val response = conn.inputStream.bufferedReader().use { it.readLine() }
ObjectMapper().readValue(response, List::class.java).map { SecureHash.parse(it.toString()) }
}
else -> throw IllegalArgumentException("Unexpected response code ${conn.responseCode}, response error message: '${conn.errorStream.bufferedReader().readLines()}'")
}
}

override fun getNodeInfo(nodeInfoHash: SecureHash): NodeInfo? {
val nodeInfoURL = URL("$networkMapUrl/$nodeInfoHash")
val conn = nodeInfoURL.openConnection() as HttpURLConnection

return when (conn.responseCode) {
HttpURLConnection.HTTP_OK -> conn.inputStream.readBytes().deserialize()
HttpURLConnection.HTTP_NOT_FOUND -> null
else -> throw IllegalArgumentException("Unexpected response code ${conn.responseCode}, response error message: '${conn.errorStream.bufferedReader().readLines()}'")
}
}
}
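A usage sketch for the new client; the URL and the signed node info below are placeholders:

val client: NetworkMapClient = HTTPNetworkMapClient("https://network-map.example.com")
client.publish(mySignedNodeInfo)                                              // POST to <url>/publish
val nodeInfoHashes = client.getNetworkMap()                                   // hashes currently in the map
val someNode = nodeInfoHashes.firstOrNull()?.let { client.getNodeInfo(it) }   // null if the hash is unknown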
@@ -9,6 +9,7 @@ import net.corda.core.serialization.deserialize
import net.corda.core.serialization.serialize
import net.corda.core.utilities.loggerFor
import net.corda.core.utilities.seconds
import net.corda.nodeapi.NodeInfoFilesCopier
import rx.Observable
import rx.Scheduler
import rx.schedulers.Schedulers
@@ -55,7 +56,8 @@ class NodeInfoWatcher(private val nodePath: Path,
val serializedBytes = nodeInfo.serialize()
val regSig = keyManager.sign(serializedBytes.bytes, nodeInfo.legalIdentities.first().owningKey)
val signedData = SignedData(serializedBytes, regSig)
signedData.serialize().open().copyTo(path / "nodeInfo-${serializedBytes.hash}")
signedData.serialize().open().copyTo(
path / "${NodeInfoFilesCopier.NODE_INFO_FILE_NAME_PREFIX}${serializedBytes.hash}")
} catch (e: Exception) {
logger.warn("Couldn't write node info to file", e)
}

@@ -14,6 +14,7 @@ import net.corda.core.internal.concurrent.openFuture
import net.corda.core.messaging.DataFeed
import net.corda.core.messaging.SingleMessageRecipient
import net.corda.core.node.NodeInfo
import net.corda.core.node.services.IdentityService
import net.corda.core.node.services.NetworkMapCache.MapChange
import net.corda.core.node.services.NotaryService
import net.corda.core.node.services.PartyInfo
@@ -27,12 +28,14 @@ import net.corda.core.utilities.loggerFor
import net.corda.core.utilities.toBase58String
import net.corda.node.services.api.NetworkCacheException
import net.corda.node.services.api.NetworkMapCacheInternal
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.api.NetworkMapCacheBaseInternal
import net.corda.node.services.config.NodeConfiguration
import net.corda.node.services.messaging.MessagingService
import net.corda.node.services.messaging.createMessage
import net.corda.node.services.messaging.sendRequest
import net.corda.node.services.network.NetworkMapService.FetchMapResponse
import net.corda.node.services.network.NetworkMapService.SubscribeResponse
import net.corda.node.utilities.*
import net.corda.node.utilities.AddOrRemove
import net.corda.node.utilities.bufferUntilDatabaseCommit
import net.corda.node.utilities.wrapWithDatabaseTransaction
@@ -45,15 +48,32 @@ import java.util.*
import javax.annotation.concurrent.ThreadSafe
import kotlin.collections.HashMap

class NetworkMapCacheImpl(networkMapCacheBase: NetworkMapCacheBaseInternal, private val identityService: IdentityService) : NetworkMapCacheBaseInternal by networkMapCacheBase, NetworkMapCacheInternal {
init {
networkMapCacheBase.allNodes.forEach { it.legalIdentitiesAndCerts.forEach { identityService.verifyAndRegisterIdentity(it) } }
networkMapCacheBase.changed.subscribe { mapChange ->
// TODO how should we handle network map removal
if (mapChange is MapChange.Added) {
mapChange.node.legalIdentitiesAndCerts.forEach {
identityService.verifyAndRegisterIdentity(it)
}
}
}
}

override fun getNodeByLegalIdentity(party: AbstractParty): NodeInfo? {
val wellKnownParty = identityService.wellKnownPartyFromAnonymous(party)
return wellKnownParty?.let {
getNodesByLegalIdentityKey(it.owningKey).firstOrNull()
}
}
}
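The two classes are intended to be composed: the persistent base cache only needs the database and node configuration, and identity registration is layered on top by delegation. A sketch, with persistence, config and identityService assumed to exist in the caller:

val networkMapCacheBase = PersistentNetworkMapCache(persistence, config)
val networkMapCache: NetworkMapCacheInternal = NetworkMapCacheImpl(networkMapCacheBase, identityService)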

/**
 * Extremely simple in-memory cache of the network map.
 *
 * @param serviceHub an optional service hub from which we'll take the identity service. We take a service hub rather
 * than the identity service directly, as this avoids problems with service start sequence (network map cache
 * and identity services depend on each other). Should always be provided except for unit test cases.
 */
@ThreadSafe
open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal) : SingletonSerializeAsToken(), NetworkMapCacheInternal {
open class PersistentNetworkMapCache(private val database: CordaPersistence, configuration: NodeConfiguration) : SingletonSerializeAsToken(), NetworkMapCacheBaseInternal {
companion object {
val logger = loggerFor<PersistentNetworkMapCache>()
}
@@ -89,12 +109,12 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
.sortedBy { it.name.toString() }
}

private val nodeInfoSerializer = NodeInfoWatcher(serviceHub.configuration.baseDirectory,
serviceHub.configuration.additionalNodeInfoPollingFrequencyMsec)
private val nodeInfoSerializer = NodeInfoWatcher(configuration.baseDirectory,
configuration.additionalNodeInfoPollingFrequencyMsec)

init {
loadFromFiles()
serviceHub.database.transaction { loadFromDB(session) }
database.transaction { loadFromDB(session) }
}

private fun loadFromFiles() {
@@ -103,7 +123,7 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
}

override fun getPartyInfo(party: Party): PartyInfo? {
val nodes = serviceHub.database.transaction { queryByIdentityKey(session, party.owningKey) }
val nodes = database.transaction { queryByIdentityKey(session, party.owningKey) }
if (nodes.size == 1 && nodes[0].isLegalIdentity(party)) {
return PartyInfo.SingleNode(party, nodes[0].addresses)
}
@@ -118,20 +138,13 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
}

override fun getNodeByLegalName(name: CordaX500Name): NodeInfo? = getNodesByLegalName(name).firstOrNull()
override fun getNodesByLegalName(name: CordaX500Name): List<NodeInfo> = serviceHub.database.transaction { queryByLegalName(session, name) }
override fun getNodesByLegalName(name: CordaX500Name): List<NodeInfo> = database.transaction { queryByLegalName(session, name) }
override fun getNodesByLegalIdentityKey(identityKey: PublicKey): List<NodeInfo> =
serviceHub.database.transaction { queryByIdentityKey(session, identityKey) }
database.transaction { queryByIdentityKey(session, identityKey) }

override fun getNodeByLegalIdentity(party: AbstractParty): NodeInfo? {
val wellKnownParty = serviceHub.identityService.wellKnownPartyFromAnonymous(party)
return wellKnownParty?.let {
getNodesByLegalIdentityKey(it.owningKey).firstOrNull()
}
}
override fun getNodeByAddress(address: NetworkHostAndPort): NodeInfo? = database.transaction { queryByAddress(session, address) }

override fun getNodeByAddress(address: NetworkHostAndPort): NodeInfo? = serviceHub.database.transaction { queryByAddress(session, address) }

override fun getPeerCertificateByLegalName(name: CordaX500Name): PartyAndCertificate? = serviceHub.database.transaction { queryIdentityByLegalName(session, name) }
override fun getPeerCertificateByLegalName(name: CordaX500Name): PartyAndCertificate? = database.transaction { queryIdentityByLegalName(session, name) }

override fun track(): DataFeed<List<NodeInfo>, MapChange> {
synchronized(_changed) {
@@ -183,13 +196,13 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
val previousNode = registeredNodes.put(node.legalIdentities.first().owningKey, node) // TODO hack... we left the first one as special one
if (previousNode == null) {
logger.info("No previous node found")
serviceHub.database.transaction {
database.transaction {
updateInfoDB(node)
changePublisher.onNext(MapChange.Added(node))
}
} else if (previousNode != node) {
logger.info("Previous node was found as: $previousNode")
serviceHub.database.transaction {
database.transaction {
updateInfoDB(node)
changePublisher.onNext(MapChange.Modified(node, previousNode))
}
@@ -204,7 +217,7 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
logger.info("Removing node with info: $node")
synchronized(_changed) {
registeredNodes.remove(node.legalIdentities.first().owningKey)
serviceHub.database.transaction {
database.transaction {
removeInfoDB(session, node)
changePublisher.onNext(MapChange.Removed(node))
}
@@ -239,7 +252,7 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
}

override val allNodes: List<NodeInfo>
get() = serviceHub.database.transaction {
get() = database.transaction {
getAllInfos(session).map { it.toNodeInfo() }
}

@@ -288,8 +301,8 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
private fun updateInfoDB(nodeInfo: NodeInfo) {
// TODO Temporary workaround to force isolated transaction (otherwise it causes race conditions when processing
// network map registration on network map node)
serviceHub.database.dataSource.connection.use {
val session = serviceHub.database.entityManagerFactory.withOptions().connection(it.apply {
database.dataSource.connection.use {
val session = database.entityManagerFactory.withOptions().connection(it.apply {
transactionIsolation = 1
}).openSession()
session.use {
@@ -370,7 +383,7 @@ open class PersistentNetworkMapCache(private val serviceHub: ServiceHubInternal)
}

override fun clearNetworkMapCache() {
serviceHub.database.transaction {
database.transaction {
val result = getAllInfos(session)
for (nodeInfo in result) session.remove(nodeInfo)
}
@@ -93,6 +93,7 @@ class HibernateConfiguration(val schemaService: SchemaService, private val datab
// during schema creation / update.
class NodeDatabaseConnectionProvider : ConnectionProvider {
override fun closeConnection(conn: Connection) {
conn.autoCommit = false
val tx = DatabaseTransactionManager.current()
tx.commit()
tx.close()

@@ -3,6 +3,7 @@ package net.corda.node.services.schema
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.StateAndRef
import net.corda.core.contracts.StateRef
import net.corda.core.internal.VisibleForTesting
import net.corda.core.node.services.Vault
import net.corda.core.schemas.MappedSchema
import net.corda.core.schemas.PersistentStateRef
@@ -17,14 +18,15 @@ import rx.Observable
 * A vault observer that extracts Object Relational Mappings for contract states that support it, and persists them with Hibernate.
 */
// TODO: Manage version evolution of the schemas via additional tooling.
class HibernateObserver(vaultUpdates: Observable<Vault.Update<ContractState>>, val config: HibernateConfiguration) {

class HibernateObserver private constructor(private val config: HibernateConfiguration) {
companion object {
val logger = loggerFor<HibernateObserver>()
}

init {
vaultUpdates.subscribe { persist(it.produced) }
private val log = loggerFor<HibernateObserver>()
@JvmStatic
fun install(vaultUpdates: Observable<Vault.Update<ContractState>>, config: HibernateConfiguration): HibernateObserver {
val observer = HibernateObserver(config)
vaultUpdates.subscribe { observer.persist(it.produced) }
return observer
}
}

private fun persist(produced: Set<StateAndRef<ContractState>>) {
@@ -33,11 +35,12 @@ class HibernateObserver(vaultUpdates: Observable<Vault.Update<ContractState>>, v

private fun persistState(stateAndRef: StateAndRef<ContractState>) {
val state = stateAndRef.state.data
logger.debug { "Asked to persist state ${stateAndRef.ref}" }
log.debug { "Asked to persist state ${stateAndRef.ref}" }
config.schemaService.selectSchemas(state).forEach { persistStateWithSchema(state, stateAndRef.ref, it) }
}

fun persistStateWithSchema(state: ContractState, stateRef: StateRef, schema: MappedSchema) {
@VisibleForTesting
internal fun persistStateWithSchema(state: ContractState, stateRef: StateRef, schema: MappedSchema) {
val sessionFactory = config.sessionFactoryForSchemas(setOf(schema))
val session = sessionFactory.withOptions().
connection(DatabaseTransactionManager.current().connection).
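Like ScheduledActivityObserver, the Hibernate observer is now attached through a factory method instead of subscribing from its constructor. A sketch, assuming vaultService and hibernateConfig are the node's existing services:

val hibernateObserver = HibernateObserver.install(vaultService.rawUpdates, hibernateConfig)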

@@ -2,7 +2,6 @@ package net.corda.node.services.statemachine

import net.corda.core.internal.VisibleForTesting
import com.google.common.primitives.Primitives
import net.corda.core.cordapp.CordappContext
import net.corda.core.flows.*
import net.corda.core.serialization.CordaSerializable
import net.corda.core.serialization.SingletonSerializeAsToken
@@ -34,7 +33,7 @@ data class FlowLogicRefImpl internal constructor(val flowLogicClassName: String,
 */
object FlowLogicRefFactoryImpl : SingletonSerializeAsToken(), FlowLogicRefFactory {
// TODO: Replace with a per app classloader/cordapp provider/cordapp loader - this will do for now
var classloader = javaClass.classLoader
var classloader: ClassLoader = javaClass.classLoader

override fun create(flowClass: Class<out FlowLogic<*>>, vararg args: Any?): FlowLogicRef {
if (!flowClass.isAnnotationPresent(SchedulableFlow::class.java)) {

@@ -16,28 +16,46 @@ class FlowSessionImpl(
internal lateinit var sessionFlow: FlowLogic<*>

@Suspendable
override fun getCounterpartyFlowInfo(): FlowInfo {
return stateMachine.getFlowInfo(counterparty, sessionFlow)
override fun getCounterpartyFlowInfo(maySkipCheckpoint: Boolean): FlowInfo {
return stateMachine.getFlowInfo(counterparty, sessionFlow, maySkipCheckpoint)
}

@Suspendable
override fun <R : Any> sendAndReceive(receiveType: Class<R>, payload: Any): UntrustworthyData<R> {
return stateMachine.sendAndReceive(receiveType, counterparty, payload, sessionFlow)
override fun getCounterpartyFlowInfo() = getCounterpartyFlowInfo(maySkipCheckpoint = false)

@Suspendable
override fun <R : Any> sendAndReceive(
receiveType: Class<R>,
payload: Any,
maySkipCheckpoint: Boolean
): UntrustworthyData<R> {
return stateMachine.sendAndReceive(
receiveType,
counterparty,
payload,
sessionFlow,
retrySend = false,
maySkipCheckpoint = maySkipCheckpoint
)
}

@Suspendable
internal fun <R : Any> sendAndReceiveWithRetry(receiveType: Class<R>, payload: Any): UntrustworthyData<R> {
return stateMachine.sendAndReceive(receiveType, counterparty, payload, sessionFlow, retrySend = true)
override fun <R : Any> sendAndReceive(receiveType: Class<R>, payload: Any) = sendAndReceive(receiveType, payload, maySkipCheckpoint = false)

@Suspendable
override fun <R : Any> receive(receiveType: Class<R>, maySkipCheckpoint: Boolean): UntrustworthyData<R> {
return stateMachine.receive(receiveType, counterparty, sessionFlow, maySkipCheckpoint)
}

@Suspendable
override fun <R : Any> receive(receiveType: Class<R>): UntrustworthyData<R> {
return stateMachine.receive(receiveType, counterparty, sessionFlow)
override fun <R : Any> receive(receiveType: Class<R>) = receive(receiveType, maySkipCheckpoint = false)

@Suspendable
override fun send(payload: Any, maySkipCheckpoint: Boolean) {
return stateMachine.send(counterparty, payload, sessionFlow, maySkipCheckpoint)
}

@Suspendable
override fun send(payload: Any) {
return stateMachine.send(counterparty, payload, sessionFlow)
}
override fun send(payload: Any) = send(payload, maySkipCheckpoint = false)
}
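A sketch of how a flow might use the new maySkipCheckpoint overloads; the session and payloads are placeholders, and skipping the checkpoint trades durability across restarts for speed:

@Suspendable
fun exchange(session: FlowSession) {
    session.send("ping", maySkipCheckpoint = true)      // no checkpoint taken before this send
    val reply = session.receive(String::class.java)     // default overload still checkpoints
    reply.unwrap { check(it == "pong") }
}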

@@ -68,12 +68,10 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
 * is not necessary.
 */
override val logger: Logger = LoggerFactory.getLogger("net.corda.flow.$id")

@Transient private var _resultFuture: OpenFuture<R>? = openFuture()
@Transient private var resultFutureTransient: OpenFuture<R>? = openFuture()
private val _resultFuture get() = resultFutureTransient ?: openFuture<R>().also { resultFutureTransient = it }
/** This future will complete when the call method returns. */
override val resultFuture: CordaFuture<R>
get() = _resultFuture ?: openFuture<R>().also { _resultFuture = it }

override val resultFuture: CordaFuture<R> get() = _resultFuture
// This state IS serialised, as we need it to know what the fiber is waiting for.
internal val openSessions = HashMap<Pair<FlowLogic<*>, Party>, FlowSessionInternal>()
internal var waitingForResponse: WaitingRequest? = null
@@ -115,7 +113,7 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
recordDuration(startTime)
// This is to prevent actionOnEnd being called twice if it throws an exception
actionOnEnd(Try.Success(result), false)
_resultFuture?.set(result)
_resultFuture.set(result)
logic.progressTracker?.currentStep = ProgressTracker.DONE
logger.debug { "Flow finished with result ${result.toString().abbreviate(300)}" }
}
@@ -128,7 +126,7 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,

private fun processException(exception: Throwable, propagated: Boolean) {
actionOnEnd(Try.Failure(exception), propagated)
_resultFuture?.setException(exception)
_resultFuture.setException(exception)
logic.progressTracker?.endWithError(exception)
}

@@ -165,7 +163,7 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
}

@Suspendable
override fun getFlowInfo(otherParty: Party, sessionFlow: FlowLogic<*>): FlowInfo {
override fun getFlowInfo(otherParty: Party, sessionFlow: FlowLogic<*>, maySkipCheckpoint: Boolean): FlowInfo {
val state = getConfirmedSession(otherParty, sessionFlow).state as FlowSessionState.Initiated
return state.context
}
@@ -175,7 +173,8 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
otherParty: Party,
payload: Any,
sessionFlow: FlowLogic<*>,
retrySend: Boolean): UntrustworthyData<T> {
retrySend: Boolean,
maySkipCheckpoint: Boolean): UntrustworthyData<T> {
requireNonPrimitive(receiveType)
logger.debug { "sendAndReceive(${receiveType.name}, $otherParty, ${payload.toString().abbreviate(300)}) ..." }
val session = getConfirmedSessionIfPresent(otherParty, sessionFlow)
@@ -194,7 +193,8 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
@Suspendable
override fun <T : Any> receive(receiveType: Class<T>,
otherParty: Party,
sessionFlow: FlowLogic<*>): UntrustworthyData<T> {
sessionFlow: FlowLogic<*>,
maySkipCheckpoint: Boolean): UntrustworthyData<T> {
requireNonPrimitive(receiveType)
logger.debug { "receive(${receiveType.name}, $otherParty) ..." }
val session = getConfirmedSession(otherParty, sessionFlow)
@@ -210,7 +210,7 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
}

@Suspendable
override fun send(otherParty: Party, payload: Any, sessionFlow: FlowLogic<*>) {
override fun send(otherParty: Party, payload: Any, sessionFlow: FlowLogic<*>, maySkipCheckpoint: Boolean) {
logger.debug { "send($otherParty, ${payload.toString().abbreviate(300)})" }
val session = getConfirmedSessionIfPresent(otherParty, sessionFlow)
if (session == null) {
@@ -222,7 +222,7 @@ class FlowStateMachineImpl<R>(override val id: StateMachineRunId,
}

@Suspendable
override fun waitForLedgerCommit(hash: SecureHash, sessionFlow: FlowLogic<*>): SignedTransaction {
override fun waitForLedgerCommit(hash: SecureHash, sessionFlow: FlowLogic<*>, maySkipCheckpoint: Boolean): SignedTransaction {
logger.debug { "waitForLedgerCommit($hash) ..." }
suspend(WaitForLedgerCommit(hash, sessionFlow.stateMachine as FlowStateMachineImpl<*>))
val stx = serviceHub.validatedTransactions.getTransaction(hash)
@@ -1,68 +1,24 @@
package net.corda.node.services.statemachine

import co.paralleluniverse.fibers.Fiber
import co.paralleluniverse.fibers.FiberExecutorScheduler
import co.paralleluniverse.fibers.Suspendable
import co.paralleluniverse.fibers.instrument.SuspendableHelper
import co.paralleluniverse.strands.Strand
import com.codahale.metrics.Gauge
import com.esotericsoftware.kryo.KryoException
import com.google.common.collect.HashMultimap
import com.google.common.util.concurrent.MoreExecutors
import net.corda.core.CordaException
import net.corda.core.concurrent.CordaFuture
import net.corda.core.crypto.SecureHash
import net.corda.core.crypto.random63BitValue
import net.corda.core.flows.*
import net.corda.core.flows.FlowInitiator
import net.corda.core.flows.FlowLogic
import net.corda.core.identity.Party
import net.corda.core.internal.*
import net.corda.core.internal.FlowStateMachine
import net.corda.core.messaging.DataFeed
import net.corda.core.serialization.SerializationDefaults.CHECKPOINT_CONTEXT
import net.corda.core.serialization.SerializationDefaults.SERIALIZATION_FACTORY
import net.corda.core.serialization.SerializedBytes
import net.corda.core.serialization.deserialize
import net.corda.core.serialization.serialize
import net.corda.core.utilities.Try
import net.corda.core.utilities.debug
import net.corda.core.utilities.loggerFor
import net.corda.core.utilities.trace
import net.corda.node.internal.InitiatedFlowFactory
import net.corda.node.services.api.Checkpoint
import net.corda.node.services.api.CheckpointStorage
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.messaging.ReceivedMessage
import net.corda.node.services.messaging.TopicSession
import net.corda.node.utilities.AffinityExecutor
import net.corda.node.utilities.CordaPersistence
import net.corda.node.utilities.bufferUntilDatabaseCommit
import net.corda.node.utilities.wrapWithDatabaseTransaction
import net.corda.nodeapi.internal.serialization.SerializeAsTokenContextImpl
import net.corda.nodeapi.internal.serialization.withTokenContext
import org.apache.activemq.artemis.utils.ReusableLatch
import org.slf4j.Logger
import rx.Observable
import rx.subjects.PublishSubject
import java.io.NotSerializableException
import java.util.*
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit.SECONDS
import javax.annotation.concurrent.ThreadSafe
import kotlin.collections.ArrayList

/**
 * A StateMachineManager is responsible for coordination and persistence of multiple [FlowStateMachineImpl] objects.
 * A StateMachineManager is responsible for coordination and persistence of multiple [FlowStateMachine] objects.
 * Each such object represents an instantiation of a (two-party) flow that has reached a particular point.
 *
 * An implementation of this class will persist state machines to long term storage so they can survive process restarts
 * and, if run with a single-threaded executor, will ensure no two state machines run concurrently with each other
 * (bad for performance, good for programmer mental health!).
 * An implementation of this interface will persist state machines to long term storage so they can survive process
 * restarts and, if run with a single-threaded executor, will ensure no two state machines run concurrently with each
 * other (bad for performance, good for programmer mental health!).
 *
 * A "state machine" is a class with a single call method. The call method and any others it invokes are rewritten by
 * a bytecode rewriting engine called Quasar, to ensure the code can be suspended and resumed at any point.
 *
 * The SMM will always invoke the flow fibers on the given [AffinityExecutor], regardless of which thread actually
 * starts them via [add].
 * A flow is a class with a single call method. The call method and any others it invokes are rewritten by a bytecode
 * rewriting engine called Quasar, to ensure the code can be suspended and resumed at any point.
 *
 * TODO: Consider the issue of continuation identity more deeply: is it a safe assumption that a serialised
 * continuation is always unique?
@@ -72,588 +28,51 @@ import kotlin.collections.ArrayList
 * TODO: Ability to control checkpointing explicitly, for cases where you know replaying a message can't hurt
 * TODO: Don't store all active flows in memory, load from the database on demand.
 */
@ThreadSafe
class StateMachineManager(val serviceHub: ServiceHubInternal,
val checkpointStorage: CheckpointStorage,
val executor: AffinityExecutor,
val database: CordaPersistence,
private val unfinishedFibers: ReusableLatch = ReusableLatch(),
private val classloader: ClassLoader = javaClass.classLoader) {
interface StateMachineManager {
/**
 * Starts the state machine manager, loading and starting the state machines in storage.
 */
fun start(tokenizableServices: List<Any>)
/**
 * Stops the state machine manager gracefully, waiting until all but [allowedUnsuspendedFiberCount] flows reach the
 * next checkpoint.
 */
fun stop(allowedUnsuspendedFiberCount: Int)

inner class FiberScheduler : FiberExecutorScheduler("Same thread scheduler", executor)

companion object {
private val logger = loggerFor<StateMachineManager>()
internal val sessionTopic = TopicSession("platform.session")

init {
Fiber.setDefaultUncaughtExceptionHandler { fiber, throwable ->
(fiber as FlowStateMachineImpl<*>).logger.warn("Caught exception from flow", throwable)
}
}
}
/**
 * Starts a new flow.
 *
 * @param flowLogic The flow's code.
 * @param flowInitiator The initiator of the flow.
 */
fun <A> startFlow(flowLogic: FlowLogic<A>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): CordaFuture<FlowStateMachine<A>>

/**
 * Represents an addition/removal of a state machine.
 */
sealed class Change {
abstract val logic: FlowLogic<*>

data class Add(override val logic: FlowLogic<*>) : Change()
data class Removed(override val logic: FlowLogic<*>, val result: Try<*>) : Change()
}

// A list of all the state machines being managed by this class. We expose snapshots of it via the stateMachines
// property.
private class InnerState {
var started = false
val stateMachines = LinkedHashMap<FlowStateMachineImpl<*>, Checkpoint>()
val changesPublisher = PublishSubject.create<Change>()!!
val fibersWaitingForLedgerCommit = HashMultimap.create<SecureHash, FlowStateMachineImpl<*>>()!!
/**
 * Returns the list of live state machines and a stream of subsequent additions/removals of them.
 */
fun track(): DataFeed<List<FlowLogic<*>>, Change>

fun notifyChangeObservers(change: Change) {
changesPublisher.bufferUntilDatabaseCommit().onNext(change)
}
}
/**
 * The stream of additions/removals of flows.
 */
val changes: Observable<Change>

private val scheduler = FiberScheduler()
private val mutex = ThreadBox(InnerState())
// This thread (only enabled in dev mode) deserialises checkpoints in the background to shake out bugs in checkpoint restore.
private val checkpointCheckerThread = if (serviceHub.configuration.devMode) Executors.newSingleThreadExecutor() else null

@Volatile private var unrestorableCheckpoints = false

// True if we're shutting down, so don't resume anything.
@Volatile private var stopping = false
// How many Fibers are running and not suspended. If zero and stopping is true, then we are halted.
private val liveFibers = ReusableLatch()

// Monitoring support.
private val metrics = serviceHub.monitoringService.metrics

init {
metrics.register("Flows.InFlight", Gauge<Int> { mutex.content.stateMachines.size })
}

private val checkpointingMeter = metrics.meter("Flows.Checkpointing Rate")
private val totalStartedFlows = metrics.counter("Flows.Started")
private val totalFinishedFlows = metrics.counter("Flows.Finished")

private val openSessions = ConcurrentHashMap<Long, FlowSessionInternal>()
private val recentlyClosedSessions = ConcurrentHashMap<Long, Party>()

internal val tokenizableServices = ArrayList<Any>()
// Context for tokenized services in checkpoints
private val serializationContext by lazy {
SerializeAsTokenContextImpl(tokenizableServices, SERIALIZATION_FACTORY, CHECKPOINT_CONTEXT, serviceHub)
}

fun findServices(predicate: (Any) -> Boolean) = tokenizableServices.filter(predicate)

/** Returns a list of all state machines executing the given flow logic at the top level (subflows do not count) */
fun <P : FlowLogic<T>, T> findStateMachines(flowClass: Class<P>): List<Pair<P, CordaFuture<T>>> {
return mutex.locked {
stateMachines.keys.mapNotNull {
flowClass.castIfPossible(it.logic)?.let { it to uncheckedCast<FlowStateMachine<*>, FlowStateMachineImpl<T>>(it.stateMachine).resultFuture }
}
}
}
/**
 * Returns the currently live flows of type [flowClass], and their corresponding result future.
 */
fun <A : FlowLogic<*>> findStateMachines(flowClass: Class<A>): List<Pair<A, CordaFuture<*>>>

/**
 * Returns all currently live flows.
 */
val allStateMachines: List<FlowLogic<*>>
get() = mutex.locked { stateMachines.keys.map { it.logic } }

/**
 * An observable that emits triples of the changing flow, the type of change, and a process-specific ID number
 * which may change across restarts.
 *
 * We use assignment here so that multiple subscribers share the same wrapped Observable.
 */
val changes: Observable<Change> = mutex.content.changesPublisher.wrapWithDatabaseTransaction()

fun start() {
checkQuasarJavaAgentPresence()
restoreFibersFromCheckpoints()
listenToLedgerTransactions()
serviceHub.networkMapCache.nodeReady.then { executor.execute(this::resumeRestoredFibers) }
}

private fun checkQuasarJavaAgentPresence() {
check(SuspendableHelper.isJavaAgentActive(), {
"""Missing the '-javaagent' JVM argument. Make sure you run the tests with the Quasar java agent attached to your JVM.
#See https://docs.corda.net/troubleshooting.html - 'Fiber classes not instrumented' for more details.""".trimMargin("#")
})
}

private fun listenToLedgerTransactions() {
// Observe the stream of committed, validated transactions and resume fibers that are waiting for them.
serviceHub.validatedTransactions.updates.subscribe { stx ->
val hash = stx.id
val fibers: Set<FlowStateMachineImpl<*>> = mutex.locked { fibersWaitingForLedgerCommit.removeAll(hash) }
if (fibers.isNotEmpty()) {
executor.executeASAP {
for (fiber in fibers) {
fiber.logger.trace { "Transaction $hash has committed to the ledger, resuming" }
fiber.waitingForResponse = null
resumeFiber(fiber)
}
}
}
}
}

private fun decrementLiveFibers() {
liveFibers.countDown()
}

private fun incrementLiveFibers() {
liveFibers.countUp()
}

/**
 * Start the shutdown process, bringing the [StateMachineManager] to a controlled stop. When this method returns,
 * all Fibers have been suspended and checkpointed, or have completed.
 *
 * @param allowedUnsuspendedFiberCount Optional parameter is used in some tests.
 */
fun stop(allowedUnsuspendedFiberCount: Int = 0) {
require(allowedUnsuspendedFiberCount >= 0)
mutex.locked {
if (stopping) throw IllegalStateException("Already stopping!")
stopping = true
}
// Account for any expected Fibers in a test scenario.
liveFibers.countDown(allowedUnsuspendedFiberCount)
liveFibers.await()
checkpointCheckerThread?.let { MoreExecutors.shutdownAndAwaitTermination(it, 5, SECONDS) }
check(!unrestorableCheckpoints) { "Unrestorable checkpoints where created, please check the logs for details." }
}

/**
 * Atomic get snapshot + subscribe. This is needed so we don't miss updates between subscriptions to [changes] and
 * calls to [allStateMachines]
 */
fun track(): DataFeed<List<FlowStateMachineImpl<*>>, Change> {
return mutex.locked {
DataFeed(stateMachines.keys.toList(), changesPublisher.bufferUntilSubscribed().wrapWithDatabaseTransaction())
}
}

private fun restoreFibersFromCheckpoints() {
mutex.locked {
checkpointStorage.forEach { checkpoint ->
// If a flow is added before start() then don't attempt to restore it
if (!stateMachines.containsValue(checkpoint)) {
deserializeFiber(checkpoint, logger)?.let {
initFiber(it)
stateMachines[it] = checkpoint
}
}
true
}
}
}

private fun resumeRestoredFibers() {
mutex.locked {
started = true
stateMachines.keys.forEach { resumeRestoredFiber(it) }
}
serviceHub.networkService.addMessageHandler(sessionTopic) { message, _ ->
executor.checkOnThread()
onSessionMessage(message)
}
}

private fun resumeRestoredFiber(fiber: FlowStateMachineImpl<*>) {
fiber.openSessions.values.forEach { openSessions[it.ourSessionId] = it }
val waitingForResponse = fiber.waitingForResponse
if (waitingForResponse != null) {
if (waitingForResponse is WaitForLedgerCommit) {
val stx = database.transaction {
serviceHub.validatedTransactions.getTransaction(waitingForResponse.hash)
}
if (stx != null) {
fiber.logger.info("Resuming fiber as tx ${waitingForResponse.hash} has committed")
fiber.waitingForResponse = null
resumeFiber(fiber)
} else {
fiber.logger.info("Restored, pending on ledger commit of ${waitingForResponse.hash}")
mutex.locked { fibersWaitingForLedgerCommit.put(waitingForResponse.hash, fiber) }
}
} else {
fiber.logger.info("Restored, pending on receive")
}
} else {
resumeFiber(fiber)
}
}

private fun onSessionMessage(message: ReceivedMessage) {
val sessionMessage = try {
message.data.deserialize<SessionMessage>()
} catch (ex: Exception) {
logger.error("Received corrupt SessionMessage data from ${message.peer}")
return
}
val sender = serviceHub.networkMapCache.getPeerByLegalName(message.peer)
if (sender != null) {
when (sessionMessage) {
is ExistingSessionMessage -> onExistingSessionMessage(sessionMessage, sender)
is SessionInit -> onSessionInit(sessionMessage, message, sender)
}
} else {
logger.error("Unknown peer ${message.peer} in $sessionMessage")
}
}

private fun onExistingSessionMessage(message: ExistingSessionMessage, sender: Party) {
val session = openSessions[message.recipientSessionId]
if (session != null) {
session.fiber.logger.trace { "Received $message on $session from $sender" }
if (session.retryable) {
if (message is SessionConfirm && session.state is FlowSessionState.Initiated) {
session.fiber.logger.trace { "Ignoring duplicate confirmation for session ${session.ourSessionId} – session is idempotent" }
return
}
if (message !is SessionConfirm) {
serviceHub.networkService.cancelRedelivery(session.ourSessionId)
}
}
if (message is SessionEnd) {
openSessions.remove(message.recipientSessionId)
}
session.receivedMessages += ReceivedSessionMessage(sender, message)
if (resumeOnMessage(message, session)) {
// It's important that we reset here and not after the fiber's resumed, in case we receive another message
// before then.
session.fiber.waitingForResponse = null
updateCheckpoint(session.fiber)
session.fiber.logger.trace { "Resuming due to $message" }
resumeFiber(session.fiber)
}
} else {
val peerParty = recentlyClosedSessions.remove(message.recipientSessionId)
if (peerParty != null) {
if (message is SessionConfirm) {
logger.trace { "Received session confirmation but associated fiber has already terminated, so sending session end" }
sendSessionMessage(peerParty, NormalSessionEnd(message.initiatedSessionId))
} else {
logger.trace { "Ignoring session end message for already closed session: $message" }
|
||||
}
|
||||
} else {
|
||||
logger.warn("Received a session message for unknown session: $message, from $sender")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// We resume the fiber if it has received the response it was waiting for, or if it is waiting for a ledger
// commit but a counterparty flow has ended with an error (in which case our flow also has to end)
|
||||
private fun resumeOnMessage(message: ExistingSessionMessage, session: FlowSessionInternal): Boolean {
|
||||
val waitingForResponse = session.fiber.waitingForResponse
|
||||
return waitingForResponse?.shouldResume(message, session) ?: false
|
||||
}
|
||||
|
||||
private fun onSessionInit(sessionInit: SessionInit, receivedMessage: ReceivedMessage, sender: Party) {
|
||||
logger.trace { "Received $sessionInit from $sender" }
|
||||
val senderSessionId = sessionInit.initiatorSessionId
|
||||
|
||||
fun sendSessionReject(message: String) = sendSessionMessage(sender, SessionReject(senderSessionId, message))
|
||||
|
||||
val (session, initiatedFlowFactory) = try {
|
||||
val initiatedFlowFactory = getInitiatedFlowFactory(sessionInit)
|
||||
val flowSession = FlowSessionImpl(sender)
|
||||
val flow = initiatedFlowFactory.createFlow(flowSession)
|
||||
val senderFlowVersion = when (initiatedFlowFactory) {
|
||||
is InitiatedFlowFactory.Core -> receivedMessage.platformVersion // The flow version for the core flows is the platform version
|
||||
is InitiatedFlowFactory.CorDapp -> sessionInit.flowVersion
|
||||
}
|
||||
val session = FlowSessionInternal(
|
||||
flow,
|
||||
flowSession,
|
||||
random63BitValue(),
|
||||
sender,
|
||||
FlowSessionState.Initiated(sender, senderSessionId, FlowInfo(senderFlowVersion, sessionInit.appName)))
|
||||
if (sessionInit.firstPayload != null) {
|
||||
session.receivedMessages += ReceivedSessionMessage(sender, SessionData(session.ourSessionId, sessionInit.firstPayload))
|
||||
}
|
||||
openSessions[session.ourSessionId] = session
|
||||
// TODO Perhaps the session-init will specify which of our multiple identities to use, which we would have to
// double-check is actually ours. However, what if we want to control how our identities get used?
|
||||
val fiber = createFiber(flow, FlowInitiator.Peer(sender))
|
||||
flowSession.sessionFlow = flow
|
||||
flowSession.stateMachine = fiber
|
||||
fiber.openSessions[Pair(flow, sender)] = session
|
||||
updateCheckpoint(fiber)
|
||||
session to initiatedFlowFactory
|
||||
} catch (e: SessionRejectException) {
|
||||
logger.warn("${e.logMessage}: $sessionInit")
|
||||
sendSessionReject(e.rejectMessage)
|
||||
return
|
||||
} catch (e: Exception) {
|
||||
logger.warn("Couldn't start flow session from $sessionInit", e)
|
||||
sendSessionReject("Unable to establish session")
|
||||
return
|
||||
}
|
||||
|
||||
val (ourFlowVersion, appName) = when (initiatedFlowFactory) {
|
||||
// The flow version for the core flows is the platform version
|
||||
is InitiatedFlowFactory.Core -> serviceHub.myInfo.platformVersion to "corda"
|
||||
is InitiatedFlowFactory.CorDapp -> initiatedFlowFactory.flowVersion to initiatedFlowFactory.appName
|
||||
}
|
||||
|
||||
sendSessionMessage(sender, SessionConfirm(senderSessionId, session.ourSessionId, ourFlowVersion, appName), session.fiber)
|
||||
session.fiber.logger.debug { "Initiated by $sender using ${sessionInit.initiatingFlowClass}" }
|
||||
session.fiber.logger.trace { "Initiated from $sessionInit on $session" }
|
||||
resumeFiber(session.fiber)
|
||||
}
|
||||
|
||||
private fun getInitiatedFlowFactory(sessionInit: SessionInit): InitiatedFlowFactory<*> {
|
||||
val initiatingFlowClass = try {
|
||||
Class.forName(sessionInit.initiatingFlowClass, true, classloader).asSubclass(FlowLogic::class.java)
|
||||
} catch (e: ClassNotFoundException) {
|
||||
throw SessionRejectException("Don't know ${sessionInit.initiatingFlowClass}")
|
||||
} catch (e: ClassCastException) {
|
||||
throw SessionRejectException("${sessionInit.initiatingFlowClass} is not a flow")
|
||||
}
|
||||
return serviceHub.getFlowFactory(initiatingFlowClass) ?:
|
||||
throw SessionRejectException("$initiatingFlowClass is not registered")
|
||||
}
|
||||
|
||||
private fun serializeFiber(fiber: FlowStateMachineImpl<*>): SerializedBytes<FlowStateMachineImpl<*>> {
|
||||
return fiber.serialize(context = CHECKPOINT_CONTEXT.withTokenContext(serializationContext))
|
||||
}
|
||||
|
||||
private fun deserializeFiber(checkpoint: Checkpoint, logger: Logger): FlowStateMachineImpl<*>? {
|
||||
return try {
|
||||
checkpoint.serializedFiber.deserialize(context = CHECKPOINT_CONTEXT.withTokenContext(serializationContext)).apply {
|
||||
fromCheckpoint = true
|
||||
}
|
||||
} catch (t: Throwable) {
|
||||
logger.error("Encountered unrestorable checkpoint!", t)
|
||||
null
|
||||
}
|
||||
}
|
||||
|
||||
private fun <T> createFiber(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): FlowStateMachineImpl<T> {
|
||||
val fsm = FlowStateMachineImpl(
|
||||
StateMachineRunId.createRandom(),
|
||||
logic,
|
||||
scheduler,
|
||||
flowInitiator,
|
||||
ourIdentity ?: serviceHub.myInfo.legalIdentities[0])
|
||||
initFiber(fsm)
|
||||
return fsm
|
||||
}
|
||||
|
||||
private fun initFiber(fiber: FlowStateMachineImpl<*>) {
|
||||
verifyFlowLogicIsSuspendable(fiber.logic)
|
||||
fiber.database = database
|
||||
fiber.serviceHub = serviceHub
|
||||
fiber.ourIdentityAndCert = serviceHub.myInfo.legalIdentitiesAndCerts.find { it.party == fiber.ourIdentity }
|
||||
?: throw IllegalStateException("Identity specified by ${fiber.id} (${fiber.ourIdentity}) is not one of ours!")
|
||||
fiber.actionOnSuspend = { ioRequest ->
|
||||
updateCheckpoint(fiber)
|
||||
// We commit the fiber's transaction that was copied across ThreadLocals during suspend.
// This will free up the ThreadLocal so that, on return, the caller can carry on with other transactions.
|
||||
fiber.commitTransaction()
|
||||
processIORequest(ioRequest)
|
||||
decrementLiveFibers()
|
||||
}
|
||||
fiber.actionOnEnd = { result, propagated ->
|
||||
try {
|
||||
mutex.locked {
|
||||
stateMachines.remove(fiber)?.let { checkpointStorage.removeCheckpoint(it) }
|
||||
notifyChangeObservers(Change.Removed(fiber.logic, result))
|
||||
}
|
||||
endAllFiberSessions(fiber, result, propagated)
|
||||
} finally {
|
||||
fiber.commitTransaction()
|
||||
decrementLiveFibers()
|
||||
totalFinishedFlows.inc()
|
||||
unfinishedFibers.countDown()
|
||||
}
|
||||
}
|
||||
mutex.locked {
|
||||
totalStartedFlows.inc()
|
||||
unfinishedFibers.countUp()
|
||||
notifyChangeObservers(Change.Add(fiber.logic))
|
||||
}
|
||||
}
|
||||
|
||||
private fun verifyFlowLogicIsSuspendable(logic: FlowLogic<Any?>) {
|
||||
// Quasar requires (in Java 8) that at least the call method be annotated suspendable. Unfortunately, it's
|
||||
// easy to forget to add this when creating a new flow, so we check here to give the user a better error.
|
||||
//
|
||||
// The Kotlin compiler can sometimes generate a synthetic bridge method from a single call declaration, which
|
||||
// forwards to the void method and then returns Unit. However annotations do not get copied across to this
|
||||
// bridge, so we have to do a more complex scan here.
|
||||
val call = logic.javaClass.methods.first { !it.isSynthetic && it.name == "call" && it.parameterCount == 0 }
|
||||
if (call.getAnnotation(Suspendable::class.java) == null) {
|
||||
throw FlowException("${logic.javaClass.name}.call() is not annotated as @Suspendable. Please fix this.")
|
||||
}
|
||||
}
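
// Illustrative sketch (not part of the original source): a minimal flow whose call() carries the
// @Suspendable annotation that the check above looks for. The class name is hypothetical.
@InitiatingFlow
class ExampleSuspendableFlow : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Without @Suspendable here, verifyFlowLogicIsSuspendable() would reject this flow with a FlowException.
    }
}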
|
||||
|
||||
private fun endAllFiberSessions(fiber: FlowStateMachineImpl<*>, result: Try<*>, propagated: Boolean) {
|
||||
openSessions.values.removeIf { session ->
|
||||
if (session.fiber == fiber) {
|
||||
session.endSession((result as? Try.Failure)?.exception, propagated)
|
||||
true
|
||||
} else {
|
||||
false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun FlowSessionInternal.endSession(exception: Throwable?, propagated: Boolean) {
|
||||
val initiatedState = state as? FlowSessionState.Initiated ?: return
|
||||
val sessionEnd = if (exception == null) {
|
||||
NormalSessionEnd(initiatedState.peerSessionId)
|
||||
} else {
|
||||
val errorResponse = if (exception is FlowException && (!propagated || initiatingParty != null)) {
|
||||
// Only propagate this FlowException if our local flow threw it or if it was propagated to us, and only
// pass it down the invocation chain to the flow that initiated us, not to flows we've started sessions with.
|
||||
exception
|
||||
} else {
|
||||
null
|
||||
}
|
||||
ErrorSessionEnd(initiatedState.peerSessionId, errorResponse)
|
||||
}
|
||||
sendSessionMessage(initiatedState.peerParty, sessionEnd, fiber)
|
||||
recentlyClosedSessions[ourSessionId] = initiatedState.peerParty
|
||||
}
|
||||
|
||||
/**
|
||||
* Kicks off a brand new state machine of the given class.
|
||||
* The state machine will be persisted when it suspends, with automated restart if the StateMachineManager is
|
||||
* restarted with checkpointed state machines in the storage service.
|
||||
*
|
||||
* Note that you must be on the [executor] thread.
|
||||
*/
|
||||
fun <T> add(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): FlowStateMachineImpl<T> {
|
||||
// TODO: Check that logic has @Suspendable on its call method.
|
||||
executor.checkOnThread()
|
||||
val fiber = database.transaction {
|
||||
val fiber = createFiber(logic, flowInitiator, ourIdentity)
|
||||
updateCheckpoint(fiber)
|
||||
fiber
|
||||
}
|
||||
// If we are not started then our checkpoint will be picked up during start
|
||||
mutex.locked {
|
||||
if (started) {
|
||||
resumeFiber(fiber)
|
||||
}
|
||||
}
|
||||
return fiber
|
||||
}
|
||||
|
||||
private fun updateCheckpoint(fiber: FlowStateMachineImpl<*>) {
|
||||
check(fiber.state != Strand.State.RUNNING) { "Fiber cannot be running when checkpointing" }
|
||||
val newCheckpoint = Checkpoint(serializeFiber(fiber))
|
||||
val previousCheckpoint = mutex.locked { stateMachines.put(fiber, newCheckpoint) }
|
||||
if (previousCheckpoint != null) {
|
||||
checkpointStorage.removeCheckpoint(previousCheckpoint)
|
||||
}
|
||||
checkpointStorage.addCheckpoint(newCheckpoint)
|
||||
checkpointingMeter.mark()
|
||||
|
||||
checkpointCheckerThread?.execute {
|
||||
// Immediately check that the checkpoint is valid by deserialising it. The idea is to plug any holes we have
|
||||
// in our testing by failing any test where unrestorable checkpoints are created.
|
||||
if (deserializeFiber(newCheckpoint, fiber.logger) == null) {
|
||||
unrestorableCheckpoints = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun resumeFiber(fiber: FlowStateMachineImpl<*>) {
|
||||
// Avoid race condition when setting stopping to true and then checking liveFibers
|
||||
incrementLiveFibers()
|
||||
if (!stopping) {
|
||||
executor.executeASAP {
|
||||
fiber.resume(scheduler)
|
||||
}
|
||||
} else {
|
||||
fiber.logger.trace("Not resuming as SMM is stopping.")
|
||||
decrementLiveFibers()
|
||||
}
|
||||
}
|
||||
|
||||
private fun processIORequest(ioRequest: FlowIORequest) {
|
||||
executor.checkOnThread()
|
||||
when (ioRequest) {
|
||||
is SendRequest -> processSendRequest(ioRequest)
|
||||
is WaitForLedgerCommit -> processWaitForCommitRequest(ioRequest)
|
||||
is Sleep -> processSleepRequest(ioRequest)
|
||||
}
|
||||
}
|
||||
|
||||
private fun processSendRequest(ioRequest: SendRequest) {
|
||||
val retryId = if (ioRequest.message is SessionInit) {
|
||||
with(ioRequest.session) {
|
||||
openSessions[ourSessionId] = this
|
||||
if (retryable) ourSessionId else null
|
||||
}
|
||||
} else null
|
||||
sendSessionMessage(ioRequest.session.state.sendToParty, ioRequest.message, ioRequest.session.fiber, retryId)
|
||||
if (ioRequest !is ReceiveRequest<*>) {
|
||||
// We sent a message, but don't expect a response, so re-enter the continuation to let it keep going.
|
||||
resumeFiber(ioRequest.session.fiber)
|
||||
}
|
||||
}
|
||||
|
||||
private fun processWaitForCommitRequest(ioRequest: WaitForLedgerCommit) {
|
||||
// Is it already committed?
|
||||
val stx = database.transaction {
|
||||
serviceHub.validatedTransactions.getTransaction(ioRequest.hash)
|
||||
}
|
||||
if (stx != null) {
|
||||
resumeFiber(ioRequest.fiber)
|
||||
} else {
|
||||
// No, then register to wait.
|
||||
//
|
||||
// We assume this code runs on the server thread, which is the only place transactions are committed
|
||||
// currently. When we liberalise our threading somewhat, handling of wait requests will need to be
|
||||
// reworked to make the wait atomic in another way. Otherwise there is a race between checking the
|
||||
// database and updating the waiting list.
|
||||
mutex.locked {
|
||||
fibersWaitingForLedgerCommit[ioRequest.hash] += ioRequest.fiber
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun processSleepRequest(ioRequest: Sleep) {
|
||||
// Resume the fiber now that we have checkpointed, so we can sleep on the Fiber.
|
||||
resumeFiber(ioRequest.fiber)
|
||||
}
|
||||
|
||||
private fun sendSessionMessage(party: Party, message: SessionMessage, fiber: FlowStateMachineImpl<*>? = null, retryId: Long? = null) {
|
||||
val partyInfo = serviceHub.networkMapCache.getPartyInfo(party)
|
||||
?: throw IllegalArgumentException("Don't know about party $party")
|
||||
val address = serviceHub.networkService.getAddressOfParty(partyInfo)
|
||||
val logger = fiber?.logger ?: logger
|
||||
logger.trace { "Sending $message to party $party @ $address" + if (retryId != null) " with retry $retryId" else "" }
|
||||
|
||||
val serialized = try {
|
||||
message.serialize()
|
||||
} catch (e: Exception) {
|
||||
when (e) {
|
||||
// Handling Kryo and AMQP serialization problems. Unfortunately the two exception types do not share much of a common exception interface.
|
||||
is KryoException,
|
||||
is NotSerializableException -> {
|
||||
if (message !is ErrorSessionEnd || message.errorResponse == null) throw e
|
||||
logger.warn("Something in ${message.errorResponse.javaClass.name} is not serialisable. " +
|
||||
"Instead sending back an exception which is serialisable to ensure session end occurs properly.", e)
|
||||
// The subclass may have overridden toString so we use that
|
||||
val exMessage = message.errorResponse.let { if (it.javaClass != FlowException::class.java) it.toString() else it.message }
|
||||
message.copy(errorResponse = FlowException(exMessage)).serialize()
|
||||
}
|
||||
else -> throw e
|
||||
}
|
||||
}
|
||||
|
||||
serviceHub.networkService.apply {
|
||||
send(createMessage(sessionTopic, serialized.bytes), address, retryId = retryId)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
class SessionRejectException(val rejectMessage: String, val logMessage: String) : CordaException(rejectMessage) {
|
||||
constructor(message: String) : this(message, message)
|
||||
}
|
||||
}
|
@@ -0,0 +1,634 @@
|
||||
package net.corda.node.services.statemachine
|
||||
|
||||
import co.paralleluniverse.fibers.Fiber
|
||||
import co.paralleluniverse.fibers.FiberExecutorScheduler
|
||||
import co.paralleluniverse.fibers.Suspendable
|
||||
import co.paralleluniverse.fibers.instrument.SuspendableHelper
|
||||
import co.paralleluniverse.strands.Strand
|
||||
import com.codahale.metrics.Gauge
|
||||
import com.esotericsoftware.kryo.KryoException
|
||||
import com.google.common.collect.HashMultimap
|
||||
import com.google.common.util.concurrent.MoreExecutors
|
||||
import net.corda.core.CordaException
|
||||
import net.corda.core.concurrent.CordaFuture
|
||||
import net.corda.core.crypto.SecureHash
|
||||
import net.corda.core.crypto.random63BitValue
|
||||
import net.corda.core.flows.*
|
||||
import net.corda.core.identity.Party
|
||||
import net.corda.core.internal.*
|
||||
import net.corda.core.internal.concurrent.doneFuture
|
||||
import net.corda.core.messaging.DataFeed
|
||||
import net.corda.core.serialization.SerializationDefaults.CHECKPOINT_CONTEXT
|
||||
import net.corda.core.serialization.SerializationDefaults.SERIALIZATION_FACTORY
|
||||
import net.corda.core.serialization.SerializedBytes
|
||||
import net.corda.core.serialization.deserialize
|
||||
import net.corda.core.serialization.serialize
|
||||
import net.corda.core.utilities.Try
|
||||
import net.corda.core.utilities.debug
|
||||
import net.corda.core.utilities.loggerFor
|
||||
import net.corda.core.utilities.trace
|
||||
import net.corda.node.internal.InitiatedFlowFactory
|
||||
import net.corda.node.services.api.Checkpoint
|
||||
import net.corda.node.services.api.CheckpointStorage
|
||||
import net.corda.node.services.api.ServiceHubInternal
|
||||
import net.corda.node.services.messaging.ReceivedMessage
|
||||
import net.corda.node.services.messaging.TopicSession
|
||||
import net.corda.node.utilities.AffinityExecutor
|
||||
import net.corda.node.utilities.CordaPersistence
|
||||
import net.corda.node.utilities.bufferUntilDatabaseCommit
|
||||
import net.corda.node.utilities.wrapWithDatabaseTransaction
|
||||
import net.corda.nodeapi.internal.serialization.SerializeAsTokenContextImpl
|
||||
import net.corda.nodeapi.internal.serialization.withTokenContext
|
||||
import org.apache.activemq.artemis.utils.ReusableLatch
|
||||
import org.slf4j.Logger
|
||||
import rx.Observable
|
||||
import rx.subjects.PublishSubject
|
||||
import java.io.NotSerializableException
|
||||
import java.util.*
|
||||
import java.util.concurrent.ConcurrentHashMap
|
||||
import java.util.concurrent.Executors
|
||||
import java.util.concurrent.TimeUnit.SECONDS
|
||||
import javax.annotation.concurrent.ThreadSafe
|
||||
|
||||
/**
|
||||
* The StateMachineManagerImpl will always invoke the flow fibers on the given [AffinityExecutor], regardless of which
|
||||
* thread actually starts them via [startFlow].
|
||||
*/
|
||||
@ThreadSafe
|
||||
class StateMachineManagerImpl(
|
||||
val serviceHub: ServiceHubInternal,
|
||||
val checkpointStorage: CheckpointStorage,
|
||||
val executor: AffinityExecutor,
|
||||
val database: CordaPersistence,
|
||||
private val unfinishedFibers: ReusableLatch = ReusableLatch(),
|
||||
private val classloader: ClassLoader = StateMachineManagerImpl::class.java.classLoader
|
||||
) : StateMachineManager {
|
||||
inner class FiberScheduler : FiberExecutorScheduler("Same thread scheduler", executor)
|
||||
|
||||
companion object {
|
||||
private val logger = loggerFor<StateMachineManagerImpl>()
|
||||
internal val sessionTopic = TopicSession("platform.session")
|
||||
|
||||
init {
|
||||
Fiber.setDefaultUncaughtExceptionHandler { fiber, throwable ->
|
||||
(fiber as FlowStateMachineImpl<*>).logger.warn("Caught exception from flow", throwable)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// A list of all the state machines being managed by this class. We expose snapshots of it via the stateMachines
|
||||
// property.
|
||||
private class InnerState {
|
||||
var started = false
|
||||
val stateMachines = LinkedHashMap<FlowStateMachineImpl<*>, Checkpoint>()
|
||||
val changesPublisher = PublishSubject.create<StateMachineManager.Change>()!!
|
||||
val fibersWaitingForLedgerCommit = HashMultimap.create<SecureHash, FlowStateMachineImpl<*>>()!!
|
||||
|
||||
fun notifyChangeObservers(change: StateMachineManager.Change) {
|
||||
changesPublisher.bufferUntilDatabaseCommit().onNext(change)
|
||||
}
|
||||
}
|
||||
|
||||
private val scheduler = FiberScheduler()
|
||||
private val mutex = ThreadBox(InnerState())
|
||||
// This thread (only enabled in dev mode) deserialises checkpoints in the background to shake out bugs in checkpoint restore.
|
||||
private val checkpointCheckerThread = if (serviceHub.configuration.devMode) Executors.newSingleThreadExecutor() else null
|
||||
|
||||
@Volatile private var unrestorableCheckpoints = false
|
||||
|
||||
// True if we're shutting down, so don't resume anything.
|
||||
@Volatile private var stopping = false
|
||||
// How many Fibers are running and not suspended. If zero and stopping is true, then we are halted.
|
||||
private val liveFibers = ReusableLatch()
|
||||
|
||||
// Monitoring support.
|
||||
private val metrics = serviceHub.monitoringService.metrics
|
||||
|
||||
init {
|
||||
metrics.register("Flows.InFlight", Gauge<Int> { mutex.content.stateMachines.size })
|
||||
}
|
||||
|
||||
private val checkpointingMeter = metrics.meter("Flows.Checkpointing Rate")
|
||||
private val totalStartedFlows = metrics.counter("Flows.Started")
|
||||
private val totalFinishedFlows = metrics.counter("Flows.Finished")
|
||||
|
||||
private val openSessions = ConcurrentHashMap<Long, FlowSessionInternal>()
|
||||
private val recentlyClosedSessions = ConcurrentHashMap<Long, Party>()
|
||||
|
||||
// Context for tokenized services in checkpoints
|
||||
private lateinit var tokenizableServices: List<Any>
|
||||
private val serializationContext by lazy {
|
||||
SerializeAsTokenContextImpl(tokenizableServices, SERIALIZATION_FACTORY, CHECKPOINT_CONTEXT, serviceHub)
|
||||
}
|
||||
|
||||
/** Returns a list of all state machines executing the given flow logic at the top level (subflows do not count) */
|
||||
override fun <A : FlowLogic<*>> findStateMachines(flowClass: Class<A>): List<Pair<A, CordaFuture<*>>> {
|
||||
return mutex.locked {
|
||||
stateMachines.keys.mapNotNull {
|
||||
flowClass.castIfPossible(it.logic)?.let { it to uncheckedCast<FlowStateMachine<*>, FlowStateMachineImpl<*>>(it.stateMachine).resultFuture }
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
override val allStateMachines: List<FlowLogic<*>>
|
||||
get() = mutex.locked { stateMachines.keys.map { it.logic } }
|
||||
|
||||
/**
|
||||
* An observable that emits triples of the changing flow, the type of change, and a process-specific ID number
|
||||
* which may change across restarts.
|
||||
*
|
||||
* We use assignment here so that multiple subscribers share the same wrapped Observable.
|
||||
*/
|
||||
override val changes: Observable<StateMachineManager.Change> = mutex.content.changesPublisher.wrapWithDatabaseTransaction()
|
||||
|
||||
override fun start(tokenizableServices: List<Any>) {
|
||||
this.tokenizableServices = tokenizableServices
|
||||
checkQuasarJavaAgentPresence()
|
||||
restoreFibersFromCheckpoints()
|
||||
listenToLedgerTransactions()
|
||||
serviceHub.networkMapCache.nodeReady.then { executor.execute(this::resumeRestoredFibers) }
|
||||
}
|
||||
|
||||
private fun checkQuasarJavaAgentPresence() {
|
||||
check(SuspendableHelper.isJavaAgentActive(), {
|
||||
"""Missing the '-javaagent' JVM argument. Make sure you run the tests with the Quasar java agent attached to your JVM.
|
||||
#See https://docs.corda.net/troubleshooting.html - 'Fiber classes not instrumented' for more details.""".trimMargin("#")
|
||||
})
|
||||
}
|
||||
|
||||
private fun listenToLedgerTransactions() {
|
||||
// Observe the stream of committed, validated transactions and resume fibers that are waiting for them.
|
||||
serviceHub.validatedTransactions.updates.subscribe { stx ->
|
||||
val hash = stx.id
|
||||
val fibers: Set<FlowStateMachineImpl<*>> = mutex.locked { fibersWaitingForLedgerCommit.removeAll(hash) }
|
||||
if (fibers.isNotEmpty()) {
|
||||
executor.executeASAP {
|
||||
for (fiber in fibers) {
|
||||
fiber.logger.trace { "Transaction $hash has committed to the ledger, resuming" }
|
||||
fiber.waitingForResponse = null
|
||||
resumeFiber(fiber)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun decrementLiveFibers() {
|
||||
liveFibers.countDown()
|
||||
}
|
||||
|
||||
private fun incrementLiveFibers() {
|
||||
liveFibers.countUp()
|
||||
}
|
||||
|
||||
/**
|
||||
* Start the shutdown process, bringing the [StateMachineManagerImpl] to a controlled stop. When this method returns,
|
||||
* all Fibers have been suspended and checkpointed, or have completed.
|
||||
*
|
||||
* @param allowedUnsuspendedFiberCount Optional parameter is used in some tests.
|
||||
*/
|
||||
override fun stop(allowedUnsuspendedFiberCount: Int) {
|
||||
require(allowedUnsuspendedFiberCount >= 0)
|
||||
mutex.locked {
|
||||
if (stopping) throw IllegalStateException("Already stopping!")
|
||||
stopping = true
|
||||
}
|
||||
// Account for any expected Fibers in a test scenario.
|
||||
liveFibers.countDown(allowedUnsuspendedFiberCount)
|
||||
liveFibers.await()
|
||||
checkpointCheckerThread?.let { MoreExecutors.shutdownAndAwaitTermination(it, 5, SECONDS) }
|
||||
check(!unrestorableCheckpoints) { "Unrestorable checkpoints were created, please check the logs for details." }
|
||||
}
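
// Hedged usage sketch (hypothetical caller, not from the original diff): bringing the manager to a
// controlled stop. Outside of tests no unsuspended fibers are expected, so zero is passed.
fun shutDownStateMachines(smm: StateMachineManagerImpl) {
    smm.stop(allowedUnsuspendedFiberCount = 0)
}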
|
||||
|
||||
/**
|
||||
* Atomic get snapshot + subscribe. This is needed so we don't miss updates between subscriptions to [changes] and
|
||||
* calls to [allStateMachines]
|
||||
*/
|
||||
override fun track(): DataFeed<List<FlowLogic<*>>, StateMachineManager.Change> {
|
||||
return mutex.locked {
|
||||
DataFeed(stateMachines.keys.map { it.logic }, changesPublisher.bufferUntilSubscribed().wrapWithDatabaseTransaction())
|
||||
}
|
||||
}
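
// Hedged usage sketch (hypothetical caller): consuming the atomic snapshot-plus-updates feed returned
// by track() so that no Change is missed between taking the snapshot and subscribing.
fun logRunningFlows(smm: StateMachineManagerImpl) {
    val (snapshot, updates) = smm.track()
    snapshot.forEach { logic -> println("Running flow: ${logic.javaClass.name}") }
    updates.subscribe { change -> println("State machine change: $change") }
}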
|
||||
|
||||
private fun restoreFibersFromCheckpoints() {
|
||||
mutex.locked {
|
||||
checkpointStorage.forEach { checkpoint ->
|
||||
// If a flow is added before start() then don't attempt to restore it
|
||||
if (!stateMachines.containsValue(checkpoint)) {
|
||||
deserializeFiber(checkpoint, logger)?.let {
|
||||
initFiber(it)
|
||||
stateMachines[it] = checkpoint
|
||||
}
|
||||
}
|
||||
true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun resumeRestoredFibers() {
|
||||
mutex.locked {
|
||||
started = true
|
||||
stateMachines.keys.forEach { resumeRestoredFiber(it) }
|
||||
}
|
||||
serviceHub.networkService.addMessageHandler(sessionTopic) { message, _ ->
|
||||
executor.checkOnThread()
|
||||
onSessionMessage(message)
|
||||
}
|
||||
}
|
||||
|
||||
private fun resumeRestoredFiber(fiber: FlowStateMachineImpl<*>) {
|
||||
fiber.openSessions.values.forEach { openSessions[it.ourSessionId] = it }
|
||||
val waitingForResponse = fiber.waitingForResponse
|
||||
if (waitingForResponse != null) {
|
||||
if (waitingForResponse is WaitForLedgerCommit) {
|
||||
val stx = database.transaction {
|
||||
serviceHub.validatedTransactions.getTransaction(waitingForResponse.hash)
|
||||
}
|
||||
if (stx != null) {
|
||||
fiber.logger.info("Resuming fiber as tx ${waitingForResponse.hash} has committed")
|
||||
fiber.waitingForResponse = null
|
||||
resumeFiber(fiber)
|
||||
} else {
|
||||
fiber.logger.info("Restored, pending on ledger commit of ${waitingForResponse.hash}")
|
||||
mutex.locked { fibersWaitingForLedgerCommit.put(waitingForResponse.hash, fiber) }
|
||||
}
|
||||
} else {
|
||||
fiber.logger.info("Restored, pending on receive")
|
||||
}
|
||||
} else {
|
||||
resumeFiber(fiber)
|
||||
}
|
||||
}
|
||||
|
||||
private fun onSessionMessage(message: ReceivedMessage) {
|
||||
val sessionMessage = try {
|
||||
message.data.deserialize<SessionMessage>()
|
||||
} catch (ex: Exception) {
|
||||
logger.error("Received corrupt SessionMessage data from ${message.peer}")
|
||||
return
|
||||
}
|
||||
val sender = serviceHub.networkMapCache.getPeerByLegalName(message.peer)
|
||||
if (sender != null) {
|
||||
when (sessionMessage) {
|
||||
is ExistingSessionMessage -> onExistingSessionMessage(sessionMessage, sender)
|
||||
is SessionInit -> onSessionInit(sessionMessage, message, sender)
|
||||
}
|
||||
} else {
|
||||
logger.error("Unknown peer ${message.peer} in $sessionMessage")
|
||||
}
|
||||
}
|
||||
|
||||
private fun onExistingSessionMessage(message: ExistingSessionMessage, sender: Party) {
|
||||
val session = openSessions[message.recipientSessionId]
|
||||
if (session != null) {
|
||||
session.fiber.logger.trace { "Received $message on $session from $sender" }
|
||||
if (session.retryable) {
|
||||
if (message is SessionConfirm && session.state is FlowSessionState.Initiated) {
|
||||
session.fiber.logger.trace { "Ignoring duplicate confirmation for session ${session.ourSessionId} – session is idempotent" }
|
||||
return
|
||||
}
|
||||
if (message !is SessionConfirm) {
|
||||
serviceHub.networkService.cancelRedelivery(session.ourSessionId)
|
||||
}
|
||||
}
|
||||
if (message is SessionEnd) {
|
||||
openSessions.remove(message.recipientSessionId)
|
||||
}
|
||||
session.receivedMessages += ReceivedSessionMessage(sender, message)
|
||||
if (resumeOnMessage(message, session)) {
|
||||
// It's important that we reset here and not after the fiber's resumed, in case we receive another message
|
||||
// before then.
|
||||
session.fiber.waitingForResponse = null
|
||||
updateCheckpoint(session.fiber)
|
||||
session.fiber.logger.trace { "Resuming due to $message" }
|
||||
resumeFiber(session.fiber)
|
||||
}
|
||||
} else {
|
||||
val peerParty = recentlyClosedSessions.remove(message.recipientSessionId)
|
||||
if (peerParty != null) {
|
||||
if (message is SessionConfirm) {
|
||||
logger.trace { "Received session confirmation but associated fiber has already terminated, so sending session end" }
|
||||
sendSessionMessage(peerParty, NormalSessionEnd(message.initiatedSessionId))
|
||||
} else {
|
||||
logger.trace { "Ignoring session end message for already closed session: $message" }
|
||||
}
|
||||
} else {
|
||||
logger.warn("Received a session message for unknown session: $message, from $sender")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// We resume the fiber if it has received the response it was waiting for, or if it is waiting for a ledger
// commit but a counterparty flow has ended with an error (in which case our flow also has to end)
|
||||
private fun resumeOnMessage(message: ExistingSessionMessage, session: FlowSessionInternal): Boolean {
|
||||
val waitingForResponse = session.fiber.waitingForResponse
|
||||
return waitingForResponse?.shouldResume(message, session) ?: false
|
||||
}
|
||||
|
||||
private fun onSessionInit(sessionInit: SessionInit, receivedMessage: ReceivedMessage, sender: Party) {
|
||||
logger.trace { "Received $sessionInit from $sender" }
|
||||
val senderSessionId = sessionInit.initiatorSessionId
|
||||
|
||||
fun sendSessionReject(message: String) = sendSessionMessage(sender, SessionReject(senderSessionId, message))
|
||||
|
||||
val (session, initiatedFlowFactory) = try {
|
||||
val initiatedFlowFactory = getInitiatedFlowFactory(sessionInit)
|
||||
val flowSession = FlowSessionImpl(sender)
|
||||
val flow = initiatedFlowFactory.createFlow(flowSession)
|
||||
val senderFlowVersion = when (initiatedFlowFactory) {
|
||||
is InitiatedFlowFactory.Core -> receivedMessage.platformVersion // The flow version for the core flows is the platform version
|
||||
is InitiatedFlowFactory.CorDapp -> sessionInit.flowVersion
|
||||
}
|
||||
val session = FlowSessionInternal(
|
||||
flow,
|
||||
flowSession,
|
||||
random63BitValue(),
|
||||
sender,
|
||||
FlowSessionState.Initiated(sender, senderSessionId, FlowInfo(senderFlowVersion, sessionInit.appName)))
|
||||
if (sessionInit.firstPayload != null) {
|
||||
session.receivedMessages += ReceivedSessionMessage(sender, SessionData(session.ourSessionId, sessionInit.firstPayload))
|
||||
}
|
||||
openSessions[session.ourSessionId] = session
|
||||
// TODO Perhaps the session-init will specify which of our multiple identities to use, which we would have to
// double-check is actually ours. However, what if we want to control how our identities get used?
|
||||
val fiber = createFiber(flow, FlowInitiator.Peer(sender))
|
||||
flowSession.sessionFlow = flow
|
||||
flowSession.stateMachine = fiber
|
||||
fiber.openSessions[Pair(flow, sender)] = session
|
||||
updateCheckpoint(fiber)
|
||||
session to initiatedFlowFactory
|
||||
} catch (e: SessionRejectException) {
|
||||
logger.warn("${e.logMessage}: $sessionInit")
|
||||
sendSessionReject(e.rejectMessage)
|
||||
return
|
||||
} catch (e: Exception) {
|
||||
logger.warn("Couldn't start flow session from $sessionInit", e)
|
||||
sendSessionReject("Unable to establish session")
|
||||
return
|
||||
}
|
||||
|
||||
val (ourFlowVersion, appName) = when (initiatedFlowFactory) {
|
||||
// The flow version for the core flows is the platform version
|
||||
is InitiatedFlowFactory.Core -> serviceHub.myInfo.platformVersion to "corda"
|
||||
is InitiatedFlowFactory.CorDapp -> initiatedFlowFactory.flowVersion to initiatedFlowFactory.appName
|
||||
}
|
||||
|
||||
sendSessionMessage(sender, SessionConfirm(senderSessionId, session.ourSessionId, ourFlowVersion, appName), session.fiber)
|
||||
session.fiber.logger.debug { "Initiated by $sender using ${sessionInit.initiatingFlowClass}" }
|
||||
session.fiber.logger.trace { "Initiated from $sessionInit on $session" }
|
||||
resumeFiber(session.fiber)
|
||||
}
|
||||
|
||||
private fun getInitiatedFlowFactory(sessionInit: SessionInit): InitiatedFlowFactory<*> {
|
||||
val initiatingFlowClass = try {
|
||||
Class.forName(sessionInit.initiatingFlowClass, true, classloader).asSubclass(FlowLogic::class.java)
|
||||
} catch (e: ClassNotFoundException) {
|
||||
throw SessionRejectException("Don't know ${sessionInit.initiatingFlowClass}")
|
||||
} catch (e: ClassCastException) {
|
||||
throw SessionRejectException("${sessionInit.initiatingFlowClass} is not a flow")
|
||||
}
|
||||
return serviceHub.getFlowFactory(initiatingFlowClass) ?:
|
||||
throw SessionRejectException("$initiatingFlowClass is not registered")
|
||||
}
|
||||
|
||||
private fun serializeFiber(fiber: FlowStateMachineImpl<*>): SerializedBytes<FlowStateMachineImpl<*>> {
|
||||
return fiber.serialize(context = CHECKPOINT_CONTEXT.withTokenContext(serializationContext))
|
||||
}
|
||||
|
||||
private fun deserializeFiber(checkpoint: Checkpoint, logger: Logger): FlowStateMachineImpl<*>? {
|
||||
return try {
|
||||
checkpoint.serializedFiber.deserialize(context = CHECKPOINT_CONTEXT.withTokenContext(serializationContext)).apply {
|
||||
fromCheckpoint = true
|
||||
}
|
||||
} catch (t: Throwable) {
|
||||
logger.error("Encountered unrestorable checkpoint!", t)
|
||||
null
|
||||
}
|
||||
}
|
||||
|
||||
private fun <T> createFiber(logic: FlowLogic<T>, flowInitiator: FlowInitiator, ourIdentity: Party? = null): FlowStateMachineImpl<T> {
|
||||
val fsm = FlowStateMachineImpl(
|
||||
StateMachineRunId.createRandom(),
|
||||
logic,
|
||||
scheduler,
|
||||
flowInitiator,
|
||||
ourIdentity ?: serviceHub.myInfo.legalIdentities[0])
|
||||
initFiber(fsm)
|
||||
return fsm
|
||||
}
|
||||
|
||||
private fun initFiber(fiber: FlowStateMachineImpl<*>) {
|
||||
verifyFlowLogicIsSuspendable(fiber.logic)
|
||||
fiber.database = database
|
||||
fiber.serviceHub = serviceHub
|
||||
fiber.ourIdentityAndCert = serviceHub.myInfo.legalIdentitiesAndCerts.find { it.party == fiber.ourIdentity }
|
||||
?: throw IllegalStateException("Identity specified by ${fiber.id} (${fiber.ourIdentity}) is not one of ours!")
|
||||
fiber.actionOnSuspend = { ioRequest ->
|
||||
updateCheckpoint(fiber)
|
||||
// We commit the fiber's transaction that was copied across ThreadLocals during suspend.
// This will free up the ThreadLocal so that, on return, the caller can carry on with other transactions.
|
||||
fiber.commitTransaction()
|
||||
processIORequest(ioRequest)
|
||||
decrementLiveFibers()
|
||||
}
|
||||
fiber.actionOnEnd = { result, propagated ->
|
||||
try {
|
||||
mutex.locked {
|
||||
stateMachines.remove(fiber)?.let { checkpointStorage.removeCheckpoint(it) }
|
||||
notifyChangeObservers(StateMachineManager.Change.Removed(fiber.logic, result))
|
||||
}
|
||||
endAllFiberSessions(fiber, result, propagated)
|
||||
} finally {
|
||||
fiber.commitTransaction()
|
||||
decrementLiveFibers()
|
||||
totalFinishedFlows.inc()
|
||||
unfinishedFibers.countDown()
|
||||
}
|
||||
}
|
||||
mutex.locked {
|
||||
totalStartedFlows.inc()
|
||||
unfinishedFibers.countUp()
|
||||
notifyChangeObservers(StateMachineManager.Change.Add(fiber.logic))
|
||||
}
|
||||
}
|
||||
|
||||
private fun verifyFlowLogicIsSuspendable(logic: FlowLogic<Any?>) {
|
||||
// Quasar requires (in Java 8) that at least the call method be annotated suspendable. Unfortunately, it's
|
||||
// easy to forget to add this when creating a new flow, so we check here to give the user a better error.
|
||||
//
|
||||
// The Kotlin compiler can sometimes generate a synthetic bridge method from a single call declaration, which
|
||||
// forwards to the void method and then returns Unit. However annotations do not get copied across to this
|
||||
// bridge, so we have to do a more complex scan here.
|
||||
val call = logic.javaClass.methods.first { !it.isSynthetic && it.name == "call" && it.parameterCount == 0 }
|
||||
if (call.getAnnotation(Suspendable::class.java) == null) {
|
||||
throw FlowException("${logic.javaClass.name}.call() is not annotated as @Suspendable. Please fix this.")
|
||||
}
|
||||
}
|
||||
|
||||
private fun endAllFiberSessions(fiber: FlowStateMachineImpl<*>, result: Try<*>, propagated: Boolean) {
|
||||
openSessions.values.removeIf { session ->
|
||||
if (session.fiber == fiber) {
|
||||
session.endSession((result as? Try.Failure)?.exception, propagated)
|
||||
true
|
||||
} else {
|
||||
false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun FlowSessionInternal.endSession(exception: Throwable?, propagated: Boolean) {
|
||||
val initiatedState = state as? FlowSessionState.Initiated ?: return
|
||||
val sessionEnd = if (exception == null) {
|
||||
NormalSessionEnd(initiatedState.peerSessionId)
|
||||
} else {
|
||||
val errorResponse = if (exception is FlowException && (!propagated || initiatingParty != null)) {
|
||||
// Only propagate this FlowException if our local flow threw it or if it was propagated to us, and only
// pass it down the invocation chain to the flow that initiated us, not to flows we've started sessions with.
|
||||
exception
|
||||
} else {
|
||||
null
|
||||
}
|
||||
ErrorSessionEnd(initiatedState.peerSessionId, errorResponse)
|
||||
}
|
||||
sendSessionMessage(initiatedState.peerParty, sessionEnd, fiber)
|
||||
recentlyClosedSessions[ourSessionId] = initiatedState.peerParty
|
||||
}
|
||||
|
||||
/**
|
||||
* Kicks off a brand new state machine of the given class.
|
||||
* The state machine will be persisted when it suspends, with automated restart if the StateMachineManager is
|
||||
* restarted with checkpointed state machines in the storage service.
|
||||
*
|
||||
* Note that you must be on the [executor] thread.
|
||||
*/
|
||||
override fun <A> startFlow(flowLogic: FlowLogic<A>, flowInitiator: FlowInitiator, ourIdentity: Party?): CordaFuture<FlowStateMachine<A>> {
|
||||
// TODO: Check that logic has @Suspendable on its call method.
|
||||
executor.checkOnThread()
|
||||
val fiber = database.transaction {
|
||||
val fiber = createFiber(flowLogic, flowInitiator, ourIdentity)
|
||||
updateCheckpoint(fiber)
|
||||
fiber
|
||||
}
|
||||
// If we are not started then our checkpoint will be picked up during start
|
||||
mutex.locked {
|
||||
if (started) {
|
||||
resumeFiber(fiber)
|
||||
}
|
||||
}
|
||||
return doneFuture(fiber)
|
||||
}
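
// Hedged usage sketch (hypothetical caller, assumed to already be on the manager's executor thread as
// the KDoc above requires). The flow instance and the Shell initiator are illustrative only.
fun startExampleFlow(smm: StateMachineManagerImpl, logic: FlowLogic<Unit>): CordaFuture<FlowStateMachine<Unit>> {
    return smm.startFlow(logic, FlowInitiator.Shell, null)
}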
|
||||
|
||||
private fun updateCheckpoint(fiber: FlowStateMachineImpl<*>) {
|
||||
check(fiber.state != Strand.State.RUNNING) { "Fiber cannot be running when checkpointing" }
|
||||
val newCheckpoint = Checkpoint(serializeFiber(fiber))
|
||||
val previousCheckpoint = mutex.locked { stateMachines.put(fiber, newCheckpoint) }
|
||||
if (previousCheckpoint != null) {
|
||||
checkpointStorage.removeCheckpoint(previousCheckpoint)
|
||||
}
|
||||
checkpointStorage.addCheckpoint(newCheckpoint)
|
||||
checkpointingMeter.mark()
|
||||
|
||||
checkpointCheckerThread?.execute {
|
||||
// Immediately check that the checkpoint is valid by deserialising it. The idea is to plug any holes we have
|
||||
// in our testing by failing any test where unrestorable checkpoints are created.
|
||||
if (deserializeFiber(newCheckpoint, fiber.logger) == null) {
|
||||
unrestorableCheckpoints = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun resumeFiber(fiber: FlowStateMachineImpl<*>) {
|
||||
// Avoid race condition when setting stopping to true and then checking liveFibers
|
||||
incrementLiveFibers()
|
||||
if (!stopping) {
|
||||
executor.executeASAP {
|
||||
fiber.resume(scheduler)
|
||||
}
|
||||
} else {
|
||||
fiber.logger.trace("Not resuming as SMM is stopping.")
|
||||
decrementLiveFibers()
|
||||
}
|
||||
}
|
||||
|
||||
private fun processIORequest(ioRequest: FlowIORequest) {
|
||||
executor.checkOnThread()
|
||||
when (ioRequest) {
|
||||
is SendRequest -> processSendRequest(ioRequest)
|
||||
is WaitForLedgerCommit -> processWaitForCommitRequest(ioRequest)
|
||||
is Sleep -> processSleepRequest(ioRequest)
|
||||
}
|
||||
}
|
||||
|
||||
private fun processSendRequest(ioRequest: SendRequest) {
|
||||
val retryId = if (ioRequest.message is SessionInit) {
|
||||
with(ioRequest.session) {
|
||||
openSessions[ourSessionId] = this
|
||||
if (retryable) ourSessionId else null
|
||||
}
|
||||
} else null
|
||||
sendSessionMessage(ioRequest.session.state.sendToParty, ioRequest.message, ioRequest.session.fiber, retryId)
|
||||
if (ioRequest !is ReceiveRequest<*>) {
|
||||
// We sent a message, but don't expect a response, so re-enter the continuation to let it keep going.
|
||||
resumeFiber(ioRequest.session.fiber)
|
||||
}
|
||||
}
|
||||
|
||||
private fun processWaitForCommitRequest(ioRequest: WaitForLedgerCommit) {
|
||||
// Is it already committed?
|
||||
val stx = database.transaction {
|
||||
serviceHub.validatedTransactions.getTransaction(ioRequest.hash)
|
||||
}
|
||||
if (stx != null) {
|
||||
resumeFiber(ioRequest.fiber)
|
||||
} else {
|
||||
// No, then register to wait.
|
||||
//
|
||||
// We assume this code runs on the server thread, which is the only place transactions are committed
|
||||
// currently. When we liberalise our threading somewhat, handling of wait requests will need to be
|
||||
// reworked to make the wait atomic in another way. Otherwise there is a race between checking the
|
||||
// database and updating the waiting list.
|
||||
mutex.locked {
|
||||
fibersWaitingForLedgerCommit[ioRequest.hash] += ioRequest.fiber
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun processSleepRequest(ioRequest: Sleep) {
|
||||
// Resume the fiber now that we have checkpointed, so we can sleep on the Fiber.
|
||||
resumeFiber(ioRequest.fiber)
|
||||
}
|
||||
|
||||
private fun sendSessionMessage(party: Party, message: SessionMessage, fiber: FlowStateMachineImpl<*>? = null, retryId: Long? = null) {
|
||||
val partyInfo = serviceHub.networkMapCache.getPartyInfo(party)
|
||||
?: throw IllegalArgumentException("Don't know about party $party")
|
||||
val address = serviceHub.networkService.getAddressOfParty(partyInfo)
|
||||
val logger = fiber?.logger ?: logger
|
||||
logger.trace { "Sending $message to party $party @ $address" + if (retryId != null) " with retry $retryId" else "" }
|
||||
|
||||
val serialized = try {
|
||||
message.serialize()
|
||||
} catch (e: Exception) {
|
||||
when (e) {
|
||||
// Handling Kryo and AMQP serialization problems. Unfortunately the two exception types do not share much of a common exception interface.
|
||||
is KryoException,
|
||||
is NotSerializableException -> {
|
||||
if (message !is ErrorSessionEnd || message.errorResponse == null) throw e
|
||||
logger.warn("Something in ${message.errorResponse.javaClass.name} is not serialisable. " +
|
||||
"Instead sending back an exception which is serialisable to ensure session end occurs properly.", e)
|
||||
// The subclass may have overridden toString so we use that
|
||||
val exMessage = message.errorResponse.let { if (it.javaClass != FlowException::class.java) it.toString() else it.message }
|
||||
message.copy(errorResponse = FlowException(exMessage)).serialize()
|
||||
}
|
||||
else -> throw e
|
||||
}
|
||||
}
|
||||
|
||||
serviceHub.networkService.apply {
|
||||
send(createMessage(sessionTopic, serialized.bytes), address, retryId = retryId)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
class SessionRejectException(val rejectMessage: String, val logMessage: String) : CordaException(rejectMessage) {
|
||||
constructor(message: String) : this(message, message)
|
||||
}
|
@@ -269,7 +269,7 @@ class NodeVaultService(private val clock: Clock, private val keyManagementServic
|
||||
update.where(stateStatusPredication, lockIdPredicate, *commonPredicates)
|
||||
}
|
||||
if (updatedRows > 0 && updatedRows == stateRefs.size) {
|
||||
log.trace("Reserving soft lock states for $lockId: $stateRefs")
|
||||
log.trace { "Reserving soft lock states for $lockId: $stateRefs" }
|
||||
FlowStateMachineImpl.currentStateMachine()?.hasSoftLockedStates = true
|
||||
} else {
|
||||
// revert partial soft locks
|
||||
@@ -280,7 +280,7 @@ class NodeVaultService(private val clock: Clock, private val keyManagementServic
|
||||
update.where(lockUpdateTime, lockIdPredicate, *commonPredicates)
|
||||
}
|
||||
if (revertUpdatedRows > 0) {
|
||||
log.trace("Reverting $revertUpdatedRows partially soft locked states for $lockId")
|
||||
log.trace { "Reverting $revertUpdatedRows partially soft locked states for $lockId" }
|
||||
}
|
||||
throw StatesNotAvailableException("Attempted to reserve $stateRefs for $lockId but only $updatedRows rows available")
|
||||
}
|
||||
@@ -309,7 +309,7 @@ class NodeVaultService(private val clock: Clock, private val keyManagementServic
|
||||
update.where(*commonPredicates)
|
||||
}
|
||||
if (update > 0) {
|
||||
log.trace("Releasing $update soft locked states for $lockId")
|
||||
log.trace { "Releasing $update soft locked states for $lockId" }
|
||||
}
|
||||
} else {
|
||||
try {
|
||||
@@ -320,7 +320,7 @@ class NodeVaultService(private val clock: Clock, private val keyManagementServic
|
||||
update.where(*commonPredicates, stateRefsPredicate)
|
||||
}
|
||||
if (updatedRows > 0) {
|
||||
log.trace("Releasing $updatedRows soft locked states for $lockId and stateRefs $stateRefs")
|
||||
log.trace { "Releasing $updatedRows soft locked states for $lockId and stateRefs $stateRefs" }
|
||||
}
|
||||
} catch (e: Exception) {
|
||||
log.error("""soft lock update error attempting to release states for $lockId and $stateRefs")
|
||||
|
@@ -1,8 +1,8 @@
|
||||
package net.corda.node.services.vault
|
||||
|
||||
import net.corda.core.contracts.FungibleAsset
|
||||
import net.corda.core.contracts.StateRef
|
||||
import net.corda.core.flows.FlowLogic
|
||||
import net.corda.core.flows.StateMachineRunId
|
||||
import net.corda.core.node.services.VaultService
|
||||
import net.corda.core.utilities.NonEmptySet
|
||||
import net.corda.core.utilities.loggerFor
|
||||
@ -12,50 +12,50 @@ import net.corda.node.services.statemachine.FlowStateMachineImpl
|
||||
import net.corda.node.services.statemachine.StateMachineManager
|
||||
import java.util.*
|
||||
|
||||
class VaultSoftLockManager(val vault: VaultService, smm: StateMachineManager) {
|
||||
|
||||
private companion object {
|
||||
val log = loggerFor<VaultSoftLockManager>()
|
||||
}
|
||||
|
||||
init {
|
||||
smm.changes.subscribe { change ->
|
||||
if (change is StateMachineManager.Change.Removed && (FlowStateMachineImpl.currentStateMachine())?.hasSoftLockedStates == true) {
|
||||
log.trace { "Remove flow name ${change.logic.javaClass} with id $change.id" }
|
||||
unregisterSoftLocks(change.logic.runId, change.logic)
|
||||
class VaultSoftLockManager private constructor(private val vault: VaultService) {
|
||||
companion object {
|
||||
private val log = loggerFor<VaultSoftLockManager>()
|
||||
@JvmStatic
|
||||
fun install(vault: VaultService, smm: StateMachineManager) {
|
||||
val manager = VaultSoftLockManager(vault)
|
||||
smm.changes.subscribe { change ->
|
||||
if (change is StateMachineManager.Change.Removed) {
|
||||
val logic = change.logic
|
||||
// Don't run potentially expensive query if the flow didn't lock any states:
|
||||
if ((logic.stateMachine as FlowStateMachineImpl<*>).hasSoftLockedStates) {
|
||||
manager.unregisterSoftLocks(logic.runId.uuid, logic)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Discussion
|
||||
//
|
||||
// The intent of the following approach is to support what might be a common pattern in a flow:
|
||||
// 1. Create state
|
||||
// 2. Do something with state
|
||||
// without possibility of another flow intercepting the state between 1 and 2,
|
||||
// since we cannot lock the state before it exists. e.g. Issue and then Move some Cash.
|
||||
//
|
||||
// The downside is we could have a long running flow that holds a lock for a long period of time.
|
||||
// However, the lock can be programmatically released, like any other soft lock,
|
||||
// should we want a long running flow that creates a visible state mid way through.
|
||||
|
||||
vault.rawUpdates.subscribe { (_, produced, flowId) ->
|
||||
flowId?.let {
|
||||
if (produced.isNotEmpty()) {
|
||||
registerSoftLocks(flowId, (produced.map { it.ref }).toNonEmptySet())
|
||||
// Discussion
|
||||
//
|
||||
// The intent of the following approach is to support what might be a common pattern in a flow:
|
||||
// 1. Create state
|
||||
// 2. Do something with state
|
||||
// without possibility of another flow intercepting the state between 1 and 2,
|
||||
// since we cannot lock the state before it exists. e.g. Issue and then Move some Cash.
|
||||
//
|
||||
// The downside is we could have a long running flow that holds a lock for a long period of time.
|
||||
// However, the lock can be programmatically released, like any other soft lock,
|
||||
// should we want a long running flow that creates a visible state mid way through.
|
||||
vault.rawUpdates.subscribe { (_, produced, flowId) ->
|
||||
if (flowId != null) {
|
||||
val fungible = produced.filter { it.state.data is FungibleAsset<*> }
|
||||
if (fungible.isNotEmpty()) {
|
||||
manager.registerSoftLocks(flowId, fungible.map { it.ref }.toNonEmptySet())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private fun registerSoftLocks(flowId: UUID, stateRefs: NonEmptySet<StateRef>) {
|
||||
log.trace("Reserving soft locks for flow id $flowId and states $stateRefs")
|
||||
log.trace { "Reserving soft locks for flow id $flowId and states $stateRefs" }
|
||||
vault.softLockReserve(flowId, stateRefs)
|
||||
}
|
||||
|
||||
private fun unregisterSoftLocks(id: StateMachineRunId, logic: FlowLogic<*>) {
|
||||
val flowClassName = logic.javaClass.simpleName
|
||||
log.trace("Releasing soft locks for flow $flowClassName with flow id ${id.uuid}")
|
||||
vault.softLockRelease(id.uuid)
|
||||
|
||||
private fun unregisterSoftLocks(flowId: UUID, logic: FlowLogic<*>) {
|
||||
log.trace { "Releasing soft locks for flow ${logic.javaClass.simpleName} with flow id $flowId" }
|
||||
vault.softLockRelease(flowId)
|
||||
}
|
||||
}
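
// Hedged wiring sketch (hypothetical caller, not from the original diff): installing the soft-lock
// manager so that fungible states produced by a flow are reserved against that flow's id until it finishes.
fun wireUpSoftLocks(vault: VaultService, smm: StateMachineManager) {
    VaultSoftLockManager.install(vault, smm)
}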
|
@@ -7,23 +7,21 @@ import com.fasterxml.jackson.databind.*
|
||||
import com.fasterxml.jackson.databind.module.SimpleModule
|
||||
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory
|
||||
import com.google.common.io.Closeables
|
||||
import net.corda.client.jackson.JacksonSupport
|
||||
import net.corda.client.jackson.StringToMethodCallParser
|
||||
import net.corda.core.CordaException
|
||||
import net.corda.core.concurrent.CordaFuture
|
||||
import net.corda.core.contracts.UniqueIdentifier
|
||||
import net.corda.core.flows.FlowInitiator
|
||||
import net.corda.core.flows.FlowLogic
|
||||
import net.corda.core.internal.FlowStateMachine
|
||||
import net.corda.core.internal.*
|
||||
import net.corda.core.internal.concurrent.OpenFuture
|
||||
import net.corda.core.internal.concurrent.openFuture
|
||||
import net.corda.core.internal.createDirectories
|
||||
import net.corda.core.internal.div
|
||||
import net.corda.core.internal.write
|
||||
import net.corda.core.internal.*
|
||||
import net.corda.core.messaging.CordaRPCOps
|
||||
import net.corda.core.messaging.DataFeed
|
||||
import net.corda.core.messaging.StateMachineUpdate
|
||||
import net.corda.core.utilities.getOrThrow
|
||||
import net.corda.core.utilities.loggerFor
|
||||
import net.corda.client.jackson.JacksonSupport
|
||||
import net.corda.client.jackson.StringToMethodCallParser
|
||||
import net.corda.core.CordaException
|
||||
import net.corda.node.internal.Node
|
||||
import net.corda.node.internal.StartedNode
|
||||
import net.corda.node.services.messaging.CURRENT_RPC_CONTEXT
|
||||
@@ -200,7 +198,7 @@ object InteractiveShell {
|
||||
}
|
||||
|
||||
private fun createOutputMapper(factory: JsonFactory): ObjectMapper {
|
||||
return JacksonSupport.createNonRpcMapper(factory).apply({
|
||||
return JacksonSupport.createNonRpcMapper(factory).apply {
|
||||
// Register serializers for stateful objects from libraries that are special to the RPC system and don't
|
||||
// make sense to print out to the screen. For classes we own, annotations can be used instead.
|
||||
val rpcModule = SimpleModule()
|
||||
@@ -210,7 +208,7 @@
|
||||
|
||||
disable(SerializationFeature.FAIL_ON_EMPTY_BEANS)
|
||||
enable(SerializationFeature.INDENT_OUTPUT)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: This should become the default renderer rather than something used specifically by commands.
|
||||
@@ -237,7 +235,7 @@
|
||||
val clazz: Class<FlowLogic<*>> = uncheckedCast(matches.single())
|
||||
try {
|
||||
// TODO Flow invocation should use startFlowDynamic.
|
||||
val fsm = runFlowFromString({ node.services.startFlow(it, FlowInitiator.Shell) }, inputData, clazz)
|
||||
val fsm = runFlowFromString({ node.services.startFlow(it, FlowInitiator.Shell).getOrThrow() }, inputData, clazz)
|
||||
// Show the progress tracker on the console until the flow completes or is interrupted with a
|
||||
// Ctrl-C keypress.
|
||||
val latch = CountDownLatch(1)
|
||||
@ -397,7 +395,7 @@ object InteractiveShell {
|
||||
}
|
||||
|
||||
private fun printAndFollowRPCResponse(response: Any?, toStream: PrintWriter): CordaFuture<Unit>? {
|
||||
val printerFun = { obj: Any? -> yamlMapper.writeValueAsString(obj) }
|
||||
val printerFun = yamlMapper::writeValueAsString
|
||||
toStream.println(printerFun(response))
|
||||
toStream.flush()
|
||||
return maybeFollow(response, printerFun, toStream)
|
||||
@ -443,13 +441,9 @@ object InteractiveShell {
|
||||
|
||||
val observable: Observable<*> = when (response) {
|
||||
is Observable<*> -> response
|
||||
is Pair<*, *> -> when {
|
||||
response.first is Observable<*> -> response.first as Observable<*>
|
||||
response.second is Observable<*> -> response.second as Observable<*>
|
||||
else -> null
|
||||
}
|
||||
else -> null
|
||||
} ?: return null
|
||||
is DataFeed<*, *> -> response.updates
|
||||
else -> return null
|
||||
}
|
||||
|
||||
val subscriber = PrintingSubscriber(printerFun, toStream)
|
||||
uncheckedCast(observable).subscribe(subscriber)
|
||||
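The hunk above changes the follow logic so that a DataFeed response is followed via its updates stream rather than by unpacking a Pair of observables. A minimal hedged sketch of that behaviour, not part of the diff, is below; DataFeedLike is an illustrative stand-in for net.corda.core.messaging.DataFeed so the example stays self-contained, and the render lambda is assumed.

import rx.Observable

// Illustrative stand-in: a snapshot plus a stream of subsequent updates.
data class DataFeedLike<out S, U>(val snapshot: S, val updates: Observable<U>)

// Print the snapshot once, then print every update as it arrives.
fun <S, U> follow(feed: DataFeedLike<S, U>, render: (Any?) -> String) {
    println(render(feed.snapshot))
    feed.updates.subscribe { println(render(it)) }
}

fun main() {
    val feed = DataFeedLike("initial-state", Observable.just("update-1", "update-2"))
    follow(feed) { it.toString() }
}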
@@ -500,8 +494,8 @@ object InteractiveShell {
gen.writeString("<not saved>")
} else {
val path = Paths.get(toPath)
path.write { value.copyTo(it) }
gen.writeString("<saved to: $path>")
value.copyTo(path)
gen.writeString("<saved to: ${path.toAbsolutePath()}>")
}
} finally {
try {
@@ -20,6 +20,14 @@ import java.util.concurrent.CopyOnWriteArrayList
*/
const val NODE_DATABASE_PREFIX = "node_"

/**
* The maximum supported field-size for hash HEX-encoded outputs (e.g. database fields).
* This value is enough to support hash functions with outputs up to 512 bits (e.g. SHA3-512), in which
* case 128 HEX characters are required.
* 130 was selected instead of 128, to allow for 2 extra characters that will be used as hash-scheme identifiers.
*/
internal const val MAX_HASH_HEX_SIZE = 130
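The sizing in the new constant is simple arithmetic; a hedged sketch of the calculation follows (the digestBits and schemeIdentifierChars names are illustrative, not from the codebase).

// A 512-bit digest (e.g. SHA3-512) hex-encodes to 512 / 4 = 128 characters; two extra
// characters are reserved for a hash-scheme identifier, giving the 130 above.
fun main() {
    val digestBits = 512
    val hexCharsPerDigest = digestBits / 4   // 128
    val schemeIdentifierChars = 2            // room for a scheme prefix
    check(hexCharsPerDigest + schemeIdentifierChars == 130)
}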

//HikariDataSource implements Closeable which allows CordaPersistence to be Closeable
class CordaPersistence(var dataSource: HikariDataSource, private val schemaService: SchemaService,
private val createIdentityService: () -> IdentityService, databaseProperties: Properties) : Closeable {
@@ -29,9 +29,12 @@ import net.corda.node.services.FlowPermissions.Companion.startFlowPermission
import net.corda.node.services.messaging.CURRENT_RPC_CONTEXT
import net.corda.node.services.messaging.RpcContext
import net.corda.nodeapi.User
import net.corda.testing.*
import net.corda.testing.chooseIdentity
import net.corda.testing.expect
import net.corda.testing.expectEvents
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetwork.MockNode
import net.corda.testing.sequence
import org.apache.commons.io.IOUtils
import org.assertj.core.api.Assertions.assertThatExceptionOfType
import org.junit.After
@@ -65,14 +68,13 @@ class CordaRPCOpsImplTest {
mockNet = MockNetwork(cordappPackages = listOf("net.corda.finance.contracts.asset"))
aliceNode = mockNet.createNode()
notaryNode = mockNet.createNotaryNode(validating = false)
rpc = CordaRPCOpsImpl(aliceNode.services, aliceNode.smm, aliceNode.database)
rpc = CordaRPCOpsImpl(aliceNode.services, aliceNode.smm, aliceNode.database, aliceNode.services)
CURRENT_RPC_CONTEXT.set(RpcContext(User("user", "pwd", permissions = setOf(
startFlowPermission<CashIssueFlow>(),
startFlowPermission<CashPaymentFlow>()
))))

mockNet.runNetwork()
mockNet.networkMapNode.internals.ensureRegistered()
notary = rpc.notaryIdentities().first()
}
@@ -1,7 +1,6 @@
package net.corda.node

import com.fasterxml.jackson.dataformat.yaml.YAMLFactory
import com.nhaarman.mockito_kotlin.mock
import net.corda.client.jackson.JacksonSupport
import net.corda.core.contracts.Amount
import net.corda.core.crypto.SecureHash
@@ -17,6 +16,7 @@ import net.corda.testing.MEGA_CORP
import net.corda.testing.MEGA_CORP_IDENTITY
import net.corda.testing.node.MockServices
import net.corda.testing.node.MockServices.Companion.makeTestIdentityService
import net.corda.testing.rigorousMock
import org.junit.After
import org.junit.Before
import org.junit.Test
@@ -52,7 +52,7 @@ class InteractiveShellTest {
private fun check(input: String, expected: String) {
var output: DummyFSM? = null
InteractiveShell.runFlowFromString({ DummyFSM(it as FlowA).apply { output = this } }, input, FlowA::class.java, om)
assertEquals(expected, output!!.flowA.a, input)
assertEquals(expected, output!!.logic.a, input)
}

@Test
@@ -83,5 +83,5 @@ class InteractiveShellTest {
@Test
fun party() = check("party: \"${MEGA_CORP.name}\"", MEGA_CORP.name.toString())

class DummyFSM(val flowA: FlowA) : FlowStateMachine<Any?> by mock()
class DummyFSM(override val logic: FlowA) : FlowStateMachine<Any?> by rigorousMock()
}
@@ -1,6 +1,5 @@
package net.corda.node.internal.cordapp

import com.nhaarman.mockito_kotlin.mock
import net.corda.core.node.services.AttachmentStorage
import net.corda.testing.node.MockAttachmentStorage
import org.junit.Assert
@@ -40,7 +39,7 @@ class CordappProviderImplTests {
@Test
fun `test that we find a cordapp class that is loaded into the store`() {
val loader = CordappLoader.createDevMode(listOf(isolatedJAR))
val provider = CordappProviderImpl(loader, mock())
val provider = CordappProviderImpl(loader, attachmentStore)
val className = "net.corda.finance.contracts.isolated.AnotherDummyContract"

val expected = provider.cordapps.first()
@@ -47,23 +47,17 @@ class InMemoryMessagingTests {

@Test
fun basics() {
val node1 = mockNet.networkMapNode
val node1 = mockNet.createNode()
val node2 = mockNet.createNode()
val node3 = mockNet.createNode()

val bits = "test-content".toByteArray()
var finalDelivery: Message? = null

with(node2) {
node2.network.addMessageHandler { msg, _ ->
node2.network.send(msg, node3.network.myAddress)
}
node2.network.addMessageHandler { msg, _ ->
node2.network.send(msg, node3.network.myAddress)
}

with(node3) {
node2.network.addMessageHandler { msg, _ ->
finalDelivery = msg
}
node3.network.addMessageHandler { msg, _ ->
finalDelivery = msg
}

// Node 1 sends a message and it should end up in finalDelivery, after we run the network
@@ -76,7 +70,7 @@ class InMemoryMessagingTests {

@Test
fun broadcast() {
val node1 = mockNet.networkMapNode
val node1 = mockNet.createNode()
val node2 = mockNet.createNode()
val node3 = mockNet.createNode()

@@ -95,7 +89,7 @@ class InMemoryMessagingTests {
*/
@Test
fun `skip unhandled messages`() {
val node1 = mockNet.networkMapNode
val node1 = mockNet.createNode()
val node2 = mockNet.createNode()
var received = 0
@@ -13,7 +13,6 @@ import net.corda.core.internal.FlowStateMachine
import net.corda.core.internal.concurrent.map
import net.corda.core.internal.rootCause
import net.corda.core.messaging.DataFeed
import net.corda.core.messaging.SingleMessageRecipient
import net.corda.core.messaging.StateMachineTransactionMapping
import net.corda.core.node.services.Vault
import net.corda.core.serialization.CordaSerializable
@@ -35,17 +34,12 @@ import net.corda.finance.flows.TwoPartyTradeFlow.Buyer
import net.corda.finance.flows.TwoPartyTradeFlow.Seller
import net.corda.node.internal.StartedNode
import net.corda.node.services.api.WritableTransactionStorage
import net.corda.node.services.config.NodeConfiguration
import net.corda.node.services.persistence.DBTransactionStorage
import net.corda.node.services.persistence.checkpoints
import net.corda.node.utilities.CordaPersistence
import net.corda.nodeapi.internal.ServiceInfo
import net.corda.testing.*
import net.corda.testing.contracts.fillWithSomeTestCash
import net.corda.testing.node.InMemoryMessagingNetwork
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockServices
import net.corda.testing.node.pumpReceive
import net.corda.testing.node.*
import org.assertj.core.api.Assertions.assertThat
import org.junit.After
import org.junit.Before
@@ -55,8 +49,6 @@ import org.junit.runners.Parameterized
import rx.Observable
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.math.BigInteger
import java.security.KeyPair
import java.util.*
import java.util.jar.JarOutputStream
import java.util.zip.ZipEntry
@@ -75,7 +67,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {
companion object {
private val cordappPackages = listOf("net.corda.finance.contracts")
@JvmStatic
@Parameterized.Parameters
@Parameterized.Parameters(name = "Anonymous = {0}")
fun data(): Collection<Boolean> {
return listOf(true, false)
}
@@ -99,7 +91,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {
// We run this in parallel threads to help catch any race conditions that may exist. The other tests
// we run in the unit test thread exclusively to speed things up, ensure deterministic results and
// allow interruption half way through.
mockNet = MockNetwork(false, true, cordappPackages = cordappPackages)
mockNet = MockNetwork(threadPerNode = true, cordappPackages = cordappPackages)
ledger(MockServices(cordappPackages), initialiseSerialization = false) {
val notaryNode = mockNet.createNotaryNode()
val aliceNode = mockNet.createPartyNode(ALICE_NAME)
@@ -149,7 +141,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

@Test(expected = InsufficientBalanceException::class)
fun `trade cash for commercial paper fails using soft locking`() {
mockNet = MockNetwork(false, true, cordappPackages = cordappPackages)
mockNet = MockNetwork(threadPerNode = true, cordappPackages = cordappPackages)
ledger(MockServices(cordappPackages), initialiseSerialization = false) {
val notaryNode = mockNet.createNotaryNode()
val aliceNode = mockNet.createPartyNode(ALICE_NAME)
@@ -205,7 +197,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

@Test
fun `shutdown and restore`() {
mockNet = MockNetwork(false, cordappPackages = cordappPackages)
mockNet = MockNetwork(cordappPackages = cordappPackages)
ledger(MockServices(cordappPackages), initialiseSerialization = false) {
val notaryNode = mockNet.createNotaryNode()
val aliceNode = mockNet.createPartyNode(ALICE_NAME)
@@ -268,13 +260,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

// ... bring the node back up ... the act of constructing the SMM will re-register the message handlers
// that Bob was waiting on before the reboot occurred.
bobNode = mockNet.createNode(bobAddr.id, object : MockNetwork.Factory<MockNetwork.MockNode> {
override fun create(config: NodeConfiguration, network: MockNetwork, networkMapAddr: SingleMessageRecipient?,
id: Int, notaryIdentity: Pair<ServiceInfo, KeyPair>?, entropyRoot: BigInteger): MockNetwork.MockNode {
return MockNetwork.MockNode(config, network, networkMapAddr, bobAddr.id, notaryIdentity, entropyRoot)
}
}, BOB_NAME)

bobNode = mockNet.createNode(MockNodeParameters(bobAddr.id, BOB_NAME))
// Find the future representing the result of this state machine again.
val bobFuture = bobNode.smm.findStateMachines(BuyerAcceptor::class.java).single().second

@@ -308,31 +294,26 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {
// of gets and puts.
private fun makeNodeWithTracking(name: CordaX500Name): StartedNode<MockNetwork.MockNode> {
// Create a node in the mock network ...
return mockNet.createNode(nodeFactory = object : MockNetwork.Factory<MockNetwork.MockNode> {
override fun create(config: NodeConfiguration,
network: MockNetwork,
networkMapAddr: SingleMessageRecipient?,
id: Int, notaryIdentity: Pair<ServiceInfo, KeyPair>?,
entropyRoot: BigInteger): MockNetwork.MockNode {
return object : MockNetwork.MockNode(config, network, networkMapAddr, id, notaryIdentity, entropyRoot) {
return mockNet.createNode(MockNodeParameters(legalName = name), nodeFactory = object : MockNetwork.Factory<MockNetwork.MockNode> {
override fun create(args: MockNodeArgs): MockNetwork.MockNode {
return object : MockNetwork.MockNode(args) {
// That constructs a recording tx storage
override fun makeTransactionStorage(): WritableTransactionStorage {
return RecordingTransactionStorage(database, super.makeTransactionStorage())
}
}
}
}, legalName = name)
})
}

@Test
fun `check dependencies of sale asset are resolved`() {
mockNet = MockNetwork(false, cordappPackages = cordappPackages)
mockNet = MockNetwork(cordappPackages = cordappPackages)
val notaryNode = mockNet.createNotaryNode()
val aliceNode = makeNodeWithTracking(ALICE_NAME)
val bobNode = makeNodeWithTracking(BOB_NAME)
val bankNode = makeNodeWithTracking(BOC_NAME)
mockNet.runNetwork()
notaryNode.internals.ensureRegistered()
val notary = aliceNode.services.getDefaultNotary()
val alice = aliceNode.info.singleIdentity()
val bob = bobNode.info.singleIdentity()
@@ -433,14 +414,13 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

@Test
fun `track works`() {
mockNet = MockNetwork(false, cordappPackages = cordappPackages)
mockNet = MockNetwork(cordappPackages = cordappPackages)
val notaryNode = mockNet.createNotaryNode()
val aliceNode = makeNodeWithTracking(ALICE_NAME)
val bobNode = makeNodeWithTracking(BOB_NAME)
val bankNode = makeNodeWithTracking(BOC_NAME)

mockNet.runNetwork()
notaryNode.internals.ensureRegistered()
val notary = aliceNode.services.getDefaultNotary()
val alice: Party = aliceNode.info.singleIdentity()
val bank: Party = bankNode.info.singleIdentity()
@@ -515,7 +495,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

@Test
fun `dependency with error on buyer side`() {
mockNet = MockNetwork(false, cordappPackages = cordappPackages)
mockNet = MockNetwork(cordappPackages = cordappPackages)
ledger(MockServices(cordappPackages), initialiseSerialization = false) {
runWithError(true, false, "at least one cash input")
}
@@ -523,7 +503,7 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {

@Test
fun `dependency with error on seller side`() {
mockNet = MockNetwork(false, cordappPackages = cordappPackages)
mockNet = MockNetwork(cordappPackages = cordappPackages)
ledger(MockServices(cordappPackages), initialiseSerialization = false) {
runWithError(false, true, "Issuances have a time-window")
}
@@ -596,7 +576,6 @@ class TwoPartyTradeFlowTests(val anonymous: Boolean) {
val bankNode = mockNet.createPartyNode(BOC_NAME)

mockNet.runNetwork()
notaryNode.internals.ensureRegistered()
val notary = aliceNode.services.getDefaultNotary()
val alice = aliceNode.info.singleIdentity()
val bob = bobNode.info.singleIdentity()
@@ -13,7 +13,6 @@ import net.corda.core.transactions.WireTransaction
import net.corda.core.utilities.getOrThrow
import net.corda.core.utilities.seconds
import net.corda.node.internal.StartedNode
import net.corda.node.services.api.ServiceHubInternal
import net.corda.testing.*
import net.corda.testing.contracts.DummyContract
import net.corda.testing.node.MockNetwork
@@ -43,7 +42,6 @@ class NotaryChangeTests {
clientNodeB = mockNet.createNode()
newNotaryNode = mockNet.createNotaryNode(legalName = DUMMY_NOTARY.name.copy(organisation = "Dummy Notary 2"))
mockNet.runNetwork() // Clear network map registration messages
oldNotaryNode.internals.ensureRegistered()
oldNotaryParty = newNotaryNode.services.networkMapCache.getNotary(DUMMY_NOTARY_SERVICE_NAME)!!
newNotaryParty = newNotaryNode.services.networkMapCache.getNotary(DUMMY_NOTARY_SERVICE_NAME.copy(organisation = "Dummy Notary 2"))!!
}
@@ -1,6 +1,8 @@
package net.corda.node.services.events

import co.paralleluniverse.fibers.Suspendable
import com.codahale.metrics.MetricRegistry
import com.nhaarman.mockito_kotlin.*
import net.corda.core.contracts.*
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowLogicRef
@@ -8,35 +10,40 @@ import net.corda.core.flows.FlowLogicRefFactory
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.CordaX500Name
import net.corda.core.identity.Party
import net.corda.core.node.NodeInfo
import net.corda.core.node.ServiceHub
import net.corda.core.serialization.SingletonSerializeAsToken
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.days
import net.corda.node.internal.FlowStarterImpl
import net.corda.node.internal.StateLoaderImpl
import net.corda.node.internal.cordapp.CordappLoader
import net.corda.node.internal.cordapp.CordappProviderImpl
import net.corda.node.services.api.VaultServiceInternal
import net.corda.node.services.api.MonitoringService
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.identity.InMemoryIdentityService
import net.corda.node.services.network.NetworkMapCacheImpl
import net.corda.node.services.persistence.DBCheckpointStorage
import net.corda.node.services.statemachine.FlowLogicRefFactoryImpl
import net.corda.node.services.statemachine.StateMachineManager
import net.corda.node.services.statemachine.StateMachineManagerImpl
import net.corda.node.services.vault.NodeVaultService
import net.corda.node.testing.MockServiceHubInternal
import net.corda.node.utilities.AffinityExecutor
import net.corda.node.utilities.CordaPersistence
import net.corda.node.utilities.configureDatabase
import net.corda.testing.*
import net.corda.testing.contracts.DummyContract
import net.corda.testing.node.InMemoryMessagingNetwork
import net.corda.testing.node.MockKeyManagementService
import net.corda.testing.node.*
import net.corda.testing.node.MockServices.Companion.makeTestDataSourceProperties
import net.corda.testing.node.MockServices.Companion.makeTestDatabaseProperties
import net.corda.testing.node.MockServices.Companion.makeTestIdentityService
import net.corda.testing.node.TestClock
import org.assertj.core.api.Assertions.assertThat
import org.junit.After
import org.junit.Before
import org.junit.Test
import java.nio.file.Paths
import java.security.PublicKey
import java.time.Clock
import java.time.Instant
import java.util.concurrent.CountDownLatch
@@ -45,20 +52,26 @@ import java.util.concurrent.TimeUnit
import kotlin.test.assertTrue

class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
companion object {
private val myInfo = NodeInfo(listOf(MOCK_HOST_AND_PORT), listOf(DUMMY_IDENTITY_1), 1, serial = 1L)
}

private val realClock: Clock = Clock.systemUTC()
private val stoppedClock: Clock = Clock.fixed(realClock.instant(), realClock.zone)
private val testClock = TestClock(stoppedClock)

private val schedulerGatedExecutor = AffinityExecutor.Gate(true)

private lateinit var services: MockServiceHubInternal
abstract class Services : ServiceHubInternal, TestReference

private lateinit var services: Services
private lateinit var scheduler: NodeSchedulerService
private lateinit var smmExecutor: AffinityExecutor.ServiceAffinityExecutor
private lateinit var database: CordaPersistence
private lateinit var countDown: CountDownLatch
private lateinit var smmHasRemovedAllFlows: CountDownLatch

private lateinit var kms: MockKeyManagementService
private lateinit var mockSMM: StateMachineManager
var calls: Int = 0

/**
@@ -80,35 +93,35 @@ class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
val databaseProperties = makeTestDatabaseProperties()
database = configureDatabase(dataSourceProps, databaseProperties, ::makeTestIdentityService)
val identityService = InMemoryIdentityService(trustRoot = DEV_TRUST_ROOT)
val kms = MockKeyManagementService(identityService, ALICE_KEY)

kms = MockKeyManagementService(identityService, ALICE_KEY)
val configuration = testNodeConfiguration(Paths.get("."), CordaX500Name("Alice", "London", "GB"))
val validatedTransactions = MockTransactionStorage()
val stateLoader = StateLoaderImpl(validatedTransactions)
database.transaction {
val nullIdentity = CordaX500Name(organisation = "None", locality = "None", country = "GB")
val mockMessagingService = InMemoryMessagingNetwork(false).InMemoryMessaging(
false,
InMemoryMessagingNetwork.PeerHandle(0, nullIdentity),
AffinityExecutor.ServiceAffinityExecutor("test", 1),
database)
services = object : MockServiceHubInternal(
database,
testNodeConfiguration(Paths.get("."), CordaX500Name(organisation = "Alice", locality = "London", country = "GB")),
overrideClock = testClock,
keyManagement = kms,
network = mockMessagingService), TestReference {
override val vaultService: VaultServiceInternal = NodeVaultService(testClock, kms, stateLoader, database.hibernateConfig)
override val testReference = this@NodeSchedulerServiceTest
override val cordappProvider = CordappProviderImpl(CordappLoader.createWithTestPackages(listOf("net.corda.testing.contracts")), attachments)
services = rigorousMock<Services>().also {
doReturn(configuration).whenever(it).configuration
doReturn(MonitoringService(MetricRegistry())).whenever(it).monitoringService
doReturn(validatedTransactions).whenever(it).validatedTransactions
doReturn(NetworkMapCacheImpl(MockNetworkMapCache(database, configuration), identityService)).whenever(it).networkMapCache
doCallRealMethod().whenever(it).signInitialTransaction(any(), any<PublicKey>())
doReturn(myInfo).whenever(it).myInfo
doReturn(kms).whenever(it).keyManagementService
doReturn(CordappProviderImpl(CordappLoader.createWithTestPackages(listOf("net.corda.testing.contracts")), MockAttachmentStorage())).whenever(it).cordappProvider
doCallRealMethod().whenever(it).recordTransactions(any<SignedTransaction>())
doCallRealMethod().whenever(it).recordTransactions(any<Boolean>(), any<SignedTransaction>())
doCallRealMethod().whenever(it).recordTransactions(any(), any<Iterable<SignedTransaction>>())
doReturn(NodeVaultService(testClock, kms, stateLoader, database.hibernateConfig)).whenever(it).vaultService
doReturn(this@NodeSchedulerServiceTest).whenever(it).testReference
}
smmExecutor = AffinityExecutor.ServiceAffinityExecutor("test", 1)
scheduler = NodeSchedulerService(services, schedulerGatedExecutor, serverThread = smmExecutor)
val mockSMM = StateMachineManager(services, DBCheckpointStorage(), smmExecutor, database)
mockSMM = StateMachineManagerImpl(services, DBCheckpointStorage(), smmExecutor, database)
scheduler = NodeSchedulerService(testClock, database, FlowStarterImpl(smmExecutor, mockSMM), stateLoader, schedulerGatedExecutor, serverThread = smmExecutor)
mockSMM.changes.subscribe { change ->
if (change is StateMachineManager.Change.Removed && mockSMM.allStateMachines.isEmpty()) {
smmHasRemovedAllFlows.countDown()
}
}
mockSMM.start()
services.smm = mockSMM
mockSMM.start(emptyList())
scheduler.start()
}
}
@@ -116,7 +129,7 @@ class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
@After
fun tearDown() {
// We need to make sure the StateMachineManager is done before shutting down executors.
if (services.smm.allStateMachines.isNotEmpty()) {
if (mockSMM.allStateMachines.isNotEmpty()) {
smmHasRemovedAllFlows.await()
}
smmExecutor.shutdown()
@@ -125,6 +138,8 @@ class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
resetTestSerialization()
}

// Ignore IntelliJ when it says these properties can be private, if they are we cannot serialise them
// in AMQP.
class TestState(val flowLogicRef: FlowLogicRef, val instant: Instant, val myIdentity: Party) : LinearState, SchedulableState {
override val participants: List<AbstractParty>
get() = listOf(myIdentity)
@@ -136,7 +151,7 @@ class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
}
}

class TestFlowLogic(val increment: Int = 1) : FlowLogic<Unit>() {
class TestFlowLogic(private val increment: Int = 1) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
(serviceHub as TestReference).testReference.calls += increment
@@ -279,8 +294,8 @@ class NodeSchedulerServiceTest : SingletonSerializeAsToken() {
var scheduledRef: ScheduledStateRef? = null
database.transaction {
apply {
val freshKey = services.keyManagementService.freshKey()
val state = TestState(FlowLogicRefFactoryImpl.createForRPC(TestFlowLogic::class.java, increment), instant, services.myInfo.chooseIdentity())
val freshKey = kms.freshKey()
val state = TestState(FlowLogicRefFactoryImpl.createForRPC(TestFlowLogic::class.java, increment), instant, myInfo.chooseIdentity())
val builder = TransactionBuilder(null).apply {
addOutputState(state, DummyContract.PROGRAM_ID, DUMMY_NOTARY)
addCommand(Command(), freshKey)
@@ -16,8 +16,11 @@ import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.getOrThrow
import net.corda.node.internal.StartedNode
import net.corda.node.services.statemachine.StateMachineManager
import net.corda.testing.*
import net.corda.testing.DUMMY_NOTARY
import net.corda.testing.chooseIdentity
import net.corda.testing.contracts.DummyContract
import net.corda.testing.dummyCommand
import net.corda.testing.getDefaultNotary
import net.corda.testing.node.MockNetwork
import org.junit.After
import org.junit.Assert.*
@@ -96,8 +99,6 @@ class ScheduledFlowTests {
val a = mockNet.createUnstartedNode()
val b = mockNet.createUnstartedNode()

notaryNode.internals.ensureRegistered()

mockNet.startNodes()
nodeA = a.started!!
nodeB = b.started!!
@@ -12,10 +12,10 @@ import net.corda.node.services.RPCUserServiceImpl
import net.corda.node.services.api.MonitoringService
import net.corda.node.services.config.NodeConfiguration
import net.corda.node.services.config.configureWithDevSSLCertificate
import net.corda.node.services.network.NetworkMapCacheImpl
import net.corda.node.services.network.PersistentNetworkMapCache
import net.corda.node.services.network.NetworkMapService
import net.corda.node.services.transactions.PersistentUniquenessProvider
import net.corda.node.testing.MockServiceHubInternal
import net.corda.node.utilities.AffinityExecutor.ServiceAffinityExecutor
import net.corda.node.utilities.CordaPersistence
import net.corda.node.utilities.configureDatabase
@@ -57,7 +57,7 @@ class ArtemisMessagingTests : TestDependencyInjectionBase() {
var messagingClient: NodeMessagingClient? = null
var messagingServer: ArtemisMessagingServer? = null

lateinit var networkMapCache: PersistentNetworkMapCache
lateinit var networkMapCache: NetworkMapCacheImpl

val rpcOps = object : RPCOps {
override val protocolVersion: Int get() = throw UnsupportedOperationException()
@@ -73,7 +73,7 @@ class ArtemisMessagingTests : TestDependencyInjectionBase() {
LogHelper.setLevel(PersistentUniquenessProvider::class)
database = configureDatabase(makeTestDataSourceProperties(), makeTestDatabaseProperties(), ::makeTestIdentityService)
networkMapRegistrationFuture = doneFuture(Unit)
networkMapCache = PersistentNetworkMapCache(serviceHub = object : MockServiceHubInternal(database, config) {})
networkMapCache = NetworkMapCacheImpl(PersistentNetworkMapCache(database, config), rigorousMock())
}

@After
@ -1,281 +0,0 @@
|
||||
package net.corda.node.services.network
|
||||
|
||||
import net.corda.core.concurrent.CordaFuture
|
||||
import net.corda.core.identity.CordaX500Name
|
||||
import net.corda.core.messaging.SingleMessageRecipient
|
||||
import net.corda.core.node.NodeInfo
|
||||
import net.corda.core.serialization.deserialize
|
||||
import net.corda.core.utilities.getOrThrow
|
||||
import net.corda.node.internal.StartedNode
|
||||
import net.corda.node.services.api.NetworkMapCacheInternal
|
||||
import net.corda.node.services.config.NodeConfiguration
|
||||
import net.corda.node.services.messaging.MessagingService
|
||||
import net.corda.node.services.messaging.send
|
||||
import net.corda.node.services.messaging.sendRequest
|
||||
import net.corda.node.services.network.AbstractNetworkMapServiceTest.Changed.Added
|
||||
import net.corda.node.services.network.AbstractNetworkMapServiceTest.Changed.Removed
|
||||
import net.corda.node.services.network.NetworkMapService.*
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.FETCH_TOPIC
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.PUSH_ACK_TOPIC
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.PUSH_TOPIC
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.QUERY_TOPIC
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.REGISTER_TOPIC
|
||||
import net.corda.node.services.network.NetworkMapService.Companion.SUBSCRIPTION_TOPIC
|
||||
import net.corda.node.utilities.AddOrRemove
|
||||
import net.corda.node.utilities.AddOrRemove.ADD
|
||||
import net.corda.node.utilities.AddOrRemove.REMOVE
|
||||
import net.corda.nodeapi.internal.ServiceInfo
|
||||
import net.corda.testing.*
|
||||
import net.corda.testing.node.MockNetwork
|
||||
import net.corda.testing.node.MockNetwork.MockNode
|
||||
import org.assertj.core.api.Assertions.assertThat
|
||||
import org.junit.After
|
||||
import org.junit.Before
|
||||
import org.junit.Test
|
||||
import java.math.BigInteger
|
||||
import java.security.KeyPair
|
||||
import java.time.Instant
|
||||
import java.util.*
|
||||
import java.util.concurrent.LinkedBlockingQueue
|
||||
|
||||
abstract class AbstractNetworkMapServiceTest<out S : AbstractNetworkMapService> {
|
||||
lateinit var mockNet: MockNetwork
|
||||
lateinit var mapServiceNode: StartedNode<MockNode>
|
||||
lateinit var alice: StartedNode<MockNode>
|
||||
|
||||
companion object {
|
||||
val subscriberLegalName = CordaX500Name(organisation = "Subscriber", locality = "New York", country = "US")
|
||||
}
|
||||
|
||||
@Before
|
||||
fun setup() {
|
||||
mockNet = MockNetwork(defaultFactory = nodeFactory)
|
||||
mapServiceNode = mockNet.networkMapNode
|
||||
alice = mockNet.createNode(nodeFactory = nodeFactory, legalName = ALICE.name)
|
||||
mockNet.runNetwork()
|
||||
lastSerial = System.currentTimeMillis()
|
||||
}
|
||||
|
||||
@After
|
||||
fun tearDown() {
|
||||
mockNet.stopNodes()
|
||||
}
|
||||
|
||||
protected abstract val nodeFactory: MockNetwork.Factory<*>
|
||||
|
||||
protected abstract val networkMapService: S
|
||||
|
||||
// For persistent service, switch out the implementation for a newly instantiated one so we can check the state is preserved.
|
||||
protected abstract fun swizzle()
|
||||
|
||||
@Test
|
||||
fun `all nodes register themselves`() {
|
||||
// setup has run the network and so we immediately expect the network map service to be correctly populated
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Added(alice))
|
||||
assertThat(alice.identityQuery()).isEqualTo(alice.info)
|
||||
assertThat(mapServiceNode.identityQuery()).isEqualTo(mapServiceNode.info)
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `re-register the same node`() {
|
||||
val response = alice.registration(ADD)
|
||||
swizzle()
|
||||
assertThat(response.getOrThrow().error).isNull()
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Added(alice)) // Confirm it's a no-op
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `re-register with smaller serial value`() {
|
||||
val response = alice.registration(ADD, serial = 1)
|
||||
swizzle()
|
||||
assertThat(response.getOrThrow().error).isNotNull() // Make sure send error message is sent back
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Added(alice)) // Confirm it's a no-op
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `de-register node`() {
|
||||
val response = alice.registration(REMOVE)
|
||||
swizzle()
|
||||
assertThat(response.getOrThrow().error).isNull()
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Removed(alice))
|
||||
swizzle()
|
||||
assertThat(alice.identityQuery()).isNull()
|
||||
assertThat(mapServiceNode.identityQuery()).isEqualTo(mapServiceNode.info)
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `de-register same node again`() {
|
||||
alice.registration(REMOVE)
|
||||
val response = alice.registration(REMOVE)
|
||||
swizzle()
|
||||
assertThat(response.getOrThrow().error).isNotNull() // Make sure send error message is sent back
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Removed(alice))
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `de-register unknown node`() {
|
||||
val bob = newNodeSeparateFromNetworkMap(BOB.name)
|
||||
val response = bob.registration(REMOVE)
|
||||
swizzle()
|
||||
assertThat(response.getOrThrow().error).isNotNull() // Make sure send error message is sent back
|
||||
assertThat(alice.fetchMap()).containsOnly(Added(mapServiceNode), Added(alice))
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `subscribed while new node registers`() {
|
||||
val updates = alice.subscribe()
|
||||
swizzle()
|
||||
val bob = addNewNodeToNetworkMap(BOB.name)
|
||||
swizzle()
|
||||
val update = updates.single()
|
||||
assertThat(update.mapVersion).isEqualTo(networkMapService.mapVersion)
|
||||
assertThat(update.wireReg.verified().toChanged()).isEqualTo(Added(bob.info))
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `subscribed while node de-registers`() {
|
||||
val bob = addNewNodeToNetworkMap(BOB.name)
|
||||
val updates = alice.subscribe()
|
||||
bob.registration(REMOVE)
|
||||
swizzle()
|
||||
assertThat(updates.map { it.wireReg.verified().toChanged() }).containsOnly(Removed(bob.info))
|
||||
}
|
||||
|
||||
@Test
|
||||
fun unsubscribe() {
|
||||
val updates = alice.subscribe()
|
||||
val bob = addNewNodeToNetworkMap(BOB.name)
|
||||
alice.unsubscribe()
|
||||
addNewNodeToNetworkMap(CHARLIE.name)
|
||||
swizzle()
|
||||
assertThat(updates.map { it.wireReg.verified().toChanged() }).containsOnly(Added(bob.info))
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `surpass unacknowledged update limit`() {
|
||||
val subscriber = newNodeSeparateFromNetworkMap(subscriberLegalName)
|
||||
val updates = subscriber.subscribe()
|
||||
val bob = addNewNodeToNetworkMap(BOB.name)
|
||||
var serial = updates.first().wireReg.verified().serial
|
||||
repeat(networkMapService.maxUnacknowledgedUpdates) {
|
||||
bob.registration(ADD, serial = ++serial)
|
||||
swizzle()
|
||||
}
|
||||
// We sent maxUnacknowledgedUpdates + 1 updates - the last one will be missed
|
||||
assertThat(updates).hasSize(networkMapService.maxUnacknowledgedUpdates)
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `delay sending update ack until just before unacknowledged update limit`() {
|
||||
val subscriber = newNodeSeparateFromNetworkMap(subscriberLegalName)
|
||||
val updates = subscriber.subscribe()
|
||||
val bob = addNewNodeToNetworkMap(BOB.name)
|
||||
var serial = updates.first().wireReg.verified().serial
|
||||
repeat(networkMapService.maxUnacknowledgedUpdates - 1) {
|
||||
bob.registration(ADD, serial = ++serial)
|
||||
swizzle()
|
||||
}
|
||||
// Subscriber will receive maxUnacknowledgedUpdates updates before sending ack
|
||||
subscriber.ackUpdate(updates.last().mapVersion)
|
||||
swizzle()
|
||||
bob.registration(ADD, serial = ++serial)
|
||||
assertThat(updates).hasSize(networkMapService.maxUnacknowledgedUpdates + 1)
|
||||
assertThat(updates.last().wireReg.verified().serial).isEqualTo(serial)
|
||||
}
|
||||
|
||||
private fun StartedNode<*>.fetchMap(subscribe: Boolean = false, ifChangedSinceVersion: Int? = null): List<Changed> {
|
||||
val request = FetchMapRequest(subscribe, ifChangedSinceVersion, network.myAddress)
|
||||
val response = services.networkService.sendRequest<FetchMapResponse>(FETCH_TOPIC, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
return response.getOrThrow().nodes?.map { it.toChanged() } ?: emptyList()
|
||||
}
|
||||
|
||||
private fun NodeRegistration.toChanged(): Changed = when (type) {
|
||||
ADD -> Added(node)
|
||||
REMOVE -> Removed(node)
|
||||
}
|
||||
|
||||
private fun StartedNode<*>.identityQuery(): NodeInfo? {
|
||||
val request = QueryIdentityRequest(services.myInfo.chooseIdentityAndCert(), network.myAddress)
|
||||
val response = services.networkService.sendRequest<QueryIdentityResponse>(QUERY_TOPIC, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
return response.getOrThrow().node
|
||||
}
|
||||
|
||||
private var lastSerial = Long.MIN_VALUE
|
||||
|
||||
private fun StartedNode<*>.registration(addOrRemove: AddOrRemove,
|
||||
serial: Long? = null): CordaFuture<RegistrationResponse> {
|
||||
val distinctSerial = if (serial == null) {
|
||||
++lastSerial
|
||||
} else {
|
||||
lastSerial = serial
|
||||
serial
|
||||
}
|
||||
val expires = Instant.now() + NetworkMapService.DEFAULT_EXPIRATION_PERIOD
|
||||
val nodeRegistration = NodeRegistration(info, distinctSerial, addOrRemove, expires)
|
||||
val request = RegistrationRequest(nodeRegistration.toWire(services.keyManagementService, info.chooseIdentity().owningKey), network.myAddress)
|
||||
val response = services.networkService.sendRequest<RegistrationResponse>(REGISTER_TOPIC, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
return response
|
||||
}
|
||||
|
||||
private fun StartedNode<*>.subscribe(): Queue<Update> {
|
||||
val request = SubscribeRequest(true, network.myAddress)
|
||||
val updates = LinkedBlockingQueue<Update>()
|
||||
services.networkService.addMessageHandler(PUSH_TOPIC) { message, _ ->
|
||||
updates += message.data.deserialize<Update>()
|
||||
}
|
||||
val response = services.networkService.sendRequest<SubscribeResponse>(SUBSCRIPTION_TOPIC, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
assertThat(response.getOrThrow().confirmed).isTrue()
|
||||
return updates
|
||||
}
|
||||
|
||||
private fun StartedNode<*>.unsubscribe() {
|
||||
val request = SubscribeRequest(false, network.myAddress)
|
||||
val response = services.networkService.sendRequest<SubscribeResponse>(SUBSCRIPTION_TOPIC, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
assertThat(response.getOrThrow().confirmed).isTrue()
|
||||
}
|
||||
|
||||
private fun StartedNode<*>.ackUpdate(mapVersion: Int) {
|
||||
val request = UpdateAcknowledge(mapVersion, services.networkService.myAddress)
|
||||
services.networkService.send(PUSH_ACK_TOPIC, MessagingService.DEFAULT_SESSION_ID, request, mapServiceNode.network.myAddress)
|
||||
mockNet.runNetwork()
|
||||
}
|
||||
|
||||
private fun addNewNodeToNetworkMap(legalName: CordaX500Name): StartedNode<MockNode> {
|
||||
val node = mockNet.createNode(legalName = legalName)
|
||||
mockNet.runNetwork()
|
||||
lastSerial = System.currentTimeMillis()
|
||||
return node
|
||||
}
|
||||
|
||||
private fun newNodeSeparateFromNetworkMap(legalName: CordaX500Name): StartedNode<MockNode> {
|
||||
return mockNet.createNode(legalName = legalName, nodeFactory = NoNMSNodeFactory)
|
||||
}
|
||||
|
||||
sealed class Changed {
|
||||
data class Added(val node: NodeInfo) : Changed() {
|
||||
constructor(node: StartedNode<*>) : this(node.info)
|
||||
}
|
||||
|
||||
data class Removed(val node: NodeInfo) : Changed() {
|
||||
constructor(node: StartedNode<*>) : this(node.info)
|
||||
}
|
||||
}
|
||||
|
||||
private object NoNMSNodeFactory : MockNetwork.Factory<MockNode> {
|
||||
override fun create(config: NodeConfiguration,
|
||||
network: MockNetwork,
|
||||
networkMapAddr: SingleMessageRecipient?,
|
||||
id: Int,
|
||||
notaryIdentity: Pair<ServiceInfo, KeyPair>?,
|
||||
entropyRoot: BigInteger): MockNode {
|
||||
return object : MockNode(config, network, null, id, notaryIdentity, entropyRoot) {
|
||||
override fun makeNetworkMapService(network: MessagingService, networkMapCache: NetworkMapCacheInternal) = NullNetworkMapService
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -0,0 +1,154 @@
|
||||
package net.corda.node.services.network
|
||||
|
||||
import com.fasterxml.jackson.databind.ObjectMapper
|
||||
import net.corda.core.crypto.*
|
||||
import net.corda.core.identity.CordaX500Name
|
||||
import net.corda.core.identity.PartyAndCertificate
|
||||
import net.corda.core.node.NodeInfo
|
||||
import net.corda.core.serialization.deserialize
|
||||
import net.corda.core.serialization.serialize
|
||||
import net.corda.core.utilities.NetworkHostAndPort
|
||||
import net.corda.node.utilities.CertificateType
|
||||
import net.corda.node.utilities.X509Utilities
|
||||
import net.corda.testing.TestDependencyInjectionBase
|
||||
import org.assertj.core.api.Assertions.assertThat
|
||||
import org.bouncycastle.asn1.x500.X500Name
|
||||
import org.bouncycastle.cert.X509CertificateHolder
|
||||
import org.eclipse.jetty.server.Server
|
||||
import org.eclipse.jetty.server.ServerConnector
|
||||
import org.eclipse.jetty.server.handler.HandlerCollection
|
||||
import org.eclipse.jetty.servlet.ServletContextHandler
|
||||
import org.eclipse.jetty.servlet.ServletHolder
|
||||
import org.glassfish.jersey.server.ResourceConfig
|
||||
import org.glassfish.jersey.servlet.ServletContainer
|
||||
import org.junit.After
|
||||
import org.junit.Before
|
||||
import org.junit.Test
|
||||
import java.io.ByteArrayInputStream
|
||||
import java.io.InputStream
|
||||
import java.net.InetSocketAddress
|
||||
import java.security.cert.CertPath
|
||||
import java.security.cert.Certificate
|
||||
import java.security.cert.CertificateFactory
|
||||
import java.security.cert.X509Certificate
|
||||
import javax.ws.rs.*
|
||||
import javax.ws.rs.core.MediaType
|
||||
import javax.ws.rs.core.Response
|
||||
import javax.ws.rs.core.Response.ok
|
||||
import kotlin.test.assertEquals
|
||||
|
||||
class HTTPNetworkMapClientTest : TestDependencyInjectionBase() {
|
||||
private lateinit var server: Server
|
||||
|
||||
private lateinit var networkMapClient: NetworkMapClient
|
||||
private val rootCAKey = Crypto.generateKeyPair(X509Utilities.DEFAULT_TLS_SIGNATURE_SCHEME)
|
||||
private val rootCACert = X509Utilities.createSelfSignedCACertificate(CordaX500Name(commonName = "Corda Node Root CA", organisation = "R3 LTD", locality = "London", country = "GB"), rootCAKey)
|
||||
private val intermediateCAKey = Crypto.generateKeyPair(X509Utilities.DEFAULT_TLS_SIGNATURE_SCHEME)
|
||||
private val intermediateCACert = X509Utilities.createCertificate(CertificateType.INTERMEDIATE_CA, rootCACert, rootCAKey, X500Name("CN=Corda Node Intermediate CA,L=London"), intermediateCAKey.public)
|
||||
|
||||
@Before
|
||||
fun setUp() {
|
||||
server = Server(InetSocketAddress("localhost", 0)).apply {
|
||||
handler = HandlerCollection().apply {
|
||||
addHandler(ServletContextHandler().apply {
|
||||
contextPath = "/"
|
||||
val resourceConfig = ResourceConfig().apply {
|
||||
// Add your API provider classes (annotated for JAX-RS) here
|
||||
register(MockNetworkMapServer())
|
||||
}
|
||||
val jerseyServlet = ServletHolder(ServletContainer(resourceConfig)).apply { initOrder = 0 }// Initialise at server start
|
||||
addServlet(jerseyServlet, "/api/*")
|
||||
})
|
||||
}
|
||||
}
|
||||
server.start()
|
||||
|
||||
while (!server.isStarted) {
|
||||
Thread.sleep(100)
|
||||
}
|
||||
|
||||
val hostAndPort = server.connectors.mapNotNull { it as? ServerConnector }.first()
|
||||
networkMapClient = HTTPNetworkMapClient("http://${hostAndPort.host}:${hostAndPort.localPort}/api/network-map")
|
||||
}
|
||||
|
||||
@After
|
||||
fun tearDown() {
|
||||
server.stop()
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `registered node is added to the network map`() {
|
||||
// Create node info.
|
||||
val signedNodeInfo = createNodeInfo("Test1")
|
||||
val nodeInfo = signedNodeInfo.verified()
|
||||
|
||||
networkMapClient.publish(signedNodeInfo)
|
||||
|
||||
val nodeInfoHash = nodeInfo.serialize().sha256()
|
||||
|
||||
assertThat(networkMapClient.getNetworkMap()).containsExactly(nodeInfoHash)
|
||||
assertEquals(nodeInfo, networkMapClient.getNodeInfo(nodeInfoHash))
|
||||
|
||||
val signedNodeInfo2 = createNodeInfo("Test2")
|
||||
val nodeInfo2 = signedNodeInfo2.verified()
|
||||
networkMapClient.publish(signedNodeInfo2)
|
||||
|
||||
val nodeInfoHash2 = nodeInfo2.serialize().sha256()
|
||||
assertThat(networkMapClient.getNetworkMap()).containsExactly(nodeInfoHash, nodeInfoHash2)
|
||||
assertEquals(nodeInfo2, networkMapClient.getNodeInfo(nodeInfoHash2))
|
||||
}
|
||||
|
||||
private fun createNodeInfo(organisation: String): SignedData<NodeInfo> {
|
||||
val keyPair = Crypto.generateKeyPair(X509Utilities.DEFAULT_TLS_SIGNATURE_SCHEME)
|
||||
val clientCert = X509Utilities.createCertificate(CertificateType.CLIENT_CA, intermediateCACert, intermediateCAKey, CordaX500Name(organisation = organisation, locality = "London", country = "GB"), keyPair.public)
|
||||
val certPath = buildCertPath(clientCert.toX509Certificate(), intermediateCACert.toX509Certificate(), rootCACert.toX509Certificate())
|
||||
val nodeInfo = NodeInfo(listOf(NetworkHostAndPort("my.$organisation.com", 1234)), listOf(PartyAndCertificate(certPath)), 1, serial = 1L)
|
||||
|
||||
// Create digital signature.
|
||||
val digitalSignature = DigitalSignature.WithKey(keyPair.public, Crypto.doSign(keyPair.private, nodeInfo.serialize().bytes))
|
||||
|
||||
return SignedData(nodeInfo.serialize(), digitalSignature)
|
||||
}
|
||||
}
|
||||
|
||||
@Path("network-map")
|
||||
// This is a stub implementation of the network map rest API.
|
||||
internal class MockNetworkMapServer {
|
||||
private val nodeInfos = mutableMapOf<SecureHash, NodeInfo>()
|
||||
@POST
|
||||
@Path("publish")
|
||||
@Consumes(MediaType.APPLICATION_OCTET_STREAM)
|
||||
fun publishNodeInfo(input: InputStream): Response {
|
||||
val registrationData = input.readBytes().deserialize<SignedData<NodeInfo>>()
|
||||
val nodeInfo = registrationData.verified()
|
||||
val nodeInfoHash = nodeInfo.serialize().sha256()
|
||||
nodeInfos.put(nodeInfoHash, nodeInfo)
|
||||
return ok().build()
|
||||
}
|
||||
|
||||
@GET
|
||||
@Produces(MediaType.APPLICATION_JSON)
|
||||
fun getNetworkMap(): Response {
|
||||
return Response.ok(ObjectMapper().writeValueAsString(nodeInfos.keys.map { it.toString() })).build()
|
||||
}
|
||||
|
||||
@GET
|
||||
@Path("{var}")
|
||||
@Produces(MediaType.APPLICATION_OCTET_STREAM)
|
||||
fun getNodeInfo(@PathParam("var") nodeInfoHash: String): Response {
|
||||
val nodeInfo = nodeInfos[SecureHash.parse(nodeInfoHash)]
|
||||
return if (nodeInfo != null) {
|
||||
Response.ok(nodeInfo.serialize().bytes)
|
||||
} else {
|
||||
Response.status(Response.Status.NOT_FOUND)
|
||||
}.build()
|
||||
}
|
||||
}
|
||||
|
||||
private fun buildCertPath(vararg certificates: Certificate): CertPath {
|
||||
return CertificateFactory.getInstance("X509").generateCertPath(certificates.asList())
|
||||
}
|
||||
|
||||
private fun X509CertificateHolder.toX509Certificate(): X509Certificate {
|
||||
return CertificateFactory.getInstance("X509").generateCertificate(ByteArrayInputStream(encoded)) as X509Certificate
|
||||
}
|
@ -1,9 +0,0 @@
|
||||
package net.corda.node.services.network
|
||||
|
||||
import net.corda.testing.node.MockNetwork
|
||||
|
||||
class InMemoryNetworkMapServiceTest : AbstractNetworkMapServiceTest<InMemoryNetworkMapService>() {
|
||||
override val nodeFactory get() = MockNetwork.DefaultFactory
|
||||
override val networkMapService: InMemoryNetworkMapService get() = mapServiceNode.inNodeNetworkMapService as InMemoryNetworkMapService
|
||||
override fun swizzle() = Unit
|
||||
}
|
@ -1,49 +1,34 @@
|
||||
package net.corda.node.services.network
|
||||
|
||||
import net.corda.core.node.services.NetworkMapCache
|
||||
import net.corda.core.utilities.getOrThrow
|
||||
import net.corda.testing.ALICE
|
||||
import net.corda.testing.BOB
|
||||
import net.corda.testing.chooseIdentity
|
||||
import net.corda.testing.node.MockNetwork
|
||||
import net.corda.testing.node.MockNodeParameters
|
||||
import org.assertj.core.api.Assertions.assertThat
|
||||
import org.junit.After
|
||||
import org.junit.Before
|
||||
import org.junit.Test
|
||||
import java.math.BigInteger
|
||||
import kotlin.test.assertEquals
|
||||
|
||||
class NetworkMapCacheTest {
|
||||
lateinit var mockNet: MockNetwork
|
||||
|
||||
@Before
|
||||
fun setUp() {
|
||||
mockNet = MockNetwork()
|
||||
}
|
||||
val mockNet: MockNetwork = MockNetwork()
|
||||
|
||||
@After
|
||||
fun teardown() {
|
||||
mockNet.stopNodes()
|
||||
}
|
||||
|
||||
@Test
|
||||
fun registerWithNetwork() {
|
||||
mockNet.createNotaryNode()
|
||||
val aliceNode = mockNet.createPartyNode(ALICE.name)
|
||||
val future = aliceNode.services.networkMapCache.addMapService(aliceNode.network, mockNet.networkMapNode.network.myAddress, false, null)
|
||||
mockNet.runNetwork()
|
||||
future.getOrThrow()
|
||||
}
|
||||
|
||||
@Test
|
||||
fun `key collision`() {
|
||||
val entropy = BigInteger.valueOf(24012017L)
|
||||
val aliceNode = mockNet.createNode(nodeFactory = MockNetwork.DefaultFactory, legalName = ALICE.name, entropyRoot = entropy)
|
||||
val aliceNode = mockNet.createNode(MockNodeParameters(legalName = ALICE.name, entropyRoot = entropy))
|
||||
mockNet.runNetwork()
|
||||
|
||||
// Node A currently knows only about itself, so this returns node A
|
||||
assertEquals(aliceNode.services.networkMapCache.getNodesByLegalIdentityKey(aliceNode.info.chooseIdentity().owningKey).singleOrNull(), aliceNode.info)
|
||||
val bobNode = mockNet.createNode(nodeFactory = MockNetwork.DefaultFactory, legalName = BOB.name, entropyRoot = entropy)
|
||||
val bobNode = mockNet.createNode(MockNodeParameters(legalName = BOB.name, entropyRoot = entropy))
|
||||
assertEquals(aliceNode.info.chooseIdentity(), bobNode.info.chooseIdentity())
|
||||
|
||||
aliceNode.services.networkMapCache.addNode(bobNode.info)
|
||||
@ -83,7 +68,7 @@ class NetworkMapCacheTest {
|
||||
val aliceNode = mockNet.createPartyNode(ALICE.name)
|
||||
val notaryLegalIdentity = notaryNode.info.chooseIdentity()
|
||||
val alice = aliceNode.info.chooseIdentity()
|
||||
val notaryCache = notaryNode.services.networkMapCache as PersistentNetworkMapCache
|
||||
val notaryCache = notaryNode.services.networkMapCache
|
||||
mockNet.runNetwork()
|
||||
notaryNode.database.transaction {
|
||||
assertThat(notaryCache.getNodeByLegalIdentity(alice) != null)
|
||||
|
@ -1,56 +0,0 @@
|
||||
package net.corda.node.services.network
|
||||
|
||||
import net.corda.core.messaging.SingleMessageRecipient
|
||||
import net.corda.node.services.api.NetworkMapCacheInternal
|
||||
import net.corda.node.services.config.NodeConfiguration
|
||||
import net.corda.node.services.messaging.MessagingService
|
||||
import net.corda.nodeapi.internal.ServiceInfo
|
||||
import net.corda.testing.node.MockNetwork
|
||||
import net.corda.testing.node.MockNetwork.MockNode
|
||||
import java.math.BigInteger
|
||||
import java.security.KeyPair
|
||||
|
||||
/**
|
||||
* This class mirrors [InMemoryNetworkMapServiceTest] but switches in a [PersistentNetworkMapService] and
|
||||
* repeatedly replaces it with new instances to check that the service correctly restores the most recent state.
|
||||
*/
|
||||
class PersistentNetworkMapServiceTest : AbstractNetworkMapServiceTest<PersistentNetworkMapService>() {
|
||||
|
||||
override val nodeFactory: MockNetwork.Factory<*> get() = NodeFactory
|
||||
|
||||
override val networkMapService: PersistentNetworkMapService
|
||||
get() = (mapServiceNode.inNodeNetworkMapService as SwizzleNetworkMapService).delegate
|
||||
|
||||
override fun swizzle() {
|
||||
mapServiceNode.database.transaction {
|
||||
(mapServiceNode.inNodeNetworkMapService as SwizzleNetworkMapService).swizzle()
|
||||
}
|
||||
}
|
||||
|
||||
private object NodeFactory : MockNetwork.Factory<MockNode> {
|
||||
override fun create(config: NodeConfiguration,
|
||||
network: MockNetwork,
|
||||
networkMapAddr: SingleMessageRecipient?,
|
||||
id: Int,
|
||||
notaryIdentity: Pair<ServiceInfo, KeyPair>?,
|
||||
entropyRoot: BigInteger): MockNode {
|
||||
return object : MockNode(config, network, networkMapAddr, id, notaryIdentity, entropyRoot) {
|
||||
override fun makeNetworkMapService(network: MessagingService, networkMapCache: NetworkMapCacheInternal) = SwizzleNetworkMapService(network, networkMapCache)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* We use a special [NetworkMapService] that allows us to switch in a new instance at any time to check that the
|
||||
* state within it is correctly restored.
|
||||
*/
|
||||
private class SwizzleNetworkMapService(private val delegateFactory: () -> PersistentNetworkMapService) : NetworkMapService {
|
||||
constructor(network: MessagingService, networkMapCache: NetworkMapCacheInternal) : this({ PersistentNetworkMapService(network, networkMapCache, 1) })
|
||||
|
||||
var delegate = delegateFactory()
|
||||
fun swizzle() {
|
||||
delegate.unregisterNetworkHandlers()
|
||||
delegate = delegateFactory()
|
||||
}
|
||||
}
|
||||
}
|
@ -45,7 +45,7 @@ class DBTransactionStorageTests : TestDependencyInjectionBase() {
|
||||
override val vaultService: VaultServiceInternal
|
||||
get() {
|
||||
val vaultService = NodeVaultService(clock, keyManagementService, stateLoader, database.hibernateConfig)
|
||||
hibernatePersister = HibernateObserver(vaultService.rawUpdates, database.hibernateConfig)
|
||||
hibernatePersister = HibernateObserver.install(vaultService.rawUpdates, database.hibernateConfig)
|
||||
return vaultService
|
||||
}
|
||||
|
||||
|
@@ -69,8 +69,7 @@ class HibernateObserverTests {
            }
        }
        val database = configureDatabase(makeTestDataSourceProperties(), makeTestDatabaseProperties(), ::makeTestIdentityService, schemaService)
        @Suppress("UNUSED_VARIABLE")
        val observer = HibernateObserver(rawUpdatesPublisher, database.hibernateConfig)
        HibernateObserver.install(rawUpdatesPublisher, database.hibernateConfig)
        database.transaction {
            rawUpdatesPublisher.onNext(Vault.Update(emptySet(), setOf(StateAndRef(TransactionState(TestState(), DummyContract.PROGRAM_ID, MEGA_CORP), StateRef(SecureHash.sha256("dummy"), 0)))))
            val parentRowCountResult = DatabaseTransactionManager.current().connection.prepareStatement("select count(*) from Parents").executeQuery()
@@ -33,12 +33,14 @@ import net.corda.testing.node.InMemoryMessagingNetwork.MessageTransfer
import net.corda.testing.node.InMemoryMessagingNetwork.ServicePeerAllocationStrategy.RoundRobin
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetwork.MockNode
import net.corda.testing.node.MockNodeParameters
import net.corda.testing.node.pumpReceive
import org.assertj.core.api.Assertions.assertThat
import org.assertj.core.api.Assertions.assertThatThrownBy
import org.assertj.core.api.AssertionsForClassTypes.assertThatExceptionOfType
import org.junit.After
import org.junit.Before
import org.junit.Ignore
import org.junit.Test
import rx.Notification
import rx.Observable
@@ -64,14 +66,16 @@ class FlowFrameworkTests {
    private lateinit var alice: Party
    private lateinit var bob: Party

    private fun StartedNode<*>.flushSmm() {
        (this.smm as StateMachineManagerImpl).executor.flush()
    }

    @Before
    fun start() {
        mockNet = MockNetwork(servicePeerAllocationStrategy = RoundRobin(), cordappPackages = listOf("net.corda.finance.contracts", "net.corda.testing.contracts"))
        aliceNode = mockNet.createNode(legalName = ALICE_NAME)
        bobNode = mockNet.createNode(legalName = BOB_NAME)

        aliceNode = mockNet.createNode(MockNodeParameters(legalName = ALICE_NAME))
        bobNode = mockNet.createNode(MockNodeParameters(legalName = BOB_NAME))
        mockNet.runNetwork()
        aliceNode.internals.ensureRegistered()

        // We intentionally create our own notary and ignore the one provided by the network
        // Note that these notaries don't operate correctly as they don't share their state. They are only used for testing
@@ -152,45 +156,19 @@ class FlowFrameworkTests {
        assertEquals(true, flow.flowStarted) // Now we should have run the flow
    }

    @Test
    fun `flow added before network map will be init checkpointed`() {
        var charlieNode = mockNet.createNode() //create vanilla node
        val flow = NoOpFlow()
        charlieNode.services.startFlow(flow)
        assertEquals(false, flow.flowStarted) // Not started yet as no network activity has been allowed yet
        charlieNode.internals.disableDBCloseOnStop()
        charlieNode.services.networkMapCache.clearNetworkMapCache() // zap persisted NetworkMapCache to force use of network.
        charlieNode.dispose()

        charlieNode = mockNet.createNode(charlieNode.internals.id)
        val restoredFlow = charlieNode.getSingleFlow<NoOpFlow>().first
        assertEquals(false, restoredFlow.flowStarted) // Not started yet as no network activity has been allowed yet
        mockNet.runNetwork() // Allow network map messages to flow
        charlieNode.smm.executor.flush()
        assertEquals(true, restoredFlow.flowStarted) // Now we should have run the flow and hopefully cleared the init checkpoint
        charlieNode.internals.disableDBCloseOnStop()
        charlieNode.services.networkMapCache.clearNetworkMapCache() // zap persisted NetworkMapCache to force use of network.
        charlieNode.dispose()

        // Now it is completed the flow should leave no Checkpoint.
        charlieNode = mockNet.createNode(charlieNode.internals.id)
        mockNet.runNetwork() // Allow network map messages to flow
        charlieNode.smm.executor.flush()
        assertTrue(charlieNode.smm.findStateMachines(NoOpFlow::class.java).isEmpty())
    }

    @Test
    fun `flow loaded from checkpoint will respond to messages from before start`() {
        aliceNode.registerFlowFactory(ReceiveFlow::class) { InitiatedSendFlow("Hello", it) }
        bobNode.services.startFlow(ReceiveFlow(alice).nonTerminating()) // Prepare checkpointed receive flow
        // Make sure the add() has finished initial processing.
        bobNode.smm.executor.flush()
        bobNode.flushSmm()
        bobNode.internals.disableDBCloseOnStop()
        bobNode.dispose() // kill receiver
        val restoredFlow = bobNode.restartAndGetRestoredFlow<ReceiveFlow>()
        assertThat(restoredFlow.receivedPayloads[0]).isEqualTo("Hello")
    }

    @Ignore("Some changes in startup order make this test's assumptions fail.")
    @Test
    fun `flow with send will resend on interrupted restart`() {
        val payload = random63BitValue()
@@ -198,8 +176,7 @@ class FlowFrameworkTests {

        var sentCount = 0
        mockNet.messagingNetwork.sentMessages.toSessionTransfers().filter { it.isPayloadTransfer }.forEach { sentCount++ }

        val charlieNode = mockNet.createNode(legalName = CHARLIE_NAME)
        val charlieNode = mockNet.createNode(MockNodeParameters(legalName = CHARLIE_NAME))
        val secondFlow = charlieNode.registerFlowFactory(PingPongFlow::class) { PingPongFlow(it, payload2) }
        mockNet.runNetwork()
        val charlie = charlieNode.info.singleIdentity()
@@ -210,7 +187,7 @@ class FlowFrameworkTests {
            assertEquals(1, bobNode.checkpointStorage.checkpoints().size)
        }
        // Make sure the add() has finished initial processing.
        bobNode.smm.executor.flush()
        bobNode.flushSmm()
        bobNode.internals.disableDBCloseOnStop()
        // Restart node and thus reload the checkpoint and resend the message with same UUID
        bobNode.dispose()
@@ -218,12 +195,12 @@ class FlowFrameworkTests {
            assertEquals(1, bobNode.checkpointStorage.checkpoints().size) // confirm checkpoint
            bobNode.services.networkMapCache.clearNetworkMapCache()
        }
        val node2b = mockNet.createNode(bobNode.internals.id)
        val node2b = mockNet.createNode(MockNodeParameters(bobNode.internals.id))
        bobNode.internals.manuallyCloseDB()
        val (firstAgain, fut1) = node2b.getSingleFlow<PingPongFlow>()
        // Run the network which will also fire up the second flow. First message should get deduped. So message data stays in sync.
        mockNet.runNetwork()
        node2b.smm.executor.flush()
        node2b.flushSmm()
        fut1.getOrThrow()

        val receivedCount = receivedSessionMessages.count { it.isPayloadTransfer }
@@ -245,7 +222,7 @@ class FlowFrameworkTests {

    @Test
    fun `sending to multiple parties`() {
        val charlieNode = mockNet.createNode(legalName = CHARLIE_NAME)
        val charlieNode = mockNet.createNode(MockNodeParameters(legalName = CHARLIE_NAME))
        mockNet.runNetwork()
        val charlie = charlieNode.info.singleIdentity()
        bobNode.registerFlowFactory(SendFlow::class) { InitiatedReceiveFlow(it).nonTerminating() }
@@ -278,7 +255,7 @@ class FlowFrameworkTests {

    @Test
    fun `receiving from multiple parties`() {
        val charlieNode = mockNet.createNode(legalName = CHARLIE_NAME)
        val charlieNode = mockNet.createNode(MockNodeParameters(legalName = CHARLIE_NAME))
        mockNet.runNetwork()
        val charlie = charlieNode.info.singleIdentity()
        val bobPayload = "Test 1"
@@ -432,7 +409,7 @@ class FlowFrameworkTests {

    @Test
    fun `FlowException propagated in invocation chain`() {
        val charlieNode = mockNet.createNode(legalName = CHARLIE_NAME)
        val charlieNode = mockNet.createNode(MockNodeParameters(legalName = CHARLIE_NAME))
        mockNet.runNetwork()
        val charlie = charlieNode.info.singleIdentity()

@@ -447,7 +424,7 @@ class FlowFrameworkTests {

    @Test
    fun `FlowException thrown and there is a 3rd unrelated party flow`() {
        val charlieNode = mockNet.createNode(legalName = CHARLIE_NAME)
        val charlieNode = mockNet.createNode(MockNodeParameters(legalName = CHARLIE_NAME))
        mockNet.runNetwork()
        val charlie = charlieNode.info.singleIdentity()

@@ -696,7 +673,7 @@ class FlowFrameworkTests {
    private inline fun <reified P : FlowLogic<*>> StartedNode<MockNode>.restartAndGetRestoredFlow() = internals.run {
        disableDBCloseOnStop() // Handover DB to new node copy
        stop()
        val newNode = mockNet.createNode(id)
        val newNode = mockNet.createNode(MockNodeParameters(id))
        newNode.internals.acceptableLiveFiberCountOnStop = 1
        manuallyCloseDB()
        mockNet.runNetwork() // allow NetworkMapService messages to stabilise and thus start the state machine
@@ -731,7 +708,7 @@ class FlowFrameworkTests {
    private fun StartedNode<*>.sendSessionMessage(message: SessionMessage, destination: Party) {
        services.networkService.apply {
            val address = getAddressOfParty(PartyInfo.SingleNode(destination, emptyList()))
            send(createMessage(StateMachineManager.sessionTopic, message.serialize().bytes), address)
            send(createMessage(StateMachineManagerImpl.sessionTopic, message.serialize().bytes), address)
        }
    }

@@ -755,7 +732,7 @@ class FlowFrameworkTests {
    }

    private fun Observable<MessageTransfer>.toSessionTransfers(): Observable<SessionTransfer> {
        return filter { it.message.topicSession == StateMachineManager.sessionTopic }.map {
        return filter { it.message.topicSession == StateMachineManagerImpl.sessionTopic }.map {
            val from = it.sender.id
            val message = it.message.data.deserialize<SessionMessage>()
            SessionTransfer(from, sanitise(message), it.recipients)
@@ -13,10 +13,11 @@ import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.getOrThrow
import net.corda.core.utilities.seconds
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.api.StartedNodeServices
import net.corda.testing.*
import net.corda.testing.contracts.DummyContract
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNodeParameters
import org.assertj.core.api.Assertions.assertThat
import org.junit.After
import org.junit.Before
@@ -28,8 +29,8 @@ import kotlin.test.assertFailsWith

class NotaryServiceTests {
    lateinit var mockNet: MockNetwork
    lateinit var notaryServices: ServiceHubInternal
    lateinit var aliceServices: ServiceHubInternal
    lateinit var notaryServices: StartedNodeServices
    lateinit var aliceServices: StartedNodeServices
    lateinit var notary: Party
    lateinit var alice: Party

@@ -37,9 +38,8 @@ class NotaryServiceTests {
    fun setup() {
        mockNet = MockNetwork(cordappPackages = listOf("net.corda.testing.contracts"))
        val notaryNode = mockNet.createNotaryNode(legalName = DUMMY_NOTARY.name, validating = false)
        aliceServices = mockNet.createNode(legalName = ALICE_NAME).services
        aliceServices = mockNet.createNode(MockNodeParameters(legalName = ALICE_NAME)).services
        mockNet.runNetwork() // Clear network map registration messages
        notaryNode.internals.ensureRegistered()
        notaryServices = notaryNode.services
        notary = notaryServices.getDefaultNotary()
        alice = aliceServices.myInfo.singleIdentity()
@@ -13,11 +13,12 @@ import net.corda.core.node.ServiceHub
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.getOrThrow
import net.corda.node.services.api.ServiceHubInternal
import net.corda.node.services.api.StartedNodeServices
import net.corda.node.services.issueInvalidState
import net.corda.testing.*
import net.corda.testing.contracts.DummyContract
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNodeParameters
import org.assertj.core.api.Assertions.assertThat
import org.junit.After
import org.junit.Before
@@ -28,8 +29,8 @@ import kotlin.test.assertFailsWith

class ValidatingNotaryServiceTests {
    lateinit var mockNet: MockNetwork
    lateinit var notaryServices: ServiceHubInternal
    lateinit var aliceServices: ServiceHubInternal
    lateinit var notaryServices: StartedNodeServices
    lateinit var aliceServices: StartedNodeServices
    lateinit var notary: Party
    lateinit var alice: Party

@@ -37,9 +38,8 @@ class ValidatingNotaryServiceTests {
    fun setup() {
        mockNet = MockNetwork(cordappPackages = listOf("net.corda.testing.contracts"))
        val notaryNode = mockNet.createNotaryNode(legalName = DUMMY_NOTARY.name)
        val aliceNode = mockNet.createNode(legalName = ALICE_NAME)
        val aliceNode = mockNet.createNode(MockNodeParameters(legalName = ALICE_NAME))
        mockNet.runNetwork() // Clear network map registration messages
        notaryNode.internals.ensureRegistered()
        notaryServices = notaryNode.services
        aliceServices = aliceNode.services
        notary = notaryServices.getDefaultNotary()
@@ -0,0 +1,162 @@
package net.corda.node.services.vault

import co.paralleluniverse.fibers.Suspendable
import com.nhaarman.mockito_kotlin.*
import net.corda.core.contracts.*
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.AbstractParty
import net.corda.core.internal.FlowStateMachine
import net.corda.core.internal.packageName
import net.corda.core.internal.uncheckedCast
import net.corda.core.node.StateLoader
import net.corda.core.node.services.KeyManagementService
import net.corda.core.node.services.queryBy
import net.corda.core.node.services.vault.QueryCriteria.SoftLockingCondition
import net.corda.core.node.services.vault.QueryCriteria.SoftLockingType.LOCKED_ONLY
import net.corda.core.node.services.vault.QueryCriteria.VaultQueryCriteria
import net.corda.core.transactions.LedgerTransaction
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.NonEmptySet
import net.corda.core.utilities.OpaqueBytes
import net.corda.core.utilities.getOrThrow
import net.corda.core.utilities.unwrap
import net.corda.node.internal.InitiatedFlowFactory
import net.corda.node.services.api.VaultServiceInternal
import net.corda.testing.chooseIdentity
import net.corda.testing.node.MockNetwork
import net.corda.testing.rigorousMock
import net.corda.testing.node.MockNodeArgs
import net.corda.testing.node.MockNodeParameters
import org.junit.After
import org.junit.Test
import java.util.*
import java.util.concurrent.atomic.AtomicBoolean
import kotlin.reflect.jvm.jvmName
import kotlin.test.assertEquals

class NodePair(private val mockNet: MockNetwork) {
    private class ServerLogic(private val session: FlowSession, private val running: AtomicBoolean) : FlowLogic<Unit>() {
        @Suspendable
        override fun call() {
            running.set(true)
            session.receive<String>().unwrap { assertEquals("ping", it) }
            session.send("pong")
        }
    }

    @InitiatingFlow
    abstract class AbstractClientLogic<out T>(nodePair: NodePair) : FlowLogic<T>() {
        protected val server = nodePair.server.info.chooseIdentity()
        protected abstract fun callImpl(): T
        @Suspendable
        override fun call() = callImpl().also {
            initiateFlow(server).sendAndReceive<String>("ping").unwrap { assertEquals("pong", it) }
        }
    }

    private val serverRunning = AtomicBoolean()
    val server = mockNet.createNode()
    var client = mockNet.createNode().apply {
        internals.disableDBCloseOnStop() // Otherwise the in-memory database may disappear (taking the checkpoint with it) while we reboot the client.
    }
        private set

    fun <T> communicate(clientLogic: AbstractClientLogic<T>, rebootClient: Boolean): FlowStateMachine<T> {
        server.internals.internalRegisterFlowFactory(AbstractClientLogic::class.java, InitiatedFlowFactory.Core { ServerLogic(it, serverRunning) }, ServerLogic::class.java, false)
        client.services.startFlow(clientLogic)
        while (!serverRunning.get()) mockNet.runNetwork(1)
        if (rebootClient) {
            client.dispose()
            client = mockNet.createNode(MockNodeParameters(client.internals.id))
        }
        return uncheckedCast(client.smm.allStateMachines.single().stateMachine)
    }
}

class VaultSoftLockManagerTest {
    private val mockVault = rigorousMock<VaultServiceInternal>().also {
        doNothing().whenever(it).softLockRelease(any(), anyOrNull())
    }
    private val mockNet = MockNetwork(cordappPackages = listOf(ContractImpl::class.packageName), defaultFactory = object : MockNetwork.Factory<MockNetwork.MockNode> {
        override fun create(args: MockNodeArgs): MockNetwork.MockNode {
            return object : MockNetwork.MockNode(args) {
                override fun makeVaultService(keyManagementService: KeyManagementService, stateLoader: StateLoader): VaultServiceInternal {
                    val realVault = super.makeVaultService(keyManagementService, stateLoader)
                    return object : VaultServiceInternal by realVault {
                        override fun softLockRelease(lockId: UUID, stateRefs: NonEmptySet<StateRef>?) {
                            mockVault.softLockRelease(lockId, stateRefs) // No need to also call the real one for these tests.
                        }
                    }
                }
            }
        }
    })
    private val nodePair = NodePair(mockNet)
    @After
    fun tearDown() {
        mockNet.stopNodes()
    }

    object CommandDataImpl : CommandData
    class ClientLogic(nodePair: NodePair, val state: ContractState) : NodePair.AbstractClientLogic<List<ContractState>>(nodePair) {
        override fun callImpl() = run {
            subFlow(FinalityFlow(serviceHub.signInitialTransaction(TransactionBuilder(notary = ourIdentity).apply {
                addOutputState(state, ContractImpl::class.jvmName)
                addCommand(CommandDataImpl, ourIdentity.owningKey)
            })))
            serviceHub.vaultService.queryBy<ContractState>(VaultQueryCriteria(softLockingCondition = SoftLockingCondition(LOCKED_ONLY))).states.map {
                it.state.data
            }
        }
    }

    private abstract class ParticipantState(override val participants: List<AbstractParty>) : ContractState

    private class PlainOldState(participants: List<AbstractParty>) : ParticipantState(participants) {
        constructor(nodePair: NodePair) : this(listOf(nodePair.client.info.chooseIdentity()))
    }

    private class FungibleAssetImpl(participants: List<AbstractParty>) : ParticipantState(participants), FungibleAsset<Unit> {
        constructor(nodePair: NodePair) : this(listOf(nodePair.client.info.chooseIdentity()))

        override val owner get() = participants[0]
        override fun withNewOwner(newOwner: AbstractParty) = throw UnsupportedOperationException()
        override val amount get() = Amount(1, Issued(PartyAndReference(owner, OpaqueBytes.of(1)), Unit))
        override val exitKeys get() = throw UnsupportedOperationException()
        override fun withNewOwnerAndAmount(newAmount: Amount<Issued<Unit>>, newOwner: AbstractParty) = throw UnsupportedOperationException()
        override fun equals(other: Any?) = other is FungibleAssetImpl && participants == other.participants
        override fun hashCode() = participants.hashCode()
    }

    class ContractImpl : Contract {
        override fun verify(tx: LedgerTransaction) {}
    }

    private fun run(expectSoftLock: Boolean, state: ContractState, checkpoint: Boolean) {
        val fsm = nodePair.communicate(ClientLogic(nodePair, state), checkpoint)
        mockNet.runNetwork()
        if (expectSoftLock) {
            assertEquals(listOf(state), fsm.resultFuture.getOrThrow())
            verify(mockVault).softLockRelease(fsm.id.uuid, null)
        } else {
            assertEquals(emptyList(), fsm.resultFuture.getOrThrow())
            // In this case we don't want softLockRelease called so that we avoid its expensive query, even after restore from checkpoint.
        }
        verifyNoMoreInteractions(mockVault)
    }

    @Test
    fun `plain old state is not soft locked`() = run(false, PlainOldState(nodePair), false)

    @Test
    fun `plain old state is not soft locked with checkpoint`() = run(false, PlainOldState(nodePair), true)

    @Test
    fun `fungible asset is soft locked`() = run(true, FungibleAssetImpl(nodePair), false)

    @Test
    fun `fungible asset is soft locked with checkpoint`() = run(true, FungibleAssetImpl(nodePair), true)
}
@@ -1,8 +1,6 @@
package net.corda.node.utilities.registration

import com.nhaarman.mockito_kotlin.any
import com.nhaarman.mockito_kotlin.eq
import com.nhaarman.mockito_kotlin.mock
import com.nhaarman.mockito_kotlin.*
import net.corda.core.crypto.Crypto
import net.corda.core.crypto.SecureHash
import net.corda.core.identity.CordaX500Name
@@ -11,6 +9,7 @@ import net.corda.node.utilities.X509Utilities
import net.corda.node.utilities.getX509Certificate
import net.corda.node.utilities.loadKeyStore
import net.corda.testing.ALICE
import net.corda.testing.rigorousMock
import net.corda.testing.testNodeConfiguration
import org.bouncycastle.asn1.x500.X500Name
import org.bouncycastle.asn1.x500.style.BCStyle
@@ -38,10 +37,9 @@ class NetworkRegistrationHelperTest {
                .map { CordaX500Name(commonName = it, organisation = "R3 Ltd", locality = "London", country = "GB") }
        val certs = identities.stream().map { X509Utilities.createSelfSignedCACertificate(it, Crypto.generateKeyPair(X509Utilities.DEFAULT_TLS_SIGNATURE_SCHEME)) }
                .map { it.cert }.toTypedArray()

        val certService: NetworkRegistrationService = mock {
            on { submitRequest(any()) }.then { id }
            on { retrieveCertificates(eq(id)) }.then { certs }
        val certService = rigorousMock<NetworkRegistrationService>().also {
            doReturn(id).whenever(it).submitRequest(any())
            doReturn(certs).whenever(it).retrieveCertificates(eq(id))
        }

        val config = testNodeConfiguration(