This is so that the node archiving service, which scans for tables containing a "transaction_id" column, can automatically archive the sender and receiver distribution record information with the transaction.
* ENT-9875: Added new network parameters
- Added `transactionRecoveryPeriod`
- Added `confidentialIdentityPreGenerationPeriod`
These new parameters are currently nullable, meaning they can be ignored; if not specified, the duration will be null rather than, e.g., 0. This keeps the node API unchanged and unbroken, as sketched below.
_Note: if these parameters can stay nullable then the deprecated constructor might not even be needed (since the existing one will still work); this needs to be discussed._
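A minimal sketch of the compatibility argument, using an illustrative class rather than Corda's real `NetworkParameters` API: new nullable fields with a `null` default leave existing call sites untouched.
```
import java.time.Duration

// Illustrative only: simplified stand-in for the real NetworkParameters class.
data class ExampleParameters(
    val minimumPlatformVersion: Int,
    // New, nullable parameters default to null, so "not specified" is represented
    // as null rather than a sentinel such as Duration.ZERO.
    val transactionRecoveryPeriod: Duration? = null,
    val confidentialIdentityPreGenerationPeriod: Duration? = null
)

// Existing call sites keep compiling without a deprecated constructor overload.
val unchanged = ExampleParameters(minimumPlatformVersion = 13)
val extended = ExampleParameters(
    minimumPlatformVersion = 13,
    transactionRecoveryPeriod = Duration.ofDays(30)
)
```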
* ENT-6893: Added interface for clients to grab the OpenTelemetry handle.
* ENT-6893: Make detekt happy.
* ENT-6893: Fix warnings.
* ENT-6893: Make detekt happy.
* ENT-6893: Now shutdown opentelemetry when node stops or client is closed.
* ENT-6893: OpenTelemetryDriver is now not a singleton.
First cut of telemetry integration.
OpenTelemetry can be enabled in two ways. The first is via an OpenTelemetry java agent specified on the command line; this way you get the advantage of spans created by other libraries, such as Hibernate, because the java agent rewrites bytecode to insert spans.
The second way is with the OpenTelemetry driver (which links against the OpenTelemetry SDK). This is a fat jar provided with this project and needs to go into the node's drivers directory.
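As a rough illustration of what either approach enables (the `traced` helper and the `corda-example` instrumentation name below are ours, not part of the node API), code can obtain a tracer from the global OpenTelemetry instance; if neither the agent nor the driver is installed, `GlobalOpenTelemetry` degrades to a no-op implementation.
```
import io.opentelemetry.api.GlobalOpenTelemetry

// Hypothetical helper: wraps a block of work in a span using whatever global
// OpenTelemetry instance the java agent or the driver jar has registered.
fun <T> traced(operation: String, block: () -> T): T {
    val tracer = GlobalOpenTelemetry.getTracer("corda-example")
    val span = tracer.spanBuilder(operation).startSpan()
    return span.makeCurrent().use {
        try {
            block()
        } finally {
            span.end()
        }
    }
}
```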
There was a mistake made when we first introduced notary request signature checking, in that we didn't wrap it in SerializedBytes, so it always got deserialized as part of the flow message payload. To check the signature, it therefore has to be re-serialized. This means that for cross-version compatibility we can never change the serialized format of NotarisationRequest. In this case we need to make sure that every SecureHash mentioned in that data structure is a distinct instance, even if the values are repeated/identical, as that is how it was in Corda 1.
With the introduction of interning of SecureHash, this ceased to be true once again, including undoing the attempts to force it on the sending side that had been introduced in previous versions of Corda. So here we introduce a way to force it, and consolidate the forcing of distinct SecureHash instances into the NotarisationRequest itself, rather than leaving it to the caller of the constructor to remember to do it, so that the serialized form will always be as per Corda 1.
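The sketch below illustrates the idea only (the helper name is ours, it assumes SHA-256 hashes, and the real `NotarisationRequest` code may force distinctness differently): constructing each hash directly from a copied byte array sidesteps any factory-level interning, so equal values stay distinct instances and the Corda 1 serialized form is preserved.
```
import net.corda.core.crypto.SecureHash

// Hypothetical helper: map every (possibly interned) hash to a freshly constructed
// instance so that repeated values remain distinct objects, as they were in Corda 1.
fun forceDistinctSha256(hashes: List<SecureHash>): List<SecureHash> =
    hashes.map { SecureHash.SHA256(it.bytes.copyOf()) }
```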
ENT-6947: Implement interning for SecureHash, CordaX500Name, PublicKey, AbstractParty and SignatureAttachmentConstraint, including automatic detection of internable types off companion objects in AMQP & Kryo deserialization. In some cases, add new factory methods to companion objects, and make the main code base use them.
Performance tested in performance cluster with no negative impact visible (so default concurrency setting seems okay).
Testing suggests 5-6x memory saving for tokens in TokensSDK in memory selector. Should see approx. 1 million tokens per GB or better (1.5 million for the tokens we tested with).
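For readers unfamiliar with the technique, a minimal interning sketch is shown below; Corda's actual implementation (the companion-object detection used by the AMQP and Kryo deserializers, plus its eviction and concurrency behaviour) is more involved and is not reproduced here.
```
import java.util.concurrent.ConcurrentHashMap

// Minimal interner: all equal values collapse to one canonical instance,
// which is where the memory saving for duplicated hashes, names and keys comes from.
class Interner<T : Any> {
    private val pool = ConcurrentHashMap<T, T>()
    fun intern(value: T): T = pool.putIfAbsent(value, value) ?: value
}

fun main() {
    val interner = Interner<String>()
    val a = interner.intern(String(charArrayOf('p', 'k')))
    val b = interner.intern(String(charArrayOf('p', 'k')))
    check(a === b) // equal values now share a single instance
}
```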
This commit ports the previously implemented fix from Corda ENT. Due to
unrelated changes and a merge conflict, the fix has been manually
copied rather than cherry-picked.
* NOTICK - Don't know what the JIRA is but wanted to share.
* Updates to resolve build issues
* NOTICK: Fixed JDK11 version to prevent capsule version error
* ENT-6711: Added comment for use of jackson_kotlin_version.
* ENT-6711: Avoid deprecation warning, switched to the default method.
Co-authored-by: Chris Cochrane <chris.cochrane@r3.com>
Co-authored-by: Adel El-Beik <adel.el-beik@r3.com>
* ENT-6588 Restrict database operations platform flag
Put the restriction of database operations in `RestrictedConnection` and
`RestrictedEntityManager` behind a platform version flag.
`RESTRICTED_DATABASE_OPERATIONS = 7` was added to signify this.
If the version is less than 7, the database operations are not
restricted; a warning is logged to indicate that potentially dangerous
methods are being used.
If the version is 7 or greater, the database operations are restricted
and throw an error if called (see the sketch below).
Co-authored-by: Dan Newton <dan.newton@r3.com>
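A simplified sketch of the gating described above; the method and logger names are illustrative, and the real checks live inside `RestrictedConnection` and `RestrictedEntityManager`, keyed off the CorDapp's target platform version.
```
import org.slf4j.LoggerFactory

const val RESTRICTED_DATABASE_OPERATIONS = 7

private val log = LoggerFactory.getLogger("RestrictedDatabaseOperations")

// Hypothetical check called by the restricted wrappers before delegating
// to the underlying JDBC/JPA method.
fun checkRestrictedOperation(method: String, targetPlatformVersion: Int) {
    if (targetPlatformVersion >= RESTRICTED_DATABASE_OPERATIONS) {
        // Platform version 7 or greater: the operation is blocked outright.
        throw UnsupportedOperationException("$method is restricted and cannot be called")
    } else {
        // Older CorDapps keep working, but the node flags the risky call.
        log.warn(
            "$method is a potentially dangerous method and will be restricted " +
            "when targeting platform version $RESTRICTED_DATABASE_OPERATIONS or above"
        )
    }
}
```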
Fixes the DDoS attack mentioned on the Jira ticket.
This PR upgrades the Artemis library to version 2.19.1.
This is our own release of the Apache Artemis library which has the vulnerability fix for v2.20 applied.
**_Breaking changes discovered during Artemis upgrade:_**
1. When a queue is created as temporary, it needs to be explicitly specified as non-durable.
2. By default, the Artemis client performs a host DNS name check against the certificate presented by the server. Our TLS certificates fail this check, so this verification has to be explicitly disabled; see the use of `TransportConstants.VERIFY_HOST_PROP_NAME` (and the sketch after this list).
3. The Artemis server now caches login attempts, even unsuccessful ones. When we add RPC users dynamically via DB insert, this may have an unexpected outcome if a user with the same `userName` and `password` was not available previously.
To work around permissions changing dynamically, the authorization and authentication caches had to be disabled.
4. When computing `maxMessageSize`, the size of the headers content is now taken into account as well.
5. Artemis handling of start-up errors has changed, e.g. when the port is already bound.
6. A number of APIs have been deprecated, e.g. `createTemporaryQueue`, `failoverOnInitialAttempt`, `NullOutputStream`, and `CoreQueueConfiguration`.
7. A warning message like the following is now logged: `AMQ212080: Using legacy SSL store provider value: JKS. Please use either 'keyStoreType' or 'trustStoreType' instead as appropriate.`
8. As reported by QA, Artemis now produces more audit logging; more details [here](https://r3-cev.atlassian.net/browse/ENT-6540). The log configuration has been adjusted to reduce such output.
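As an illustration of item 2 (the helper function is ours; the constants are standard Artemis client transport parameters), host-name verification can be switched off on a Netty connector roughly like this:
```
import org.apache.activemq.artemis.api.core.TransportConfiguration
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory
import org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants

// Hypothetical helper: build a TLS connector for a server whose certificate does not
// carry its DNS name, so Artemis 2.19's default host verification must be disabled.
fun connectorWithoutHostVerification(host: String, port: Int): TransportConfiguration {
    val params = mapOf<String, Any>(
        TransportConstants.HOST_PROP_NAME to host,
        TransportConstants.PORT_PROP_NAME to port,
        TransportConstants.SSL_ENABLED_PROP_NAME to true,
        // Disable the DNS-name check the client now performs by default.
        TransportConstants.VERIFY_HOST_PROP_NAME to false
    )
    return TransportConfiguration(NettyConnectorFactory::class.java.name, params)
}
```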
The `corda-shell` jar will already be installed if it exists in the
node's `/drivers` directory. There is no need to include a
`URLClassLoader` to load its classes; rely on the process's main
classloader.
* ENT-6025 remote artemis channel does not exist resulting in infinite retry loop
* ENT-6025 rename test
* ENT-6025 fix detekt and add description
* ENT-6025 add check on count of connected stack
Do not keep a flow in for observation if it receives an unexpected
session end message while in `ReceiveFinalityFlow` and
`ReceiveTransactionFlow` (due to being called by the former).
This is done by checking the message of the `UnexpectedFlowEndException`
that is thrown when a session end message is received instead of a data
message, and by checking whether the stacktrace has
`ReceiveTransactionFlow` at the top after removing state machine stack
frames.
Checking the stacktrace for `ReceiveTransactionFlow` is important
because the unexpected session end message is only acceptable if a
transaction has not already been received. For example, if
`ResolveTransactionsFlow` is in the stack, then this indicates a failure
when receiving the dependencies of a transaction that should be
recorded.
Also added a test highlighting that the `UnexpectedFlowEndException`
caused by the session end message can be caught, so users can
determine their own behaviour if desired (see the sketch below).
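A sketch of the kind of responder the new test exercises; the flow class, its logging, and the omission of the `@InitiatedBy` annotation are illustrative, not the test code itself.
```
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.ReceiveFinalityFlow
import net.corda.core.flows.UnexpectedFlowEndException

// Hypothetical responder: if the initiator ends the session before finality,
// catch the exception and decide what to do instead of erroring the flow.
class ExampleFinalityResponder(private val otherSide: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        try {
            subFlow(ReceiveFinalityFlow(otherSide))
        } catch (e: UnexpectedFlowEndException) {
            logger.warn("Counterparty ended the session before sending the transaction", e)
        }
    }
}
```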
Remove the shell code from the OS code base; this includes the modules:
- `:tools:shell`
- `:tools:shell-cli`
The shell will be run within a node if it exists within the node's `drivers` directory.
This is done by using a `URLClassLoader` to load the `InteractiveShell` class into Corda's JVM process and running `startShell` and `runLocalShell` (see the sketch after the Gradle snippet below).
Running the shell within the `:samples` will require adding:
```
cordaDriver "net.corda:corda-shell:<corda_shell_version>"
```
to the `build.gradle` of the module containing `deployNodes`. The script will then include the shell in the created nodes.
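The loading mechanism itself is roughly as sketched below; this is a simplified, hypothetical helper rather than the node's actual code, and the fully-qualified class name is assumed.
```
import java.net.URLClassLoader
import java.nio.file.Files
import java.nio.file.Path
import java.util.stream.Collectors

// Hypothetical sketch: locate a corda-shell jar in the node's drivers directory and
// resolve the InteractiveShell class from a dedicated URLClassLoader.
fun loadInteractiveShellClass(driversDir: Path): Class<*>? {
    val shellJars = Files.list(driversDir).use { stream ->
        stream.filter { path ->
            val name = path.fileName.toString()
            name.startsWith("corda-shell") && name.endsWith(".jar")
        }.map { it.toUri().toURL() }.collect(Collectors.toList())
    }
    if (shellJars.isEmpty()) return null
    val shellClassLoader = URLClassLoader(shellJars.toTypedArray(), Thread.currentThread().contextClassLoader)
    // The node would go on to invoke startShell and runLocalShell reflectively on this class.
    return Class.forName("net.corda.tools.shell.InteractiveShell", false, shellClassLoader)
}
```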
Checkpoint dumping of paused flows was not working because the dumper
expects a flow to have a `FlowState` of `Unstarted` or `Started`,
however due to a memory optimisation paused flows have their `FlowState`
set to `Paused`. This was causing an exception as well as a loss
of potentially useful information.
A flag `alwaysDeserializeCheckpoint` has been added to
`Checkpoint.Serialized.deserialize` which skips the memory optimisation
and forces the deserialization of the flow's `FlowState`.
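A heavily simplified sketch of the idea; these are not Corda's internal checkpoint types, and only the effect of the flag is shown.
```
// Simplified stand-ins for the real checkpoint machinery; names and shapes are ours.
sealed class FlowState {
    object Unstarted : FlowState()
    class Started(val frozenFiber: ByteArray) : FlowState()
    object Paused : FlowState()
}

class SerializedCheckpointSketch(
    private val isPaused: Boolean,
    private val serializedFlowState: ByteArray?,
    private val deserializeFlowState: (ByteArray) -> FlowState
) {
    // The checkpoint dumper now passes alwaysDeserializeCheckpoint = true, skipping
    // the memory optimisation that would otherwise hide a paused flow's real FlowState.
    fun deserialize(alwaysDeserializeCheckpoint: Boolean = false): FlowState = when {
        isPaused && !alwaysDeserializeCheckpoint -> FlowState.Paused
        serializedFlowState == null -> FlowState.Unstarted
        else -> deserializeFlowState(serializedFlowState)
    }
}
```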
Paused flows are now included in the dumped output along with their real
`FlowState` which is useful to users even if the flow is paused rather
than waiting for something to complete.
The status of the flow has also been added to the JSON output to assist
users in debugging their flows.
A public version of `FlowManagerRPCOps` which does not live in an
internal package has been added. This new interface shares the same name
as the internal one.
Because of the name sharing, the internal version has been marked
`@Deprecated`.
`FlowManagerRPCOpsImpl` implements both the new and old interfaces. This
allows for backwards compatibility, allowing old shells or clients to
call the old interface on newer nodes without breaking.
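The compatibility pattern looks roughly like this; members and packages are simplified, and the internal interface is renamed here only so the sketch compiles in one file (in Corda both interfaces share the same name in different packages).
```
// New public interface, outside any internal package.
interface FlowManagerRPCOps {
    fun dumpCheckpoints()
}

// Pre-existing internal interface, now deprecated but kept for old shells/clients.
@Deprecated("Use the public FlowManagerRPCOps instead")
interface InternalFlowManagerRPCOps {
    fun dumpCheckpoints()
}

// One implementation serves both, so older clients keep working against newer nodes.
@Suppress("DEPRECATION")
class FlowManagerRPCOpsImpl : FlowManagerRPCOps, InternalFlowManagerRPCOps {
    override fun dumpCheckpoints() {
        // shared implementation
    }
}
```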
A user passed a `FlowLogic` as an argument into another `FlowLogic`,
called `subFlow` on it, and had it throw an exception.
This all occurred before the first checkpoint, causing the state machine
to try and persist a FAILED checkpoint containing the flow's arguments.
Because the arguments contained a `FlowLogic` that had been started via
`subFlow` it held a reference to `FlowLogic._stateMachine` which cannot
be serialized.
This caused the flow to fail when trying to persist the fact that it
failed.
The flow arguments are now emptied during `ErrorFlowTransition` to
resolve this issue, which mimics the behaviour of the first suspend.
Note, this only takes the arguments out of the serialized checkpoint, it
does not affect the flow metadata and therefore a flow's arguments can
still be viewed.
Co-authored-by: Dan Newton <dan.newton@r3.com>