Add experimental kubernetes support (#617)

This commit is contained in:
Thomas Schroeter 2018-03-27 13:45:44 +01:00 committed by GitHub
parent f6e14b8d4d
commit 3ce5dac90f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
83 changed files with 1927 additions and 0 deletions

experimental/kubernetes/.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
built/
config/

View File

@ -0,0 +1,25 @@
From b24ad7306e9e3c7faff032aa20fc7b42848ec815 Mon Sep 17 00:00:00 2001
From: Thomas Schroeter <thomas.schroeter@r3.com>
Date: Fri, 16 Feb 2018 11:07:58 +0000
Subject: [PATCH] Read corda rev from environment var
---
build.gradle | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/build.gradle b/build.gradle
index 206335645f..0ffab4eac1 100644
--- a/build.gradle
+++ b/build.gradle
@@ -109,7 +109,7 @@ plugins {
}
ext {
- corda_revision = org.ajoberstar.grgit.Grgit.open(file('.')).head().id
+ corda_revision = System.getenv("rev")
}
apply plugin: 'project-report'
--
2.14.3 (Apple Git-98)

View File

@ -0,0 +1,91 @@
# Compatibility Testing
We can build any release we want to test, be it for performance or for compatibility, using the approach outlined below. The test environment includes a driver that lets the nodes issue and pay to a random peer. This puts the notary under stress and eventually exercises all combinations of deployed versions.
We still need to peek into the vaults to make sure the data is in good shape (TODO).
#### The tools
This package contains scripts and configuration files to
* build and publish the components of a Corda network,
* generate YAML files declaring a deployment for Kubernetes,
* deploy to a cluster, and
* remove the deployment.
#### Preparation
To run locally, you need kubectl and the Docker CLI set up and pointing at a cluster, and you need to know your namespace. However, the tasks can be moved to TeamCity, so a local kubectl installation is no longer strictly required.
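Before deploying anything, it is worth confirming that the local tooling points at the intended cluster; a minimal sanity check might look like this (the namespace placeholder is yours to substitute):

```shell
# Show which cluster kubectl currently targets.
kubectl config current-context
# Confirm the namespace is reachable.
kubectl get pods --namespace <your-namespace>
# Confirm the Docker CLI can talk to its daemon.
docker info
```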
#### Parameters
| Name | Description |
|------------------------|-------------------------------------------------------------------------------|
| revision               | What to build, Git commit-ish, e.g. branch, tag, SHA1                         |
| namespace | The kubernetes namespace to deploy to (TODO can be autogenerated) |
| storage-class | The storage class used to provision volumes via persistent volume claims |
| docker-registry | The docker registry to push and pull container images |
| kubernetes cluster | The kubernetes context to use |
#### Assumptions
- You (or the build environment) have access to a Kubernetes cluster
- The build environment has push access to the Docker registry
- The Kubernetes cluster has pull access to the Docker registry via the `regsecret` secret (TODO: setup howto)
- Storage can be provisioned through the given storage class (TODO: setup howto)
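For the `regsecret` setup TODO above, the pull secret can be created from registry credentials; a sketch with hypothetical credential placeholders:

```shell
# Create the image pull secret the cluster uses to pull from the registry.
kubectl create secret docker-registry regsecret \
    --docker-server=ctesting.azurecr.io \
    --docker-username=<registry-username> \
    --docker-password=<registry-password>
```

The pod specs then reference it via `imagePullSecrets`.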
#### Build the container images from a Git commit-ish, e.g. a branch name, tag name or SHA1
```
./bin/build-and-publish/node.sh [<commit-ish>]
./bin/build-and-publish/cordapps.sh [<commit-ish>]
./bin/build-and-publish/doorman.sh [<commit-ish>]
./bin/build-and-publish/notary.sh [<commit-ish>]
./bin/build-and-publish/healthcheck.sh origin/thomas-compatibility-testing
```
The build is carried out in a new temporary Git worktree that is removed after
a successful build.
Each node build appends a line to `built/node-images.txt`. To build several
releases, list them in a file (one per line) and use
```
for r in $(cat releases.txt); do ./bin/build-and-publish/node.sh $r; done
```
#### Generate the deployment config files using
```
python generate_config.py -n <namespace> -s <storage-class>
```
#### Deploy
You can either use `bin/start.sh` to boot the network and `bin/delete-all.sh` to tear it down,
```
bin/start.sh
bin/delete-all.sh
```
or apply the config manually
```
kubectl apply -f config/<version>
kubectl scale statefulset <version> --replicas=1
```
### Notes
The notary jar is the same as the regular node's, but we currently bake the
notary init script (for the interaction with the doorman) and the node.conf
into the container images. In the future we could use config maps for both the
init script and the node.conf.
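The config-map variant mentioned in the note above could look like this (resource names are illustrative, not part of the current deployment):

```shell
# Publish the init script and node.conf instead of baking them into the image.
kubectl create configmap notary-init --from-file=init.sh
kubectl create configmap notary-conf --from-file=node.conf
```

The pod spec would then mount both via `configMap` volumes.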
### TODO
- extend the workload to exercise notary changes and contract upgrades
- write the config generation in Kotlin
- design and write better tooling in Kotlin
- multi-notary network
- generate a single identity for the distributed notary
- perhaps support config maps for node configuration
- publish metrics from the load generators, e.g. error counts
- investigate Helm
- add more databases
- use deployments

View File

@ -0,0 +1,67 @@
#!/bin/bash
set -eux
doorman_name="doorman"
notary_name="notary"
kubectl create -f config/doorman/service.yml
kubectl create -f config/notary/service.yml
kubectl create -f config/doorman/pod-init.yml
wait_for_doorman() {
set +e
# TODO: use wait-for
while :; do
sleep 5
kubectl logs $doorman_name | grep 'services started on doorman'
if [[ $? -eq 0 ]]; then
break
fi
done
set -e
}
wait_for_doorman
kubectl cp ${doorman_name}:/data/doorman/certificates/distribute-nodes/network-root-truststore.jks .
kubectl delete secret truststore-3.0.0 || true
kubectl create secret generic truststore-3.0.0 --from-file=network-root-truststore.jks
rm network-root-truststore.jks
set +e
kubectl create -f config/notary/pod-init.yml
while :; do
sleep 5
kubectl logs ${notary_name} | grep 'DONE_BOOTSTRAPPING'
if [[ $? -eq 0 ]]; then
break
fi
done
set -e
kubectl cp ${notary_name}:/data/notary-node-info .
kubectl cp notary-node-info ${doorman_name}:/data/
rm notary-node-info
kubectl delete po ${notary_name} ${doorman_name}
# TODO: wait for containers to be terminated, e.g. with grep on kubectl get po
while :; do
sleep 5
n=$(kubectl get po | wc -l)
if [[ $n -eq 0 ]]; then
break
fi
done
kubectl create -f config/doorman/pod.yml
wait_for_doorman
kubectl create -f config/notary/pod.yml
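The polling loops above (see the `TODO: use wait-for` comment) could be factored into a single helper; a minimal sketch:

```shell
#!/bin/sh
# Poll a command until it succeeds, sleeping between attempts.
wait_for() {
    cmd=$1
    interval=${2:-5}
    until sh -c "$cmd" >/dev/null 2>&1; do
        sleep "$interval"
    done
}

# Hypothetical usage mirroring the loop above:
#   wait_for "kubectl logs doorman | grep 'services started on doorman'"
wait_for "true" 0 && echo "ready"
```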

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
container_name=${2:-"bft-notary"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
workspace=$(mktemp -d -t kubeform-XXX)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/${container_name}:$rev"
(
cd $workspace
git apply $patch
./gradlew --debug jar
JAR=$(ls -S node/capsule/build/libs | head -n1)
cp node/capsule/build/libs/$JAR build-contexts/bft-notary/corda.jar
cd build-contexts/bft-notary
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "r3-$rev_short $container_image" > built/bft-notary.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
workspace=$(mktemp -d -t kubeform-XXX)
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
load_gen_image="${registry}/r3/load-gen-cordapps:$rev"
(
cd $workspace
git apply $patch
# Build Healthcheck Cordapps
./gradlew finance:jar
JAR=$(ls -S finance/build/libs | head -n1)
cp finance/build/libs/$JAR build-contexts/cordapps/
cd build-contexts/cordapps
${docker_cmd} build -t $load_gen_image .
${docker_cmd} push $load_gen_image
)
mkdir -p built
echo "$load_gen_image" > built/cordapps.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,43 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
workspace=$(mktemp -d -t kubeform-XXX)
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/doorman:$rev"
(
cd $workspace
git apply $patch
./gradlew network-management:capsule:buildDoormanJAR
JAR=$(ls -S network-management/capsule/build/libs/doorman-*.jar | head -n1)
cp $JAR build-contexts/doorman/doorman.jar
cd build-contexts/doorman
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "doorman-r3-$rev_short $container_image" > built/doorman.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
container_name=${2:-"ha-node"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
workspace=$(mktemp -d -t kubeform-XXX)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/${container_name}:$rev"
(
cd $workspace
git apply $patch
./gradlew jar
JAR=$(ls -S node/capsule/build/libs | head -n1)
cp node/capsule/build/libs/$JAR build-contexts/ha-node/corda.jar
cd build-contexts/ha-node
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "$container_name-$rev_short $container_image" > built/ha-node-image.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,44 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
workspace=$(mktemp -d -t kubeform-XXX)
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
load_gen_image="${registry}/r3/load-generator:$rev"
(
cd $workspace
git apply $patch
# Build Healthcheck
./gradlew tools:notaryhealthcheck:shadowJar
cp tools/notaryhealthcheck/build/libs/shadow.jar build-contexts/load-generator/app.jar
cd build-contexts/load-generator
${docker_cmd} build -t $load_gen_image .
${docker_cmd} push $load_gen_image
)
mkdir -p built
echo "$load_gen_image" > built/load_generator.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
container_name=${2:-"hot-warm-node"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
workspace=$(mktemp -d -t kubeform-XXX)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/${container_name}:$rev"
(
cd $workspace
git apply $patch
./gradlew jar
JAR=$(ls -S node/capsule/build/libs | head -n1)
cp node/capsule/build/libs/$JAR build-contexts/hot-warm/corda.jar
cd build-contexts/hot-warm
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "$container_name-$rev_short $container_image" > built/hot-warm-image.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
container_name=${2:-"r3-corda"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
workspace=$(mktemp -d -t kubeform-XXX)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/${container_name}:$rev"
(
cd $workspace
git apply $patch
./gradlew jar
JAR=$(ls -S node/capsule/build/libs | head -n1)
cp node/capsule/build/libs/$JAR build-contexts/node/corda.jar
cd build-contexts/node
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "r3-$rev_short $container_image" >> built/node-images.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,43 @@
#!/bin/sh
set -eux
target_rev=${1:-"HEAD"}
registry=${CONTAINER_REGISTRY:-"ctesting.azurecr.io"}
workspace=$(mktemp -d -t kubeform-XXX)
export rev=$(git rev-parse $target_rev)
rev_short=$(git rev-parse --short $target_rev)
git worktree add $workspace $rev
cp -r build-contexts $workspace
docker_cmd="docker"
if [ "$(uname)" = "Linux" ]; then
docker_cmd="sudo docker"
fi
patch="0001-Read-corda-rev-from-environment-var.patch"
cp $patch $workspace
container_image="${registry}/r3/r3-notary:$rev"
(
cd $workspace
git apply $patch
./gradlew jar
JAR=$(ls -S node/capsule/build/libs | head -n1)
cp node/capsule/build/libs/$JAR build-contexts/notary/corda.jar
cd build-contexts/notary
${docker_cmd} build -t $container_image .
${docker_cmd} push $container_image
)
mkdir -p built
echo "notary-r3-$rev_short $container_image" > built/notary.txt
rm -rf $workspace
git worktree prune

View File

@ -0,0 +1,23 @@
#!/bin/sh
set -eux
# TODO: perhaps delete the namespace and recreate the PVCs?
kubectl delete configmap corda
kubectl delete configmap doorman
kubectl delete --all statefulsets
kubectl delete --all deployments
kubectl delete --all services
kubectl delete --all pods
kubectl delete --all jobs
while :; do
sleep 5
n=$(kubectl get pods | wc -l)
if [ "$n" -eq 0 ]; then
break
fi
done
kubectl delete --all persistentvolumeclaims

View File

@ -0,0 +1,6 @@
#!/bin/sh
kubectl apply -f config/ha/db-service.yml
kubectl apply -f config/ha/db-pod.yml
kubectl apply -f config/ha/service.yml
kubectl apply -f config/ha/node-a.yml

View File

@ -0,0 +1,8 @@
#!/bin/sh
kubectl apply -f config/ha/db-service.yml
kubectl apply -f config/ha/db-pod.yml
kubectl apply -f config/ha/zk-service.yml
kubectl apply -f config/ha/zk-pod.yml
kubectl apply -f templates/services/hotwarm.yml
kubectl apply -f config/ha/hot-warm.yml

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -eux
# Configuration files
kubectl create configmap corda --from-file=config-files
kubectl create configmap doorman --from-file=config-files/doorman
# Claim volumes.
kubectl apply -f config/persistent-volume-claims/
# TODO: do we need to wait for the claims to be bound?
# Distribute cordapps.
kubectl apply -f config/load-generator/distribute-cordapp-job.yml
set +e
while :; do
sleep 5
kubectl describe job distribute-cordapps | grep '1 Succeeded'
if [[ $? -eq 0 ]]; then
break
fi
done
set -e
kubectl delete job distribute-cordapps
# Apply config.
for d in $(find config -name 'r3-*' -type d); do
kubectl apply -f $d
done
for d in $(find config -name 'corda-*' -type d); do
kubectl apply -f $d
done
# Bootup doorman and notary.
./bin/bootstrap.sh
# Startup corda nodes and load generator.
for d in $(find config -name 'r3-*' -type d); do
kubectl scale sts $(basename $d) --replicas=1
done
for d in $(find config -name 'corda-*' -type d); do
kubectl scale sts $(basename $d) --replicas=1
done
kubectl apply -f config/load-generator/load-generator.yml
kubectl scale deployment load-generator --replicas=1

View File

@ -0,0 +1,13 @@
FROM openjdk:8u151-jre-alpine
COPY corda.jar /app/corda.jar
COPY entrypoint.sh /app/entrypoint.sh
COPY node.conf /app/node.conf
WORKDIR /app
EXPOSE 10001 10002
CMD ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,7 @@
#!/bin/sh
set -eux
export REPLICA_ID=$(echo "$HOSTNAME" | grep -o '[0-9][0-9]*$')
java -Xmx700m -jar corda.jar --log-to-console --no-local-shell
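StatefulSet pod hostnames end in an ordinal (e.g. `bft-2`), which is what the `REPLICA_ID` extraction above relies on. A standalone sketch using a POSIX character class, since `\d` is a PCRE escape that busybox grep on Alpine does not understand:

```shell
# Derive the replica ordinal from a (hypothetical) StatefulSet hostname.
hostname="bft-2"
replica_id=$(echo "$hostname" | grep -o '[0-9][0-9]*$')
echo "$replica_id"   # prints 2
```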

View File

@ -0,0 +1,25 @@
database {
runMigration=true
}
myLegalName="O=NotaryService"${REPLICA_ID}", L=Zurich, C=CH"
notary {
bftSMaRt {
clusterAddresses=[
"bft-0.bft.thomas.svc.cluster.local:12000",
"bft-1.bft.thomas.svc.cluster.local:12000",
"bft-2.bft.thomas.svc.cluster.local:12000",
"bft-3.bft.thomas.svc.cluster.local:12000"
]
debug=false
exposeRaces=false
replicaId=${REPLICA_ID}
}
custom=false
validating=false
}
p2pAddress="localhost:10002"
rpcSettings {
address="localhost:10022"
adminAddress="localhost:10122"
}
rpcUsers=[]

View File

@ -0,0 +1,5 @@
FROM busybox
RUN mkdir /cordapps
COPY *.jar /cordapps

View File

@ -0,0 +1,10 @@
FROM openjdk:8u151-jre-alpine
RUN mkdir /app
COPY doorman.jar /app
COPY init.sh /app
COPY entrypoint.sh /app
WORKDIR /app
ENTRYPOINT ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,9 @@
#!/bin/sh
set -eux
cd /data
java -Xmx700m -jar /app/doorman.jar --config-file /config/doorman.conf \
--set-network-parameters /config/network-parameters.conf || true
exec java -Xmx700m -jar /app/doorman.jar --config-file /config/doorman.conf

View File

@ -0,0 +1,14 @@
#!/bin/sh
set -eux
cd /data
rm -rf *
mkdir -p /data/doorman
yes '' | java -jar /app/doorman.jar --config-file ${NODE_INIT_CONFIG:-"/config/doorman-init.conf"} --mode ROOT_KEYGEN
yes '' | java -jar /app/doorman.jar --config-file ${NODE_INIT_CONFIG:-"/config/doorman-init.conf"} --mode CA_KEYGEN
java -Xmx700m -jar /app/doorman.jar --config-file ${NODE_INIT_CONFIG:-"/config/doorman-init.conf"}

View File

@ -0,0 +1,12 @@
FROM openjdk:8u151-jre-alpine
COPY corda.jar /app/corda.jar
COPY plugins /app/plugins/
COPY entrypoint.sh /app/entrypoint.sh
COPY node.conf /app/node.conf
WORKDIR /app
EXPOSE 10001 10002
CMD ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,8 @@
#!/bin/bash
set -eux
img="ctesting.azurecr.io/r3/hanode:$(git rev-parse --short HEAD)"
docker build -t $img .
docker push $img

View File

@ -0,0 +1,11 @@
#!/bin/sh
set -eux
java -Xmx700m -jar corda.jar --initial-registration \
--network-root-truststore /truststore/network-root-truststore.jks \
--network-root-truststore-password '' || true
exec java -Xmx700m -jar corda.jar \
--no-local-shell \
--log-to-console

View File

@ -0,0 +1,41 @@
p2pAddress : "hanode:10002"
rpcSettings {
address : ${HOSTNAME}":10003"
adminAddress: ${HOSTNAME}":12000"
}
myLegalName : "O=Corda HA, L=London, C=GB"
keyStorePassword : "cordacadevpass"
trustStorePassword : "trustpass"
devMode : true
jarDirs = ["plugins", "cordapps"]
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://db:5432/postgres"
dataSource.user = postgres
dataSource.password = ""
}
database = {
transactionIsolationLevel = READ_COMMITTED
runMigration = true
}
enterpriseConfiguration = {
mutualExclusionConfiguration = {
on = true
machineName = ${HOSTNAME}
updateInterval = 20000
waitInterval = 40000
}
}
rpcUsers=[
{
user=demou
password=demop
permissions=[
ALL
]
}
]
compatibilityZoneURL=${COMPATIBILITY_ZONE_URL}

View File

@ -0,0 +1,12 @@
FROM openjdk:8u151-jre-alpine
COPY corda.jar /app/corda.jar
COPY plugins /app/plugins/
COPY entrypoint.sh /app/entrypoint.sh
COPY node.conf /app/node.conf
WORKDIR /app
EXPOSE 10001 10002
CMD ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,8 @@
#!/bin/bash
set -eux
img="ctesting.azurecr.io/r3/hanode:$(git rev-parse --short HEAD)"
docker build -t $img .
docker push $img

View File

@ -0,0 +1,14 @@
#!/bin/sh
set -eux
java -Xmx700m -jar corda.jar --initial-registration \
--config-file=${CONFIG_FILE} \
--network-root-truststore /truststore/network-root-truststore.jks \
--network-root-truststore-password '' || true
exec java -Xmx700m -jar corda.jar \
--config-file=${CONFIG_FILE} \
--no-local-shell \
--log-to-console \
"$@"

View File

@ -0,0 +1,39 @@
p2pAddress : "hanode:10002"
rpcSettings {
address : ${HOSTNAME}":10003"
adminAddress: ${HOSTNAME}":12000"
}
myLegalName : "O=Corda HA, L=London, C=GB"
keyStorePassword : "cordacadevpass"
trustStorePassword : "trustpass"
devMode : true
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://db:5432/postgres"
dataSource.user = postgres
dataSource.password = ""
}
database = {
transactionIsolationLevel = READ_COMMITTED
runMigration = true
}
enterpriseConfiguration = {
mutualExclusionConfiguration = {
on = true
machineName = ${HOSTNAME}
updateInterval = 20000
waitInterval = 40000
}
}
rpcUsers=[
{
user=demou
password=demop
permissions=[
ALL
]
}
]
compatibilityZoneURL=${COMPATIBILITY_ZONE_URL}

View File

@ -0,0 +1,9 @@
FROM openjdk:8u151-jre-alpine
COPY app.jar /app/app.jar
COPY entrypoint.sh /app/entrypoint.sh
WORKDIR /app
CMD ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,5 @@
#!/bin/sh
set -eux
exec java -Xmx700m -jar app.jar ${TARGET_HOST}:${RPC_PORT}

View File

@ -0,0 +1,11 @@
FROM openjdk:8u151-jre-alpine
COPY corda.jar /app/corda.jar
COPY entrypoint.sh /app/entrypoint.sh
WORKDIR /app
EXPOSE 10001 10002
ENTRYPOINT ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,28 @@
#!/bin/sh
set -eux
export LEGAL_NAME="C=GB,L=London,O=T-$HOSTNAME"
export P2P_ADDRESS=$(eval "echo $P2P_ADDRESS")
export RPC_ADDRESS=$(eval "echo $RPC_ADDRESS")
export ADMIN_ADDRESS=$(eval "echo $ADMIN_ADDRESS")
export COMPATIBILITY_ZONE_URL="http://$DOORMAN_SERVICE_HOST:1300"
env
# TODO: fix
cp ${CONFIG_FILE} /app/node.conf
java -Xmx700m -jar corda.jar --initial-registration \
--config-file=${CONFIG_FILE} \
--network-root-truststore /truststore/network-root-truststore.jks \
--network-root-truststore-password ''
exec java -Xmx700m -jar corda.jar \
--config-file=${CONFIG_FILE} \
--no-local-shell \
--log-to-console \
"$@"

View File

@ -0,0 +1,11 @@
FROM openjdk:8u151-jre-alpine
RUN mkdir /app
COPY corda.jar /app
COPY node.conf /app
COPY init.sh /app
COPY entrypoint.sh /app
WORKDIR /app
CMD ["/bin/sh", "entrypoint.sh"]

View File

@ -0,0 +1,11 @@
#!/bin/sh
set -eux
export COMPATIBILITY_ZONE_URL=$(eval "echo http://${DOORMAN_PORT_1300_TCP_ADDR}:1300")
export P2P_ADDRESS=$(eval "echo ${HOSTNAME}:10001")
export RPC_ADDRESS=$(eval "echo ${HOSTNAME}:10002")
exec java -Xmx1g -jar /app/corda.jar --log-to-console \
--no-local-shell \
--base-directory=/data \
--network-root-truststore=/truststore/network-root-truststore.jks

View File

@ -0,0 +1,27 @@
#!/bin/sh
set -eux
export COMPATIBILITY_ZONE_URL=$(eval "echo http://${DOORMAN_PORT_1300_TCP_ADDR}:1300")
export P2P_ADDRESS=$(eval "echo ${HOSTNAME}:10001")
export RPC_ADDRESS=$(eval "echo ${HOSTNAME}:10002")
cd /data
rm -rf *
cp /app/node.conf .
echo 'hello'
java -Xmx1g -jar /app/corda.jar --initial-registration \
--base-directory=/data \
--network-root-truststore /truststore/network-root-truststore.jks \
--network-root-truststore-password ''
java -Xmx1g -jar /app/corda.jar --just-generate-node-info --base-directory=/data
cp nodeInfo* notary-node-info
echo 'DONE_BOOTSTRAPPING'
sleep 3600

View File

@ -0,0 +1,22 @@
myLegalName="C=GB,L=London,O=NotaryService0"
notary {
validating=false
}
devMode=false
p2pAddress=${P2P_ADDRESS}
rpcAddress=${RPC_ADDRESS}
rpcSettings {
adminAddress=${HOSTNAME}":7777"
}
rpcUsers=[]
compatibilityZoneURL=${COMPATIBILITY_ZONE_URL}
enterpriseConfiguration {
tuning {
flowThreadPoolSize = 4
}
}

View File

@ -0,0 +1,32 @@
basedir = "/data/doorman"
address = ${HOSTNAME}":1300"
#For local signing
rootStorePath = ${basedir}"/certificates/rootstore.jks"
keystorePath = ${basedir}"/certificates/caKeystore.jks"
keystorePassword = "password"
caPrivateKeyPassword = "password"
# Database config
dataSourceProperties {
dataSourceClassName = org.h2.jdbcx.JdbcDataSource
"dataSource.url" = "jdbc:h2:file:"${basedir}"/persistence;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000;WRITE_DELAY=0;AUTO_SERVER_PORT="${h2port}
"dataSource.user" = sa
"dataSource.password" = ""
}
database { runMigration = true }
h2port = 0
# Doorman config
# Comment out this section if running without doorman service
doorman {
approveInterval = 20
approveAll = true
}
# Network map config
# Comment out this section if running without network map service
#networkMap {
# cacheTimeout = 1000
# signInterval = 3000
#}

View File

@ -0,0 +1,32 @@
basedir = "/data/doorman"
address = ${HOSTNAME}":1300"
#For local signing
rootStorePath = ${basedir}"/certificates/rootstore.jks"
keystorePath = ${basedir}"/certificates/caKeystore.jks"
keystorePassword = "password"
caPrivateKeyPassword = "password"
# Database config
dataSourceProperties {
dataSourceClassName = org.h2.jdbcx.JdbcDataSource
"dataSource.url" = "jdbc:h2:file:"${basedir}"/persistence;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000;WRITE_DELAY=0;AUTO_SERVER_PORT="${h2port}
"dataSource.user" = sa
"dataSource.password" = ""
}
database { runMigration = true }
h2port = 0
# Doorman config
# Comment out this section if running without doorman service
doorman {
approveInterval = 20
approveAll = true
}
# Network map config
# Comment out this section if running without network map service
networkMap {
cacheTimeout = 1000
signInterval = 3000
}

View File

@ -0,0 +1,7 @@
notaries : [{
notaryNodeInfoFile: "/data/notary-node-info"
validating: false
}]
minimumPlatformVersion = 1
maxMessageSize = 10485760
maxTransactionSize = 10485760

View File

@ -0,0 +1,41 @@
p2pAddress : "hot-warm:10002"
rpcSettings {
address : ${HOSTNAME}":10003"
adminAddress: ${HOSTNAME}":12000"
}
myLegalName : "O=Corda HA, L=London, C=GB"
keyStorePassword : "cordacadevpass"
trustStorePassword : "trustpass"
devMode : true
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://db:5432/postgres"
dataSource.user = postgres
dataSource.password = ""
}
database = {
transactionIsolationLevel = READ_COMMITTED
runMigration = true
}
enterpriseConfiguration = {
haConfiguration {
hotWarm {
connectString = "zk"
electionPath = "/example/leader"
nodeID = ${HOSTNAME}
priority = 1
}
}
}
rpcUsers=[
{
user=demou
password=demop
permissions=[
ALL
]
}
]
compatibilityZoneURL=${COMPATIBILITY_ZONE_URL}

View File

@ -0,0 +1,19 @@
myLegalName=${LEGAL_NAME}
p2pAddress=${P2P_ADDRESS}
rpcSettings {
address=${RPC_ADDRESS}
adminAddress=${ADMIN_ADDRESS}
}
jarDirs = ["cordapps"]
rpcUsers=[
{
password=demop
permissions=[
ALL
]
username=demou
}
]
compatibilityZoneURL=${COMPATIBILITY_ZONE_URL}

View File

@ -0,0 +1,113 @@
from argparse import ArgumentParser
import glob
import os
from jinja2 import Environment, FileSystemLoader
env = Environment(loader=FileSystemLoader('templates'))
service_template = env.get_template('service.yml.j2')
statefulset_template = env.get_template('statefulset.yml.j2')
loadgenerator_template = env.get_template('load-generator.yml')
distribute_cordapp_tmpl = env.get_template('distribute-healthcheck-cordapp.yml')
def main():
p = ArgumentParser()
p.add_argument("--namespace", "-n", help="namespace where to deploy", required=True)
p.add_argument("--storage-class", "-s", default="standard", help="storage class for volume claims [default=standard]")
args = p.parse_args()
namespace = args.namespace
storage_class = args.storage_class
load_gen_image = open('built/load_generator.txt').read().strip()
cordapps_image = open('built/cordapps.txt').read().strip()
for l in open('built/node-images.txt'):
version, image = l.split()
p = os.path.join('config', version)
makedirs(p)
version = version.replace(".", "-").lower()
open(os.path.join(p, 'service.yml'), 'w').write(service_template.render(version=version))
open(os.path.join(p, 'statefulset.yml'), 'w').write(statefulset_template.render(version=version, image=image, namespace=namespace))
# Emit load generator config.
p = os.path.join('config', 'load-generator')
makedirs(p)
# The load generator will hit the node with the last version in the input.
# Since the load generator discovers all nodes via network map snapshot, it
# is not necessary to redeploy the load generator if another load generator
# is already running.
open(os.path.join(p, 'load-generator.yml'), 'w').write(loadgenerator_template.render(version=version, namespace=namespace, image=load_gen_image))
open(os.path.join(p, 'distribute-cordapp-job.yml'), 'w').write(distribute_cordapp_tmpl.render(image=cordapps_image))
# Generate persistent volume claims.
env = Environment(loader=FileSystemLoader('templates/persistent-volume-claims'))
p = 'config/persistent-volume-claims'
makedirs(p)
for templ in glob.glob('templates/persistent-volume-claims/*'):
b = os.path.basename(templ)
t = env.get_template(b)
open(p +'/'+ b, 'w').write(t.render(storage_class=storage_class))
# Prep for doorman and notary.
services_env = Environment(loader=FileSystemLoader('templates/services'))
pods_env = Environment(loader=FileSystemLoader('templates/pods'))
# Generate doorman.
doorman_name, doorman_image = open('built/doorman.txt').read().split()
templ = services_env.get_template('doorman.yml')
makedirs('config/doorman')
open('config/doorman/service.yml', 'w').write(templ.render(image=doorman_image, name=doorman_name))
templ = pods_env.get_template('doorman.yml')
open('config/doorman/pod.yml', 'w').write(templ.render(image=doorman_image, name=doorman_name))
templ = pods_env.get_template('doorman-init.yml')
open('config/doorman/pod-init.yml', 'w').write(templ.render(image=doorman_image, name=doorman_name))
# Generate notary.
notary_name, notary_image = open('built/notary.txt').read().split()
templ = services_env.get_template('notary.yml')
makedirs('config/notary')
open('config/notary/service.yml', 'w').write(templ.render(image=notary_image, name=notary_name))
templ = pods_env.get_template('notary.yml')
open('config/notary/pod.yml', 'w').write(templ.render(image=notary_image, name=notary_name, namespace=namespace))
templ = pods_env.get_template('notary-init.yml')
open('config/notary/pod-init.yml', 'w').write(templ.render(image=notary_image, name=notary_name, namespace=namespace))
# Template the HA nodes.
try:
makedirs('config/ha')
_, haimage = open('built/ha-node-image.txt').read().split()
templ = pods_env.get_template('ha-node.yml')
open('config/ha/node-a.yml', 'w').write(templ.render(image=haimage, name='hanode-a'))
open('config/ha/node-b.yml', 'w').write(templ.render(image=haimage, name='hanode-b'))
templ = services_env.get_template('ha.yml')
open('config/ha/service.yml', 'w').write(templ.render())
templ = services_env.get_template('db.yml')
open('config/ha/db-service.yml', 'w').write(templ.render())
templ = pods_env.get_template('db.yml')
open('config/ha/db-pod.yml', 'w').write(templ.render())
# Hot warm
_, hwimage = open('built/hot-warm-image.txt').read().split()
templ = pods_env.get_template('hot-warm.yml')
open('config/ha/hot-warm.yml', 'w').write(templ.render(image=hwimage, name='hot-warm'))
templ = services_env.get_template('zk.yml')
open('config/ha/zk-service.yml', 'w').write(templ.render())
templ = pods_env.get_template('zk.yml')
open('config/ha/zk-pod.yml', 'w').write(templ.render())
except IOError:
pass # Assuming the HA node images are not built.
def makedirs(p):
try:
os.makedirs(p)
except Exception: # assuming file exists
pass
if __name__ == '__main__':
main()

View File

@ -0,0 +1,12 @@
# HA Testing
## Hot Cold
Components:
- a Postgres DB service and pod called `db`
- `hanode-a` and `hanode-b` pods, with a service called `hanode` to load balance between the two
- persistent volumes for artemis, cordapps, doorman and notary
Registration is currently done with the doorman. The bootstrapper didn't work
for me, because some environment variables in the config file didn't resolve
properly; this could be improved by adding defaults.
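The missing defaults could be expressed with HOCON's optional substitution syntax, which keeps the previously assigned value when the environment variable is unset; a sketch:

```
# Default value, overridden only when the variable is set in the environment.
compatibilityZoneURL = "http://doorman:1300"
compatibilityZoneURL = ${?COMPATIBILITY_ZONE_URL}
```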

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
name: db
labels:
app: db
spec:
containers:
- name: db
image: postgres
ports:
- containerPort: 5432

View File

@ -0,0 +1,42 @@
apiVersion: v1
kind: Pod
metadata:
name: hanode-a
labels:
app: hanode
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: hanode
image: ctesting.azurecr.io/r3/hanode:b3f29942d1
imagePullPolicy: Always
env:
- name: "COMPATIBILITY_ZONE_URL"
value: "http://doorman:1300"
volumeMounts:
- mountPath: /app/artemis
name: artemis
- mountPath: /app/cordapps
name: cordapps
- mountPath: /app/certificates
name: certificates
- mountPath: /truststore
name: truststore
readOnly: true
ports:
- containerPort: 10002
- containerPort: 10003
volumes:
- name: artemis
persistentVolumeClaim:
claimName: artemis
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
- name: certificates
persistentVolumeClaim:
claimName: certificates
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,42 @@
apiVersion: v1
kind: Pod
metadata:
name: hanode-b
labels:
app: hanode
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: hanode
image: ctesting.azurecr.io/r3/hanode:b3f29942d1
imagePullPolicy: Always
env:
- name: "COMPATIBILITY_ZONE_URL"
value: "http://doorman:1300"
volumeMounts:
- mountPath: /app/artemis
name: artemis
- mountPath: /app/cordapps
name: cordapps
- mountPath: /app/certificates
name: certificates
- mountPath: /truststore
name: truststore
readOnly: true
ports:
- containerPort: 10002
- containerPort: 10003
volumes:
- name: artemis
persistentVolumeClaim:
claimName: artemis
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
- name: certificates
persistentVolumeClaim:
claimName: certificates
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: artemis
spec:
accessModes:
- ReadWriteMany
storageClassName: standard
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: certificates
spec:
accessModes:
- ReadWriteMany
storageClassName: standard
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cordapps
spec:
accessModes:
- ReadOnlyMany
storageClassName: standard
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: node-infos
spec:
accessModes:
- ReadWriteMany
storageClassName: standard
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: db
spec:
ports:
- name: postgres
port: 5432
protocol: TCP
targetPort: 5432
selector:
app: db

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: hanode
spec:
ports:
- name: p2p
port: 10002
protocol: TCP
targetPort: 10002
- name: rpc
port: 10003
protocol: TCP
targetPort: 10003
selector:
app: hanode

View File

@ -0,0 +1,10 @@
#!/bin/sh
set -eux
kubectl apply -f pvcs/artemis.yml
kubectl apply -f pvcs/certificates.yml
kubectl apply -f services/
kubectl apply -f pods/db.yml

View File

@ -0,0 +1 @@
release-V3.0.1-DEV-PREVIEW

View File

@ -0,0 +1 @@
Jinja2==2.10

View File

@ -0,0 +1,22 @@
apiVersion: batch/v1
kind: Job
metadata:
name: distribute-cordapps
spec:
template:
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: distribute-healthcheck-cordapp
image: {{ image }}
command: ["/bin/sh", "-c", "rm -f cordapps-mnt/* && cp /cordapps/* /cordapps-mnt/"]
volumeMounts:
- name: cordapps
mountPath: /cordapps-mnt
restartPolicy: Never
volumes:
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
backoffLimit: 4

View File

@ -0,0 +1,36 @@
apiVersion: "apps/v1beta1"
kind: Deployment
metadata:
name: load-generator
spec:
replicas: 0
selector:
matchLabels:
app: load-generator
template:
metadata:
labels:
app: load-generator
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: notary-healthcheck
image: {{ image }}
imagePullPolicy: Always
env:
- name: "TARGET_HOST"
value: "{{ version }}-0.{{ version }}.{{ namespace }}.svc.cluster.local"
- name: "RPC_PORT"
value: "10003"
- name: "NUM_ITERATIONS"
value: "2"
- name: "SLEEP_MILLIS"
value: "30000"
volumeMounts:
- name: cordapps
mountPath: /cordapps
volumes:
- name: cordapps
persistentVolumeClaim:
claimName: cordapps

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: artemis
spec:
accessModes:
- ReadWriteMany
storageClassName: {{ storage_class }}
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: certificates
spec:
accessModes:
- ReadWriteMany
storageClassName: {{ storage_class }}
resources:
requests:
storage: 200Mi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cordapps
spec:
accessModes:
- ReadOnlyMany
storageClassName: {{ storage_class }}
resources:
requests:
storage: 1Gi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: doorman
spec:
accessModes:
- ReadWriteMany
storageClassName: {{ storage_class }}
resources:
requests:
storage: 10Gi

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: notary
spec:
accessModes:
- ReadWriteMany
storageClassName: {{ storage_class }}
resources:
requests:
storage: 10Gi

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
name: db
labels:
app: db
spec:
containers:
- name: db
image: postgres
ports:
- containerPort: 5432

View File

@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
name: doorman
labels:
app: doorman
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: doorman
image: {{ image }}
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: pv
- mountPath: /config
name: config
ports:
- containerPort: 1300
command:
- '/app/init.sh'
volumes:
- name: config
configMap:
name: doorman
- name: pv
persistentVolumeClaim:
claimName: doorman

View File

@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
name: doorman
labels:
app: doorman
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: doorman
image: {{ image }}
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: pv
- mountPath: /config
name: config
ports:
- containerPort: 1300
volumes:
- name: config
configMap:
name: doorman
- name: pv
persistentVolumeClaim:
claimName: doorman

View File

@ -0,0 +1,42 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ name }}
labels:
app: hanode
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: hanode
image: {{ image }}
imagePullPolicy: Always
env:
- name: "COMPATIBILITY_ZONE_URL"
value: "http://doorman:1300"
volumeMounts:
- mountPath: /app/artemis
name: artemis
- mountPath: /app/cordapps
name: cordapps
- mountPath: /app/certificates
name: certificates
- mountPath: /truststore
name: truststore
readOnly: true
ports:
- containerPort: 10002
- containerPort: 10003
volumes:
- name: artemis
persistentVolumeClaim:
claimName: artemis
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
- name: certificates
persistentVolumeClaim:
claimName: certificates
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,62 @@
apiVersion: "apps/v1beta1"
kind: Deployment
metadata:
name: {{ name }}
spec:
replicas: 0
selector:
matchLabels:
app: {{ name }}
template:
metadata:
labels:
app: {{ name }}
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: node
image: {{ image }}
imagePullPolicy: Always
env:
- name: "CONFIG_FILE"
value: "/config/hot-warm.conf"
- name: "COMPATIBILITY_ZONE_URL"
value: "http://doorman:1300"
volumeMounts:
- mountPath: /config
name: config
readOnly: true
- mountPath: /app/artemis
name: artemis
- mountPath: /app/cordapps
name: cordapps
- mountPath: /app/certificates
name: certificates
- mountPath: /truststore
name: truststore
readOnly: true
ports:
- containerPort: 10002
- containerPort: 10003
readinessProbe:
tcpSocket:
port: 10002
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: config
configMap:
name: corda
- name: artemis
persistentVolumeClaim:
claimName: artemis
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
- name: certificates
persistentVolumeClaim:
claimName: certificates
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,33 @@
apiVersion: v1
kind: Pod
metadata:
name: notary
labels:
app: notary
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: notary
image: {{ image }}
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: pv
- mountPath: /truststore
name: truststore
readOnly: true
ports:
- containerPort: 1300
env:
- name: "NAMESPACE"
value: "{{ namespace }}"
command:
- '/app/init.sh'
volumes:
- name: pv
persistentVolumeClaim:
claimName: notary
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,31 @@
apiVersion: v1
kind: Pod
metadata:
name: notary
labels:
app: notary
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: notary
image: {{ image }}
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: pv
- mountPath: /truststore
name: truststore
readOnly: true
env:
- name: "NAMESPACE"
value: "{{ namespace }}"
ports:
- containerPort: 1300
volumes:
- name: pv
persistentVolumeClaim:
claimName: notary
- name: truststore
secret:
secretName: truststore-3.0.0

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
name: zk
labels:
app: zk
spec:
containers:
- name: zk
image: zookeeper:3.5
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888

View File

@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: {{ version }}
spec:
clusterIP: None
ports:
- name: p2p
port: 10002
protocol: TCP
targetPort: 10002
- name: rpc
port: 10003
protocol: TCP
targetPort: 10003
selector:
version: {{ version }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: db
spec:
ports:
- name: postgres
port: 5432
protocol: TCP
targetPort: 5432
selector:
app: db

View File

@ -0,0 +1,11 @@
kind: Service
apiVersion: v1
metadata:
name: doorman
spec:
selector:
app: doorman
ports:
- protocol: TCP
port: 1300
targetPort: 1300

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: hanode
spec:
ports:
- name: p2p
port: 10002
protocol: TCP
targetPort: 10002
- name: rpc
port: 10003
protocol: TCP
targetPort: 10003
selector:
app: hanode

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: hot-warm
spec:
ports:
- name: p2p
port: 10002
protocol: TCP
targetPort: 10002
- name: rpc
port: 10003
protocol: TCP
targetPort: 10003
selector:
app: hot-warm

View File

@ -0,0 +1,16 @@
kind: Service
apiVersion: v1
metadata:
name: notary
spec:
selector:
app: notary
ports:
- protocol: TCP
port: 10001
targetPort: 10001
name: p2p
- protocol: TCP
port: 10002
targetPort: 10002
name: rpc

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: zk
spec:
ports:
- name: client-port
port: 2181
protocol: TCP
targetPort: 2181
selector:
app: zk

View File

@ -0,0 +1,54 @@
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
name: {{ version }}
spec:
serviceName: {{ version }}
replicas: 0
template:
metadata:
labels:
version: {{ version }}
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: corda
image: {{ image }}
imagePullPolicy: Always
ports:
- name: p2p
containerPort: 10002
- name: rpc
containerPort: 10003
- name: admin
containerPort: 10004
env:
- name: "P2P_ADDRESS"
value: "$HOSTNAME.{{ version }}.{{ namespace }}.svc.cluster.local:10002"
- name: "RPC_ADDRESS"
value: "$HOSTNAME.{{ version }}.{{ namespace }}.svc.cluster.local:10003"
- name: "ADMIN_ADDRESS"
value: "$HOSTNAME.{{ version }}.{{ namespace }}.svc.cluster.local:10004"
- name: "CONFIG_FILE"
value: "/config/node.conf"
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: truststore
mountPath: /truststore
readOnly: true
- name: cordapps
mountPath: /app/cordapps
readOnly: true
volumes:
- name: config
configMap:
name: corda
- name: truststore
secret:
secretName: truststore-3.0.0
- name: cordapps
persistentVolumeClaim:
claimName: cordapps
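
The `$HOSTNAME.{{ version }}.{{ namespace }}.svc.cluster.local` addresses above are the stable per-pod DNS names a StatefulSet pod gets under its headless service (`clusterIP: None`); the container entrypoint is expected to expand `$HOSTNAME` at startup. A sketch of how such a name is composed (the pod, service, and namespace values are illustrative):

```python
# Per-pod DNS name under a headless service:
# <pod>.<service>.<namespace>.svc.cluster.local
def pod_dns(hostname, service, namespace):
    return "{}.{}.{}.svc.cluster.local".format(hostname, service, namespace)

print(pod_dns("corda-0", "corda", "testing"))
# -> corda-0.corda.testing.svc.cluster.local
```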