
Compatibility Testing

We can build any release we want to test, be it for performance or for compatibility, using the approach outlined below. The test environment includes a load driver that makes the nodes issue and pay to random peers. This way the notary is put under stress, and all combinations of deployed versions are eventually tested against each other.

We need to peek into the vaults to make sure the data is in good shape (TODO).

The tools

This package contains scripts and configuration files to

  • build and publish the components of a Corda network,
  • generate YAML files declaring a deployment for Kubernetes,
  • deploy to a cluster, and
  • remove the deployment.

Preparation

To run locally you need kubectl and the docker CLI set up and pointing at a cluster, and you need to know your namespace. However, these tasks can be moved to TeamCity, in which case installing kubectl locally is no longer required.

Parameters

Name                Description
revision            What to build: a Git commit-ish, e.g. a branch, tag or SHA-1
namespace           The kubernetes namespace to deploy to (TODO: can be autogenerated)
storage-class       The storage class used to provision volumes via persistent volume claims
docker-registry     The docker registry to push and pull container images
kubernetes-cluster  The kubernetes context to use

Assumptions

  • You (or the build environment) have access to a kubernetes cluster
  • The build environment has push access to the docker registry
  • The kubernetes cluster has pull access to the docker registry via the regsecret secret (TODO: setup howto)
  • Storage can be provisioned (TODO: setup howto)

Build the container images from a git commit(-ish), e.g. branch-name, tag-name or SHA1.

./bin/build-and-publish/node.sh [<commit-ish>]
./bin/build-and-publish/cordapps.sh [<commit-ish>]
./bin/build-and-publish/doorman.sh [<commit-ish>]
./bin/build-and-publish/notary.sh [<commit-ish>]
./bin/build-and-publish/healthcheck.sh origin/thomas-compatibility-testing

The build is carried out in a new temporary git worktree that is removed after successful builds.

This appends to the file built/node-images.txt. To build several releases, list them in a file (one per line) and use

for r in $(cat releases.txt); do ./bin/build-and-publish/node.sh "$r"; done
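A slightly more defensive variant of that loop reads releases.txt line by line, so blank lines are skipped and commit-ish values are never word-split. This is only a sketch: the echo stands in for ./bin/build-and-publish/node.sh so it can be tried outside a Corda checkout, and the sample releases.txt contents are illustrative.

```shell
# Sketch: build every release listed in releases.txt, one commit-ish per line.
# 'echo' stands in for ./bin/build-and-publish/node.sh so the loop can run
# without a checkout; swap the real script back in for actual builds.
set -eu

# Illustrative releases.txt; in practice this file already exists.
printf 'release-V3\nmaster\n' > releases.txt

: > build.log
while IFS= read -r rev; do
    [ -n "$rev" ] || continue             # skip blank lines
    echo "would build $rev" >> build.log  # real: ./bin/build-and-publish/node.sh "$rev"
done < releases.txt

cat build.log
```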

Generate the deployment config files using

python generate_config.py -n <namespace> -s <storage-class>
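In essence, the generation step substitutes the namespace and storage class into the YAML templates. The sketch below shows the idea; the template text and placeholder names are illustrative assumptions, not the actual contents of templates/ or generate_config.py.

```python
# Sketch of the config-generation idea: substitute a namespace and a
# storage class into a YAML template. Template text is illustrative only.
from string import Template

TEMPLATE = Template("""\
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-data
  namespace: $namespace
spec:
  storageClassName: $storage_class
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
""")

def render(namespace: str, storage_class: str) -> str:
    """Fill in the deployment parameters and return the YAML text."""
    return TEMPLATE.substitute(namespace=namespace, storage_class=storage_class)

if __name__ == "__main__":
    print(render("corda-testing", "default"))
```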

Deploy

You can either use bin/start.sh to boot the network and bin/delete-all.sh to tear it down,

bin/start.sh
bin/delete-all.sh

or apply the config manually

kubectl apply -f config/<version>
kubectl scale statefulset <version> --replicas=1
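For orientation, a generated per-version StatefulSet might look roughly like the fragment below. All names, paths and sizes are illustrative, not the actual output of generate_config.py; the real files live under config/<version>.

```yaml
# Illustrative sketch of a generated StatefulSet, not actual tool output.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-V3            # one StatefulSet per built version
spec:
  serviceName: release-V3
  replicas: 0                 # scaled up to 1 with 'kubectl scale'
  selector:
    matchLabels:
      app: release-V3
  template:
    metadata:
      labels:
        app: release-V3
    spec:
      imagePullSecrets:
        - name: regsecret     # pull access to the docker registry
      containers:
        - name: node
          image: <docker-registry>/node:release-V3
          volumeMounts:
            - name: node-data
              mountPath: /opt/corda/persistence   # illustrative path
  volumeClaimTemplates:
    - metadata:
        name: node-data
      spec:
        storageClassName: <storage-class>
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi
```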

Notes

The notary jar is the same as the regular node's, but we currently include the notary init script (for the interaction with the doorman) and the node.conf in the container images. In the future we could use config maps for both the init script and the node.conf.

TODO

  • extend the workload to exercise notary changes and contract upgrades
  • write the generate-config part in Kotlin
  • design and write better tooling in Kotlin
  • multi notary network
  • generate distributed notary singular identity
  • perhaps config maps support for config
  • publish metrics from the load generators, e.g. error count
  • investigate Helm
  • add more databases
  • use deployments