Add HA notary setup tutorial (#937)

Thomas Schroeter 2018-06-13 13:18:44 +01:00 committed by GitHub
parent ab2b27915a
commit bc93275f00
14 changed files with 828 additions and 1 deletions


@ -0,0 +1,180 @@
================================
Percona, the underlying Database
================================
Percona's `documentation page <https://www.percona.com/doc>`__ explains the installation in detail.
In this section we're setting up a
three-node Percona cluster. A three-node cluster can tolerate one crash
fault. In production, you probably want to run five nodes, to be able to
tolerate up to two faults.
Host names and IP addresses used in the example are listed in the table below.
========= ========
Host IP
========= ========
percona-1 10.1.0.1
percona-2 10.1.0.2
percona-3 10.1.0.3
========= ========
Installation
============
Percona provides repositories for the YUM and APT package managers.
Alternatively you can install from source. For simplicity, we are going to
install Percona using the default data directory ``/var/lib/mysql``.
.. note::
The steps below should be run on all your Percona nodes, unless otherwise
mentioned. Write down the host names or IP addresses of all your Percona nodes
before starting the installation; you will need them to configure data
replication and, later, the JDBC connection of your notary
cluster.
Run the commands below on all nodes of your Percona cluster to configure the
Percona repositories and install the service.
.. code:: sh
wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-xtradb-cluster-57
The service starts automatically after the installation. You can confirm that it is
running with ``service mysql status``, start it with ``sudo service mysql start`` and stop it with
``sudo service mysql stop``.
Configuration
=============
Configure the MySQL Root Password (if necessary)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Some distributions allow root access to the database through a Unix domain socket, while others
require you to find the temporary password in the log file and change it upon
first login.
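For example, on installations that generate a temporary root password, locating and
replacing it could look roughly like this (the log location and exact statement are
illustrative and may differ on your system):

.. code:: sh

    # Find the temporary root password in the server log (path may vary by distribution)
    sudo grep 'temporary password' /var/log/mysqld.log
    # Log in with the temporary password and set a new one
    mysql -u root -p -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '{{ new_root_password }}';"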
Stop the Service
^^^^^^^^^^^^^^^^
.. code:: sh
sudo service mysql stop
Set up Replication
^^^^^^^^^^^^^^^^^^
Variables you need to change from the defaults are listed in the table below.
====================== =========================================================== ==========================================================
Variable Name Example Description
====================== =========================================================== ==========================================================
wsrep_cluster_address gcomm://10.1.0.1,10.1.0.2,10.1.0.3 The addresses of all the cluster nodes (host and port)
wsrep_node_address 10.1.0.1 The address of the Percona node
wsrep_cluster_name notary-cluster-1 The name of the Percona cluster
wsrep_sst_auth username:password The credentials for SST
wsrep_provider_options "gcache.size=8G" Replication options
====================== =========================================================== ==========================================================
Configure all replicas via
``/etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf`` as shown in the template
below.
.. literalinclude:: resources/wsrep.cnf
:caption: wsrep.cnf
:name: wsrep-cnf
The file ``/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf`` contains additional settings like the data directory. We're assuming
you keep the default ``/var/lib/mysql``.
Configure AppArmor, SELinux or other Kernel Security Module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you're changing the location of the database data directory, you might need to
configure your security module accordingly.
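As an illustrative sketch for AppArmor on Debian/Ubuntu (the alias mechanism and
paths below are assumptions; adapt them to your environment and security policy):

.. code:: sh

    # Map the default data directory to the new location for the mysqld profile
    echo 'alias /var/lib/mysql/ -> {{ new-data-directory }}/,' | sudo tee -a /etc/apparmor.d/tunables/alias
    # Reload AppArmor so the change takes effect
    sudo service apparmor reload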
On the first Percona node
^^^^^^^^^^^^^^^^^^^^^^^^^
Start the Database
~~~~~~~~~~~~~~~~~~
.. code:: sh
sudo /etc/init.d/mysql bootstrap-pxc
Watch the logs using ``tail -f /var/log/mysqld.log``. Look for a log entry like
``WSREP: Setting wsrep_ready to true``.
Create the Corda User
~~~~~~~~~~~~~~~~~~~~~
.. code:: sql
CREATE USER corda IDENTIFIED BY '{{ password }}';
Create the Database and Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: sql
CREATE DATABASE corda;
CREATE TABLE IF NOT EXISTS corda.notary_committed_states (
issue_transaction_id BINARY(32) NOT NULL,
issue_transaction_output_id INT UNSIGNED NOT NULL,
consuming_transaction_id BINARY(32) NOT NULL,
CONSTRAINT id PRIMARY KEY (issue_transaction_id, issue_transaction_output_id)
);
GRANT SELECT, INSERT ON corda.notary_committed_states TO 'corda';
CREATE TABLE IF NOT EXISTS corda.notary_request_log (
consuming_transaction_id BINARY(32) NOT NULL,
requesting_party_name TEXT NOT NULL,
request_signature BLOB NOT NULL,
request_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
request_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
CONSTRAINT rid PRIMARY KEY (request_id)
);
GRANT INSERT ON corda.notary_request_log TO 'corda';
Create the SST User
~~~~~~~~~~~~~~~~~~~
.. code:: sql
CREATE USER '{{ sst_user }}'@'localhost' IDENTIFIED BY '{{ sst_pass }}';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO '{{ sst_user }}'@'localhost';
FLUSH PRIVILEGES;
On all other Nodes
^^^^^^^^^^^^^^^^^^
Once you have updated the ``wsrep.cnf`` on all nodes, start MySQL on all the
remaining nodes of your cluster, i.e. on every node except the first one. The
config file is shown `above <#wsrep-cnf>`__.
.. code:: sh
service mysql start
Watch the logs using ``tail -f /var/log/mysqld.log``. Make sure you can start
the MySQL client on the command line and access the ``corda`` database on all
nodes.
.. code:: sh
mysql
mysql> use corda;
# The output should be `Database changed`.
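You can also confirm that all three nodes have joined the cluster by checking the
cluster size; the expected value for the example cluster is ``3``.

.. code:: sh

    # Credentials may be required, depending on your authentication setup
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"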
In the next section, we're :doc:`installing-the-notary-service`. You can read about :doc:`operating-percona` in a later section of this tutorial.


@ -0,0 +1,33 @@
Using the Bootstrapper
++++++++++++++++++++++
You can skip this section when you're setting up or joining a network with
doorman and network map.
Once the database is set up, you can prepare the configuration files of your
notary nodes and use the bootstrapper to create a Corda network, as documented
in :doc:`../setting-up-a-corda-network`. Remember to configure
``notary.serviceLegalName`` in addition to ``myLegalName`` for all members of
your cluster.
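As a rough sketch of the bootstrapping step (the JAR name and invocation below
are illustrative; check the bootstrapper documentation for the exact command):

.. code:: sh

    # Place corda.jar and a node.conf per notary replica in a single directory,
    # then run the bootstrapper against that directory
    java -jar network-bootstrapper.jar {{ path-to-config-directory }}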
Expected Outcome
~~~~~~~~~~~~~~~~
You will go from a set of configuration files to a directory tree containing a fully functional Corda network.
The notaries will be visible and available on the network. You can list available notaries using the node shell.
.. code:: sh
run notaryIdentities
The output of the above command should include the ``notary.serviceLegalName``
you have configured, e.g. ``O=HA Notary, L=London, C=GB``.
CorDapp developers should select the notary service identity from the network map cache.
.. code:: kotlin
serviceHub.networkMapCache.getNotary(CordaX500Name("HA Notary", "London", "GB"))


@ -0,0 +1,6 @@
Monitoring
++++++++++
The notary health-check CorDapp monitors all notaries of the network and
serves a metrics website (TBD where the documentation is hosted).


@ -0,0 +1,155 @@
In a network with Doorman and Network map
+++++++++++++++++++++++++++++++++++++++++
You can skip this section if you're not setting up or joining a network with
doorman and network map service.
Expected Outcome
~~~~~~~~~~~~~~~~
You will go from a set of configuration files to a fully functional Corda network.
The network map will be advertising the service identity of the Notary. Every
notary replica has obtained its own identity and the shared service identity
from the doorman.
Using the registration tool, you will obtain the service identity of your
notary cluster and distribute it to the keystores of all replicas.
The individual node identity is used by the messaging layer to route requests to
individual notary replicas. The notary replicas sign using the service identity
private key.
======== =========================== ===========================
Host Individual identity Service identity
======== =========================== ===========================
notary-1 O=Replica 1, L=London, C=GB O=HA Notary, L=London, C=GB
notary-2 O=Replica 2, L=London, C=GB O=HA Notary, L=London, C=GB
notary-3 O=Replica 3, L=London, C=GB O=HA Notary, L=London, C=GB
======== =========================== ===========================
The notaries will be visible and available on the network. To list the
available notary identities, run the following command in the Corda node shell:
.. code:: sh
run notaryIdentities
The output of the above command should include ``O=HA Notary, L=London, C=GB``.
CorDapp developers should select the notary service identity from the network map cache.
.. code:: kotlin
serviceHub.networkMapCache.getNotary(CordaX500Name("HA Notary", "London", "GB"))
Every notary replica's keystore contains the private key of the replica and the
private key of the notary service (with aliases ``identity-private-key`` and
``distributed-notary-private-key`` in the keystore). We're going to create and
populate the nodes' keystores later in this tutorial.
The Notary, the Doorman and the Network Map
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The notary is an essential component of every Corda network. Since the Notary
identity is part of the network parameters, it needs to be created first,
before other nodes can join the network.
Adding a Corda notary to an existing network is covered in
the network services documentation (TBD where this is hosted). Removing a notary from a network
is currently not supported.
.. image:: resources/doorman-light.png
:scale: 70%
The notary sends a certificate signing request (CSR) to the doorman for
approval. Once approved, the notary obtains a signed certificate from the
doorman. The notary can then produce a signed node info file that contains the
P2P addresses, the legal identity, certificate and public key. The node infos
of all notaries that are part of the network are included in the network
parameters. Therefore, the notary node info files have to be present when the
network parameters are created.
Registering with the Doorman
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Obtaining the individual Node Identities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Write the configuration files for your replicas as described in :doc:`installing-the-notary-service`.
Register all the notary replicas with the doorman using the ``--initial-registration`` flag.
.. code:: sh
java -jar corda.jar --initial-registration \
--network-root-truststore-password '{{ root-truststore-password }}' \
--network-root-truststore network-root-truststore.jks
Obtaining the distributed Service Identity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once you have completed the initial registration for all notary nodes, you can use
the registration tool to submit the certificate signing request (CSR) for the
service identity of your notary cluster. Read the documentation about the
`registration tool <https://github.com/corda/network-services/tree/master/registration-tool>`__
for detailed instructions.
Use the configuration file template below to configure the registration tool.
::
legalName = "{{ X500 name of the notary service }}"
email = "test@email.com"
compatibilityZoneURL = "https://{{ host }}:{{ port }}"
networkRootTrustStorePath = "network-root-truststore.jks"
networkRootTrustStorePassword = ""
keyStorePassword = ""
trustStorePassword = ""
crlCheckSoftFail = true
Run the command below to obtain the service identity of the notary cluster.
.. code:: sh
java -jar registration-tool.jar --config-file '{{ registration-config-file }}'
The service identity will be stored in a file
``certificates/notaryidentitykeystore.jks``. Distribute the
``distributed-notary-private-key`` into the keystores of all notary nodes that
are part of the cluster as follows:
* Copy the notary service identity to all notary nodes, placing it in the same directory as the ``nodekeystore.jks`` file and run the following command to import the service identity into the node's keystore:
.. code:: sh
java -jar registration-tool.jar --importkeystore \
--srcalias distributed-notary-private-key \
--srckeystore certificates/notaryidentitykeystore.jks \
--destkeystore certificates/nodekeystore.jks
* Check that the private keys are available in the keystore using the following command
.. code:: sh
keytool -list -v -keystore certificates/nodekeystore.jks | grep Alias
# Output:
# Alias name: cordaclientca
# Alias name: identity-private-key
# Alias name: distributed-notary-private-key
Network Map: Setting the Network Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This step is only applicable if you're the operator of the network map service.
In case the network map is operated by somebody else, you might need to send
them the node-info file of one of your notary nodes for inclusion in the
network parameters.
Copy the node info file of one of the notary replicas to the network map to
include the service identity in the network parameters. Follow the
instructions in the manual of the network services to generate the network
parameters (TBD where the documentation is hosted).
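A minimal sketch of that hand-over, assuming SSH access to the network map host
(the host name and target directory are placeholders):

.. code:: sh

    # The notary writes its signed node info file (nodeInfo-<hash>) to its base directory
    scp {{ notary-base-dir }}/nodeInfo-* {{ network-map-host }}:{{ network-parameters-input-dir }}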


@ -0,0 +1,39 @@
=============================
Setting up the Notary Service
=============================
In the previous section of this tutorial we set up a Percona cluster.
On top of the Percona cluster we're deploying three Corda notary nodes ``notary-{1,2,3}`` and
a single regular Corda node ``node-1`` that runs the notary health-check CorDapp.
If you're deploying VMs in your environment you might need to adjust the host names accordingly.
Configuration Files
+++++++++++++++++++
Below is a template for the notary configuration. Notice the parameters
``rewriteBatchedStatements=true&useSSL=false&failOverReadOnly=false`` of the
JDBC URL. See :doc:`../corda-configuration-file` for a complete reference.
Put the IP address or host name of the nearest Percona server first in the JDBC
URL. When running a Percona and a Notary replica on a single machine, list the
local IP first.
.. literalinclude:: resources/node.conf
:caption: node.conf
:name: node-conf
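For example, if ``notary-1`` runs alongside ``percona-1`` (``10.1.0.1`` in the
Percona section of this tutorial), the JDBC URL could look like this:

::

    jdbcUrl="jdbc:mysql://10.1.0.1,10.1.0.2,10.1.0.3/corda?rewriteBatchedStatements=true&useSSL=false&failOverReadOnly=false"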
.. note::
Omit ``compatibilityZoneURL`` and set ``devMode = true`` when using the bootstrapper.
Next Steps
++++++++++
.. toctree::
:maxdepth: 1
installing-the-notary-service-bootstrapper
installing-the-notary-service-netman


@ -0,0 +1,181 @@
=====================================
Highly Available Notary Service Setup
=====================================
About the HA Notary Installation
================================
In this chapter you'll learn how to set up, configure and start a highly
available (HA) Corda Notary from scratch. If you're targeting an environment
with doorman and network map, you require the registration tool. If you don't
require the doorman and network map, and you don't want to join an existing
network, the bootstrapper allows you to set up a cluster of nodes from a set of
configuration files.
The HA Notary relies on a Percona/XtraDB (Percona) cluster. How to set up Percona
is described below.
This guide assumes you're running a Debian-based Linux OS.
Double curly braces ``{{ }}`` are used to represent placeholder values
throughout this guide.
Overview
========
.. image:: resources/ha-notary-overview2.png
:scale: 75 %
The figure above shows the Corda nodes in green at the top, the Corda notaries
in red in the middle and the Percona nodes in blue at the bottom.
Corda nodes that request notarisation from the Notary by its service name
connect to the available Notary replicas in a round-robin fashion.
Since our Notary cluster consists of several Percona nodes and several Corda
Notary nodes, we achieve high availability (HA). Individual nodes of the
Percona and Notary clusters can fail, while clients are still able to
notarise transactions. The Notary cluster remains available. A three-node
Percona cluster as shown in the figure above can tolerate one crash fault.
.. note::
In production you should consider running five nodes or more, to be able to
tolerate more than one simultaneous crash fault. A single Corda Notary
replica is enough to serve traffic in principle, although its capacity might
not be sufficient, depending on your throughput and latency requirements.
Colocating Percona and the Notary Service
+++++++++++++++++++++++++++++++++++++++++
.. image:: resources/percona-colocated.png
:scale: 50%
You can run a Percona DB service and a Corda Notary service on the same machine.
Summary
+++++++
* Corda nodes communicate with the Notary cluster via P2P messaging; the messaging layer handles selecting an appropriate Notary replica by its service legal name.
* Corda nodes connect to the Notary cluster members round-robin.
* The notaries communicate with the underlying Percona cluster via JDBC.
* The Percona nodes communicate with each other via group communication (GComm).
* The Percona replicas should only be reachable from each other and from the Notary nodes.
* The Notary P2P ports should be reachable from the internet (or at least from the rest of the Corda network you're building or joining).
* We recommend running the notaries and the Percona service in a shared private subnet, opening up only the P2P ports of the notaries for external traffic.
Legal Names and Identities
++++++++++++++++++++++++++
Every Notary replica has two legal names: its own legal name, specified by
``myLegalName``, e.g. ``O=Replica 1, C=GB, L=London``, and the service legal name,
specified in the configuration by ``notary.serviceLegalName``, e.g. ``O=HA Notary,
C=GB, L=London``. Only the service legal name is included in the network
parameters. CorDapp developers should select the Notary service identity from the network map cache.
.. code:: kotlin
serviceHub.networkMapCache.getNotary(CordaX500Name("HA Notary", "London", "GB"))
Every Notary replica's keystore contains the private key of the replica and the
private key of the Notary service (with aliases ``identity-private-key`` and
``distributed-notary-private-key`` in the keystore). We're going to create and
populate the nodes' keystores later in this tutorial.
Choosing Installation Path
==========================
.. note::
If you want to connect to a Corda network with a doorman and network map service,
use the registration tool to create your service identity. If you want to set up
a test network for development, or a private network without doorman and
network map, using the bootstrapper is recommended.
Expected Data Volume
====================
For non-validating notaries, the Notary stores roughly one kilobyte per transaction.
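At that rate, ten million notarised transactions occupy on the order of 10 GB on
each Percona node, before indexes and replication overhead.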
Prerequisites
=============
* Java runtime
* Corda JAR
* Notary Health-Check JAR
* Bootstrapper JAR (only required when setting up network without doorman and network map)
* Network Registration tool (only required when setting up a network with doorman and network map)
* Root access to a Linux machine or VM to install Percona
* The private IP addresses of your DB hosts (where we're going to install Percona)
* The public IP addresses of your Notary hosts (in order to advertise these IPs for P2P traffic)
Your Corda distribution should contain all the JARs listed above.
Security
========
Credentials
+++++++++++
Make sure you have the following credentials available; create them if necessary and always
keep them safe.
================================ ============================================================================================================
Password or Keystore Description
================================ ============================================================================================================
database root password used to create the Corda user, setting up the DB and tables (only required for some installation methods)
Corda DB user password used by the Notary service to access the DB
SST DB user password used by the Percona cluster for data replication (SST stands for state snapshot transfer)
Network root truststore password (not required when using the bootstrapper)
Node keystore password (not required when using the bootstrapper)
The network root truststore (not required when using the bootstrapper)
================================ ============================================================================================================
Networking
++++++++++
Percona Cluster
~~~~~~~~~~~~~~~
===== =======================
Port Purpose
===== =======================
3306 MySQL client connections (from the Corda Notary nodes)
4444 SST via rsync and Percona XtraBackup
4567 Write-set replication traffic (over TCP) and multicast replication (over TCP and UDP)
4568 IST (Incremental State Transfer)
===== =======================
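As an illustrative example, firewall rules for a Percona node using ``ufw``,
assuming the notaries and database nodes share the private subnet
``10.1.0.0/24`` (adapt the subnet and tooling to your environment):

.. code:: sh

    # Allow cluster and notary traffic from the private subnet only
    sudo ufw allow from 10.1.0.0/24 to any port 3306 proto tcp
    sudo ufw allow from 10.1.0.0/24 to any port 4444 proto tcp
    sudo ufw allow from 10.1.0.0/24 to any port 4567
    sudo ufw allow from 10.1.0.0/24 to any port 4568 proto tcp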
Follow the `Percona documentation <https://www.percona.com/doc/percona-xtradb-cluster/5.7/security/encrypt-traffic.html>`__
if you need to encrypt the traffic between your Corda nodes and Percona and between Percona nodes.
Corda Node
~~~~~~~~~~
========= ======= ==============================
Port Example Purpose
========= ======= ==============================
P2P Port 10002 P2P traffic (external)
RPC Port 10003 RPC traffic (internal only)
========= ======= ==============================
Later in the tutorial we cover the Notary service configuration in detail, in :doc:`installing-the-notary-service`.
Keys and Certificates
+++++++++++++++++++++
Keys are stored the same way as for regular Corda nodes in the ``certificates``
directory. If you're interested in the details you can find out
more in the :doc:`../permissioning` document.
Next Steps
==========
.. toctree::
:maxdepth: 1
installing-percona
installing-the-notary-service
operating-percona


@ -0,0 +1,129 @@
==================================================
Percona Monitoring, Backup and Restore (Advanced)
==================================================
Monitoring
==========
Percona Monitoring and Management (PMM) is a platform for managing and
monitoring your Percona cluster. See the `PMM documentation
<https://www.percona.com/doc/percona-monitoring-and-management/index.html>`__.
Running PMM Server
^^^^^^^^^^^^^^^^^^
Install PMM Server on a single machine of your cluster.
.. code:: sh
docker run -d \
-p 80:80 \
--volumes-from pmm-data \
--name pmm-server \
percona/pmm-server:latest
Installing PMM Client
^^^^^^^^^^^^^^^^^^^^^^
You need to configure the Percona repositories first, as described in :doc:`installing-percona`.
Install and configure PMM Client on all the machines that are running Percona.
.. code:: sh
sudo apt-get install pmm-client
sudo pmm-admin config --server ${PMM_HOST}:${PMM_PORT}
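After configuring the client, you typically also register the local MySQL
instance for monitoring; the exact sub-command depends on your PMM version (the
call below assumes PMM 1.x):

.. code:: sh

    # Register the local Percona server's metrics with the PMM server
    sudo pmm-admin add mysql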
Backup
======
You can take backups with the ``XtraBackup`` tool. The command below creates a
backup in ``/data/backups``.
.. code:: sh
xtrabackup --backup --target-dir=/data/backups/
Restore
=======
Stop the Cluster
^^^^^^^^^^^^^^^^
Stop the Percona cluster by shutting down the nodes one by one. Prepare the backup for the restore using
.. code:: sh
xtrabackup --prepare --target-dir=/data/backups/
Restore from a Backup
^^^^^^^^^^^^^^^^^^^^^
.. code:: sh
mv '{{ data-directory }}' '{{ data-directory-backup }}'
xtrabackup --copy-back --target-dir=/data/backups/
sudo chown -R mysql:mysql '{{ data-directory }}'
Note that you might need the data in ``{{ data-directory-backup }}`` in case you
need to repair and replay from the binlog, as described below.
Start the first Node
^^^^^^^^^^^^^^^^^^^^
.. code:: sh
/etc/init.d/mysql bootstrap-pxc
Repair
======
You can recover from some accidents, e.g. a table drop, by restoring the last
backup and then applying the binlog up to the offending statement.
Replay the Binary Log
^^^^^^^^^^^^^^^^^^^^^
XtraBackup records the binlog position of the backup in
``xtrabackup_binlog_info``. Use this position to start replaying the binlog from
your data directory (e.g. ``/var/lib/mysql``, or the target directory of the move command
used in the restore step above).
.. code:: sh
mysqlbinlog '{{ binlog-file }}' --start-position=<start-position> > binlog.sql
If there are offending statements, such as accidental table drops, you can
open ``binlog.sql`` for examination.
Optionally, you can also pass ``--base64-output=decode-rows`` to decode every statement into a human-readable format.
.. code:: sh
mysqlbinlog $BINLOG_FILE --start-position=$START_POS --stop-position=$STOP_POS > binlog.sql
# Replay the binlog
mysql -u root -p < binlog.sql
Start remaining Nodes
^^^^^^^^^^^^^^^^^^^^^
Finally, start the remaining nodes of the cluster.
Restarting a Cluster
====================
When all nodes of the cluster are down, manual intervention is needed to bring
the cluster back up. On the node with the most advanced replication index,
set ``safe_to_bootstrap: 1`` in the file ``grastate.dat`` in the data directory.
You can use ``SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';`` to find out the
sequence number of the last committed transaction. Start the first node using
``/etc/init.d/mysql bootstrap-pxc``. Bring back one node at a time and watch
the logs. If an SST is required, the first node can only
serve as a donor for one node at a time.
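A minimal sketch of the procedure, assuming the default data directory
``/var/lib/mysql``:

.. code:: sh

    # Inspect the replication state on each node and compare the seqno values
    sudo cat /var/lib/mysql/grastate.dat
    # On the most advanced node only, mark it as safe to bootstrap
    sudo sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
    # Bootstrap the cluster from that node, then start the others one at a time
    sudo /etc/init.d/mysql bootstrap-pxc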
Similar to restoring from a backup, restarting the entire cluster is an
operation that deserves practice. See the `documentation of the Safe-to-Bootstrap feature
<http://galeracluster.com/2016/11/introducing-the-safe-to-bootstrap-feature-in-galera-cluster/>`__
for details.


@ -0,0 +1,42 @@
notary {
mysql {
connectionRetries={{ number of Percona nodes }}
dataSource {
autoCommit="false"
jdbcUrl="jdbc:mysql://{{ your cluster IPs }}/{{ DB name, e.g. corda }}?rewriteBatchedStatements=true&useSSL=false&failOverReadOnly=false"
username={{ DB username }}
password={{ DB password }}
}
}
validating=false
serviceLegalName="O=HA Notary, C=GB, L=London"
}
compatibilityZoneURL = "https://example.com:1300"
devMode = false
rpcSettings {
address : "localhost:18003"
adminAddress : "localhost:18004"
}
keyStorePassword = ""
trustStorePassword = ""
p2pAddress : "{{ fully qualified domain name, e.g. host.example.com (or localhost in development) }}:{{ P2P port }}"
rpcUsers=[]
myLegalName : "O=Replica 1, C=GB, L=London"
// We recommend using Postgres for the node database, or another supported
// database that you already have set up. Note that the notarised states
// are written to the MySQL database configured in `notary.mysql`.
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://[HOST]:[PORT]/postgres"
dataSource.user = [USER]
dataSource.password = [PASSWORD]
}
database = {
transactionIsolationLevel = READ_COMMITTED
schema = [SCHEMA]
}
jarDirs = [PATH_TO_JDBC_DRIVER_DIR]


@ -0,0 +1,48 @@
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
wsrep_provider_options="gcache.size=8G"
# TODO set options related to the timeouts for WAN:
# evs.keepalive_period=PT3s
# evs.inactive_check_period=PT10S
# evs.suspect_timeout=PT30S
# evs.install_timeout=PT1M
# evs.send_window=1024
# evs.user_send_window=512
# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address="gcomm://{{ your_cluster_IPs }}"
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave thread to use
wsrep_slave_threads= 8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address
wsrep_node_address={{ node_address }}
# Cluster name
wsrep_cluster_name={{ cluster_name }}
#If wsrep_node_name is not specified, then system hostname will be used
#wsrep_node_name=
#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method
wsrep_sst_auth={{ sst_user }}:{{ sst_pass }}


@ -0,0 +1,13 @@
========================
Running a notary cluster
========================
.. toctree::
:maxdepth: 1
introduction
installing-percona
installing-the-notary-service
installing-the-notary-service-bootstrapper
installing-the-notary-service-netman
operating-percona


@ -27,9 +27,10 @@ World tutorials, and can be read in any order.
flow-state-machines
flow-testing
running-a-notary
running-a-notary-cluster/toctree
oracles
tutorial-custom-notary
tutorial-tear-offs
tutorial-attachments
event-scheduling
tutorial-observer-nodes
tutorial-observer-nodes