Dockerized High Availability Configuration
Introduction
Actions Pro images are available for customers in AWS ECR. To configure the necessary components and services, a Docker Compose file is used. This document provides step-by-step instructions for installing and running Actions Pro in a 3-node cluster.
Node Specifications
The setup requires three nodes, all of which must meet the specifications detailed in the installation instructions.
Each node must be able to communicate with the others using their domain names or IP addresses.
System Configuration
To allow the Elasticsearch container to allocate sufficient virtual memory areas, increase the vm.max_map_count kernel parameter on all nodes:
- Edit the /etc/sysctl.conf file and add the following line: vm.max_map_count=262144
- Save the file and apply the changes with: sysctl -p
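The two steps above can be combined; a sketch to run as root on every node:

```shell
# Persist the setting and reload kernel parameters.
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
# Verify the running value:
sysctl vm.max_map_count
```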
Required Open Ports
Primary Node:
- 3306 – MariaDB
- 4004, 15672 – RabbitMQ
- 5601 – Kibana
- 8443, 8080, 8005 – Actions Pro Tomcat
- 9200, 9300 – Elasticsearch
Secondary Nodes:
- 4004, 15672 – RabbitMQ (needed only on the first secondary node, which acts as the backup RabbitMQ host)
- 8443, 8080, 8005 – Actions Pro Tomcat
- 9200, 9300 – Elasticsearch
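As an illustration, the ports above could be opened with firewalld on the primary node. This is a sketch assuming firewalld is the active firewall; adapt it to your distribution's tooling:

```shell
# Open the primary-node ports (TCP) permanently, then reload the firewall.
for port in 3306 4004 15672 5601 8443 8080 8005 9200 9300; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload
```

On the secondary nodes, drop 3306 and 5601 from the list (and the RabbitMQ ports on the second secondary node).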
Prerequisites
Ensure that Docker and Docker Compose are installed on all three nodes.
Docker Compose Setup
Node Assignment
- Download and extract docker-compose.zip on all three nodes.
- A naming convention for the nodes is assumed. For example:
  - Primary Node: node1
  - First Secondary Node: node2
  - Second Secondary Node: node3
- The extracted folder contains the following Docker Compose files:
  - docker-compose-ha-pr-node1.yml (primary node)
  - docker-compose-ha-se-node2.yml (first secondary node)
  - docker-compose-ha-se-node3.yml (second secondary node)
The extracted folder also contains supporting folders, such as config, certs, and license.
Service Deployment
- The secondary nodes do not deploy the MariaDB, Kibana, or rsLog components; these services are removed from the Docker Compose files of both secondary nodes.
- RabbitMQ is configured in primary/backup mode and runs only on node1 (primary) and node2 (first secondary).
Configuration
Environment Variables
Actions Pro is configured using properties in the blueprint.properties file. These can be assigned to services in the Docker Compose file under the environment attribute.
Handling Dots in Environment Variables
In a bash shell, . is not a valid character in environment variable names. Replace dots with underscores. Example:
- Original: rscontrol.log4j.Loggers.Root.level=DEBUG
- Updated: rscontrol_log4j_Loggers_Root_level=DEBUG
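The conversion is mechanical, so it can be scripted; a minimal sketch using the property from the example above:

```shell
# Convert a dotted property name into a valid environment variable name.
prop="rscontrol.log4j.Loggers.Root.level"
var="$(echo "$prop" | tr '.' '_')"
export "$var=DEBUG"
echo "$var"
```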
Node-Specific Configuration
- The LOCALHOST environment variable for rsview and rscontrol must match the corresponding domain name or IP address in RSVIEW_NODES and RSCONTROL_NODES in the actionspro environment file.
- On node3, set SERVER_ID=3 in the actionspro environment file. Increment this value for additional nodes.
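For node3, the relevant entries in the actionspro environment file might then read as follows. This is an illustrative fragment, and the comma-separated node-list format is an assumption to be checked against the shipped file:

```shell
# Illustrative actionspro environment fragment for node3 (SERVER_ID is unique per node).
SERVER_ID=3
RSVIEW_NODES=primary-host.domain.com,secondary1-host.domain.com,secondary2-host.domain.com
RSCONTROL_NODES=primary-host.domain.com,secondary1-host.domain.com,secondary2-host.domain.com
```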
The primary node hosts all services listed in its Docker Compose file.
The second node, node2, will NOT host the mariadb, kibana, or rslog services, as they run only on the primary node.
The third node, node3, will NOT host the mariadb, kibana, rabbitmq, or rslog services.
Updating Domain Names or IP Addresses
To simplify setup, the Docker Compose file, the actionspro environment file, and the kibana.yml file (located in the config folder on the primary node) are preconfigured with placeholder domain names:
- Primary Node: primary-host.domain.com
- First Secondary Node: secondary1-host.domain.com
- Second Secondary Node: secondary2-host.domain.com
Before deployment, replace these placeholders with the actual domain names or IP addresses of your respective host nodes.
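One way to do the replacement in bulk is with sed. In this sketch, node1.example.com through node3.example.com stand in for your actual host names; run it from the extracted folder on each node, substituting that node's Compose file name:

```shell
# Substitute real host names for the placeholders in the preconfigured files.
for f in docker-compose-ha-pr-node1.yml actionspro config/kibana.yml; do
  [ -f "$f" ] || continue
  sed -i \
    -e 's/primary-host\.domain\.com/node1.example.com/g' \
    -e 's/secondary1-host\.domain\.com/node2.example.com/g' \
    -e 's/secondary2-host\.domain\.com/node3.example.com/g' \
    "$f"
done
```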
Elasticsearch Configuration
- The discovery.seed_hosts property should contain a comma-separated list of all node domain names or IPs, excluding the current node.
- Each additional Elasticsearch node must have a unique service name, reflected in node.name and container_name.
- The cluster.initial_master_nodes property must list the unique Elasticsearch service names in the cluster.
- In the health-check configuration, the curl command should reference the current node's domain name or IP.
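Taken together, the Elasticsearch service for the first secondary node might look like the excerpt below. This is a hypothetical sketch, not the shipped file: the service names (elasticsearch1 through elasticsearch3), the health-check command, and the option spellings should all be checked against the actual Compose files.

```yaml
services:
  elasticsearch2:
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      # All other nodes, excluding this one:
      - discovery.seed_hosts=primary-host.domain.com,secondary2-host.domain.com
      # Unique Elasticsearch service names across the cluster:
      - cluster.initial_master_nodes=elasticsearch1,elasticsearch2,elasticsearch3
    healthcheck:
      # The curl target is this node's own host name:
      test: ["CMD-SHELL", "curl -s http://secondary1-host.domain.com:9200 || exit 1"]
```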
On all nodes, the kibana_yml_server_host and kibana_yml_server_publicBaseUrl properties in the actionspro environment file must use the domain name of the primary host. The public base URL (the value of kibana_yml_server_publicBaseUrl) should be accessible via a web browser.
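In the actionspro environment file, the two properties would then read roughly as follows. The :5601 port in the URL is an assumption; use whatever port Kibana is actually exposed on:

```shell
# Kibana settings pointing at the primary host (illustrative fragment).
kibana_yml_server_host=primary-host.domain.com
kibana_yml_server_publicBaseUrl=https://primary-host.domain.com:5601
```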
Starting the Cluster
Preparation
Ensure that all nodes contain:
- The certs folder with keystore.jks and keystore.PKCS12 for Tomcat and Kibana (on node1 only)
- A valid license file in the license folder
Starting Services
- On the primary node, start all services:
  docker compose -f <docker-compose-file.yml> up -d
- Simultaneously, on a secondary node, start the Elasticsearch component:
  docker compose -f <docker-compose-file.yml> up elasticsearch2 -d
- On the primary node, wait for rsview to show a healthy status, then start all components on the secondary nodes:
  docker compose -f <docker-compose-file.yml> up -d
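With the file names from this document, the startup sequence looks like the sketch below; the docker ps filter on rsview is one illustrative way to watch for the healthy status:

```shell
# Step 1 - node1: start every service.
docker compose -f docker-compose-ha-pr-node1.yml up -d

# Step 2 - node2, at the same time: start only Elasticsearch.
docker compose -f docker-compose-ha-se-node2.yml up elasticsearch2 -d

# Step 3 - node1: repeat until the rsview container reports (healthy).
docker ps --filter name=rsview --format '{{.Names}}: {{.Status}}'

# Step 4 - node2 and node3: start the remaining services.
docker compose -f docker-compose-ha-se-node2.yml up -d
docker compose -f docker-compose-ha-se-node3.yml up -d
```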
Viewing Logs
To monitor logs for a specific service, use:
docker logs <service-name> -f
Accessing Actions Pro
Once all services are running, Actions Pro can be accessed from any node using:
https://<primary-node-domain>:8443