High Availability Deployment
StackStorm has been systematically built with high availability (HA) as a goal. The exact deployment steps to achieve HA depend on the specifics of the infrastructure in which StackStorm is deployed. This guide briefly explains the various StackStorm services, how they interact, and the external services necessary for StackStorm to function. Note that StackStorm components also scale horizontally, increasing system throughput while achieving higher availability.
The section Overview: Single-box Reference Deployment provides a detailed picture and explanation of how single-box deployments work. We reproduce the picture here for context, and use it as a reference to layer on HA deployment-specific details.

Note
A reproducible blueprint of a StackStorm HA cluster is available as a Helm chart, based on Docker and Kubernetes. See StackStorm HA Cluster in Kubernetes - BETA.
Components
First, a review of StackStorm components:
st2api
This process hosts the REST API endpoints that serve requests from the WebUI, CLI and ChatOps. It maintains connections to MongoDB to store and retrieve information, and connects to RabbitMQ to push messages onto the message bus. It is a Python WSGI app running under a gunicorn-managed process which by default listens on port 9101. It is front-ended by Nginx, acting as a reverse proxy.
Multiple st2api processes can be behind a load balancer in an active-active configuration. Each of these processes can be deployed on separate compute instances.
st2auth
All authentication is managed by this process. It needs a connection to MongoDB and an authentication backend. See authentication backends for more information. It is a Python WSGI app running under a gunicorn-managed process which by default listens on port 9100. It is front-ended by Nginx acting as a reverse proxy.
Multiple st2auth processes can be behind a load balancer in an active-active configuration. Each of these processes can be deployed on separate compute instances. If using the PAM authentication backend, special care has to be taken to guarantee that all boxes running an instance of st2auth have the same users. Generally, all st2auth processes should see the same identities, via some provider if applicable, for the system to work predictably.
st2stream
This process exposes a server-sent event stream. It requires access to both MongoDB and RabbitMQ. It is also a gunicorn-managed process, listening on port 9102 by default, and is front-ended by Nginx acting as a reverse proxy. Clients like the WebUI and ChatOps maintain a persistent connection with an st2stream process and receive updates from the st2stream server.
Multiple st2stream processes can be behind a load balancer in an active-active configuration. Since clients maintain a persistent connection with a specific instance, a client will briefly lose events if an st2stream process goes down. It is the responsibility of the client to reconnect to an alternate stream connection via the load balancer. Note that this is in contrast with st2api, where each connection from a client is short-lived. Take the long-lived nature of connections to this process into account when configuring timeouts for load balancers, WSGI app servers like gunicorn, etc.
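For example, reconnect behavior can be observed by subscribing to the event stream through the load balancer with curl; the hostname and token below are placeholders for your load balancer endpoint and a valid auth token:

$ curl -sS --no-buffer -H "X-Auth-Token: $ST2_TOKEN" https://st2-lb.example.com/stream/v1/stream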
st2sensorcontainer
st2sensorcontainer manages the sensors to be run on a node. It will start, stop and restart sensors running on a node. In this case a node is the same as a compute instance, i.e. a Virtual Machine. In future this could be a container.
It is possible to run st2sensorcontainer in HA mode by running one process on each compute instance. Each sensor node needs to be provided with proper partition information to share work with other sensor nodes, so that the same sensor does not run on different nodes. See Partitioning Sensors for information on how to partition sensors, and the sketch below. Currently st2sensorcontainer processes do not form a cluster to distribute work or take over work if some nodes in the cluster disappear. It is possible for a sensor itself to be implemented with HA in mind, so that the same sensor can be deployed on multiple nodes with the sensor managing active-active or active-passive. Providing platform-level HA support for sensors is likely to be an enhancement to StackStorm in future releases.
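As a sketch, each node's /etc/st2/st2.conf names the node and selects a partition provider; the node name and kvstore provider here are illustrative, and the supported providers are listed in Partitioning Sensors:

[sensorcontainer]
sensor_node_name = sensornode1
partition_provider = name:kvstore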
By default the sensor container service runs in managed mode. This means that the sensor container process manages child processes for all the running sensors and restarts them if they crash or exit.
In some scenarios this is not desired and service / process life-cycle (restarting, scaling out,etc.) is handled by a third party service such as Kubernetes.
To account for such deployments, the sensor container can be started in single sensor mode using the --single-sensor-mode and --sensor-ref command line options. When those options are provided, the sensor container service will run a single sensor and exit immediately if the sensor crashes or exits.
For example:
st2sensorcontainer --single-sensor-mode --sensor-ref linux.FileWatchSensor
st2rulesengine
st2rulesengine evaluates rules when it sees TriggerInstances and decides if an ActionExecution is to be requested. It needs access to MongoDB to locate rules and RabbitMQ to listen for TriggerInstances and request ActionExecutions.
Multiple st2rulesengine processes can run in active-active with only connections to MongoDB and RabbitMQ. They will share the TriggerInstance load and naturally pick up more work if one or more of the processes becomes unavailable.
st2timersengine
st2timersengine is responsible for scheduling all user-specified timers. See timers for the specifics on setting up timers via rules. The st2timersengine process needs access to both the Mongo database and the RabbitMQ message bus.
Exactly one active st2timersengine process must be running to schedule all timers. Having more than one active st2timersengine will result in duplicate timer events and therefore duplicate rule evaluations, leading to duplicate workflows or actions.
To address failover in HA deployments, use external monitoring of the st2timersengine process to ensure one process is running, and to trigger spinning up a new st2timersengine process if it fails. Losing the st2timersengine means no timer events will be injected into StackStorm, and therefore no timer rules will be evaluated.
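External monitoring is site-specific; as a minimal sketch on systemd-based installs, a drop-in override can restart a crashed process on the same box (failing over to another box still requires external tooling):

# /etc/systemd/system/st2timersengine.service.d/override.conf
[Service]
Restart=always
RestartSec=5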
st2workflowengine
st2workflowengine drives the execution of orquesta workflows. Once the orquesta action runner passes the workflow execution request to st2workflowengine, the workflow engine evaluates the execution graph generated by the workflow definition and identifies the next set of tasks to run. If the workflow execution is still in a running state and there are tasks identified, the workflow engine launches new action executions according to the task spec in the workflow definition.
When an action execution completes under the context of an orquesta workflow, st2workflowengine processes the completion logic and determines if the task is completed. If the task is completed, the workflow engine evaluates the criteria for task transition, identifies the next set of tasks, and launches new action executions accordingly. This continues until there are no more tasks to execute or the workflow execution reaches a completed state.
Multiple st2workflowengine processes can run in active-active with only connections to MongoDB and RabbitMQ. All the workflow engine processes share the load and pick up more work if one or more of the processes becomes unavailable. However, please note that if one of the workflow engines goes offline unexpectedly while processing a request, the request or the particular instance of the workflow execution may be left in an unexpected state.
st2actionrunner
All ActionExecutions are handled by st2actionrunner. Once an execution is scheduled, st2actionrunner manages its life-cycle through to one of the terminal states.
Multiple st2actionrunner processes can run in active-active with only connections to MongoDB and RabbitMQ. Work gets naturally distributed across runners via RabbitMQ. Adding more st2actionrunner processes increases the ability of StackStorm to execute actions.
In a proper distributed setup it is recommended to set up Zookeeper or Redis to provide a distributed co-ordination layer. See Policies. The default file-based co-ordination backend will not work as it would in a single-box deployment.
To increase the number of workers per st2actionrunner service, refer to the Configuring Action Runner Workers section of the config docs.
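As a hedged sketch of increasing the worker count, the service's environment file can set the number of runners on packaged installs; the exact path and variable are packaging details, so verify them against the Configuring Action Runner Workers docs:

# /etc/default/st2actionrunner (Debian/Ubuntu) or /etc/sysconfig/st2actionrunner (EL)
WORKERS=20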
st2scheduler
st2scheduler is responsible for handling ingress action execution requests. It takes incoming requests off the bus and queues them for eventual scheduling with an instance of st2actionrunner.
Multiple instances of st2scheduler can be run at a time. Database versioning prevents the same execution request from being picked up by multiple schedulers. Scheduler garbage collection handles executions that might have failed to be scheduled by a failed st2scheduler instance.
st2notifier
This is a dual-purpose process: its main function is to generate st2.core.actiontrigger and st2.core.notifytrigger on the completion of ActionExecutions. The auxiliary purpose is to act as a backup scheduler for actions that may not have been scheduled.
Multiple st2notifier processes can run in active-active mode, using connections to RabbitMQ and MongoDB. For the auxiliary purpose to function in an HA deployment when more than one st2notifier is running, either Zookeeper or Redis is required to provide co-ordination. It is also possible to designate a single st2notifier as the provider of auxiliary functions by disabling the scheduler in all but one st2notifier.
st2garbagecollector
An optional service that cleans up old executions and other operational data, based on configuration. By default this process does nothing, and needs to be configured before it performs any work; a sample configuration is sketched below.
By design it is a singleton process. Running multiple instances in active-active will not yield much benefit, but will not do any harm. The ideal configuration is active-passive, but StackStorm itself does not provide the ability to run this in active-passive.
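For illustration, retention is configured via TTLs (in days) in /etc/st2/st2.conf; the values below are examples, and the supported options and minimums are described in the purging docs:

[garbagecollector]
action_executions_ttl = 30
trigger_instances_ttl = 30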
Required Dependencies
This section provides some HA recommendations for the dependencies required by StackStorm components. It should serve as a guide only; the exact configuration will depend upon the site infrastructure.
MongoDB
StackStorm uses this to cache Action, Rule and Sensor metadata which already live in the filesystem. All the content should ideally be source-control managed, preferably in a git repository. StackStorm also stores operational data like ActionExecutions, TriggerInstances etc. The Key-Value datastore contents are also maintained in MongoDB.
MongoDB supports replica set high availability, which we recommend to provide safe failover. See here for how to configure StackStorm to connect to MongoDB replica sets.
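For example, StackStorm can be pointed at a replica set with a MongoDB connection URI in /etc/st2/st2.conf; the hostnames and replica set name here are placeholders:

[database]
host = mongodb://st2-mongo-1,st2-mongo-2,st2-mongo-3/?replicaSet=rs0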
Loss of connectivity to the MongoDB cluster will cause downtime for StackStorm. However, once a MongoDB replica is brought back, it should be possible to bring StackStorm back to an operational state by simply reloading the content (through st2ctl reload --register-all and st2 key load). Easy access to old ActionExecutions will be lost, but all the data of old ActionExecutions will still be available in audit logs.
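A minimal recovery sketch, assuming content lives on each box and a key-value export was saved earlier to a file (keys.json is a placeholder name):

$ st2ctl reload --register-all
$ st2 key load keys.json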
RabbitMQ
RabbitMQ is the communication hub for StackStorm to co-ordinate and distribute work. See the RabbitMQ documentation to understand HA deployment strategies.
Our recommendation is to mirror all the queues and exchanges so that the loss of one server does not affect functionality.
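With classic mirrored queues, for example, a policy applied on any cluster node mirrors all queues; adjust the pattern and definition to your RabbitMQ version's recommendations:

$ sudo rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all","ha-sync-mode":"automatic"}'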
See here for how to configure StackStorm to connect to a RabbitMQ cluster.
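As a sketch, /etc/st2/st2.conf can enumerate the cluster members (credentials and hostnames here are placeholders):

[messaging]
cluster_urls = amqp://guest:guest@st2-rabbitmq-1:5672, amqp://guest:guest@st2-rabbitmq-2:5672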
Coordination
Support for workflows with concurrent task executions and concurrency policies for action executions relies on a proper co-ordination backend in a distributed deployment to work correctly.
The coordination service can be configured to use different backends, such as Redis or Zookeeper. For the single-node installation script, Redis is installed and configured by default.
This shows how to run a replicated Zookeeper setup. (Note: Make sure to refer to the documentation matching the version of your running Zookeeper installation, if any.) See this to understand Redis deployments using Sentinel.
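The backend is selected via the coordination URL in /etc/st2/st2.conf, as in the sample config later in this guide; the hostnames below are placeholders:

[coordination]
url = redis://st2-redis.example.com:6379
# or, for Zookeeper:
# url = kazoo://st2-zookeeper.example.com:2181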
Nginx and Load Balancing
A load balancer is required to reverse-proxy each instance of st2api, st2auth and st2stream. In the reference setup, Nginx is used for this. This server terminates SSL connections, shields clients from the internal port numbers of the various services, and requires only ports 80 and 443 to be open on the hosts.
Often it is best to deploy one set of all these services on a compute instance and share an Nginx server.
There is also a need for a load balancer to front-end all the REST services. This provides an HA deployment for REST services as well as a single endpoint for clients. Most deployment infrastructures will already have a preferred load balancer solution, so we do not provide any specific recommendations.
Sharing Content
In an HA setup with st2api, st2actionrunner and st2sensorcontainer each running on multiple boxes, the question of managing distributed content is crucial. StackStorm does not provide a built-in solution for distributing content to various boxes. Instead it relies on external management of StackStorm content. Here are a few strategies:
Read-Write NFS mounts
If the content folders, i.e. /opt/stackstorm/packs and /opt/stackstorm/virtualenvs, are placed on read-write NFS mounts, then writes from any StackStorm node will be visible to other nodes. Special care needs to be taken with /opt/stackstorm/virtualenvs, since it has symlinks to system libraries. If care is not taken to provision all host boxes in an identical manner, it could lead to unpredictable behavior. Managing the virtualenvs on every host box individually would be a more robust approach. An example set of mounts is sketched below.
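A hedged /etc/fstab sketch of such mounts; the NFS server and export paths are placeholders:

nfs.example.com:/export/stackstorm/packs        /opt/stackstorm/packs        nfs  defaults  0  0
nfs.example.com:/export/stackstorm/virtualenvs  /opt/stackstorm/virtualenvs  nfs  defaults  0  0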
Content management
Manage pack installation using a configuration management tool of your choice, such as Ansible, Puppet, Chef, or Salt. Assuming that the list of packs to be deployed is static, deploying content to StackStorm nodes via CM tools could be a sub-step of an overall StackStorm deployment. This is perhaps the better of the two approaches for ending up with a predictable HA deployment.
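As an illustrative sketch (hostnames and paths are placeholders), content can be pushed from a git checkout to every node, with registration run once since all nodes share the same database:

$ rsync -av packs/ st2-multi-node-1:/opt/stackstorm/packs/
$ rsync -av packs/ st2-multi-node-2:/opt/stackstorm/packs/
$ ssh st2-multi-node-1 "sudo st2ctl reload --register-all"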
Reference HA setup
In this section we provide a highly opinionated, and therefore prescriptive, approach to deploying StackStorm in HA. This deployment has three independent boxes: one "controller box" and two "blueprint boxes." We'll call these boxes st2-multi-node-cntl, st2-multi-node-1 and st2-multi-node-2. For the sake of reference we will use Ubuntu 18.04 as the base OS. You can of course also use RedHat/RockyLinux/CentOS.

StackStorm HA reference deployment.
Controller Box
This box runs all the shared required dependencies and some StackStorm components:
Nginx as load balancer
MongoDB
RabbitMQ
Redis/Zookeeper
st2chatops
st2web
In practice MongoDB, RabbitMQ, and Redis/Zookeeper will usually be on standalone clusters managed outside of StackStorm. The two shared components (st2chatops and st2web) are placed here for the sake of convenience. They could be placed anywhere with the right configuration.
The Nginx load balancer can easily be switched out for Amazon ELB, HAProxy or any other of your choosing. In that case st2web, which is served off this Nginx instance, will also need a new home.
st2chatops, which uses hubot, is not easily deployed in HA. Using something like keepalived to maintain st2chatops in an active-passive configuration is an option, as sketched below.
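A minimal keepalived sketch for such an active-passive pair; the interface, router id and virtual IP are placeholders, and the passive box would use a lower priority:

# /etc/keepalived/keepalived.conf
vrrp_instance st2chatops {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
}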
Follow these steps to provision a controller box on Ubuntu 18.04:
Install Required Dependencies
Install MongoDB, RabbitMQ, and Redis. The Python redis client is already included in the StackStorm virtualenv; if using Zookeeper, the kazoo module needs to be installed into the StackStorm virtualenv instead:
$ sudo apt-get install -y mongodb-server rabbitmq-server redis-server
Fix bind_ip in /etc/mongodb.conf to bind MongoDB to an interface that has an IP address reachable from st2-multi-node-1 and st2-multi-node-2 (see the sketch below), then restart MongoDB:
$ sudo service mongodb restart
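For illustration, the relevant line in /etc/mongodb.conf might look like this, where the address is a placeholder for an IP on this box reachable from the blueprint boxes:

bind_ip = 10.0.0.10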
Add stable StackStorm repos:
$ curl -s https://packagecloud.io/install/repositories/StackStorm/stable/script.deb.sh | sudo bash
Set up st2web and SSL termination. Follow install webui and setup ssl; you will need to stop after removing the default Nginx config file. A sample configuration for Nginx as the load balancer for the controller box is provided below. With this configuration Nginx load balances all requests between the two blueprint boxes st2-multi-node-1 and st2-multi-node-2, including requests to st2api and st2auth. Nginx also serves as the webserver for st2web.
#
# nginx configuration to expose st2 webui, redirect HTTP->HTTPS,
# provide SSL termination, and reverse-proxy st2api and st2auth API endpoint.
# To enable:
#   cp ${LOCATION}/st2.conf /etc/nginx/sites-available
#   ln -s /etc/nginx/sites-available/st2.conf /etc/nginx/sites-enabled/st2.conf
# see https://docs.stackstorm.com/install.html for details

upstream st2 {
    server st2-multi-node-1:443;
    server st2-multi-node-2:443;
}

server {
    listen *:80 default_server;

    add_header Front-End-Https on;
    add_header X-Content-Type-Options nosniff;

    if ($ssl_protocol = "") {
        return 308 https://$host$request_uri;
    }

    index index.html;

    access_log /var/log/nginx/st2webui.access.log combined;
    error_log /var/log/nginx/st2webui.error.log;
}

server {
    listen *:443 ssl;

    ssl_certificate /etc/ssl/st2/st2.crt;
    ssl_certificate_key /etc/ssl/st2/st2.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES128-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA128:DHE-RSA-AES128-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA128:ECDHE-RSA-AES128-SHA384:ECDHE-RSA-AES128-SHA128:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES128-GCM-SHA384:AES128-GCM-SHA128:AES128-SHA128:AES128-SHA128:AES128-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.html;

    access_log /var/log/nginx/ssl-st2webui.access.log combined;
    error_log /var/log/nginx/ssl-st2webui.error.log;

    add_header Front-End-Https on;
    add_header X-Content-Type-Options nosniff;

    location @apiError {
        add_header Content-Type application/json always;
        return 503 '{ "faultstring": "Nginx is unable to reach st2api. Make sure service is running." }';
    }

    location /api/ {
        error_page 502 = @apiError;

        proxy_pass https://st2/api/;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    location @streamError {
        add_header Content-Type text/event-stream;
        return 200 "retry: 1000\n\n";
    }

    # For backward compatibility reasons, rewrite requests from "/api/stream"
    # to "/stream/v1/stream" and "/api/v1/stream" to "/stream/v1/stream"
    rewrite ^/api/stream/?$ /stream/v1/stream break;
    rewrite ^/api/(v\d)/stream/?$ /stream/$1/stream break;

    location /stream/ {
        error_page 502 = @streamError;

        proxy_pass https://st2/stream/;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

        # Disable buffering and chunked encoding.
        # In the stream case we want to receive the whole payload at once, we don't
        # want multiple chunks.
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    location @authError {
        add_header Content-Type application/json always;
        return 503 '{ "faultstring": "Nginx is unable to reach st2auth. Make sure service is running." }';
    }

    location /auth/ {
        error_page 502 = @authError;

        proxy_pass https://st2/auth/;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Authorization;
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    location / {
        root /opt/stackstorm/static/webui/;
        index index.html;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
    }
}
Create the st2 logs directory and the st2 user:
mkdir -p /var/log/st2
useradd st2
Install st2chatops following setup chatops.
Blueprint Box
This box is a repeatable StackStorm image that is essentially the single-box reference deployment with a few changes. The aim is to deploy as many of these boxes as needed for the desired HA objectives and horizontal scaling. The StackStorm processes outlined above can be turned on/off individually, therefore each box can also be made to offer different services.
Add stable StackStorm repos:
$ curl -s https://packagecloud.io/install/repositories/StackStorm/stable/script.deb.sh | sudo bash
Install all StackStorm components:
$ sudo apt-get install -y st2
Install Nginx:
$ sudo apt-get install -y nginx
Replace /etc/st2/st2.conf with the sample st2.conf provided below. This config points at the controller node for the database, messaging, and coordination settings.
# System-wide configuration

[api]
# Host and port to bind the API server.
host = 127.0.0.1
port = 9101
logging = /etc/st2/logging.api.conf
mask_secrets = True
# allow_origin is required for handling CORS in st2 web UI.
# allow_origin = http://myhost1.example.com:3000,http://myhost2.example.com:3000

[stream]
logging = /etc/st2/logging.stream.conf

[sensorcontainer]
logging = /etc/st2/logging.sensorcontainer.conf

[rulesengine]
logging = /etc/st2/logging.rulesengine.conf

[actionrunner]
logging = /etc/st2/logging.actionrunner.conf
# The line below should be commented out and 'always-copy' removed when using EL7 or EL8, as it causes virtualenv issues on pack install
virtualenv_opts = --always-copy

[notifier]
logging = /etc/st2/logging.notifier.conf

[garbagecollector]
logging = /etc/st2/logging.garbagecollector.conf

[workflow_engine]
logging = /etc/st2/logging.workflowengine.conf

[auth]
host = 127.0.0.1
port = 9100
use_ssl = False
debug = False
enable = True
logging = /etc/st2/logging.auth.conf
mode = standalone
# Note: Settings below are only used in "standalone" mode
backend = flat_file
backend_kwargs = {"file_path": "/etc/st2/htpasswd"}
# Base URL to the API endpoint excluding the version (e.g. http://myhost.net:9101/)
api_url =

[system]
base_path = /opt/stackstorm

[syslog]
host = st2-multi-node-controller
port = 514
facility = local7
protocol = udp

[log]
excludes = requests,paramiko
redirect_stderr = False
mask_secrets = True

[system_user]
user = stanley
ssh_key_file = /home/stanley/.ssh/stanley_rsa

[messaging]
url = amqp://guest:guest@st2-multi-node-controller:5672/

[ssh_runner]
remote_dir = /tmp
use_paramiko_ssh_runner = True

[database]
host = st2-multi-node-controller

[coordination]
# url = kazoo://st2-multi-node-controller
url = redis://st2-multi-node-controller
Generate a certificate:
$ sudo mkdir -p /etc/ssl/st2
$ sudo openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/st2/st2.key -out /etc/ssl/st2/st2.crt \
  -days XXX -nodes -subj "/C=US/ST=California/L=Palo Alto/O=StackStorm/OU=Information \
  Technology/CN=$(hostname)"
- Configure users & authentication as per this documentation. Make sure that the user configuration on all boxes running st2auth is identical. This ensures consistent authentication across the entire StackStorm install, since a request to authenticate a user can be forwarded by the load balancer to any of the st2auth processes. One way to keep the flat-file backend in sync is sketched below.
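For the flat_file backend used in the sample st2.conf, one hedged way to keep identities identical is to maintain a single htpasswd file and copy it to every box; the username, password and target host here are placeholders:

$ sudo htpasswd -b /etc/st2/htpasswd st2admin 'Ch@ngeMe'
$ sudo scp /etc/st2/htpasswd st2-multi-node-2:/etc/st2/htpasswd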
- Use the sample Nginx config provided below for the blueprint boxes. In this config Nginx acts as the SSL termination endpoint for all the REST endpoints exposed by st2api and st2auth:
#
# nginx configuration to expose st2 webui, redirect HTTP->HTTPS,
# provide SSL termination, and reverse-proxy st2api and st2auth API endpoint.
# To enable:
#   cp ${LOCATION}/st2.conf /etc/nginx/sites-available
#   ln -s /etc/nginx/sites-available/st2.conf /etc/nginx/sites-enabled/st2.conf
# see https://docs.stackstorm.com/install.html for details

server {
    listen *:80 default_server;

    add_header Front-End-Https on;
    add_header X-Content-Type-Options nosniff;

    if ($ssl_protocol = "") {
        return 308 https://$host$request_uri;
    }

    index index.html;

    access_log /var/log/nginx/st2webui.access.log combined;
    error_log /var/log/nginx/st2webui.error.log;
}

server {
    listen *:443 ssl;

    ssl_certificate /etc/ssl/st2/st2.crt;
    ssl_certificate_key /etc/ssl/st2/st2.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES128-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA128:DHE-RSA-AES128-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA128:ECDHE-RSA-AES128-SHA384:ECDHE-RSA-AES128-SHA128:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA128:DHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES128-GCM-SHA384:AES128-GCM-SHA128:AES128-SHA128:AES128-SHA128:AES128-SHA:AES128-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.html;

    access_log /var/log/nginx/ssl-st2webui.access.log combined;
    error_log /var/log/nginx/ssl-st2webui.error.log;

    add_header Front-End-Https on;
    add_header X-Content-Type-Options nosniff;

    location /api/ {
        rewrite ^/api/(.*)  /$1  break;

        proxy_pass http://127.0.0.1:9101/;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    # For backward compatibility reasons, rewrite requests from "/api/stream"
    # to "/stream/v1/stream" and "/api/v1/stream" to "/stream/v1/stream"
    rewrite ^/api/stream/?$ /stream/v1/stream break;
    rewrite ^/api/(v\d)/stream/?$ /stream/$1/stream break;

    location /stream/ {
        rewrite ^/stream/(.*)  /$1  break;

        proxy_pass http://127.0.0.1:9102/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

        # Disable buffering and chunked encoding.
        # In the stream case we want to receive the whole payload at once, we don't
        # want multiple chunks.
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }

    location /auth/ {
        rewrite ^/auth/(.*)  /$1  break;

        proxy_pass http://127.0.0.1:9100/;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Authorization;
        proxy_set_header Connection '';
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }
}
- To use Timer triggers, enable them on only one server. On all other servers, disable the timer by making this change in /etc/st2/st2.conf:

[timer]
enable = False
- See Partitioning Sensors to decide how to partition sensors to suit your requirements.
All content should be synced by choosing a suitable strategy as outlined above. This is crucial to obtain predictable outcomes.