Load Balancing Node.js Application Servers with NGINX Open Source and NGINX Plus
This deployment guide explains how to use NGINX Open Source and F5 NGINX Plus to load balance HTTP and HTTPS traffic across a pool of Node.js application servers. The detailed instructions in this guide apply to both cloud‑based and on‑premises deployments of Node.js.
NGINX Open Source is an open source web server and reverse proxy that has grown in popularity in recent years because of its scalability, outstanding performance, and small footprint. NGINX Open Source was first created to solve the C10K problem (serving 10,000 simultaneous connections on a single web server). NGINX Open Source's features and performance have made it a staple of high-performance sites – it's the #1 web server at the 100,000 busiest websites in the world.
NGINX Plus is the commercially supported version of NGINX Open Source. NGINX Plus is a complete application delivery platform, extending the power of NGINX Open Source with a host of enterprise‑ready capabilities that enhance a Node.js deployment and are instrumental to building web applications at scale:
- Full‑featured HTTP, TCP, and UDP load balancing
- Intelligent session persistence
- High‑performance reverse proxy
- Caching and offload of dynamic and static content
- Adaptive streaming to deliver audio and video to any device
- Application-aware health checks and high availability
- Advanced activity monitoring available via a dashboard or API
- Management and real‑time configuration changes with DevOps‑friendly tools
Node.js is a JavaScript runtime built on the V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. The package ecosystem for Node.js, npm, is the largest ecosystem of open source libraries in the world.
To download the Node.js software and get installation instructions, visit the Node.js website.
The information in this deployment guide applies equally to open source Node.js software and commercially supported Node.js frameworks.
- A Node.js application server installed and configured on a physical or virtual system.
- A Linux system to host NGINX Open Source or NGINX Plus. To avoid potential conflicts with other applications, we recommend you install NGINX Plus on a fresh physical or virtual system. For the list of Linux distributions supported by NGINX Plus, see NGINX Plus Technical Specifications.
- NGINX Open Source or NGINX Plus installed on the physical or virtual system. Some features are available only with NGINX Plus, including sophisticated session persistence, application health checks, live activity monitoring, and dynamic reconfiguration of upstream groups. For installation instructions for both products, see the NGINX Plus Admin Guide.
The instructions assume you have basic Linux system administration skills, including the following tasks; full instructions for them are not provided.
- Configuring and deploying a Node.js application
- Installing Linux software from vendor‑supplied packages
- Editing configuration files
- Copying files between a central administrative system and Linux servers
- Running basic commands to start and stop services
- Reading log files
- example.com is used as a sample domain name (in key names and configuration blocks). Replace it with your organization's name.
- Many NGINX Open Source and NGINX Plus configuration blocks in this guide list two sample Node.js application servers with IP addresses 192.168.33.11 and 192.168.33.12. Replace these addresses with the IP addresses of your Node.js servers. Include a line in the configuration block for each server if you have more or fewer than two.
- For readability reasons, some commands appear on multiple lines. If you want to copy and paste them into a terminal window, we recommend that you first copy them into a text editor, where you can substitute the object names that are appropriate for your deployment and remove any extraneous formatting characters that your browser might insert.
- We recommend that you do not copy text from the configuration snippets in this guide into your configuration files. For the recommended way to create configuration files, see Creating and Modifying Configuration Files.
If you plan to enable SSL/TLS encryption of traffic between NGINX Open Source or NGINX Plus and clients of your Node.js application, you need to configure a server certificate for NGINX Open Source or NGINX Plus.
- SSL/TLS support is enabled by default in all NGINX Plus packages and NGINX Open Source binaries provided by NGINX.
- If you are compiling NGINX Open Source from source, include the --with-http_ssl_module parameter to enable SSL/TLS support for HTTP traffic (the corresponding parameter for TCP/UDP is --with-stream_ssl_module, and for email is --with-mail_ssl_module, but this guide does not cover those protocol types).
- If using binaries from other providers, consult the provider documentation to determine if they support SSL/TLS.
There are several ways to obtain a server certificate, including the following. For your convenience, step-by-step instructions are provided for the second and third options.
- If you already have an SSL certificate for NGINX Open Source or NGINX Plus installed on another UNIX or Linux system (including systems running Apache HTTP Server), copy it to the /etc/nginx/ssl directory on the NGINX Open Source or NGINX Plus server.
- Generate a self-signed certificate as described in Generating a Self-Signed Certificate below. This is sufficient for testing scenarios, but clients of production deployments generally require a certificate signed by a certificate authority (CA).
- Request a new certificate from a CA or your organization's security group, as described in Generating a Certificate Request below.
For more details on SSL/TLS termination, see the NGINX Plus Admin Guide.
Generate a public‑private key pair and a self‑signed server certificate in PEM format that is based on them.
Log in as the root user on a machine that has the openssl software installed.

Generate the key pair in PEM format (the default). To encrypt the private key, include the -des3 parameter. (Other encryption algorithms are available, listed on the man page for the genrsa command.) You are prompted for the passphrase used as the basis for encryption.

```shell
root# openssl genrsa -des3 -out ~/private-key.pem 2048
Generating RSA private key ...
Enter pass phrase for private-key.pem:
```

Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.

```shell
root# cp ~/private-key.pem secure-dir/private-key.pem.backup
```

Generate the certificate. Include the -new and -x509 parameters to make a new self-signed certificate. Optionally include the -days parameter to change the key's validity lifetime from the default of 30 days (10950 days is about 30 years). Respond to the prompts with values appropriate for your testing deployment.

```shell
root# openssl req -new -x509 -key ~/private-key.pem -out ~/self-cert.pem \
    -days 10950
```

Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Open Source or NGINX Plus server.
Log in as the root user on a machine that has the openssl software installed.

Create a private key to be packaged in the certificate.

```shell
root# openssl genrsa -out ~/example.com.key 2048
```

Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.

```shell
root# cp ~/example.com.key <SECURE-DIR>/example.com.key.backup
```

Create a Certificate Signing Request (CSR) file.

```shell
root# openssl req -new -sha256 -key ~/example.com.key -out ~/example.com.csr
```

Request a certificate from a CA or your internal security group, providing the CSR file (example.com.csr). As a reminder, never share private keys (.key files) directly with third parties.

The certificate needs to be in PEM format rather than the Windows-compatible PFX format. If you request the certificate from a CA website yourself, choose NGINX or Apache (if available) when asked to select the server platform for which to generate the certificate.

Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Plus server.
To reduce errors, this guide has you copy directives from files provided by NGINX into your configuration files, instead of using a text editor to type in the directives yourself. Then you go through the sections in this guide (starting with Configuring Virtual Servers for HTTP and HTTPS Traffic) to learn how to modify the directives as required for your deployment.
As provided, there is one file for basic load balancing (with NGINX Open Source or NGINX Plus) and one file for enhanced load balancing (with NGINX Plus). If you are installing and configuring NGINX Open Source or NGINX Plus on a fresh Linux system and using it only to load balance Node.js traffic, you can use the provided file as your main configuration file, which by convention is called /etc/nginx/nginx.conf.
We recommend, however, that instead of a single configuration file you use the scheme that is set up automatically when you install an NGINX Plus package, especially if you already have an existing NGINX Open Source or NGINX Plus deployment or plan to expand your use of NGINX Open Source or NGINX Plus to other purposes in future. In the conventional scheme, the main configuration file is still called /etc/nginx/nginx.conf, but instead of including all directives in it, you create separate configuration files for different functions and store the files in the /etc/nginx/conf.d directory. You then use the include directive in the appropriate contexts of the main file to read in the contents of the function-specific files.
If you have just installed NGINX Open Source or NGINX Plus, there is a default configuration file, default.conf, in the /etc/nginx/conf.d directory. The configuration defined there is not appropriate for the deployment described in this guide, but you want to leave a file with that name in the directory so it does not get replaced with a new version the next time you upgrade NGINX Open Source or NGINX Plus. To save a copy for future reference, you can copy it to a new name without the .conf extension.
To download the complete configuration file for basic load balancing:
```shell
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/nodejs-basic.conf > nodejs-basic.conf
```
To download the complete configuration file for enhanced load balancing:
```shell
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/nodejs-enhanced.conf > nodejs-enhanced.conf
```
(You can also access the URL in a browser and copy the text into the indicated file.)
Note: If you download both files, place only one of them in the /etc/nginx/conf.d directory.
To set up the conventional configuration scheme, add an http configuration block in the main nginx.conf file, if it does not already exist. (The standard placement is below any global directives.) Add this include directive with the appropriate filename:

```nginx
http {
    include conf.d/nodejs-(basic|enhanced).conf;
}
```
Directive documentation: include
You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files function-http.conf, this is an appropriate include directive:

```nginx
http {
    include conf.d/*-http.conf;
}
```
For reference purposes, the full configuration files are also provided in this document:
We recommend, however, that you do not copy text directly from this document. It does not necessarily use the same mechanisms for positioning text (such as line breaks and white space) as text editors do. In text copied into an editor, lines might run together and indenting of child statements in configuration blocks might be missing or inconsistent. The absence of formatting does not present a problem for NGINX Open Source or NGINX Plus, because (like many compilers) they ignore white space during parsing, relying solely on semicolons and curly braces as delimiters. The absence of white space does, however, make it more difficult for humans to interpret the configuration and modify it without making mistakes.
We recommend that each time you complete a set of updates to the configuration, you run the nginx -t command to test the configuration file for syntactic validity.

```shell
root# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
To tell NGINX Open Source or NGINX Plus to start using the new configuration, run one of the following commands:

```shell
root# nginx -s reload
```

or

```shell
root# service nginx reload
```
This section explains how to set up NGINX Open Source or NGINX Plus as a load balancer in front of two Node.js servers. The instructions in the first two sections are mandatory:

- Configuring Virtual Servers for HTTP and HTTPS Traffic
- Configuring Basic Load Balancing
The instructions in the remaining sections are optional, depending on the requirements of your application:
- Configuring Basic Session Persistence
- Configuring Proxy of WebSocket Traffic
- Configuring Content Caching
- Configuring HTTP/2 Support
The complete configuration file appears in Full Configuration for Basic Load Balancing.

If you are using NGINX Plus, you can configure additional enhanced features after you complete the configuration of basic load balancing. See Configuring Enhanced Load Balancing with NGINX Plus.
These directives define virtual servers for HTTP and HTTPS traffic in separate server blocks in the top-level http configuration block. All HTTP requests are redirected to the HTTPS server.
Configure a server block that listens for requests for "https://example.com" received on port 443.

The ssl_certificate and ssl_certificate_key directives are required; substitute the names of the certificate and private key you chose in Configuring an SSL/TLS Certificate for Client Traffic. The other directives are optional but recommended.

```nginx
# In the 'http' block
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/<certificate-name>;
    ssl_certificate_key /etc/nginx/ssl/<private-key>;
    ssl_session_cache   shared:SSL:1m;
    ssl_prefer_server_ciphers on;
}
```

Directive documentation: listen, server, server_name, ssl_certificate, ssl_certificate_key, ssl_prefer_server_ciphers, ssl_session_cache
Configure a server block that permanently redirects requests received on port 80 for "http://example.com" to the HTTPS server, which we defined in the previous step.

If you're not using SSL/TLS for client connections, omit the location block. When instructed in the remainder of this guide to add directives to the server block for HTTPS traffic, add them to this block instead.

```nginx
# In the 'http' block
server {
    listen 80;
    server_name example.com;

    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Connection "";

    # Redirect all HTTP requests to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}
```

Directive documentation: location, proxy_http_version, proxy_set_header, return
For more information on configuring SSL/TLS, see the NGINX Plus Admin Guide and the reference documentation for the HTTP SSL/TLS module.
To configure load balancing, you first create a named "upstream group," which lists the backend servers. You then set up NGINX Open Source or NGINX Plus as a reverse proxy and load balancer by referring to the upstream group in one or more proxy_pass directives.
Configure an upstream group called nodejs with two Node.js application servers listening on port 8080, one on IP address 192.168.33.11 and the other on 192.168.33.12.

```nginx
# In the 'http' block
upstream nodejs {
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
}
```
In the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, include two location blocks:

- The first one matches HTTPS requests in which the path starts with /webapp/, and proxies them to the nodejs upstream group we created in the previous step.
- The second one funnels all traffic to the first location block, by doing a temporary redirect of all requests for "http://example.com/".

```nginx
# In the 'server' block for HTTPS traffic
location /webapp/ {
    proxy_pass http://nodejs;
}

location = / {
    return 302 /webapp/;
}
```

Directive documentation: location, proxy_pass, return
Note that these blocks handle only standard HTTPS traffic. If you want to load balance WebSocket traffic, you need to add another location block as described in Configuring Proxy of WebSocket Traffic.
By default, NGINX Open Source and NGINX Plus use the Round Robin algorithm for load balancing among servers. The load balancer runs through the list of servers in the upstream group in order, forwarding each new request to the next server. In our example, the first request goes to 192.168.33.11, the second to 192.168.33.12, the third to 192.168.33.11, and so on. For information about the other available load-balancing algorithms, see the NGINX Plus Admin Guide.
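The rotation described above can be sketched in a few lines of JavaScript (illustrative only; NGINX's implementation also honors per-server weights and availability):

```javascript
// Round-robin sketch over the sample upstream group: each call returns
// the next server in order, wrapping around at the end of the list.
const servers = ['192.168.33.11:8080', '192.168.33.12:8080'];
let next = 0;

function nextServer() {
  const server = servers[next];
  next = (next + 1) % servers.length;
  return server;
}
```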
In NGINX Plus, you can also set up dynamic reconfiguration of an upstream group when the set of backend servers changes, using the Domain Name System (DNS) or an API; see Enabling Dynamic Reconfiguration of Upstream Groups.

For more information on proxying and load balancing, see NGINX Reverse Proxy and HTTP Load Balancing in the NGINX Plus Admin Guide, and the reference documentation for the HTTP Proxy and Upstream modules.
If your application requires basic session persistence (also known as sticky sessions), you can implement it in NGINX Open Source with the IP Hash load-balancing algorithm. (NGINX Plus offers a more sophisticated form of session persistence, as described in Configuring Advanced Session Persistence.)
With the IP Hash algorithm, for each request NGINX calculates a hash based on the client’s IP address, and associates the hash with one of the upstream servers. It sends all requests with that hash to that server, thus establishing session persistence.
If the client has an IPv6 address, the hash is based on the entire address. If it has an IPv4 address, the hash is based on just the first three octets of the address. This is designed to optimize for ISP clients that are assigned IP addresses dynamically from a subnetwork (/24) range. However, it is not effective in these cases:
- The majority of the traffic to your site is coming from one forward proxy or from clients on the same /24 network, because in that case IP Hash maps all clients to the same server.
- A client’s IP address can change during the session, for example when a mobile client switches from a WiFi network to a cellular one.
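The three-octet behavior described above can be illustrated with a short sketch (the hash function below is invented for demonstration; NGINX uses its own internal hash):

```javascript
// Sketch of IP Hash bucketing for IPv4: only the first three octets feed
// the hash, so all clients in the same /24 subnet map to the same server.
// The hash function here is a stand-in, not NGINX's actual algorithm.
const servers = ['192.168.33.11:8080', '192.168.33.12:8080'];

function pickServer(clientIp) {
  const key = clientIp.split('.').slice(0, 3).join('.');
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return servers[h % servers.length];
}
```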
To configure session persistence in NGINX, add the ip_hash directive to the upstream block created in Configuring Basic Load Balancing:

```nginx
# In the 'http' block
upstream nodejs {
    ip_hash;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
}
```

Directive documentation: ip_hash
You can also use the Hash load-balancing method for session persistence, with the hash based on any combination of text and NGINX variables you specify. For example, you can hash on full (four-octet) client IP addresses with the following configuration.
```nginx
# In the 'http' block
upstream nodejs {
    hash $remote_addr;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
}
```

Directive documentation: hash
The WebSocket protocol (defined in RFC 6455) enables simultaneous two-way communication over a single TCP connection between clients and servers, where each side can send data independently from the other. To initiate the WebSocket connection, the client sends a handshake request to the server, upgrading the request from standard HTTP to WebSocket. The connection is established if the handshake request passes validation, and the server accepts the request. When a WebSocket connection is created, a browser client can send data to a server while simultaneously receiving data from that server.
The Node.js app server supports WebSocket out of the box, so no additional Node.js configuration is required. If you want to use NGINX Open Source or NGINX Plus to proxy WebSocket traffic to your Node.js application servers, add the directives discussed in this section.
NGINX Open Source and NGINX Plus by default use HTTP/1.0 for upstream connections. To be proxied correctly, WebSocket connections require HTTP/1.1 along with some other configuration directives that set HTTP headers:
```nginx
# In the 'http' block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the 'server' block for HTTPS traffic
location /wstunnel/ {
    proxy_pass http://nodejs;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```

Directive documentation: location, map, proxy_http_version, proxy_pass, proxy_set_header
The first proxy_set_header directive is needed because the Upgrade request header is hop-by-hop; that is, the HTTP specification explicitly forbids proxies from forwarding it. This directive overrides the prohibition.

The second proxy_set_header directive sets the Connection header to a value that depends on the test in the map block: if the request has an Upgrade header, the Connection header is set to upgrade; otherwise, it is set to close.
For more information about proxying WebSocket traffic, see WebSocket proxying and NGINX as a WebSocket Proxy.
Caching responses from your Node.js app servers can both improve response time to clients and reduce load on the servers, because eligible responses are served immediately from the cache instead of being generated again on the server. There are a variety of useful directives that can be used to fine-tune caching behavior; for a detailed discussion, see A Guide to Caching with NGINX.
To enable basic caching of responses from the Node.js app server, add the following configuration:
Include the proxy_cache_path directive to create the local disk directory /tmp/NGINX_cache/ for use as a cache. The keys_zone parameter allocates 10 megabytes (MB) of shared memory for a zone called backcache, which is used to store cache keys and metadata such as usage timers. A 1-MB zone can store data for about 8,000 keys.

```nginx
# In the 'http' block
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
```

Directive documentation: proxy_cache_path
In the location block that matches HTTPS requests in which the path starts with /webapp/, include the proxy_cache directive to reference the cache created in the previous step.

```nginx
# In the 'server' block for HTTPS traffic
location /webapp/ {
    proxy_pass http://nodejs;
    proxy_cache backcache;
}
```

Directive documentation: proxy_cache, proxy_pass
For more complete information on caching, refer to the NGINX Plus Admin Guide and the reference documentation for the HTTP Proxy module.
HTTP/2 is fully supported in NGINX 1.9.5 and later and in NGINX Plus R7 and later. As always, we recommend you run the latest version of software to take advantage of improvements and bug fixes.
If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the HTTP/2 module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the NGINX 1.8.x branch.
If using NGINX Plus, in R11 and later the nginx-plus package supports HTTP/2 by default, and the nginx-plus-extras package available in previous releases is deprecated in favor of separate dynamic modules authored by NGINX.

In NGINX Plus R8 through R10, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default.

In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY.

If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
To enable HTTP/2 support, add the http2 directive in the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, so that it looks like this:

```nginx
# In the 'server' block for HTTPS traffic
listen 443 ssl;
http2 on;
```

Directive documentation: http2
To verify that HTTP/2 translation is working, you can use the "HTTP/2 and SPDY indicator" plug-in available for Google Chrome and Firefox.
The full configuration for basic load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX website.

We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration – add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/nodejs-basic.conf.
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream nodejs {
    # Use IP Hash for session persistence
    ip_hash;

    # List of Node.js application servers
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
}

server {
    listen 80;
    server_name example.com;

    # Redirect all HTTP requests to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    http2 on;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/certificate-name;
    ssl_certificate_key /etc/nginx/ssl/private-key;
    ssl_session_cache   shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    # Return a temporary redirect to '/webapp/' when user requests '/'
    location = / {
        return 302 /webapp/;
    }

    # Load balance requests for '/webapp/' across Node.js app servers
    location /webapp/ {
        proxy_pass http://nodejs;
        proxy_cache backcache;
    }

    # WebSocket configuration
    location /wstunnel/ {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```
This section explains how to configure enhanced load balancing with some of the extended features in NGINX Plus.
Note: Before setting up the enhanced features described in this section, you must complete the instructions for basic load balancing in these two sections:

- Configuring Virtual Servers for HTTP and HTTPS Traffic
- Configuring Basic Load Balancing
Except as noted, all optional basic features (described in the other subsections of Configuring Basic Load Balancing in NGINX Open Source and NGINX Plus) can be combined with the enhanced features described here.
- Configuring Advanced Session Persistence
- Configuring Application Health Checks
- Enabling Live Activity Monitoring
- Enabling Dynamic Reconfiguration of Upstream Groups
The complete configuration file appears in Full Configuration for Enhanced Load Balancing.
NGINX Plus has more sophisticated session persistence methods than open source NGINX, implemented in three variants of the sticky directive. In the following example, we add the sticky cookie directive to the upstream group we created in Configuring Basic Load Balancing.
Remove or comment out the ip_hash directive, leaving only the server directives:

```nginx
# In the 'http' block
upstream nodejs {
    #ip_hash;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
}
```
Configure session persistence that uses the sticky cookie directive.

```nginx
# In the 'http' block
upstream nodejs {
    zone nodejs 64k;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}
```

Directive documentation: sticky cookie, zone
With this method, NGINX Plus adds an HTTP session cookie to the first response to a given client from the upstream group, identifying which server generated the response (in an encoded fashion). Subsequent requests from the client include the cookie value and NGINX Plus uses it to route the request to the same upstream server, thereby achieving session persistence.
The zone directive creates a shared memory zone for storing information about sessions. The amount of memory allocated – here, 64 KB – determines how many sessions can be stored at a time (the number varies by platform). The name assigned to the zone – here, nodejs – must be unique for each sticky directive.
The first parameter to sticky cookie (in the example, srv_id) sets the name of the cookie to be set or inspected. The expires parameter tells the browser how long the cookie is valid, here one hour. The domain parameter defines the domain and the path parameter defines the URL path for which the cookie is set.
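The resulting round trip can be sketched as follows (illustrative only; NGINX Plus encodes the server identity opaquely in the cookie value, and the plain identifiers here are invented):

```javascript
// Sketch of the sticky-cookie flow: the first response sets srv_id to a
// server identity; later requests carrying that cookie are routed back to
// the same server. The 'a'/'b' identifiers are invented for clarity.
const servers = { a: '192.168.33.11:8080', b: '192.168.33.12:8080' };

function route(cookies) {
  if (cookies.srv_id && servers[cookies.srv_id]) {
    // Returning client: honor the cookie, no new Set-Cookie needed
    return { server: servers[cookies.srv_id], setCookie: null };
  }
  // First request: pick a server (e.g. by round robin) and set the cookie
  const id = 'a';
  return {
    server: servers[id],
    setCookie: `srv_id=${id}; Domain=.example.com; Path=/; Max-Age=3600`,
  };
}
```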
For more information about session persistence, see the NGINX Plus Admin Guide.
Health checks are out‑of‑band HTTP requests sent to a server at fixed intervals. They are used to determine whether a server is responsive and functioning correctly, without requiring an actual request from a client.
Because the health_check directive is placed in the location block, we can enable different health checks for each application.
In the location block that matches HTTPS requests in which the path starts with /webapp/ (created in Configuring Basic Load Balancing), add the health_check directive. Here we configure NGINX Plus to send an out‑of‑band request for the top‑level URI / (slash) to each of the servers in the nodejs upstream group every 5 seconds (the default URI and frequency). If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes a subsequent health check. We include the match parameter so we can define a nondefault set of health‑check tests (we define them in the next step).

```nginx
# In the 'server' block for HTTPS traffic
location /webapp/ {
    proxy_pass http://nodejs;
    proxy_cache backcache;
    health_check match=health_check;
}
```

Directive documentation: health_check
In the http context, include a match directive to define the tests that a server must pass to be considered functional. In this example, it must return status code 200, the Content-Type response header must contain text/html, and the response body must match the indicated character string.

```nginx
# In the 'http' block
match health_check {
    status 200;
    header Content-Type ~ text/html;
    body ~ "Hello world";
}
```

Directive documentation: match
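The three tests in the match block can be approximated like this. This is a rough model for illustration – real header matching in NGINX is more nuanced, and the status, header, and body values below are just sample responses:

```python
import re

def passes_health_check(status, headers, body):
    """Approximate the 'match health_check' block: the response must
    have status 200, a Content-Type containing text/html, and a body
    matching the "Hello world" pattern."""
    return (
        status == 200
        and re.search(r"text/html", headers.get("Content-Type", "")) is not None
        and re.search(r"Hello world", body) is not None
    )

# A healthy response passes all three tests
ok = passes_health_check(200, {"Content-Type": "text/html; charset=utf-8"},
                         "<h1>Hello world</h1>")
# A 502 fails the status test, so the server would be marked down
bad = passes_health_check(502, {"Content-Type": "text/html"}, "Hello world")
assert ok and not bad
```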
In the nodejs upstream group, add the following zone directive as necessary (if you configured advanced session persistence, you already added it). It creates a shared memory zone that stores the group’s configuration and run‑time state, which are accessible to all worker processes.

```nginx
# In the 'http' block
upstream nodejs {
    zone nodejs 64k;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
    # ...
}
```

Directive documentation: zone
NGINX Plus also has a slow‑start feature that is a useful auxiliary to health checks. When a failed server recovers, or a new server is added to the upstream group, NGINX Plus slowly ramps up the traffic to it over a defined period of time. This gives the server time to “warm up” without being overwhelmed by more connections than it can handle as it starts up. For more information, see the NGINX Plus Admin Guide.
For example, to set a slow‑start period of 30 seconds for your Node.js application servers, add the slow_start parameter to their server directives:

```nginx
# In the 'upstream' block
server 192.168.33.11:8080 slow_start=30s;
server 192.168.33.12:8080 slow_start=30s;
```

Parameter documentation: slow_start
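Conceptually, slow start scales a recovered server’s share of traffic up over the configured window. A rough linear model, purely for illustration – this guide does not document the exact ramp NGINX Plus uses:

```python
def effective_weight(seconds_since_up, slow_start=30.0, full_weight=1):
    """Fraction of a server's nominal weight applied during slow start.

    Ramps linearly from 0 at the moment the server comes back up to its
    full weight once slow_start seconds have elapsed (a simplified
    illustration, not the exact NGINX Plus curve).
    """
    if seconds_since_up >= slow_start:
        return float(full_weight)
    return full_weight * seconds_since_up / slow_start

# Halfway through a 30-second slow-start window, the server gets
# roughly half its normal share of new connections
assert effective_weight(15) == 0.5
assert effective_weight(60) == 1.0
```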
For information about customizing health checks, see the NGINX Plus Admin Guide.
NGINX Plus includes a live activity monitoring interface that provides key load and performance metrics in real time, including TCP metrics in NGINX Plus R6 and later. Statistics are reported through a RESTful JSON interface, making it very easy to feed the data to a custom or third‑party monitoring tool. There is also a built‑in dashboard. Follow these instructions to deploy it.

For more information about live activity monitoring, see the NGINX Plus Admin Guide.
The quickest way to configure the module and the built‑in NGINX Plus dashboard is to download the sample configuration file from the NGINX website and modify it as necessary. For more complete instructions, see Live Activity Monitoring of NGINX Plus in 3 Simple Steps.
Download the status.conf file to the NGINX Plus server:

```shell
# cd /etc/nginx/conf.d
# curl https://www.nginx.com/resource/conf/status.conf > status.conf
```
Customize the file for your deployment as specified by comments in the file. In particular, the default settings in the file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods:
IP address‑based access control lists (ACLs). In the sample configuration file, uncomment the allow and deny directives, and substitute the address of your administrative network for 10.0.0.0/8. Only users on the specified network can access the status page.

```nginx
allow 10.0.0.0/8;
deny all;
```

Directive documentation: allow and deny
HTTP Basic authentication. In the sample configuration file, uncomment the auth_basic and auth_basic_user_file directives and add user entries to the /etc/nginx/users file (for example, by using an htpasswd generator). If you have an Apache installation, another option is to reuse an existing htpasswd file.

```nginx
auth_basic on;
auth_basic_user_file /etc/nginx/users;
```

Directive documentation: auth_basic, auth_basic_user_file
Client certificates, which are part of a complete configuration of SSL/TLS. For more information, see the NGINX Plus Admin Guide and the documentation for the HTTP SSL/TLS module.
Firewall. Configure your firewall to disallow outside access to the port for the dashboard (8080 in the sample configuration file).
In the nodejs upstream group, include the zone directive as necessary (if you configured advanced session persistence or application health checks, you already added it). It creates a shared memory zone that stores the group’s configuration and run‑time state, which are accessible to all worker processes.

```nginx
# In the 'http' block
upstream nodejs {
    zone nodejs 64k;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
    # ...
}
```

Directive documentation: zone
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add the status_zone directive:

```nginx
# In the 'server' block for HTTPS traffic
status_zone nodejs_server;
```

Directive documentation: status_zone
When you reload the NGINX Plus configuration file, for example by running the nginx -s reload command, the NGINX Plus dashboard is available immediately at http://nginx-plus-server-address:8080.
With NGINX Plus, you can reconfigure load‑balanced server groups (both HTTP and TCP/UDP) dynamically using either DNS or the NGINX Plus API introduced in NGINX Plus R13. See the NGINX Plus Admin Guide for a more detailed discussion of the DNS and API methods.
To enable dynamic reconfiguration of your upstream group of Node.js app servers using the NGINX Plus API, you need to grant secured access to it. You can use the API to add or remove servers, dynamically alter their weights, and set their status as primary, backup, or down.
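Upstream servers are managed through the API’s /http/upstreams/&lt;name&gt;/servers endpoints. A small helper for building those URLs – the base address, port 8080, and API version 9 are assumptions for this sketch, so adjust them for your deployment:

```python
def upstream_servers_url(upstream, server_id=None,
                         base="http://127.0.0.1:8080", version=9):
    """Build the NGINX Plus API URL for an upstream's server list
    (POST a JSON body such as {"server": "192.168.33.13:8080"} to add
    a server) or for one server by numeric id (PATCH to change its
    weight or mark it down, DELETE to remove it)."""
    url = f"{base}/api/{version}/http/upstreams/{upstream}/servers"
    return url if server_id is None else f"{url}/{server_id}"

# List or add servers in the 'nodejs' upstream group
servers_url = upstream_servers_url("nodejs")
# Modify or delete the server with id 2
one_server_url = upstream_servers_url("nodejs", 2)
```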
Include the zone directive in the nodejs upstream group to create a shared memory zone for storing the group’s configuration and run‑time state, which makes the information available to all worker processes. (If you configured advanced session persistence, application health checks, or live activity monitoring, you already made this change.)

```nginx
# In the 'http' block
upstream nodejs {
    zone nodejs 64k;
    server 192.168.33.11:8080;
    server 192.168.33.12:8080;
    # ...
}
```

Directive documentation: zone
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add a new location block for the NGINX Plus API, which enables dynamic reconfiguration among other features. It contains the api directive (api is also the conventional name for the location, as used here). (If you configured live activity monitoring by downloading the status.conf file, it already includes this block.)
We strongly recommend that you restrict access to the location so that only authorized administrators can access the NGINX Plus API. The allow and deny directives in the following example permit access only from the localhost address (127.0.0.1).

```nginx
# In the 'server' block for HTTPS traffic
location /api {
    api write=on;
    allow 127.0.0.1;
    deny all;
}
```

Directive documentation: allow and deny, api
In the http block, add the resolver directive pointing to your DNS server, and then add the resolve parameter to the server directive in the nodejs upstream block, which instructs NGINX Plus to periodically re‑resolve the domain name (example.com here) with DNS:

```nginx
# In the 'http' block
resolver <IP-address-of-DNS-server>;

upstream nodejs {
    zone nodejs 64k;
    server example.com resolve;
}
```

Directive and parameter documentation: resolve, resolver
NGINX Plus Release 9 and later can also use the additional information in DNS SRV records, such as the port number. Include the service parameter in the server directive, along with the resolve parameter:

```nginx
# In the 'http' block
resolver <IP-address-of-DNS-server>;

upstream nodejs {
    zone nodejs 64k;
    server example.com service=http resolve;
}
```

Parameter documentation: service
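An SRV record carries a priority, weight, port, and target, and it is the port and target that let NGINX Plus build complete upstream addresses. A sketch of that extraction from pre‑parsed record tuples – the record values are invented for illustration, and real SRV selection also weighs records within the same priority:

```python
def addresses_from_srv(records):
    """Given SRV records as (priority, weight, port, target) tuples,
    return upstream addresses ordered lowest priority first (a
    simplified view of what 'service=http resolve' enables)."""
    ordered = sorted(records, key=lambda r: r[0])
    return [f"{target}:{port}" for _prio, _weight, port, target in ordered]

# Hypothetical records for _http._tcp.example.com
records = [(10, 5, 8080, "node1.example.com"),
           (5, 5, 8081, "node2.example.com")]
# The priority-5 record comes first
assert addresses_from_srv(records)[0] == "node2.example.com:8081"
```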
The full configuration for enhanced load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX website.

We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration – namely, add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/nodejs-enhanced.conf.
Note: The api block in this configuration summary and the downloadable nodejs-enhanced.conf file is for the API method of dynamic reconfiguration. If you want to use the DNS method instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.)
```nginx
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

match nodejs_check {
    status 200;
    header Content-Type ~ "text/html";
    body ~ "Hello world";
}

upstream nodejs {
    # Health-monitored upstream groups must have a zone defined
    zone nodejs 64k;

    # List of Node.js application servers
    server 192.168.33.11:8080 slow_start=30s;
    server 192.168.33.12:8080 slow_start=30s;

    # Session persistence using sticky cookie
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}

server {
    listen 80;
    server_name example.com;

    # Redirect all HTTP requests to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    http2 on;
    server_name example.com;

    # Required for NGINX Plus to provide extended status information
    status_zone nodejs;

    ssl_certificate     /etc/nginx/ssl/certificate-name;
    ssl_certificate_key /etc/nginx/ssl/private-key;
    ssl_session_cache   shared:SSL:1m;
    ssl_prefer_server_ciphers on;

    # Return a 302 redirect to '/webapp/' when user requests '/'
    location = / {
        return 302 /webapp/;
    }

    # Load balance requests for '/webapp/' across Node.js app servers
    location /webapp/ {
        proxy_pass http://nodejs;
        proxy_cache backcache;

        # Set up active health checks
        health_check match=nodejs_check;
    }

    # WebSocket configuration
    location /wstunnel/ {
        proxy_pass https://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    # Secured access to the NGINX Plus API
    location /api {
        api write=on;
        allow 127.0.0.1; # Permit access from localhost
        deny all;        # Deny access from everywhere else
    }
}
```
NodeSource, developers of N|Solid, contributed to this deployment guide.
- Version 4 (May 2024) – Update about HTTP/2 support (the http2 directive)
- Version 3 (April 2018) – Updated information about the NGINX Plus API (NGINX Plus R13, NGINX Open Source 1.13.4)
- Version 2 (May 2017) – Update about HTTP/2 support (NGINX Plus R11 and later)
- Version 1 (December 2016) – Initial version (NGINX Plus R11, NGINX 1.11.5)