Ermlab/nginx-lua-proxy

Dynamic proxy with nginx. Dockerized Nginx+Lua dynamic proxy with upstreams stored in Redis.

The project helps you create dynamic routing to backends at runtime, based on URLs stored in Redis. If you want to dynamically create or serve a backend (web app, VM or other service) for your users, you can save a custom URL in Redis and it will work without restarting nginx.

The main goal is to build a counterpart of hipache (https://github.com/hipache/hipache) with nginx. The proxy looks up the host in the Redis database and, without reloading the proxy server, uses it as the upstream.

The data stored in redis is in the same format as in hipache. All code is built from source.
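As an illustration of that format, here is a minimal Python sketch of the lookup the proxy performs; a plain dict stands in for Redis, and the hostname and address are the illustrative values used later in this README:

```python
# Hipache-style layout: each "frontend:<host>" key holds a Redis list
# whose first element is a frontend identifier and whose remaining
# elements are upstream URLs. A dict of lists stands in for Redis here.
fake_redis = {
    "frontend:dynamic1.example.com": ["mywebsite", "http://192.168.0.50:80"],
}

def resolve_upstreams(host):
    """Return the upstream URLs for a Host header, or [] if unknown."""
    entries = fake_redis.get("frontend:" + host, [])
    # entries[0] is the frontend identifier; the rest are backends.
    return entries[1:]

print(resolve_upstreams("dynamic1.example.com"))
# → ['http://192.168.0.50:80']
```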

This project is based on these wonderful projects:

Usage

  1. Run the Redis database. It is essential to name it redis, because the lua-resty-redis connection object relies on hostname=redis:

    docker run -d --name redis redis

  2. Run the nginx-lua-proxy container and link it with redis:

    docker run -d --link redis:redis -p 9090:80 --name $CONTAINER_NAME ermlab/nginx-lua-proxy

  3. Add some hosts to redis:

    $ redis-cli rpush frontend:dynamic1.example.com mywebsite
    $ redis-cli rpush frontend:dynamic1.example.com http://192.168.0.50:80
    $ redis-cli rpush frontend:dynamic2.example.com mywebsite
    $ redis-cli rpush frontend:dynamic2.example.com http://192.168.0.100:80

  4. Check that everything is working:

    curl -H 'Host: dynamic1.example.com' http://localhost:9090

    or

    curl -H 'Host: dynamic2.example.com' http://localhost:9090

  5. If you want to test in a browser, set a DNS wildcard for the domain *.example.com pointing to your nginx proxy.
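When you have more than a couple of hosts, the redis-cli calls in step 3 can be generated from a mapping; a minimal sketch, using the illustrative hostnames and addresses from the steps above:

```python
# Generate the redis-cli commands of step 3 from a mapping of
# hostname -> (identifier, backend URL). Values are illustrative.
hosts = {
    "dynamic1.example.com": ("mywebsite", "http://192.168.0.50:80"),
    "dynamic2.example.com": ("mywebsite", "http://192.168.0.100:80"),
}

commands = []
for host, (identifier, backend) in hosts.items():
    key = "frontend:" + host
    commands.append(f"redis-cli rpush {key} {identifier}")
    commands.append(f"redis-cli rpush {key} {backend}")

for cmd in commands:
    print(cmd)
```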

Performance testing Hipache vs NGINX

Testing scenario:

  • at the front sits haproxy, which routes between two backends: hipache.ermlab.com and nginx.ermlab.com
  • haproxy redirects traffic from *.hipache.ermlab.com to the hipache proxy and *.nginx.ermlab.com to nginx-lua-proxy
  • haproxy, hipache, nginx-lua-proxy and redis are installed on the same server (the proxy server)
  • there is one simple static website, available at 192.168.0.10 (the web server)
  • redis contains two dynamic backends, both pointing to the same website (192.168.0.10)
    • host for hipache: id1.hipache.ermlab.com -> 192.168.0.10
    • host for nginx-lua-proxy: id1.nginx.ermlab.com -> 192.168.0.10
  • the software runs as docker containers: redis, hipache, nginx-lua-proxy
  • the proxy server and the web server each have 2 CPUs and 2 GB RAM

Testing with Apache Benchmark:

ab -n 20000 -c 200 http://id1.hipache.ermlab.com
ab -n 20000 -c 200 http://id1.nginx.ermlab.com

Results

Parameter              | Hipache                | Nginx-lua-proxy
---------------------- | ---------------------- | ----------------------
Concurrency Level:     | 200                    | 200
*Time taken for tests: | 57.446 seconds         | 14.951 seconds
Complete requests:     | 20000                  | 20000
Failed requests:       | 0                      | 0
Write errors:          | 0                      | 0
Total transferred:     | 6500000 bytes          | 6380000 bytes
HTML transferred:      | 2680000 bytes          | 2560000 bytes
**Requests per second: | 348.15 [#/sec] (mean)  | 1337.68 [#/sec] (mean)
*Time per request:     | 348.464 [ms] (mean)    | 149.513 [ms] (mean)
*Time per request:     | 2.872 [ms]             | 0.748 [ms]
**Transfer rate:       | 110.50 [Kbytes/sec]    | 416.65 [Kbytes/sec]

*Lower is better

**Higher is better

In this benchmark, our solution performs 3-4x better than Hipache.
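The 3-4x figure can be checked directly from the Results table:

```python
# Speedup ratios computed from the Results table above.
hipache_rps, nginx_rps = 348.15, 1337.68    # requests per second (mean)
hipache_time, nginx_time = 57.446, 14.951   # time taken for tests (seconds)

print(round(nginx_rps / hipache_rps, 2))    # throughput speedup  → 3.84
print(round(hipache_time / nginx_time, 2))  # wall-clock speedup  → 3.84
```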

Hipache - connection times

Connection Times (ms)
             min   mean  [+/-sd]  median    max
Connect:       0     20    362.2       1   7001
Processing:    4    456    653.2     398  15349
Waiting:       3    453    653.3     395  15349
Total:         5    477    744.6     400  15350

Nginx-lua-proxy - connection times

Connection Times (ms)
             min   mean  [+/-sd]  median    max
Connect:       0      1      0.5       1     16
Processing:   40    143    197.9     110   3297
Waiting:      40    143    197.9     110   3297
Total:        46    144    197.9     111   3298

Percentage of the requests served within a certain time (ms) - lower is better

Percent                | Hipache (ms) | Nginx-lua-proxy (ms)
---------------------- | ------------ | --------------------
50%                    | 400          | 111
66%                    | 484          | 120
75%                    | 546          | 126
80%                    | 584          | 129
90%                    | 687          | 138
95%                    | 794          | 152
98%                    | 897          | 198
99%                    | 1032         | 515
100% (longest request) | 15350        | 3298
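The percentile table shows that the advantage holds across the whole latency distribution, though it narrows at the 99th percentile; a quick check of the ratios at a few points:

```python
# Latency ratios (Hipache / nginx-lua-proxy) from the percentile table above.
percentiles = {
    "50%": (400, 111),      # ratio ≈ 3.6
    "99%": (1032, 515),     # ratio ≈ 2.0 (the gap narrows in the tail)
    "100%": (15350, 3298),  # ratio ≈ 4.65
}

for label, (hipache_ms, nginx_ms) in percentiles.items():
    print(label, round(hipache_ms / nginx_ms, 2))
```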

VHOST Configuration

All VHOST configuration is managed through Redis. This makes it possible to update the configuration dynamically and gracefully while the server is running, and to have that state shared across workers.

Let's take an example: proxying requests to 2 backends for the hostname example.com. The 2 backend IPs are 192.168.0.42 and 192.168.0.43, and they serve HTTP traffic on port 80.

redis-cli is the standard client tool to talk to Redis from the terminal.

Follow these steps:

  1. Create the frontend and associate an identifier:

     $ redis-cli rpush frontend:example.com mywebsite
     (integer) 1

     The frontend identifier is mywebsite; it could be anything.

  2. Associate the 2 backends:

     $ redis-cli rpush frontend:example.com http://192.168.0.42:80
     (integer) 2
     $ redis-cli rpush frontend:example.com http://192.168.0.43:80
     (integer) 3

  3. Review the configuration:

     $ redis-cli lrange frontend:example.com 0 -1
     1) "mywebsite"
     2) "http://192.168.0.42:80"
     3) "http://192.168.0.43:80"

While the server is running, any of these steps can be re-run without disturbing the traffic.
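With two backends in the list, requests for example.com are spread between them. As an illustration only (the proxy's actual selection strategy may differ), a round-robin over the backend entries from step 3:

```python
from itertools import cycle

# The frontend:example.com list from step 3: identifier first, then backends.
entries = ["mywebsite", "http://192.168.0.42:80", "http://192.168.0.43:80"]
backends = entries[1:]  # skip the frontend identifier

# Illustrative round-robin over the backends.
rotation = cycle(backends)
picks = [next(rotation) for _ in range(4)]
print(picks)
# → alternates between the .42 and .43 backends
```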

Automated

The master branch of the GitHub repo is watched by an automated Docker build, which builds the Docker image ermlab/nginx-lua on each push to master. On success, the build triggers the Docker repo's webhooks (if any).

Maintainers

License

http://www.apache.org/licenses/LICENSE-2.0

APACHE LICENSE-2.0 ... In other words, please use it freely and do whatever you want with it for the good of all people :)

