NGINX is known for its reverse proxy functionality: NGINX acts as a gateway server that can forward requests to a backend, while managing a large number of connections and ensuring clients are behaving correctly. Typically the server you proxy to is an entirely different process, often written in a different language.
With OpenResty, your application server is NGINX. In all of my projects I've typically used a single NGINX instance that handles internet traffic and runs the application logic.
NGINX’s reverse proxy facilities are powerful though, so in this guide we'll use them to point back to the same instance of NGINX, then show how that enables NGINX caching, SSI, and gzip compression.
Before going into any of the detailed examples, we'll set up a configuration that uses `proxy_pass` to pass the request to the same instance of NGINX.
This configuration example isn’t completely standalone, so expect to adapt it for your setup. If you have any questions on how to do that, leave a comment below.
```nginx
http {
  server {
    server_name mywebsite.com;
    listen 80;
    listen 443 ssl;

    location / {
      proxy_pass http://127.0.0.1:80;
      proxy_set_header Host mywebsite.local;

      # include details about the original request
      proxy_set_header X-Original-Host $http_host;
      proxy_set_header X-Original-Scheme $scheme;
      proxy_set_header X-Forwarded-For $remote_addr;
    }
  }

  server {
    # must match host header & port from above
    server_name mywebsite.local;
    listen 80;

    # can be used to prevent double logging requests
    access_log off;

    # only allow requests from the same machine
    allow 127.0.0.1;
    deny all;

    location / {
      # render your application
      content_by_lua_block {
        -- ...
      }
    }
  }
}
```

The following examples focus on making changes to the reverse proxy server block, so they will only contain that part of the configuration. Refer to the example above for the rest of the configuration.
## gzip Compression

Adding gzip compression to your HTML responses is a good way to boost client performance. If you're using OpenResty to write a response for a web application, the `gzip` configuration option does not work. You can, however, use the reverse proxy server to gzip the response before it returns it to the client. Make the following change:
```nginx
location / {
  proxy_pass http://127.0.0.1:80;
  proxy_set_header Host mywebsite.local;

  gzip on;
  gzip_proxied any;
  # if necessary, limit by content type:
  # gzip_types application/json text/html;

  # ...
}
```

## Caching

The NGINX proxy module contains a powerful caching system. It’s a great alternative to using separate software like Varnish since it’s already built in.
The cache utilizes the file system to store cached objects, so it survives a server reboot, and cached files can be purged by deleting the respective file.
There’s a rich set of configuration options for the cache, so adapt this basic example to fit your needs; the caching requirements of applications vary significantly.
A common use case is caching logged-out pages while enabling users who are logged in to see content generated by the application server. To accomplish this, the application server must be able to control the cacheability of a response, and the proxy server must be able to know when to skip the cache.
Here’s a quick overview:

- The application server controls whether a response may be stored, using response headers like `Cache-Control`.
- The proxy server decides when to bypass the cache for an incoming request, for example based on a session cookie.

It’s important to get both of these right. Mistakes with caching can leak private account information or break your site.
```nginx
# create a cache named 'pagecache':
# 100m of shared memory for keys, 1g of space for cached responses
# (note: proxy_cache_path must be declared in the http block)
proxy_cache_path ../pagecache levels=1:2 keys_zone=pagecache:100m
                 max_size=1g inactive=2h use_temp_path=off;

server {
  location / {
    proxy_pass http://127.0.0.1:80;
    proxy_set_header Host mywebsite.local;

    # use our cache named 'pagecache'
    proxy_cache pagecache;

    # cache status code 200 responses for 10 minutes
    proxy_cache_valid 200 10m;

    # use the cache if there's an error on the app server, or it's updating from another request
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

    # add a header to debug cache status
    add_header X-Cache-Status $upstream_cache_status;

    # don't let two requests try to populate the cache at the same time
    proxy_cache_lock on;

    # bypass the cache if the session cookie is set
    proxy_cache_bypass $cookie_session;

    # ...
  }
}
```

I recommend going through each of these config directives and reading about how they work in the NGINX documentation. The example above is not something you can copy and paste, but instead is a starting point for researching the different caching options.
In the above example, the `$cookie_session` variable is used to toggle skipping the cache. The cache should be skipped for sessions that require dynamic content, typically users who are logged in. When using the Lua NGINX module, it’s easy to insert some code to set this variable:
```nginx
set_by_lua $cookie_session '
  -- pseudo-code example
  local parse_session = require("my_session_library")

  -- if a session is available, return 1 to trigger cache bypass
  if parse_session() then
    return "1"
  else
    return ""
  end
';

proxy_cache_bypass $cookie_session;
```

The response from the proxied request can control whether it is able to be cached by using HTTP headers like `Cache-Control`. By default, the NGINX cache is aware of a handful of these headers, but you can disable them using `proxy_ignore_headers`.
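For example, if you want the proxy to cache responses even when the application sends headers that would normally prevent it, something like this should work (which headers to ignore depends entirely on your application):

```nginx
location / {
  proxy_pass http://127.0.0.1:80;
  proxy_set_header Host mywebsite.local;
  proxy_cache pagecache;
  proxy_cache_valid 200 10m;

  # ignore the application's caching headers when deciding what to store
  proxy_ignore_headers Cache-Control Expires Set-Cookie;
}
```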
The simplest way to prevent a response from being stored in the cache is to send `Cache-Control: no-store`.
It’s important to send this header for any session-specific request, like a logged-in user’s request. If this isn’t done, a logged-in view may be cached and presented to everyone who visits your site. For some sites, this may even leak sensitive data.
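In OpenResty, that might look like this (a minimal sketch; the session cookie check is a stand-in for your own session logic):

```nginx
location / {
  content_by_lua_block {
    if ngx.var.cookie_session then
      -- session-specific response: keep it out of the proxy cache
      ngx.header["Cache-Control"] = "no-store"
    end

    -- ... render the page as usual
    ngx.say("...")
  }
}
```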
## Server Side Includes

Server side includes, or SSI, is an NGINX module that allows you to modify a response by injecting special tags that NGINX understands into your output. It can be used to compose the final response out of multiple HTTP requests.
Combined with some of the techniques above, some interesting performance optimizations can be achieved with the NGINX cache.
SSI can be enabled in a location, server, or http block. I recommend being as specific as possible, and only enabling SSI on the locations where it is needed. SSI can be a security vulnerability if untrusted SSI tags get evaluated.
If your website renders user-submitted data, then you need to be careful about sanitization. If a user is able to insert an SSI directive, then your server may be compromised. Since SSI tags use `<` and `>`, a standard HTML sanitizer will generally work, but verify before deploying.
## Enabling SSI

Only a single config line is necessary:
```nginx
server {
  location / {
    ssi on;
    # .. other configuration
  }
}
```

By creating an internal NGINX location that returns fragments of HTML served through the NGINX cache, you can insert cached content into parts of a page. A good candidate for this might be a comment system, or a recommender system. In this approach, the main page can be rendered with dynamic content, and cached content can be inserted through an SSI tag.
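As a sketch of this idea (the `/fragments/` prefix and the comments endpoint are hypothetical names; adapt them to your application):

```nginx
# cached fragment endpoint, only reachable via subrequests like SSI includes
location /fragments/ {
  internal;
  proxy_pass http://127.0.0.1:80;
  proxy_set_header Host mywebsite.local;

  # fragments are served from the page cache for 5 minutes
  proxy_cache pagecache;
  proxy_cache_valid 200 5m;
}
```

The dynamically rendered page then pulls the cached fragment in with an SSI tag in its HTML output:

```html
<!--# include virtual="/fragments/comments" -->
```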
This works best when pages generally render fast, but there are some sections that can be slow to render.
Because the NGINX cache is being used, all of its features come for free, like stale revalidation and expiration.
SSI tags are processed after the NGINX cache pulls a response; the SSI tags are stored “as is” in the cache. When serving a cached result with SSI processing enabled, the cached response will be scanned for SSI tags, and any replacements will be made.
By using this approach you can cache an entire page, while still having dynamic sections in the page. A good candidate for this might be inserting data about the user’s session, or adding CSRF tokens.
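For instance, a fully cached page could punch a small dynamic hole for the session header (the `/session/header` location is a hypothetical name; the key point is that it has no `proxy_cache`, so it renders on every request):

```nginx
# uncached, per-request fragment for session-specific markup
location /session/header {
  internal;
  proxy_pass http://127.0.0.1:80;
  proxy_set_header Host mywebsite.local;
  # note: no proxy_cache here, so this renders fresh every time
}
```

The cached page's HTML would contain the SSI tag, which is stored as-is in the cache and resolved on every request:

```html
<!--# include virtual="/session/header" -->
```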
leafo.net · Generated Sun Oct 8 13:02:35 2023 by Sitegen · mastodon.social/@leafo