beresp.ttl
RTIME, can be read and set, but not unset.
Available in fetch
Maximum amount of time for which the object will be considered fresh in the cache. By default, the value of beresp.ttl will be parsed from the Surrogate-Control, Cache-Control, Expires, and Age headers received from the backend. Setting this variable takes precedence over those headers.
Conversely, setting caching headers in vcl_fetch will not affect beresp.ttl, because the value of beresp.ttl is derived from the headers only once, before vcl_fetch is executed.
Once the TTL is reached, if the object has not been evicted or purged, it will become stale.
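A minimal sketch of an explicit override in vcl_fetch (the one-hour value is only an example):

sub vcl_fetch {
  # Setting beresp.ttl directly takes precedence over the TTL parsed from
  # Surrogate-Control, Cache-Control, Expires, and Age.
  set beresp.ttl = 3600s;  # example value: consider the object fresh for one hour
  return(deliver);
}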
Multi-level caching with Age
If you have multiple levels of caching, setting beresp.ttl explicitly can result in content staying in cache longer than you expect. This often occurs when making use of Fastly's shielding feature, or if your backend server is serving content that has already spent time in a cache.
The Age header is used by servers to communicate to a client how long the content has spent in cache, and if the client is also a cache, it can take this into account when calculating a TTL. This behavior is built into the way Fastly calculates cache TTLs by default. However, if you set beresp.ttl explicitly, you must perform this calculation yourself if you want to respect the Age header.
To do so, whenever you set beresp.ttl, subtract any existing Age:
set beresp.ttl = 3600s;
set beresp.ttl -= std.atoi(beresp.http.Age);
Try it out
beresp.ttl is used in the following code examples. Examples apply VCL to real-world use cases and can be deployed as they are, or adapted for your own service. See the full list of code examples for more inspiration.
Click RUN on a sample below to provision a Fastly service, execute the code on Fastly, and see how the function behaves.
Override TTLs based on content type
Set TTLs at the edge based on the type of resource. This is better done at origin, but it can be a great 'quick fix', or a solution if you don't control the origin.
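A condensed sketch of the idea (the content types and durations here are illustrative assumptions; the runnable sample is more complete):

sub vcl_fetch {
  # Assumption: example content types and TTLs; tune these for your own service.
  if (beresp.http.Content-Type ~ "^image/") {
    set beresp.ttl = 86400s;  # cache images for a day
  } else if (beresp.http.Content-Type ~ "^text/html") {
    set beresp.ttl = 300s;    # keep HTML fresher
  }
  return(deliver);
}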
Serve stale content from cache while origins are offline
Deal with all potential scenarios for using stale content to satisfy requests when the origin is unhealthy or misbehaving.
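A condensed sketch of the core mechanism, using beresp.stale_while_revalidate and beresp.stale_if_error (the durations are illustrative; the runnable sample covers the full set of scenarios):

sub vcl_fetch {
  # Assumption: example durations. Serve slightly stale objects while revalidating,
  # and allow much older ones to be served if the origin is failing.
  set beresp.stale_while_revalidate = 60s;
  set beresp.stale_if_error = 86400s;
  return(deliver);
}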