
CloudFront, caching & Skpr

By Karl Hepworth, 28 May 2021

Skpr now supports managed CloudFront cache policies!

In this blog, I'll be explaining what they are, how they work and how you can configure a project on Skpr to use them.

Background

If your website starts to send errors to users, wouldn't it make sense to send the user the last known good piece of content while the error is present?

This is one example of what a cache layer gives you - it acts as a small Content Delivery Network (CDN) in front of your application, storing your content and serving it back subject to a few constraints, such as how long an object may live.

What's more problematic is when error pages make it into this 'bucket' and continue to be served even after the error on your website has gone away. A cache policy lets the CDN dynamically check whether the error has cleared and serve content accordingly, rather than continuing to hand back the cached error.

Monitoring an endpoint

To test the cache on an endpoint, we will need a way to monitor for changes as they happen. For this, we can run a shell script that will poll an endpoint at regular intervals.

#!/bin/bash

# Poll the endpoint every 5 seconds and print the x-cache response header,
# which shows whether CloudFront served the object from its cache.
# ENDPOINT should be set to the URL you want to monitor.

while true
do
    curl -sI "${ENDPOINT}" | grep -i "x-cache"
    sleep 5
done
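
Assuming the script is saved as watch-cache.sh (a name chosen here purely for illustration) and made executable, you could run it against an endpoint like this; the commented lines show the kind of x-cache values CloudFront returns:

chmod +x watch-cache.sh
ENDPOINT="https://www.example.com/" ./watch-cache.sh
# Typical output as an object enters the cache:
#   x-cache: Miss from cloudfront
#   x-cache: Hit from cloudfront
#   x-cache: Hit from cloudfront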

You can see this in action below:

[Animated image: script output polling a website's x-cache header as an endpoint enters CloudFront's cache]

You'll notice that when the endpoint isn't cached it returns a Miss, followed by a Hit, and then continues to Hit on subsequent requests. This means the content at the endpoint has been cached.

If we force an error to occur, we can see the response change to a 'RefreshHit'.

[Animated image: script output polling a website's x-cache header as an endpoint leaves and re-enters CloudFront's cache before serving stale content]

This is the behaviour we want to see, but to get it we'll need a couple of settings configured.

How it works

So, in order to roll out standardized configuration, we'll be using a couple of cache policies in our CDN, CloudFront. These cache policies allow us to set the Minimum TTL in a standardized and managed way.

The Minimum TTL is the minimum amount of time a cached object stays in CloudFront before a new object is fetched from the origin endpoint. Content that may need to be served as a RefreshHit is first kept in a separate bucket, but it will naturally expire after this many seconds and be replaced as the cache lifecycle continues.

It's worth noting that the Minimum TTL respects the application's max-age response header. When that value isn't set to 0, CloudFront takes care of the heavy lifting based on the headers your application sends. Think of this as a way of unlocking CloudFront's functionality rather than depending on the application to configure the CDN.
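
If you're unsure what your application is currently sending, a quick way to check is to inspect the Cache-Control header on a response (illustrative endpoint and value; substitute your own):

curl -sI "https://www.example.com/" | grep -i "cache-control"
# e.g. cache-control: max-age=900, public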

See it in action!

Below, we show the behaviour when the cached object is invalidated and an error makes its way into the service's cache. Following this, we resolve the error, and the object is purged and replaced by the new one.

[Animated image: script output polling a website's x-cache header as an endpoint with a cached error recovers and caches a fresh, non-erroring copy of the content]

For our final demonstration, we'll show a cached endpoint through its full lifecycle: an initially uncached object enters the cache, the endpoint then starts throwing a 503 error, and the error is quickly resolved.

[Animated image: script output polling a website's x-cache header through the full lifecycle of a cached endpoint: entering the cache, expiring and being refreshed, going stale, and finally being replaced by a fresh, non-erroring piece of content]

How you can use it

We've got a couple of managed policies that you can use on Skpr. You can find more information in the official docs.

You can opt into one of these using the example configuration below, substituting drupal with whichever of our managed cache policies fits your project.

ingress:
  cache:
    policy: drupal
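
Once that change has been deployed, a simple sanity check is to hit the endpoint a couple of times and confirm the x-cache header moves from a Miss to a Hit (illustrative endpoint; substitute your own):

curl -sI "https://www.example.com/" | grep -i "x-cache"
# First request:  x-cache: Miss from cloudfront
# Second request: x-cache: Hit from cloudfront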

Final notes

It's a great idea to consider opting into this feature ahead of it becoming the default. It's an option that could improve your uptime and the experience of your users.

Have a look at the Skpr docs and see if there's a cache policy available that will help you. If you need something more specific to your needs, reach out!

Tags

skpr
cache
cloudfront
