In the previous post, we set up a basic cache using service workers – one that simply cached responses to incoming requests so we could pull them from the service worker cache instead of over the network. And while this simple cache has some usefulness, there are a number of ways we can improve upon it. In this post, we’ll look at a few ways to level up our basic service worker cache, giving us more flexibility and precision in how we handle requests and responses.

Different Caches for Different Things

Instead of throwing all our cached responses into a single cache, we can create different caches and store our responses in them based on the resources being fetched. For instance, we may want one cache for images, another for pages, and perhaps a third specifically for offline-related content.

Why is this valuable?

One reason is that it gives us granular control over what goes into each cache. We can also trim or update each cache differently, depending on how it will be used.

We could, for instance, create a base cache that holds all the core assets needed for the site, which will not change. At the same time, we could have a separate cache for pages a user visits, filled dynamically based on the user’s actions. This lets us cache what we need without caching everything, tailoring what we do cache to the individual user.

Setting up the Caches

Note: The examples in this post have been greatly influenced by the work of Lyza Danger Gardner, Jeremy Keith, and Brandon Rozek. Also, since the browsers that support service workers also have solid support for ES6, the following examples will be using some ES6 features and syntax.

To start, we’re going to create variables for the names of the various caches we want to create. We can then use these variables throughout the service worker to reference our caches.

We can also use a version number in the cache name to keep new caches separate from previous ones. One way of doing this is through a version variable that is used in naming the various caches. Then, whenever we want to create new caches (say, after an update to the service worker), we can simply update this variable. This could be done manually, or we could set it up to be dynamically generated based on a timestamp, or some other piece of data.
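As a sketch of the dynamic approach, a small helper like the hypothetical `makeVersion` below could derive a date-based version string. This would run as part of a build step rather than inside the service worker itself, so the value stays stable between deploys:

```javascript
// Hypothetical build-time helper: derive a date-based cache version
// string like '20170101::' from a Date object
function makeVersion(date) {
  const pad = n => String(n).padStart(2, '0');
  return `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}::`;
}

console.log(makeVersion(new Date(2017, 0, 1))); // '20170101::'
```

For the rest of this post, though, we’ll stick with a hard-coded date string.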

const version = '20170101::';

// Caches for different resources

const coreCacheName = version + 'core';
const pagesCacheName = version + 'pages';
const assetsCacheName = version + 'assets';

In this example, we now have three variables we can use to reference three different caches for our service worker. The ‘pages’ cache will be for HTML requests that get cached, and the ‘assets’ cache will be for non-HTML (images, CSS, JS, etc.).

We’ll also have a ‘core’ cache, which is a collection of all the pages and assets we want to ensure are cached immediately. For instance,

// Resources that will always be cached

const coreCacheUrls = [
  '/',
  '/about/',
  '/offline/',
  '/assets/site.js',
  '/assets/logo.png'
];

This array can then be used in the install listener to ensure that these core URLs are cached before installation is complete.
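As a sketch of how that listener might look (the exact body may differ from this site’s service worker, and the `skipWaiting()` call is an optional choice to activate the new worker immediately):

```javascript
// Cache all core URLs during install; installation only completes
// once every URL in coreCacheUrls has been cached successfully
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(coreCacheName)
      .then(cache => cache.addAll(coreCacheUrls))
      .then(() => self.skipWaiting())
  );
});
```

Note that `cache.addAll()` rejects if any single request fails, so installation will not complete unless every core URL can be fetched.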

Populating the Caches

Now that we have names for the caches we want to use, we’ll set up a simple function to store a request in a specified cache. It will take the name of the cache we want to use, as well as the request and response we want to store.

function addToCache(cacheName, request, response) {
  caches.open(cacheName)
    .then( cache => cache.put(request, response) );
}

In the previous version of the service worker cache, we used a fetch listener to cache all responses that came through. In this version, we’re going to add a little logic to handle different kinds of resources in different ways. We can set up a listener on the fetch event, and then check the request’s Accept header to see what kind of resource is being requested. Once we know that, we can act accordingly.

HTML

For HTML responses, for instance, we may want to just send the network response first, and check the cache only when the network is down. That way, if a page is updated, we’ll get the freshest copy available. If neither the cache nor the network has the page, we can serve a custom /offline page.

self.addEventListener('fetch', event => {

  let request = event.request,
      acceptHeader = request.headers.get('Accept');

  // ...

  // HTML Requests

  if (acceptHeader.indexOf('text/html') !== -1) {

    // Try network first

    event.respondWith(
      fetch(request)
        .then(response => {
          if (response.ok)
            addToCache(pagesCacheName, request, response.clone());
          return response;
        })

      // Try cache second with offline fallback

      .catch( () => {
        return caches.match(request).then( response => {
            return response || caches.match('/offline/');
        })
      })
    );

    // ...
  }
});

Non-HTML

For non-HTML resources, on the other hand, we may want to use the cache as our first choice instead of the network. Images, for instance, don’t change that often, and pulling them from the cache could save us time and bandwidth. If they’re not cached, we would then check the network as a fallback. If neither of these works, we can serve a basic offline image.

self.addEventListener('fetch', event => {

  // ...

  if (acceptHeader.indexOf('text/html') == -1) {
    event.respondWith(
      caches.match(request)
        .then( response => {

          // Try cache, then network, then offline fallback

          return response || fetch(request)
            .then( response => {
              if (response.ok)
                addToCache(assetsCacheName, request, response.clone());
              return response;
            })

          // Offline fallback

          .catch( () => {
            return new Response('<svg role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg"><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          })
      })
    );
  }
});

Trimming the Cache(s)

The content of our ‘core’ cache will remain constant, but the content of the other caches will be dynamically populated. Now that each one is separated, we have the ability to proactively trim these caches so they don’t grow too large. And we can do this on a cache-by-cache basis, setting different size limits depending on the cache.

To do this, we’re going to use a few different pieces of code. First, we’ll set up a simple function within the service worker that goes through a specified cache and removes items beyond a specified maximum. We’ll also set up an event listener so that this function can be triggered from outside the service worker.

// sw.js

// Trim specified cache to max size

function trimCache(cacheName, maxItems) {
  caches.open(cacheName).then(function(cache) {
    cache.keys().then(function(keys) {
      if (keys.length > maxItems) {
        cache.delete(keys[0]).then(function() {
          trimCache(cacheName, maxItems);
        });
      }
    });
  });
}

self.addEventListener('message', event => {
  if (event.data.command == 'trimCaches') {
    trimCache(pagesCacheName, 20);
    trimCache(assetsCacheName, 20);
  }
});

Once these are in place, we can use postMessage from a page under the service worker’s control to send a message that triggers the trimming.

// site.js

window.addEventListener('load', function() {

  // Have the service worker trim caches

  if (navigator.serviceWorker.controller != null) {
    navigator.serviceWorker.controller.postMessage({'command': 'trimCaches'});
   }
});

Clearing Out Old Caches

In addition to trimming the caches we’re currently using, it also makes sense to purge previous versions of the cache. Since the version is stored in a variable, we can use its value to filter the existing caches, removing any whose names don’t match the current version. This function can then be used in the activate event, effectively cleaning up old caches whenever the new service worker takes control of a page.

// Remove old caches that don't match current version

function clearCaches() {
  return caches.keys().then(function(keys) {
    return Promise.all(keys.filter(function(key) {
        return key.indexOf(version) !== 0;
      }).map(function(key) {
        return caches.delete(key);
      })
    );
  })
}

self.addEventListener('activate', event => {
  event.waitUntil(
    clearCaches().then( () => {
      return self.clients.claim();
    })
  );
});

Smart Caching

Although we’ve set up separate caches for different asset types, another optimization we can add is a simple filter to determine whether we should cache a request at all.

For instance, we may only want to cache resources coming from our domain, or only GET request types. We may also have specific areas of the site that we want to cache, and others that we want to ignore.

One way to do this is to create a simple function that handles the check of whether a request should be cached. It evaluates whatever conditions we code into it, and returns true or false for the request.

// Check if request is something SW should handle

function shouldFetch(event) {
  let request = event.request,
      pathPattern = /^\/(?:(20[0-9]{2}|about|assets)\/(.+)?)?$/,
      url = new URL(request.url);

  return ( request.method === 'GET' &&
           !!(pathPattern.exec(url.pathname)) &&
           url.origin === self.location.origin );
}

Then, in the fetch listener, we can use this function to check whether to do anything with the response, or to simply let it go through.

self.addEventListener('fetch', event => {

  let request = event.request;

  // ...

  // Check if we should handle this request

  if (!shouldFetch(event)) {
    event.respondWith(
      fetch(request)

        // ...

    );

    // ...

  }

  // ...

});

Offline Fallbacks

You may have noticed in the above samples there are various offline fallbacks. For instance, for the HTML requests, there is a catch() in case the network fetch fails. In that case, the service worker will either return the response from the cache or will serve the /offline/ page stored in the cache.

// ...
// Try cache second with offline fallback

.catch( () => {
  return caches.match(request).then( response => {
      return response || caches.match('/offline/');
  })
})

We can do a similar thing for the non-HTML requests, but instead of the /offline/ page, we’ve set up a simple ‘offline’ SVG that can be used as a fallback.

// ...
.catch( () => {
  return new Response('<svg role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg"><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
})

Always Updating

These are just a few enhancements that can be added to a simple service worker caching strategy. Having pages and assets cached by the service worker gives us the ability to serve these resources, even when the network goes down (e.g. the user is offline). It also gives us much more flexibility in how we handle certain requests. There are plenty of additional enhancements that could be made – this is just an example to get you started.

Since the code above has been split up by the type of improvement it relates to, here’s a link to the current version of the service worker on this site, which provides the complete context for all the snippets listed above.

Note: the service worker used here is a work in progress, and will continue to evolve. Feel free to make use of it or adapt it however you’d like.