In part 1, we looked at cutting down the actual file sizes of the assets used by a web page. In this post, we’ll take a look at another factor that impacts how fast a page loads: the total number of requests.
Reducing the Number of Requests
In addition to reducing the total size of the assets loaded, cutting down the number of requests a browser makes can also help to improve performance. Generally, the fewer requests we have to make, the faster the page can load.
This is especially true of assets that the browser needs in order to render the page. The browser will sometimes delay rendering until it has downloaded the files it deems necessary. These files, specifically CSS and JS files, are considered blocking because they can block the browser from rendering the page. The fewer of these files there are, the sooner the user will be able to see the page.
How do we reduce the number of requests? In addition to removing files that aren’t needed, two simple things that can help are concatenation and caching.
The first one we’ll look at is concatenation. This means taking multiple files and creating a single, larger file that contains all of them together. Even though the total size may be the same, being able to get all of the information in a single request instead of making multiple requests can help speed things up.
During development work, having multiple CSS and JS files can help with organizing and maintaining your code. However, when these files get moved to production, finding ways to combine these separate files is worth looking into. There are plenty of ways to do this—from manually using the
cat command on the command line, to using a plugin for your favorite task runner or development framework. Rails, for instance, has this built in, as do many other popular frameworks. Ideally, this step would be automated so that it happens without any additional effort once it’s set up.
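At its simplest, concatenation is just joining files end to end. Here’s a minimal sketch using the cat approach mentioned above; the file names and their contents are hypothetical examples, and in a real project the files would already exist:

```shell
# Hypothetical stand-ins for three separate JS files a page might load.
printf '// jquery stub\n'   > jquery.js
printf '// carousel code\n' > carousel.js
printf '// app code\n'      > app.js

# Concatenate them into a single bundle. Order matters if the
# files depend on one another (e.g. plugins that need jquery first).
cat jquery.js carousel.js app.js > bundle.js
```

The page would then reference bundle.js alone, turning three requests into one.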
Another way to minimize the total number of requests is by utilizing caching. If a browser caches an asset, it won’t need to download it again the next time it needs it, which will mean a faster render time for the page. To use caching efficiently, it’s a good idea to configure the server to return caching headers that specify how long a browser should cache the assets. The length of time a browser should cache an asset will probably differ from one asset type to another (e.g. HTML pages vs. images vs. CSS or JS files), depending on how often each is updated.
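What this looks like in practice depends on your server. As one sketch, assuming an nginx server (the paths and lifetimes here are made-up examples, and these blocks would sit inside a server block), you might give long-lived assets and frequently changing pages different Cache-Control headers:

```nginx
# Hypothetical example: fingerprinted CSS/JS/images can be cached
# for a long time, since their names change when they change.
location /assets/ {
    add_header Cache-Control "public, max-age=31536000";
}

# HTML pages are served under stable URLs, so keep their
# cache lifetime short to pick up updates quickly.
location / {
    add_header Cache-Control "public, max-age=300";
}
```

Apache, CDNs, and most frameworks offer equivalent settings under different names.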
The one thing to consider, though, is how to make sure a browser knows when a file gets updated. If it caches a file, and doesn’t know when it’s been updated, it will continue to use the previously cached copy until the cache expires.
One way to do this is by making sure the file name changes whenever the file changes. An easy way that is often used is to embed a timestamp or build number or some kind of fingerprinting into the name of the asset. For instance, instead of using
style.css, it may be named
style-d5c08c08d7130ce041b2456beb4f2998.css. And then, whenever it’s updated, this unique id would change, and thus force the browser to download the new file.
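A bare-bones version of this fingerprinting can be done on the command line by hashing the file’s contents and embedding the hash in the name. This is just a sketch with a made-up stylesheet; build tools and frameworks do the same thing (plus rewriting the references to the file) automatically:

```shell
# A sample stylesheet with hypothetical contents.
printf 'body { color: #333; }\n' > style.css

# Hash the contents; the hash (and thus the name) only
# changes when the file's contents change.
hash=$(md5sum style.css | cut -c1-32)

# Rename the file with the fingerprint embedded.
mv style.css "style-${hash}.css"
```

Any page referencing the asset would then need to point at the fingerprinted name, which is why automating this as part of a build step matters.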
By implementing some kind of versioning system, you can leverage the benefits of caching while ensuring that updated files get used immediately. Ideally this system would be automated. Many popular frameworks already have this built in, and there are plenty of plugins that can help implement it in your build process.
Reducing the number of requests a web page requires is one important strategy in optimizing website performance. One way to do this is by combining multiple files into a single file through concatenation. Another way to reduce requests is through the effective use of a caching policy that helps browsers re-use previously downloaded files, while at the same time making sure they know when files have been updated.