HTTP content compression has a significant impact on the client-side performance of a web app. In this blog post, I’ll describe different methods for compressing dynamic and static content in Ruby on Rails apps using Gzip and Brotli algorithms.
We’ll start by explaining what exactly content compression is and measuring its overhead compared to the potential gains. We’ll also learn how to check if assets are correctly compressed using popular networking tools.
Let’s get started.
What exactly is HTTP response compression?
A web client, like a browser, mobile application, or networking library, can optionally specify what types of compression algorithms it supports. It does so by adding a special header to the request, e.g.:
Accept-Encoding: gzip, deflate, br
Its presence means that the client knows how to handle data compressed using the Gzip or Brotli algorithms and prefers it over a plaintext response.
If a server can respond to the request using compression, it sets a header indicating the type of compression algorithm applied to the body, for example:
Content-Encoding: gzip
for Gzip encoded responses.
To see the difference between a plaintext and a Gzip response, you can issue these two cURL commands:
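Something like the following should do, with example.com standing in for any URL whose server supports Gzip. The first request returns plaintext; the second asks for Gzip via the Accept-Encoding header:

curl -s https://example.com -o response.txt
curl -s -H "Accept-Encoding: gzip" https://example.com -o response.txt.gz

# compare the transferred sizes
ls -lh response.txt response.txt.gz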
The Gzip response is unreadable unless you unzip it:
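Continuing with the file from the previous sketch, you can decompress it to stdout to read the body:

gunzip -c response.txt.gz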
Is Gzip or Brotli compression worth it?
Compressing static assets, like JavaScript or CSS files, is a no-brainer. Because they are, well, static, they can be precompressed once and the smaller versions served to clients without additional overhead.
What about dynamic content? It cannot be precompressed, so the server must burn additional CPU cycles for every single request. Is the additional overhead always worth it?
Short answer: usually yes. To answer in more detail, we’ll take a deep dive into Chrome’s Network tab.
Response transfer overhead
Let’s look at how long it takes to download a 327kb JSON payload on network connections of different quality, with and without compression.
Waiting (TTFB) is the time spent waiting for the server response and the initial byte transfer. Content Download is the actual transfer of data. We can see our API is reasonably quick, responding in less than 100ms, and 110ms more is needed to download the full payload.
Now see the stats for the Gzipped JSON response on WiFi:
We can see that the first byte was not delayed by a measurable amount. Network conditions vary, so compression overhead cannot be reliably measured this way; we’ll come back to it later. The actual transfer of data was ~80ms faster because 219kb less was transferred over the wire.
Speedup on poor network conditions
If you want to write fast web apps, use slow internet.
Shaving 80ms off on the client-side is a nice win but might not seem worth the effort. Let’s check the difference for a hypothetical slow mobile client consuming your API on a 3G network. The measurements below were carried out using Chrome’s network throttling feature with the Fast 3G preset.
As you can see on the 3G network, the first byte arrives after over 500ms, and the actual transfer of uncompressed JSON takes over two seconds more.
What if we apply Gzip or Brotli compression?
Just by applying the compression, we can shave off over 1.5 seconds for the client! That’s already the difference between an acceptable and a sluggish web app, so it’s unquestionably worth the effort.
Compression overhead
Now that we know how much we can gain on the client-side by compressing dynamic content, let’s measure the actual overhead of Gzip and Brotli algorithms.
I’ll be using the official Zlib library for Gzip and the brotli gem for Brotli. Tests were carried out on Ruby 2.6.6.
Both implementations delegate the actual compression workload to C extensions, so performance is comparable to, e.g., NGINX acting as a reverse proxy.
I encourage you to run those code snippets yourself:
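Zlib ships with Ruby’s standard library, so the only extra dependency is the brotli gem:

gem install brotli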
Now in IRB:
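Something along these lines should do; payload.json is just a stand-in for whatever JSON response you want to test, and the exact timings will vary with your hardware:

require "zlib"
require "brotli"
require "benchmark"

json = File.read("payload.json")

Benchmark.bm(10) do |x|
  x.report("gzip")   { Zlib.gzip(json) }
  x.report("brotli") { Brotli.deflate(json, quality: 5) }
end

# compare the resulting payload sizes in bytes
puts Zlib.gzip(json).bytesize
puts Brotli.deflate(json, quality: 5).bytesize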
Reducing the 327kb of JSON data to around 100kb takes 16ms with Gzip and 13ms with Brotli. Compared to the client-side speedup, even on the WiFi network, the payoff is significant.
If you’re playing around with Brotli compression, remember to avoid using the default setting for dynamic content. By default, both the gem and the command-line tool use the highest compression quality, 11, which is an order of magnitude slower than quality 5.
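You can see the difference yourself, again in IRB and with the same payload.json stand-in:

require "benchmark"
require "brotli"

json = File.read("payload.json")

Benchmark.bm(12) do |x|
  x.report("quality 5")  { Brotli.deflate(json, quality: 5) }
  x.report("quality 11") { Brotli.deflate(json, quality: 11) }
end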
Compression measurements summary
As you can see, compressing both dynamic and static content is usually worth it, and the overhead is negligible. The way compression algorithms work is that the more repetition there is in the data, the better the compression. HTML and JSON, with their repetitive key names and markup, are therefore perfect candidates. It is worth mentioning that compression does not usually make sense for JPG or PNG images, since those formats are already compressed.
Let’s move on to discussing how to implement compression in Rails apps.
Configuring Gzip and Brotli compression in Ruby on Rails apps
The easiest way to check if compression is currently enabled is to fire up the Firefox Network tab and compare asset size vs. the amount of data transferred:
If Size is the same as Transferred, it means that the content is not compressed.
To check the actual compression algorithm for a given asset, you need to inspect the response headers.
A content-encoding: br header means that Brotli compression was applied. You can also use this tool to check compression for a specific URL.
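Or check it from the command line; example.com is a placeholder here:

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip, deflate, br" https://example.com | grep -i content-encoding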
If your web app is currently not using compression, or you’d like to squeeze out a few more kilobytes by using Brotli instead of Gzip, you have several tools at your disposal.
Use Cloudflare proxy
The most plug-and-play approach to globally applying compression for all your network traffic is to use Cloudflare. You can check out the docs for more info on what types of content they compress and how to configure it.
Long story short, you’d need to move your DNS nameservers to Cloudflare and enable the proxy. I’m a big fan of Cloudflare and try to use it in my projects whenever possible.
Compression using NGINX reverse proxy
If you don’t want to or cannot migrate to Cloudflare, your second best option is to use NGINX as a reverse proxy. It supports Gzip compression out of the box. You can start using it with the following sample config:
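A minimal sketch could look like this; the exact MIME types, compression level, and upstream address are up to you:

server {
  listen 80;

  # enable Gzip for dynamic responses
  gzip on;
  gzip_comp_level 5;
  gzip_min_length 256;
  gzip_proxied any;
  gzip_vary on;
  gzip_types application/json application/javascript text/css text/plain image/svg+xml;

  location / {
    # pass requests to the Ruby app server, e.g. Puma listening on port 3000
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}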
If you want to apply Brotli compression, you’d need to compile additional modules. You can check out this tutorial for detailed info.
Once that’s done, you can use the following config:
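Here’s a sketch, assuming the ngx_brotli filter and static modules are compiled in (or loaded as dynamic modules):

# Brotli for dynamic responses
brotli on;
brotli_comp_level 5;
brotli_types application/json application/javascript text/css text/plain image/svg+xml;

# serve precompressed .br files for static assets when they exist
brotli_static on;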
For brotli_static to work, you need to precompress your static assets to Brotli encoding. If you’re using Webpacker, it generates br files out of the box. For a Sprockets configuration, you can check out this article.
Compression using Rails Rack Middleware
Heroku does not compress HTTP traffic by itself.
If you’re running on Heroku, the only way to add an NGINX reverse proxy is to use a custom buildpack. Unfortunately, I was not able to get it working reliably with the Puma server.
Without a reverse proxy in front of your Ruby servers, the only way to add compression for dynamic content is to use a Rack middleware. Rack::Deflater is part of the standard Rack distribution. Enabling it is as simple as adding this line to config/application.rb:
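# config/application.rb
config.middleware.use Rack::Deflater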
If you want to go with Brotli compression, your only choice is to use the not very popular but well-written rack-brotli gem. I’m using it without issues in one of my projects with a config along these lines:
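A sketch of such a setup; I’m assuming here that rack-brotli forwards the quality option to the brotli gem, and that 5 is a sensible value for dynamic content:

# Gemfile
gem "rack-brotli"

# config/application.rb
require "rack/brotli"

# Rack::Brotli is listed second, so it sits closer to the app and compresses
# the response first for clients that support br; Rack::Deflater skips
# already-encoded responses and handles Gzip-only clients.
config.middleware.use Rack::Deflater
config.middleware.use Rack::Brotli, quality: 5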
Make sure to keep that exact middleware order; otherwise, Gzip compression will always take precedence.
Serving compressed static assets
If you are running on Heroku without a custom NGINX buildpack, it means that the Rails middleware ActionDispatch::Static is serving your static assets. Currently, it is only able to handle Gzipped files with the gz extension. There’s a PR waiting to be merged that adds support for Brotli br files. Hopefully, it will not get stuck in the Rails freezer, and apps running on Heroku will be able to benefit from this change soon.
Summary
Always measure both backend and client-side performance with throttled network conditions if you want to develop a speedy web app.
Applying compression is a low-hanging fruit that can significantly improve web app responsiveness. Not using compression is, in a way, wasting your users’ bandwidth.