Get ready for heavy traffic: Adding a Varnish cache to a production Docker stack

All bloggers dream of the day that their site has an article that goes viral, reaching the front page of Reddit, a viral Tweet, or FB share. I've had it happen, and believe me it's an awful, awful feeling when your small VPS crashes, while 30 hits a second bounce off into oblivion, probably never to return. In my case, it was an unexpected Reddit front pager, and I had to scramble during downtime to reboot the VPS into a larger 4GB memory model. A cache could be a better answer should that day ever come for you.
It's never going to happen on an obscure tech blog like this one, but I will go ahead and build it anyway, given the simple container chain and ease of implementation. Once it has a good track record and I am familiar with finessing its configuration, I plan to implement it on some of my WordPress sites.
The Tech Roads site is lightly loaded, but as I have a few hundred MB of memory unused, I will introduce a Varnish cache Docker container, which should be easily done with a container "chain".

Caddy is at the front of the stack, as it's where SSL/TLS termination is being done. Varnish comes next, fronting the content and serving it up out of memory. Varnish has nifty features such as the ability to compress content that hasn't already been compressed, with claims it can compress more efficiently than web servers. It's conceivable that a web server could be set up to do no compression, leaving it all to Varnish. That's not what I'm doing here, but on heavy pages it could make a real difference.
This is a small VPS with 1GB of memory. Take memory headroom into consideration before installing. In trials I tried loading a bunch of pages and couldn't get it over 11MB Varnish memory use, but the docs allude to 100MB being taken. Without a busy site getting smashed by traffic, it's hard to say. YMMV.
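If you would rather cap Varnish's cache memory than trust the defaults, the official image lets you set the storage size. This is a sketch, not something from the stack above: current tags of the official image document a VARNISH_SIZE environment variable, while older tags may require overriding the varnishd command instead, so check the readme for the tag you are running.

```yaml
  varnish-techroads:
    image: varnish:stable
    environment:
      # Cap the cache storage at 100MB (newer official image tags only).
      # Actual process memory will sit somewhat above this figure due to
      # per-thread and workspace overheads.
      - VARNISH_SIZE=100m
```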
Create a Caddy and Ghost stack
I will take the Caddy and Ghost build as a starting point, as covered in this article. In my production Compose file I have logging in here as well, but I am removing it from all these examples for simplicity, along with the internal network, as we don't need it yet. My "before" docker-compose.yml is as follows.
version: "3"
networks:
  web:
    external: true
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /data/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /data/caddy/data:/data
      - /data/caddy/config:/config
    networks:
      - web
  myghostblog:
    image: ghost:3.22-alpine
    restart: unless-stopped
    environment:
      - url=https://techroads.org
    volumes:
      - /data/ghostapp:/var/lib/ghost/content
      - /data/ghostapp/themes/post.hbs:/var/lib/ghost/content/themes/casper/post.hbs
    networks:
      - web
The full compose file including Varnish is at the end of this post.
Performance before starting
This VPS coexists with one other small site, but the current headroom is good. VPS usage under no load is 325MB, of which the Ghost instance is using 100MB and Caddy 28MB.
Page load performance - before
Let's run a page speed test on a site page, to see where we are starting.
My document complete time comes in at 1.61s with 305KB. So it's pretty trim. The page is tiny anyway but it will do as a good demonstration for Varnish.
Benchmark - before
With a small benchmark sample:
Time taken for tests: 10.044 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 3683000 bytes
HTML transferred: 3644200 bytes
Requests per second: 9.96 [#/sec] (mean)
Time per request: 1004.386 [ms] (mean)
Time per request: 100.439 [ms] (mean, across all concurrent requests)
Transfer rate: 358.10 [Kbytes/sec] received
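The output format above is ApacheBench's, so assuming that is the tool, the invocation behind those numbers would be something like the following (100 requests, 10 concurrent; substitute your own URL):

```
$ ab -n 100 -c 10 'https://yourblog.com/yourpage/'
```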
Create a Varnish data directory
I usually create data directories for my containers under /data. The official Varnish image on Docker Hub says by default we only need to share one file, but I will give that one file a directory of its own anyway, in case there are file caching options in future.
$ mkdir /data/varnish-techroads
Create a Varnish default.vcl file
In this directory, as directed by the container readme, I want to create a default.vcl file, which defines the network address of the origin server. With your favourite text editor, create /data/varnish-techroads/default.vcl with these contents.
vcl 4.0;

backend default {
    # Host and port are declared separately in VCL; a combined
    # "host:port" string in .host will fail to resolve.
    .host = "myghostblog";
    .port = "2368";
}

sub vcl_recv {
    # Do not cache the admin and preview pages
    if (req.url ~ "/(admin|p|ghost)/") {
        return (pass);
    }
}

sub vcl_backend_response {
    if (beresp.http.content-type ~ "text/plain|text/css|application/json|application/x-javascript|text/xml|application/xml|application/rss\+xml|text/javascript") {
        set beresp.do_gzip = true;
        set beresp.http.cache-control = "public, max-age=1209600";
    }
}
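To make it easy to see whether a given response came out of the cache, you can optionally add a small vcl_deliver routine to the same file. This is my own addition rather than part of the minimal config above; obj.hits is greater than zero when the object was served from cache.

```
sub vcl_deliver {
    # Flag cache hits and misses in a response header for debugging
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```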
Add Varnish to docker-compose.yml
We will need to append an entry for Varnish and "move" the Ghost container onto a new internal network, since Ghost will now receive its traffic from Varnish rather than directly from Caddy. The internal network itself also needs to be added, in readiness for the Varnish and updated Ghost entries.
The Varnish container will get traffic from Caddy on the default port 80, and call the ghost container via the docker private network we have called "internal".
  varnish-techroads:
    image: varnish:stable
    restart: unless-stopped
    volumes:
      - /data/varnish-techroads/default.vcl:/etc/varnish/default.vcl
    networks:
      - web
      - internal
Update Caddyfile
One last change is to update the Caddyfile to send traffic to Varnish instead of directly to Ghost. Varnish itself will act as a reverse proxy and forward traffic to Ghost. So for me, I am commenting out the Ghost target and replacing it with Varnish, leaving the rest as is. Remember the target name should match the Compose service name.
#reverse_proxy myghostblog:2368
reverse_proxy varnish-techroads:80
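For context, a minimal Caddyfile site block with this change might look like the following. The domain is a placeholder and your real file will likely carry more directives (headers, logging, and so on):

```
yourblog.com {
    #reverse_proxy myghostblog:2368
    reverse_proxy varnish-techroads:80
}
```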
Change the Ghost blog network
We are moving the Ghost blog from the front facing network back to the internal network, so the network in the Compose has to change from web to internal.
Apply the Compose config
Once your Caddyfile, Varnish config and Compose file are complete, the configuration can be applied.
$ docker-compose up -d
Test Varnish is working
After install, you can test that Varnish is working with curl; you will see Varnish in the via response header.
$ curl -I 'https://yourblog.com/yourpage/'
HTTP/2 200
accept-ranges: bytes
age: 0
cache-control: public, max-age=0
content-type: text/html; charset=utf-8
date: Tue, 10 Sep 2019 05:01:27 GMT
etag: W/"123456789"
vary: Accept-Encoding
via: 1.1 varnish (Varnish/6.0)
x-powered-by: Express
x-varnish: 30
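Another quick check: request the same page twice in a row. On a cache hit, the age header should climb above zero once the object has been in the cache for a second or more (the age: 0 above suggests a fresh fetch):

```
$ curl -sI 'https://yourblog.com/yourpage/' | grep -i '^age'
```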
From the command line, you can run the varnishstat command to get an interactive usage screen of cache activity from inside the container.
$ docker exec -it caddy_varnish-techroads_1 varnishstat
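For scripting or a quick hit-rate check, varnishstat can also print once and exit with -1, filtered down to specific counters with -f:

```
$ docker exec caddy_varnish-techroads_1 varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
```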
Page load performance - after
My document complete time comes in at 1.56s with 274KB. So it's a hair faster.
Benchmark - after
With the same small test sample, there is a performance improvement, but not a significant one. I am reluctant to crank up the benchmark tests further for fear of incurring the wrath of the AWS DDoS shield.
Concurrency Level: 10
Time taken for tests: 9.798 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 3691028 bytes
HTML transferred: 3644200 bytes
Requests per second: 10.21 [#/sec] (mean)
Time per request: 979.781 [ms] (mean)
Time per request: 97.978 [ms] (mean, across all concurrent requests)
Transfer rate: 367.89 [Kbytes/sec] received
Conclusion
There is no discernible difference in speed on such a small and light page, with everything on defaults. I believe Varnish will really come into its own on a large production site, especially during a traffic spike where the same URL is getting hammered many times in sequence. There are oodles to read on the Varnish web site.
I am intrigued by the possibility of moving compression workload from web servers to Varnish, and also, while poking around the web, found examples of filtering out some advertising tracking nasties.
I wish the documentation and mechanisms around memory allocation were a little more solid. It seems fundamental that the cache size be settable to a hard limit, easily, from something mainstream like Compose, especially for an image with 1 million+ downloads. But I am still super grateful for this nice bit of software.
I for one will be adding it to all my services, where I have the memory headroom. And I'll be getting the logging set up in line with the rest of my containers. At least, the functional example is here to get started should you want to go down this road!
Complete example of docker-compose.yml with Caddy, Varnish, Ghost
With logging config removed.
version: "3"
networks:
  web:
    external: true
  internal:
    external: false
    driver: bridge
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /data/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /data/caddy/data:/data
      - /data/caddy/config:/config
    networks:
      - web
  varnish-techroads:
    image: varnish:stable
    restart: unless-stopped
    volumes:
      - /data/varnish-techroads/default.vcl:/etc/varnish/default.vcl
    networks:
      - web
      - internal
  myghostblog:
    image: ghost:3.22-alpine
    restart: unless-stopped
    environment:
      - url=https://techroads.org
    volumes:
      - /data/ghostapp:/var/lib/ghost/content
      - /data/ghostapp/themes/post.hbs:/var/lib/ghost/content/themes/casper/post.hbs
    networks:
      - internal
Main photo courtesy of Jacob Campbell on Unsplash