Matomo + Apache killed my server. Then Matomo + Nginx killed my server.

Some time ago I set up self-hosted Matomo in a Docker environment as a replacement for Google Analytics, mainly for privacy reasons. The Matomo agents dutifully gather data from my web sites via the injected analytics code with no significant performance impact. However, when I access the Matomo console, the VPS memory usage spikes and kills the server. I suspect Apache is the memory hog.
Matomo makes both Apache-based and FPM (FastCGI Process Manager) based containers available. At the time of setup, I chose the Apache combo for simplicity, as the web server is encapsulated in the container; the FPM option requires an external web server.

I am running a small VPS with 1GB of memory, coexisting with other apps, so upgrading to a 2GB instance would probably also solve this problem, but I'm all for efficiency, and making my life difficult! So my mission: replace the Matomo-Apache container with Matomo-FPM and Nginx containers.
I will add a complete docker-compose.yml file at the end of the post.
Starting Environment
I have a $5 AWS Lightsail instance with 1GB of memory. The Matomo stack, including its database, lives here, coexisting with this very Ghost blog, the Commento system that adorns this site (also with a database), and another small HTML/PHP site. That might seem like a lot for one small VPS; however, in normal operation it runs at around 500MB usage. Running a few screens and giving it some work to do can easily push it over 600MB. So it's OK, but any memory spike can blow it.
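The table below is a snapshot from docker stats; a one-off view like it can be produced with something along these lines:
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"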
NAME MEM USAGE / LIMIT MEM %
caddy_matomo_1 52.9MiB / 978.6MiB 5.41%
caddy_mariadb1_1 91.16MiB / 978.6MiB 9.31%
caddy_commento_1 14.54MiB / 978.6MiB 1.49%
caddy_postgresdb_1 10.36MiB / 978.6MiB 1.06%
caddy_techdbo_1 2.344MiB / 978.6MiB 0.24%
caddy_phpt_1 6.082MiB / 978.6MiB 0.62%
caddy_caddy_1 11.38MiB / 978.6MiB 1.16%
caddy_ghosttechdbc_1 87.61MiB / 978.6MiB 8.95%
My biggest consumers are this blog (with embedded DB) at 87MB, the Matomo database at 91MB, and Matomo itself at 52MB. When I invoke the Matomo console, that 52MB blows out, which is the problem that brought me here.
Now that I have eyeballed this list, I also realise I can reduce that memory-hungry 91MB Matomo database footprint by following my own process, so I will go eat my own dog food while I am at it.
My VPS image is Ubuntu 18.04, with Caddy 2, and the Docker Compose environment is set up as per this great Digital Ocean article.
Many of these config files will be based on the Matomo Docker example files on GitHub.
First up, it's a backup - in my case a Lightsail server snapshot.
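If you prefer the CLI to the Lightsail console and have the AWS CLI configured, a snapshot can be taken with something like this - the instance name is a placeholder for your own:
aws lightsail create-instance-snapshot --instance-name my-vps --instance-snapshot-name pre-matomo-fpm-$(date +%F)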
Preparing the Nginx container
This will involve:
- Creating an Nginx conf file
- Adding Nginx-alpine to the compose file
- Adding an entry to the Caddyfile
Creating an Nginx conf file
I am going to stage Nginx in its own data directory, so first I create it.
$ mkdir -p /data/nginx-matomo
$ cd /data/nginx-matomo
In there, with my favourite text editor, I create a file named nginx.conf.matomo and add these contents - it's the default conf that ships with the container, slightly updated with the gzip settings and a log file change.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;
    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 2;
    gzip_buffers 16 8k;
    gzip_min_length 1100;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
}
I'll need to map this to the container.
Create an Nginx server block
In the same directory, I create a default.conf file based on the developer recommendation, with these contents. There are two changes to make.
upstream php-handler {
    server matomo-fpm:9000; # This name should match the Docker Compose name
}

server {
    listen 80;
    add_header Referrer-Policy origin; # make sure outgoing links don't show the URL to the Matomo instance
    root /var/www/html;
    index index.php;
    try_files $uri $uri/ =404;

    real_ip_header X-Forwarded-For;
    set_real_ip_from caddy; # Your internal Caddy network name

    access_log /dev/stdout; # send all my logs to stdout for external capture
    error_log /dev/stdout warn;

    ## only allow accessing the following php files
    location ~ ^/(index|matomo|piwik|js/index|plugins/HeatmapSessionRecording/configs)\.php {
        # regex to split $uri to $fastcgi_script_name and $fastcgi_path
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # Check that the PHP script exists before passing it
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTP_PROXY ""; # prohibit httpoxy: https://httpoxy.org/
        fastcgi_pass php-handler;
    }

    ## deny access to all other .php files
    location ~* ^.+\.php$ {
        deny all;
        return 403;
    }

    ## disable all access to the following directories
    location ~ /(config|tmp|core|lang) {
        deny all;
        return 403; # replace with 404 to not show these directories exist
    }

    location ~ /\.ht {
        deny all;
        return 403;
    }

    location ~ js/container_.*_preview\.js$ {
        expires off;
        add_header Cache-Control 'private, no-cache, no-store';
    }

    location ~ \.(gif|ico|jpg|png|svg|js|css|htm|html|mp3|mp4|wav|ogg|avi|ttf|eot|woff|woff2|json)$ {
        allow all;
        ## Cache images, CSS, JS and web fonts for an hour
        ## Increasing the duration may improve the load time, but may cause old files to show after a Matomo upgrade
        expires 1h;
        add_header Pragma public;
        add_header Cache-Control "public";
    }

    location ~ /(libs|vendor|plugins|misc/user) {
        deny all;
        return 403;
    }

    ## properly display textfiles in root directory
    location ~ /(.*\.md|LEGALNOTICE|LICENSE) {
        default_type text/plain;
    }
}
# vim: filetype=nginx
# vim: filetype=nginx
The matomo-fpm name here should match the service name you give the new Matomo container in your Docker Compose file. For the set_real_ip_from caddy; entry, set caddy to the name of the Caddy service in your Docker Compose file.
Adding Nginx to the compose file
Changing directory to /data/caddy, I update the docker-compose.yml file to include an Nginx block thus.
  nginx-matomo:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - '8080:80'
    volumes:
      - /data/nginx-matomo/default.conf:/etc/nginx/conf.d/default.conf:ro
      - /data/nginx-matomo/nginx.conf.matomo:/etc/nginx/nginx.conf:ro
      - /data/matomo-fpm:/var/www/html
    networks:
      - web
      - internal
    depends_on:
      - matomo-fpm
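Nothing to start yet - the upstream matomo-fpm name in default.conf won't resolve until the FPM container from the next section exists - but once the full stack is up, the mapped config can be sanity-checked inside the container (container name assumed to follow the caddy_ project prefix used here):
docker exec caddy_nginx-matomo_1 nginx -t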
Preparing the Matomo container
This will involve:
- Adding the Matomo FPM image to the compose file
- Setting up the database container
- Creating the Database env file
- Disabling the current Apache based Matomo
- Replacing the Caddyfile entry and enabling the new chain
Q: I'm migrating - do I keep the contents of /data/matomo?
A: All my stats should live separately in the DB directory, so in theory the old Matomo-Apache volume isn't needed. Given my vanilla setup, and that this volume is mapped to the /var/www/html root, I will try a scratch build of the FPM application package and see if I need to copy anything over from the Apache one.
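If something does need carrying over, the main candidate is Matomo's config/config.ini.php, which holds the DB connection and trusted host settings. A sketch of copying it across once the FPM container has populated the new volume - paths assume the old volume was /data/matomo:
sudo cp /data/matomo/config/config.ini.php /data/matomo-fpm/config/
sudo chown 82:82 /data/matomo-fpm/config/config.ini.php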
Adding the Matomo FPM image to the compose file
So, the up-to-date images and names available are here on Docker Hub. I am going to pin to v3 but allow minor-release upgrades, so I choose matomo:3-fpm-alpine.
I create an entry in the docker-compose.yml file based on a developer example. Again, this could be new, a replacement, or an addition, depending on your situation.
  matomo-fpm:
    image: matomo:3-fpm-alpine
    restart: unless-stopped
    user: "82"
    links: # Links deprecated, assuming it's here to share vars
      - mariadb1
    volumes:
      - /data/matomo-fpm:/var/www/html
    environment:
      - MATOMO_DATABASE_HOST=mariadb1
      - VIRTUAL_HOST=
    env_file:
      - ./matomo-db.env
    networks:
      - internal
The database host for me will be the existing one; as I am replacing an existing install, I will keep it unchanged.
Setting up the Database container
I will be keeping the database, hence the legacy mariadb1 name, but I am changing the compose entry to reflect the env file construct in the GitHub examples. For the initial build under Traefik with more details, see the article here.
  mariadb1:
    image: mariadb:10
    command: --max-allowed-packet=64MB
    restart: unless-stopped
    networks:
      - internal
    volumes:
      - /data/mariadb1:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD:
    env_file:
      - ./matomo-db.env
Create the Matomo database env file
I didn't do this the first time around, but I am adding it now as it's better practice and aligns with the developer's examples.
With your favourite editor, create the matomo-db.env file in the same directory as your Docker Compose file, in my case /data/caddy. Having had a poke around my existing DB, I fear this will clash with some of my existing config, though I will change the database name to match.
If you are new-building, the mariadb1 entries below can simply be replaced with matomo, as per the GitHub examples - mariadb1 is just the network name that I adopted as the database name way back, so I will try to keep it.
MYSQL_PASSWORD=
MYSQL_DATABASE=mariadb1
MYSQL_USER=matomo
MATOMO_DATABASE_ADAPTER=mysql
MATOMO_DATABASE_TABLES_PREFIX=matomo_
MATOMO_DATABASE_USERNAME=matomo
MATOMO_DATABASE_PASSWORD=
MATOMO_DATABASE_DBNAME=mariadb1
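The blank password values need filling in with your own (an existing install must of course reuse its current DB password). For a fresh build, openssl will happily generate something strong:
openssl rand -base64 24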
Disabling the current Matomo-Apache
In the event you are also replacing an existing Matomo with this one, you'll need to halt the old containers first.
docker-compose stop matomo mariadb1
Then copy the compose file to a backup, and comment out or remove the old Apache-based entries.
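A minimal version of that backup step, run from the directory holding the compose file (/data/caddy in my case):
cd /data/caddy
cp docker-compose.yml docker-compose.yml.bak-$(date +%F)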
Create/Replace the Caddyfile entry and enable the new chain
We want to terminate TLS in the Caddy container for the public URL matomo.privateapps.techroads.org, and reverse proxy the traffic to Nginx, which is listening on port 80. Additionally, there are some security headers that can be added to address issues raised in the Webpagetest.org analysis. I will replace my two Caddy blocks like so.
matomo.privateapps.techroads.org { # Your front facing domain/subdomain
    reverse_proxy nginx-matomo:80 # The name of the Nginx block in the Compose file
    header { # Caddy example security header options
        # enable HSTS
        Strict-Transport-Security max-age=31536000;
        # disable clients from sniffing the media type
        X-Content-Type-Options nosniff
        # clickjacking protection
        X-Frame-Options SAMEORIGIN
        # keep referrer data off of HTTP connections
        Referrer-Policy no-referrer-when-downgrade
    }
}

www.matomo.privateapps.techroads.org { # Optional redirect of www to no-www
    redir https://matomo.privateapps.techroads.org{uri}
}
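Once the whole stack is up, the headers can be verified from the outside - swap in your own domain:
curl -sI https://matomo.privateapps.techroads.org | grep -iE 'strict-transport|x-content-type|x-frame|referrer-policy'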
Running the Matomo compose stack
There are myriad issues you might hit when you apply the compose update.
I found that the FPM version behaves like the WordPress FPM version, in that the files end up owned by www-data and the FPM process cannot use them. I originally got around this with a user: "33" entry. Note: in current versions I am running with the user: "82" entry you can see above, and it works so far.
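If you do hit the ownership problem, one way around it is to hand the mapped volume on the host to the UID the container runs as - 82 is www-data in the Alpine images, 33 in the Debian-based ones. A sketch; adjust the UID to whatever your image uses:
sudo chown -R 82:82 /data/matomo-fpm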
Logging is your friend. In the final compose file at the end you will see the log drivers I have added. These will not work without the accompanying config as per my earlier logging article, so if you are not setting up logging, leave the logging parts out. I would recommend getting into logging; those log files have helped me find problems more times than I can remember.
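Even without the syslog driver config, the stock container logs are worth a look when something misbehaves:
docker-compose logs --tail=100 matomo-fpm nginx-matomo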
Running the Matomo installer
I won't dive too much into running the installer here, there is tons on the Matomo web site, and an example in a previous post.
I can testify, however, that if you happen to have a database full of existing stats and you reinstall the app reconnecting with the same credentials, the installer will recognise that you have existing database tables and ask whether you would like to reuse or recreate them. That was lucky for me in my transition to the FPM version, as I could reuse them and keep my stats.
Performance with the console
So my whole reason for the rebuild was concern about Apache killing my server. Early tests are positive. I am on the console, cranking up reports and charts, and the memory footprint is consistently stable; I can't get it to go over 520MB in total.
With the MariaDB (MySQL) database hovering at 110MB, it's time to run it through my optimisotron... a few minutes later, and the database footprint is looking smaller at 40MB. My new table is thus.
NAME MEM USAGE / LIMIT MEM %
caddy_nginx-matomo_1 2.352MiB / 978.6MiB 0.24%
caddy_matomo-fpm_1 26.34MiB / 978.6MiB 2.69%
caddy_mariadb1_1 40.37MiB / 978.6MiB 4.13%
caddy_commento_1 12.02MiB / 978.6MiB 1.23%
caddy_postgresdb_1 10.94MiB / 978.6MiB 1.12%
caddy_techdbo_1 2.422MiB / 978.6MiB 0.25%
caddy_phpt_1 5.324MiB / 978.6MiB 0.54%
caddy_caddy_1 10.86MiB / 978.6MiB 1.11%
caddy_ghosttechdbc_1 77.88MiB / 978.6MiB 7.96%
Notably, the former Apache-based Matomo at 52MB has been replaced with the Nginx/FPM combo at 2MB + 26MB, so with the shrunken MySQL the overall server total is ticking over at around 460MB. I thought that might be the end of this tech road, but alas, no.
So then Matomo-FPM-Nginx killed my server too
Things were ticking over quite nicely, with the Matomo agents collecting away like they were before. Memory usage was below 500MB.
I decided to have a play around on the console, and try out a few graphs and different analytics views. As I did so, the memory usage started to creep up. 700MB... 800MB... until swapping started and the server was doomed. Doomed!
And then I found the solution. ARCHIVING.
In Matomo-speak, archiving is the generating and caching of report data. I discovered that in the default config, Matomo attempts to archive whenever data is accessed through the GUI. The trouble is that this is exactly what was crashing my server, so I had never had a successful archive run.
The efficient method is to run an automated job that archives regularly in the background. This is important to set up!
Create a cron job to archive
There are probably more elegant ways to schedule jobs in a container environment than by using local cron and docker exec, but this does work for me.
First, verify on the command line that you have the right syntax for archiving; this worked for me with Matomo v4.
docker exec -u 82 -w / caddy_matomo-fpm_1 sh -c "/usr/local/bin/php /var/www/html/console core:archive --url=https://matomo.privateapps.techroads.org"
In /etc/cron.hourly, with your favourite text editor, create a file called, say, matomo-archive, and add these contents.
Alter this example so that caddy_matomo-fpm_1 reflects the name of your Matomo container as listed by docker ps, and choose a suitable log file name.
#!/bin/sh
set -e
echo "Commencing Matomo archive `date`" >> /var/log/matomo-archives.log
/usr/bin/docker exec -u 82 -w / caddy_matomo-fpm_1 sh -c "/usr/local/bin/php /var/www/html/console core:archive --url=https://matomo.privateapps.techroads.org" >> /var/log/matomo-archives.log 2>&1
echo "Completed Matomo archive `date`" >> /var/log/matomo-archives.log
Make it executable: sudo chmod +x matomo-archive
Once you have passed the cron hourly time slot (check when that is in /etc/crontab), be sure to check the log file you designated. You should see archive activities.
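Checking can be as simple as grepping for the completion line written by the script:
grep "Completed Matomo archive" /var/log/matomo-archives.log | tail -5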
Disable GUI archive triggering
As recommended in the docs, go to System -> General Settings, set the browser trigger to No and the report interval to 3600 seconds, then Save.

Wrapping up
I will keep the FPM stack. Matomo is now super fast when filtering reports over longer time periods. The plan is to split off my "privateapps", being Matomo and Commento, onto their own server.
It's good practice anyway to keep the number of running containers per server on the low side, to keep the blast radius small. Nine is now feeling a bit high to me. It will also give me a bit of headroom to get some memory caching in there... more on that later.
It might also be a good time for a new Ubuntu 20.04 build, though AWS is being a bit slow in releasing a pre-rolled Lightsail image, so if I were to do that today I would have to manually upgrade from 18.04.
Matomo production readiness
Security
So, if you are actually going to build this, there are some typical considerations before planting your creation in the sticky swamp of the public internet. But hey, at least it's yours, and you're not feeding Google.
Probably most importantly, by default you are putting a console on the public internet to be munched on by every bot in the world. At the very least, choose a strong and unique password (I love Bitwarden for this), and a custom admin user name other than "admin". I believe there is 2FA available too.
Personally, I am a fan of IP whitelisting, but I haven't yet looked at it for the Matomo console. Use caution with this, as being too tight might block the Matomo tracking calls themselves, which hit the same URL from every visitor's browser and need to be widely accessible.
Performance
Bear in mind also that your Matomo server is going to be called every time a page loads on the sites you are tracking. That will have a performance and data transfer impact, if a small one. I would like to think this offsets the performance hit and privacy concerns of feeding Google Analytics.
If you are going to run this on a busy site, keep an eye on logs. While they are immensely valuable during build and testing, you might not need a huge log dump, or may have an issue with storage space.
If you haven't whitelisted, it might still be worth running your console link through webpagetest.org; sometimes the output can tell you things that need fixing.
Likewise, once you are injecting Matomo code into your public facing web site, have a look at those pages to see if there are any issues.
Resilience
If your site is important, get onto logging, backups and keeping your images up to date. At some future time I will go down a bit of a "poor man's monitoring" road.
The finished docker-compose.yml file
Note this includes the logging config and the my.cnf mapping. You may bork your system if you blindly cut and paste. Do I really need to tell you that? If in doubt, remove the logging and my.cnf bits.
version: "3"

networks:
  web:
    external: true
  internal:
    external: false
    driver: bridge

services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    logging:
      driver: syslog
      options:
        tag: docker-caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /data/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /data/caddy/data:/data
      - /data/caddy/config:/config
    networks:
      - web

  # Lots of other unrelated stacks removed from here

  nginx-matomo:
    image: nginx:alpine
    restart: unless-stopped
    logging:
      driver: syslog
      options:
        tag: docker-nginx-matomo
    ports:
      - '80'
    volumes:
      - /data/nginx-matomo/default.conf:/etc/nginx/conf.d/default.conf:ro
      - /data/nginx-matomo/nginx.conf.matomo:/etc/nginx/nginx.conf:ro
      - /data/matomo-fpm:/var/www/html
    networks:
      - web
      - internal
    depends_on:
      - matomo-fpm

  matomo-fpm:
    image: matomo:3-fpm-alpine
    restart: unless-stopped
    user: "82"
    logging:
      driver: syslog
      options:
        tag: docker-matomo-fpm
    links: # Links deprecated, assuming it's here to share vars
      - mariadb1
    volumes:
      - /data/matomo-fpm:/var/www/html
    environment:
      - MATOMO_DATABASE_HOST=mariadb1
      - VIRTUAL_HOST=
    env_file:
      - ./matomo-db.env
    networks:
      - internal
    depends_on:
      - mariadb1

  mariadb1:
    image: mariadb:10
    command: --max-allowed-packet=64MB
    restart: unless-stopped
    logging:
      driver: syslog
      options:
        tag: docker-mariadb1-matomo
    networks:
      - internal
    volumes:
      - /data/mariadb1:/var/lib/mysql
      - /data/mariadb1/my.cnf:/etc/mysql/my.cnf # If you have one!
    environment:
      MYSQL_ROOT_PASSWORD:
    env_file:
      - ./matomo-db.env
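With the file in place, the stack comes up (or is recreated), and memory can be re-checked with the same kind of stats snapshot as before:
cd /data/caddy
docker-compose up -d
docker stats --no-stream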
Main photo courtesy of Markus Spiske on Unsplash.