Putting Multiple WordPress Containers into Production – Proxy Container

written by William Patton on March 28, 2017 in Docker

This is a follow-on to my adventures putting Docker containers into production. The previous article covered building WordPress containers for production. This article deals with how to run multiple WordPress instances on a single host – by using a proxy.

I discovered running multiple WordPress sites using separate Docker containers in production wasn’t hard at all once you had an overview of the basic ideas involved. To break it down simply – WordPress sites and their databases run in groups that are started with docker-compose. They don’t have their ports exposed to the Host OS. Instead users connect to them through a separate proxy container, capable of connecting to several WordPress containers. The proxy container does expose its ports on the host.

The proxy allows us to run multiple WordPress containers on the same machine, and lets each bind to the ports it desires on its own private – Docker-assigned – IP address, without causing port collisions on the Host OS.

Things To Know Before Starting

The ideas explored here to run multiple applications behind a proxy aren’t solely applicable to WordPress. Any kind of standard app that responds to http requests could be used instead. By the end of the article you will have multiple WordPress instances running through a reverse proxy and the ability to add more running WordPress containers in just a few moments.

The prerequisites to follow this guide are:

  • At least 1 WordPress Instance you would like to run, preferably 2 or more.
  • A domain, or subdomain, you can point at each site, and access to an external DNS service you can point them from.
  • A Machine to act as a Docker Host. It needs to have ports 80 and 443 free to run on default configurations.

A few other things to note: each container group will have its own directory and all of the containers will run on the same private network created and managed by Docker.
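As a rough illustration, the layout on the host might look something like this (the paths and directory names are only an example, not a requirement):

/srv/docker/
├── nginx-proxy/          # the proxy group's docker-compose.yml lives here
├── example.com/          # one directory per WordPress group
└── another-site.org/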

Your hosting box could be anything capable of running Docker; mine is a VPS running Ubuntu 16.04 with the latest Docker Engine. I covered Installing Docker on Ubuntu in a previous article. The host needs ports 80 and 443 free so the proxy image can bind to them on the Host OS.

The overarching idea of proxying through Docker is easy to understand when visualized.

Users connect to the host machine on the ports they expect – either 80 or 443. Those connections are bridged through to the proxy container which knows how to talk to the WordPress containers running behind it.

Container Groups To Be Run In Production

There are 2 main groups of containers that are to be run on the production server.

  • The proxy group – This group contains 4 individual images.
    • An instance of NGINX.
    • An image to expose a whoami service.
    • A docker-gen image used to re-write config files.
    • An image to get LetsEncrypt certificates.
  • The WordPress group – it contains 3 images.
    • A WordPress Image – I use a slightly custom build of the official WP image to make memcached available with PHP 7.1. The official image or any custom one you may have should work.
    • A Database container – I use the MariaDB image here but the official MySQL image is configured, and works, exactly the same.
    • An image for running WP-CLI on the WP instance.

The Reverse Proxy Container

The container that will act as a proxy is a specially configured NGINX service. By and large the actual installation and base configuration of NGINX is a very close match to the upstream package.

The customization is handled by rewriting config files and creating vhosts that map to each of the WordPress containers. Once the proxy is running none of the NGINX configuration changes are done manually, they are handled by the docker-gen image and a template.

So that it’s possible to update configuration files on the fly through other containers in the group, we mount some volumes holding the files NGINX uses so they can be shared with the other containers in the group.

The volume configuration and template file are crucial to automating the various config changes needed when starting a new WordPress instance or when changes are made.

Additional Images In the Proxy Group

There are 3 other images used in the proxy group.

  1. whoami – This is a simple image used to return a container ID. Here it is used only as an easy way to test if the vhost mapping works correctly.
  2. docker-gen – Used to rewrite config files based on a provided template.
  3. letsencrypt-nginx-proxy-companion – This image initiates a connection to the LetsEncrypt service to complete the necessary steps in requesting a certificate, storing it in a shared volume then making the necessary changes to the NGINX config to enable it for the domain.

The LetsEncrypt companion image automatically gets SSL certificates for sites running through the proxy. This way the proxy can secure the connection between it and the end-user.

Docker-gen is a clever little image. While limited in the scope of what it does, it is extremely useful. All it does is write files based on a template. It fills that template with information from the environment variables of containers you start and other information it queries about the containers. It’s used here to build configuration files with the correct domains and forwarding addresses and to add the references to the domain certificate to ensure secure connections.
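To give a rough idea of how that works, here is a heavily simplified sketch of what a docker-gen template can do – it is not the real nginx.tmpl shipped with nginx-proxy, which handles many more cases. It groups containers by their VIRTUAL_HOST environment variable and emits an upstream and server block for each domain:

{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
upstream {{ $host }} {
  {{ range $container := $containers }}
  {{ $address := index $container.Addresses 0 }}
  server {{ $address.IP }}:{{ $address.Port }};
  {{ end }}
}
server {
  server_name {{ $host }};
  location / {
    proxy_pass http://{{ $host }};
  }
}
{{ end }}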

Make a Network – Communication isn’t Hard

By default docker-compose puts container groups into their own private network and bridges it on the host. The NGINX image has ways of working within this but for the sake of ease we’ll negate this issue by placing NGINX and the WordPress containers on the same private network. All of our containers will run on this same network.

Before starting any containers make the network (you only have to do this once and you can run the command from any directory).

docker network create nginx-proxy
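You can confirm the network exists, and later see which containers have joined it, with:

docker network ls
docker network inspect nginx-proxy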

The NGINX-proxy image

The first group of images we want to get running is the proxy and the rest of the supporting containers.

A good starter compose file is present in the repo of the proxy image that I used. Clone it and enter the directory. You will see there are 2 different docker-compose files. We’ll be using the compose file for separate containers. Remove the existing docker-compose.yml file (it is for a single container with docker-gen embedded) and rename the separate-containers file to docker-compose.yml.
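If you haven’t cloned it yet, something like this will do – assuming the repo lives at github.com/jwilder/nginx-proxy, which is where the image was built from at the time of writing:

git clone https://github.com/jwilder/nginx-proxy.git
cd nginx-proxy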

cp docker-compose.yml docker-compose-single-container.yml
rm docker-compose.yml
cp docker-compose-separate-containers.yml docker-compose.yml
nano docker-compose.yml

Edit the compose file so it matches this:

version: '2'
services:
  nginx:
    image: jwilder/nginx-proxy
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx/conf.d
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro

  dockergen:
    image: jwilder/docker-gen
    container_name: dockergen
    command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
#      ACME_CA_URI: https://acme-staging.api.letsencrypt.org/directory
      NGINX_DOCKER_GEN_CONTAINER: dockergen
    container_name: nginx-letsencrypt
    volumes_from:
      - nginx
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

There are 3 sets of modifications compared to the original.

  1. At the end I’ve configured it to use the nginx-proxy network.
  2. I added the nginx-letsencrypt service.
  3. I am mounting some additional volumes in some of the services. Pay attention to what is mounted and where.

If you are testing you can bypass the 5 certs per week per domain limit and still use Let’s Encrypt and SSL during testing – just uncomment the ACME_CA_URI environment variable in the nginx-letsencrypt service to use the staging endpoint.

Before you start up the server, note that the proxy will try to bind to ports and expose them on the Host OS. Those ports need to be available for use on the public facing network interface.

On first run the image for grabbing certificates needs to generate a Diffie-Hellman group file used in key generation. That may take a few minutes but it’s a one-time thing, so let it generate.

# from inside the directory of your nginx-proxy group
docker-compose up

Keep this terminal open so you can see the output when you start the next set of containers.
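At this point you can use the whoami service to check the vhost mapping is working before any real sites are attached. Spoofing the Host header from the Docker host itself should return the whoami container’s ID – the exact output will vary:

curl -H "Host: whoami.local" http://localhost/
# expected output looks something like: I'm 1ae273bce0e4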

Note about DNS

Undoubtedly we could handle DNS with another group of containers but I am going to handle the DNS externally, in the way I am used to. Many hosting and domain providers have a DNS service you can access. Handle the DNS however you choose but the domain is going to need to point to the IP address of the machine that is acting as the Docker host.

Additionally each database container will need to be assigned a unique hostname, using links, inside the WordPress containers. This is because each WordPress container references the database host by its hostname and, since they are all started with the hostname mysql, the Docker networking layer produces round-robin behaviour. To ensure each WordPress container speaks to its own database container, each database needs a unique hostname. With multiple database containers and replication set up you could take advantage of that round-robin and use it as load balancing, but here we are using just a single DB container for each site.

The WordPress Container

The next thing to do is get a docker-compose file together for running WordPress sites.

The WordPress image I’m using here I built with memcached support running PHP 7.1. A guide for building that image is in the previous article about building WordPress containers for production. You could use the official WP image here if you preferred and it would work exactly the same way.

There is another custom image here for adding WP-CLI support. That image is built as somewhat of a container-as-a-tool, rather than as a service. It does not remain running when it’s not in use. You run it and pass the desired commands, it performs its tasks and provides output, then it stops.

When you start these containers with compose it will pull the images, pre-built, from the Docker Hub on the production server.

version: '2'

services:

  wordpress:
    image: pattonwebz/wordpress-php7-1-apache-memcached
    ports:
      - 80
    environment:
      WORDPRESS_DB_USER: database
      WORDPRESS_DB_PASSWORD: kgB7yJCwGYq2jeQH
      WORDPRESS_DB_NAME: wp_database
      WORDPRESS_TABLE_PREFIX: wp_
      WORDPRESS_DB_HOST: mysql_suffix:3306
      VIRTUAL_HOST: example.com,www.example.com
#      VIRTUAL_PROTO: https
      LETSENCRYPT_HOST: example.com,www.example.com
      LETSENCRYPT_EMAIL: user@example.com
    volumes:
      - data_volume:/var/www/html
      - ./home/wp:/home/wp
    links:
      - mysql:mysql_suffix

  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: wp_database
      MYSQL_USER: database
      MYSQL_PASSWORD: kgB7yJCwGYq2jeQH
    volumes:
      - db_data:/var/lib/mysql
      - ./home/db:/home/db

  wp:
    image: pattonwebz/docker-wpcli
    volumes_from:
      - wordpress
    links:
      - mysql:mysql_suffix
    entrypoint: wp
    command: "--info"

volumes:
    db_data:
    data_volume:

networks:
  default:
    external:
      name: nginx-proxy

Make sure you replace the DB user’s password in the file with a unique one. Because this file contains a plain text password it needs to be stored somewhere secure. The alternative would be to use docker-secrets to add secure passwords. That’s part of a future article.

The main difference between this file and the one from the previous article is the addition of a network (it’s the same network as used by the nginx-proxy group) and 4 lines in the WordPress service. These 4 lines are used in combination with the proxy container.

      VIRTUAL_HOST: example.com,www.example.com
#      VIRTUAL_PROTO: https
      LETSENCRYPT_HOST: example.com,www.example.com
      LETSENCRYPT_EMAIL: user@example.com

The LETSENCRYPT_HOST should be the same as the VIRTUAL_HOST, which should be changed to your domain name. Also add the correct email address you want tied to the certificate issued from Let’s Encrypt. You can add multiple hosts; here we have the root domain and the www subdomain.

The default connection method from nginx-proxy works with the lines above and requires no additional configuration to be up and running for anything that serves content over http or https. If you had a base image that served through https only then tell nginx to only connect to it with https by uncommenting the VIRTUAL_PROTO: https line.

Remapping the Database Hostname to be Unique

Each container set you run will need to have a unique MySQL hostname mapped inside the WordPress container. In the example code above anywhere that uses mysql_suffix should be replaced with a unique hostname for that container set. That means updating the mysql_suffix in these lines to something unique.

    links:
      - mysql:mysql_suffix

And then updating the db host line that wordpress uses to match the new hostname you linked to the db container.

WORDPRESS_DB_HOST: mysql_suffix:3306
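One quick way to handle that when creating a new site directory is a find and replace over the compose file – for example with sed (the suffix name here is only an illustration):

# replace every mysql_suffix with a per-site hostname, e.g. mysql_examplecom
sed -i 's/mysql_suffix/mysql_examplecom/g' docker-compose.yml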

Starting A WordPress Container in Production

The above docker-compose.yml file is enough to get a fresh WordPress site running, ready to have a username added and the web-based install performed. It handles data persistence between sessions using volumes and has a local directory mounted inside the containers’ home directories to make passing files between the container and the Host OS painless.

So long as you have the nginx-proxy group of containers running, when you start a container it’ll check for the added environment variables like VIRTUAL_HOST and docker-gen will write its config files. If the LETSENCRYPT environment variables are set it’ll queue it for a certificate check and fetch. If a change has occurred to the configs then a restart will be triggered on NGINX.

# from inside your WordPress instance folder
docker-compose up

If you watch the proxy group terminal while the WordPress containers start you will see several messages about it detecting new containers. There will be lines about containers starting and stopping. Not all of the containers need action so don’t worry if most of them show a stopped status.

The output from the WordPress group on first run will show the container has to grab an archive containing WordPress, unpack it, and also create its database. This takes a few moments, and you may see some MySQL connection errors during the start-up. Subsequent starts will be a lot faster.

NOTE: If your domain doesn’t already point to the host you are running the site on then the Let’s Encrypt certificate challenge will fail and no certificate will be issued for the site.

If your domain is already pointed at the correct IP you can access the site right now at its domain. It’s live.

If not, you could add an entry to your local hosts file and point it there. For example, IP 192.0.1.251 for domain example.com would look something like this:

192.0.1.251 example.com
192.0.1.251 www.example.com
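If you would rather not edit the hosts file, you can also spoof the Host header with curl to spot-check that the proxy routes the domain to the right container (the IP and domain here match the example above):

curl -I -H "Host: example.com" http://192.0.1.251/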

Configuring the WordPress instance – A Fresh Site

Let’s say you were starting a fresh site. The quickest way to get it running is by using the included WP-CLI tool available through each group’s wp service.

If you enter the directory of a WordPress instance you can issue WP-CLI commands as if you were local inside the container. You are able to run any WP-CLI command you want – you just need to prefix it with docker-compose run --rm.

After the prefix you can run commands exactly like you would usually. The service is named wp, which matches the name usually given to the WP-CLI PHP executable when it’s added to a user’s PATH.

# this will print the info wp-cli usually gives so you can confirm it is working
docker-compose run --rm wp
# this will reset the database
docker-compose run --rm wp db reset --yes

You can enter this series of commands at the terminal sequentially or add them to a script that runs them all with a single command. I suggest you use a script for the sake of efficiency. A script like this does a lot of things: it first resets the database and configures a fresh instance of WordPress – based on the values provided. It then runs a core update, gets some plugins and a theme, then does some work to regenerate images and the permalink structure.

#!/bin/bash

docker-compose run --rm wp db reset --yes
# change values on this line
docker-compose run --rm wp core install --url=http://example.com --title="This is the site title" --admin_user=admin --admin_password=admin --admin_email=example@example.com
# change value on this line
docker-compose run --rm wp option update blogdescription "This is the tagline."
docker-compose run --rm wp core update
docker-compose run --rm wp plugin install customizer-theme-resizer jetpack --activate
docker-compose run --rm wp plugin update --all
# set your theme choice on this line
docker-compose run --rm wp theme install https://downloads.wordpress.org/theme/best-reloaded.0.14.0.zip --activate
docker-compose run --rm wp media regenerate --yes
docker-compose run --rm wp rewrite structure '/%year%/%monthnum%/%postname%' --hard

In the script above replace the setup details with your own – the url, title, email etc. Also update the theme url with the download link of a theme of your choosing. In the example above I picked a random new theme from the WPORG repo. You can install any valid theme so long as it is available at the url you provide.

Save it in a file called fresh.sh and execute it like this:

# create the file and open for edit
nano fresh.sh
# enter the script from above replacing your own values where needed and save it
# set the script's execute bit and run it
sudo chmod +x fresh.sh
sh fresh.sh

When it’s complete you will have a freshly installed WordPress site ready to add your content.

Configuring the WordPress instance – A Site Import

A more realistic situation is that you’re importing an existing site into a WordPress group. You will need to pull over your files and the database from the old host. We have volumes mounted inside the home directories for convenience to make this easy. Through a shell inside the containers we’ll be able to extract any archives and run the import tasks.

First you’ll need to grab the site files – usually just the wp-content directory. Remember the container installs a recent WP version and you can use WP-CLI to update core, so that saves you transferring core files during the migration. You’ll also need a copy of the database. A mysqldump file is fine, or you could use phpMyAdmin or any other MySQL management tool. You’ll be able to unzip or otherwise decompress files inside your containers, so archive them up to save on data transfer and time.
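How you pull the files across is up to you, but as a rough sketch (the paths, hostnames and bracketed values here are placeholders): dump the database and zip up wp-content on the old host, then copy both into the directories mounted into the new site’s containers.

# on the old host
mysqldump -u [db_username] -p [db_name] > db_importfile.sql
zip -r wp-content.zip wp-content
# copy to the Docker host, into the directories mounted at /home/wp and /home/db
scp wp-content.zip user@dockerhost:/path/to/site/home/wp/
scp db_importfile.sql user@dockerhost:/path/to/site/home/db/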

Inside your WordPress container directory you’ll have a home folder. Inside will be directories that are mounted inside the WordPress container and the database container respectively. Save the files you’ll need in each container to the correct directory and let’s start the import. In the following commands you will be connecting by container name. You can check the names of currently running containers using the docker ps command.

#Remember to use your own container name here and replace any other items inside [square brackets] with your own values.
docker ps
sudo docker exec -it [wordpress_container] /bin/bash
# from inside the container
cd /home/wp/
unzip [wp-content.zip]
# note: there might be permission issues to deal with due to running cp as root
cp wp-content/* /var/www/html/wp-content/ -r
# set correct user/group on the moved files to prevent any issues with root owned files
chown www-data:www-data /var/www/html/wp-content/ -R

Importing the database is a simple mysql command. Just give it a username, tell it to prompt for password, set the database and direct the output of the sql file to it. It’s a one liner but it can take some time for large imports.

#Remember to use your own container name here and replace any other items inside [square brackets] with your own values.
docker ps
sudo docker exec -it [db_container] /bin/bash
# from inside the container
cd /home/db
mysql -u [db_username] -p [db_name] < [db_importfile.sql]

Your site import should be finished and accessible via the domain (if you’ve already pointed it, that is). You could always use the hosts file trick as a temporary way to access it via the domain and test that it functions.
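If the site’s URL changed as part of the move you’ll probably also want to run a search-replace over the database. A sketch using the group’s wp service (the domains here are examples):

docker-compose run --rm wp search-replace 'http://old-domain.com' 'https://example.com' --skip-columns=guid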

If you wanted to ensure WordPress core and the plugins were at their latest versions at this point you could run this command from the WordPress instance folder.

docker-compose run --rm wp core update
docker-compose run --rm wp plugin update --all

Wrapping up WordPress Docker Containers In Production

So as it happens, running WordPress in production with an infrastructure built around Docker wasn’t nearly as hard as it sounds. Once you understand the overview shown in the visualization it just becomes a case of deciding on your base configurations. Once you reach the point of deployment it takes only seconds to get a fresh instance. And that’s totally scalable. You can continue to spin up more instances in just a few seconds, and you can stop them even faster.

Containerized infrastructure built from individual isolated services is where a lot of momentum in the web development world is focused. Deploying WordPress containers in this way through a proxy and relying on Docker to run your services in isolation is a great way to get to grips with many of the overarching concepts of working with Docker.