Save yourself from a disaster #5: Redundancy of Web Servers

This is the fifth part of the series Save yourself from a disaster: Redundancy on a budget.

How can we make sure our second most important asset, the web servers, is kept safe in case of a disaster?

There are mainly three things we could do:

  • Duplicate VM
  • Docker
  • Kubernetes

Disclaimer
This guide won’t cover everything and isn’t meant to be comprehensive; the steps shown need to be carefully reviewed and tested in your development/pre-production environment. I don’t take any responsibility for any damage, interruption of service, or leak/loss of data resulting from the use of the instructions in this ebook (nor from any external website I’ve mentioned).

Duplicate VM

Nowadays many cloud providers (and virtualization platforms) give you the option to take a snapshot of a VM and then restore or clone it. I won’t cover it in this tutorial, as it would increase the overall cost of the infrastructure. Still, depending on the application, cloning the VM can sometimes be much faster than the other methods I’m proposing below.

Docker

We live in 2021: everyone is running containers and wishing for a k8s cluster to play with. So let’s convert our simple applications into containers; there are plenty of ready-made images on Docker Hub.
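
For example, running one of those prebuilt images is a single pull and run away. A quick local test might look like this (it will still need database settings, as shown in the compose file later):

docker pull wordpress:5.7.0
docker run -d -p 8000:80 wordpress:5.7.0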

Docker Swarm

Let’s start nice and easy, with Docker Swarm (which eliminates the extra complexity of Kubernetes) on ONE node (then we can scale out as much as we like).

First, set up your nodes. I’m going to use standard images for my dockerized infrastructure, no custom images (for now, since my configurations are pretty simple). I’ve picked Bitnami images, as they cover a lot of scenarios and provide pre-packaged images for most of the popular server software (there are more reasons to pick them).
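
As a minimal sketch, bootstrapping the single-node swarm looks like this (the advertise address is a placeholder; the join command printed by init is what you would later run on any extra nodes):

docker swarm init --advertise-addr <NODE_IP>
docker node ls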

If you really want to start using custom images, you could publish them publicly for free on Docker Hub (which has recently introduced some limitations) or on Canister. After Docker Hub announced its pull-rate limits, AWS decided to offer public repositories as well (and they are almost free if you don’t exceed 500 GB/month when not logged in, or 5 TB/month when logged in).
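
If you do go down the custom-image route, publishing works the same way on any of these registries; a sketch against Docker Hub (the repository name my-user/my-nginx is hypothetical):

docker build -t my-user/my-nginx:1.0 .
docker login
docker push my-user/my-nginx:1.0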

Docker Compose

This is an example of a WordPress website configured with docker-compose:

version: "3.9"

services:
  wordpress:
    image: wordpress:5.7.0
    ports:
      - 8000:80
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    extra_hosts:
      # Make the Docker host reachable as host.docker.internal (requires Docker 20.10+)
      - "host.docker.internal:host-gateway"
    environment:
      # The database runs on the Docker host; the *** values are placeholders for your credentials
      WORDPRESS_DB_HOST: host.docker.internal:3306
      WORDPRESS_DB_USER: ***
      WORDPRESS_DB_PASSWORD: ***
      WORDPRESS_DB_NAME: ***
    volumes:
      - /path/to/wp-content:/var/www/html/wp-content
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
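
Assuming the file is saved as docker-compose.yml, the stack can then be deployed to the swarm and checked with:

docker stack deploy -c docker-compose.yml wordpress
docker service ls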

Ingress

When using Docker Swarm with lots of containers and services (each of which binds a dedicated port), you’ll need an ingress system to route the requests to the right service. You could use one of the two most popular solutions: Nginx or Traefik.
I decided to use a simple bitnami/nginx with a custom config (a pretty straightforward proxy; a sketch of the lb.conf is shown after the compose file below):

version: "3.9"

services:
  client:
    image: bitnami/nginx:1.19.8
    ports:
      # bitnami/nginx runs as non-root and listens on 8080/8443 inside the container
      - 80:8080
      - 443:8443
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      # Custom server block with the proxy rules (see the lb.conf sketch below)
      - /root/docker-compose/nginx/lb.conf:/opt/bitnami/nginx/conf/server_blocks/lb.conf:ro
      # Let's Encrypt certificates (see TLS Termination below)
      - /etc/letsencrypt:/etc/letsencrypt

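The exact contents of lb.conf depend on your services. As a rough sketch (the domain and certificate paths are placeholders; the backend port 8000 matches the WordPress example above), a plain proxy with TLS could look like this:

# lb.conf (hypothetical example): proxy example.com to the WordPress service published on port 8000
upstream wordpress_backend {
    server host.docker.internal:8000;
}

server {
    listen 8443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://wordpress_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Redirect plain HTTP to HTTPS
server {
    listen 8080;
    server_name example.com;
    return 301 https://$host$request_uri;
}
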
TLS Termination

This is the tricky part. If you have already bought the certificates (e.g. from SSLs), you’re good for at least a year. If you don’t want to buy them and prefer to rely on Let’s Encrypt, be ready to sweat a bit to set it up. Setting it up on one node is pretty simple, but if you need to replicate it across multiple nodes you have to start getting creative.
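
On a single node, obtaining the certificate with certbot is typically a one-liner; a sketch, assuming the placeholder domain example.com and that port 80 is temporarily free for the standalone challenge:

certbot certonly --standalone -d example.com -d www.example.com \
  --agree-tos -m admin@example.com --non-interactive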

One proposed solution is to have a primary node that generates (or renews) the certificate(s) and then pushes them to the other servers:

rsync -e "ssh -i $HOME/.ssh/somekey" -auv --progress /etc/letsencrypt/ syncerssl@<IP2>:/etc/letsencrypt
rsync -e "ssh -i $HOME/.ssh/somekey" -auv --progress /etc/letsencrypt/ syncerssl@<IP3>:/etc/letsencrypt
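
To keep this hands-off, the renewal and the sync can run from cron on the primary node (the schedule, key path and file layout below are just an example):

# /etc/cron.d/letsencrypt-sync (hypothetical): renew, then push the certificates to the other nodes
0 3 * * * root certbot renew --quiet && rsync -e "ssh -i /root/.ssh/somekey" -au /etc/letsencrypt/ syncerssl@<IP2>:/etc/letsencrypt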

Kubernetes

Kubernetes is more complex and requires more time to configure, but once it’s done there is little vendor lock-in (many providers offer managed k8s), and it is more extensible than Swarm (at the cost of that extra complexity).
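
For reference, the WordPress service from the compose file above maps roughly onto a Deployment plus a Service. A minimal sketch (the database host mysql:3306 is hypothetical, and the *** credentials are placeholders just like in the compose example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5.7.0
          ports:
            - containerPort: 80
          env:
            # Point this at your dockerized or managed database
            - name: WORDPRESS_DB_HOST
              value: "mysql:3306"
            - name: WORDPRESS_DB_USER
              value: "***"
            - name: WORDPRESS_DB_PASSWORD
              value: "***"
            - name: WORDPRESS_DB_NAME
              value: "***"
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80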

If you already have a Docker Swarm cluster and want to migrate, try following these guides:

Remember to either use a dockerized database or rely on cloud-native managed solutions.


The next post will be about Redundancy of DNS. Stay tuned!

Check out the full version of this post in the ebook.