Save yourself from a disaster #1: Secure the Database

This is the first part of the series Save yourself from a disaster: Redundancy on a budget.

How can we make sure our most important asset (the database) is safe in case of a disaster?

There are mainly two things we can do (and you'd better do both of them):

Disclaimer
This guide won’t cover everything; it isn’t a comprehensive reference, and the steps shown need to be carefully reviewed and tested in your development/pre-production environment first. I don’t take any responsibility for any damage, interruption of service, or leak/loss of data resulting from the use of the instructions in the ebook (or of any external website I’ve mentioned).

Backups

Start taking DB backups (with mysqldump or xtrabackup) and define a policy for RTO and RPO, so you know what the accepted loss is (there’s always some loss, even if very minimal). RTO defines how long the infrastructure can be down; RPO defines how much data you can afford to lose (i.e. how old the latest backup can be).
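As a sketch, a minimal full dump could look like this (assuming a MySQL user whose credentials are stored in `~/.my.cnf`, so the password never appears on the command line):

```shell
# Dump all databases in one consistent snapshot (InnoDB) and
# compress the result; credentials are read from ~/.my.cnf.
mysqldump --defaults-extra-file="$HOME/.my.cnf" \
  --single-transaction --all-databases \
  | gzip -9 > /var/backups/daily/alldb.sql.gz
```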

Rotation

To rotate the DB backups we can simply use logrotate.
We can start with a basic daily rotation (or whatever interval you have defined as your RPO):

/var/backups/daily/alldb.sql.gz {
  notifempty
  daily
  rotate 7
  # the dump is already gzipped, so logrotate must not compress it again
  nocompress
  create 640 root adm
  dateext
  dateformat -%Y%m%d-%s

  postrotate
    # recreate the dump after the previous one has been rotated away;
    # --single-transaction takes a consistent snapshot (InnoDB) without
    # locking the tables. $USER and $PASSWD must be defined in the
    # script's environment -- better yet, move the credentials to a MySQL
    # option file so the password never shows up in the process list.
    mysqldump -u$USER -p$PASSWD --single-transaction --all-databases | gzip -9f > /var/backups/daily/alldb.sql.gz
  endscript
}
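Once the stanza is saved (say as `/etc/logrotate.d/db-backup`; the file name here is just an example), a dry run lets you check it before it ever touches real data:

```shell
# -d parses the config and prints the planned actions
# without rotating anything or running the scripts.
logrotate -d /etc/logrotate.d/db-backup
```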

This creates the rotated DB backups on the same server where logrotate runs (most likely the DB instance itself). That is a single point of failure: if the server is lost, the backups are lost with it, so you must always store copies somewhere else (and also offline).

Remote Storage
With a simple change (a `lastaction` block in the same logrotate stanza), we can upload the rotated backups to an AWS S3 bucket, using a cold storage class meant for rarely accessed data:

  lastaction
    BUCKET="..."
    REGION="eu-west-1"
    # upload only the rotated, date-stamped dumps (the dateformat above
    # produces names like alldb.sql.gz-YYYYmmdd-epoch), not the live file
    aws s3 sync /var/backups/daily "s3://$BUCKET/daily/" --region "$REGION" --exclude "*" --include "*.gz-*" --storage-class GLACIER
  endscript
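Keep in mind that objects stored in the GLACIER class must be restored before they can be downloaded again; a sketch (the object key is a made-up example following the dateformat above):

```shell
# Ask S3 to make the Glacier object retrievable for 7 days;
# the download works only after the restore job completes.
aws s3api restore-object \
  --bucket "$BUCKET" \
  --key "daily/alldb.sql.gz-20240101-1704067200" \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
```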

Local Storage

Just run rsync (ideally on a schedule) to download the backups to a local external hard drive:

rsync -e "ssh -i $HOME/.ssh/id_rsa" --progress -auv <USER>@<IP>:/var/backups ./path/to/backups
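"On a schedule" can be as simple as a cron entry on the machine the external drive is attached to (user, host and mount point are placeholders):

```shell
# crontab -e: pull the remote backups every day at 03:30
30 3 * * * rsync -e "ssh -i $HOME/.ssh/id_rsa" -au <USER>@<IP>:/var/backups /mnt/external/backups
```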

There you go: you now have backups on-site (for faster restores), remotely on another provider (for more reliability), and offline (for more peace of mind).

Security

Remember the good practices, and don’t forget about GDPR: the backups must be stored encrypted at rest (and use a key instead of a plain password).
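One way to do this, for example, is to pipe the dump through GPG with a public key, so the backup never touches disk unencrypted and no decryption secret lives on the backup host (`backups@example.com` is a placeholder for your own key):

```shell
# Encrypt with a public key: only the holder of the matching
# private key (kept elsewhere) can read the backup.
gzip -9 < alldb.sql | gpg --encrypt --recipient backups@example.com \
  > /var/backups/daily/alldb.sql.gz.gpg
```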

Restore

Once everything is backed up, you need to think about how to restore the dump properly, or at least how to switch the connection to another node. I’ll cover this in the Disaster Recovery Plan post.
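When the time comes, restoring from the compressed dump is essentially the reverse pipeline (credentials again assumed to live in `~/.my.cnf`):

```shell
# Decompress the dump and replay it against the target server.
gunzip -c /var/backups/daily/alldb.sql.gz \
  | mysql --defaults-extra-file="$HOME/.my.cnf"
```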


The next post will be about Secure the Storage. Stay tuned.

Check out the whole version of this post in the ebook.