I recently moved to a new dedicated server and decided it was also a good time to start doing things “the right way”™. A good backup strategy was especially needed.
Most articles I found on the net explain how to back up your data, and they do it well. But they leave out something essential that might someday become a real issue when disaster strikes. Main disk crash? Yes, you know what I mean.
Let me introduce the Duplicity command-line utility. It supports multiple storage backends, including S3, FTP and SFTP, as well as regular mounted folders. It supports incremental backups and automatic removal of older archives. Last but not least: all archives are fully encrypted by default!
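To give you a quick taste before we go further, here is about the smallest possible invocation; the folder and server are placeholders, not my real setup:

export FTP_PASSWORD="<ftp password>"
export PASSPHRASE="<encryption key>"
# the first run produces a full backup; subsequent runs are incremental
duplicity /home/me "ftp://login@server.tld/backup/me"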
That’s enough words.
- Define the backup frequency. I use daily for my server, weekly for my personal computer
- Define the full backup frequency. I use one per month
- Define the full backup lifetime. I use 6 months as this data is not too critical
- Define the incremental backup lifetime. I use 1 month
Let’s rephrase all this in plain English: back up my data every single day. Every month, restart the backup from scratch with a fresh full copy. Keep a full month of daily history. For anything older, keep only the monthly full copy.
Incremental backups help save space on the remote storage but slow down recovery, as every intermediate archive back to the previous full backup has to be read.
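In duplicity terms, those four decisions boil down to three commands run one after the other. Here they are in isolation (the folder and URL are placeholders); the full script below wires them together:

TARGET="ftp://login@server.tld/backup/myfolder"
# daily run: incremental, or a new full backup if the last full is older than 1 month
duplicity --full-if-older-than 1M /some/folder "$TARGET"
# delete backup chains older than 6 months
duplicity remove-older-than 6M --force "$TARGET"
# keep incremental archives only for the most recent full backup
duplicity remove-all-inc-of-but-n-full 1 --force "$TARGET"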
Here is my generic backup script. It is fully configurable and automatically walks through /root/server/backup.d looking for target files. These are trivial files, each containing the full path of a single folder to save. The name of the file determines the target subfolder on the remote storage.
#!/bin/bash
# File: /root/server/backup.sh
#
# To back up a set of folders, put each folder's full path
# in a file in backup.d. There may be only one folder per file.
# - enable the backup with 'chmod +x'
# - disable the backup with 'chmod -x'

FTP_URL="ftp://<login>@<server.tld>/backup"
FTP_PASS="<your ftp pass goes here>"

BK_FULL_FREQ="1M"  # create a new full backup every...
BK_FULL_LIFE="6M"  # delete any backup older than this
BK_KEEP_FULL="1"   # how many full+inc cycles to keep
BK_PASS="<your very secret encryption key goes here>"

export APT='apt-get -q -y'
export CONF='/root/conf'

################################
# enter section
################################
function enter_section {
    echo ""
    echo "=============================="
    echo "$1: $2"
    echo "=============================="
}

################################
# do backup
################################
function do_backup {
    enter_section "backing up" "$2 -> $1"
    export FTP_PASSWORD=$FTP_PASS
    export PASSPHRASE="$BK_PASS"
    duplicity --full-if-older-than $BK_FULL_FREQ $3 "$2" --asynchronous-upload "$FTP_URL/$1"
    duplicity remove-older-than $BK_FULL_LIFE --force "$FTP_URL/$1"
    duplicity remove-all-inc-of-but-n-full $BK_KEEP_FULL --force "$FTP_URL/$1"
    unset PASSPHRASE
    unset FTP_PASSWORD
}

################################
# run sub-scripts
################################

# backup should be independent from the system state:
# always make sure the required tools are ready
$APT install duplicity ncftp > /dev/null

for PARAM in /root/server/backup.d/*
do
    if [ -f "$PARAM" -a -x "$PARAM" ]
    then
        do_backup "$(basename "$PARAM")" "$(cat "$PARAM")"
    fi
done

exit 0
Example: back up the /root folder to the “42” subfolder of the backup target:
echo "/root" /root/server/backup.d/42 chmod +x /root/server/backup.d/42
Run it daily as root:
echo "25 2 * * * root /root/backup.sh" >> /etc/crontab
Beware: there is a major drawback to this method. Backing up /var/lib/mysql this way will most likely result in corrupted tables, as they are not locked during the copy. Again, most articles forget to mention this… You can work around it by first running ‘mysqldump’ and then archiving the resulting dump file. This is left as an exercise for the reader 😉
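If you want a head start on that exercise, here is a minimal sketch of the idea. The script name, dump folder and credentials are hypothetical, and --single-transaction assumes InnoDB tables (use --lock-all-tables for MyISAM):

#!/bin/bash
# File: /root/server/dump-mysql.sh (hypothetical)
# Dump all databases to a folder that backup.sh will then archive
mkdir -p /root/mysql-dump
mysqldump --all-databases --single-transaction \
    -u root -p"<mysql root pass>" > /root/mysql-dump/all-databases.sql

Schedule it shortly before the backup itself, then register the dump folder as a backup target:

echo "5 2 * * * root /root/server/dump-mysql.sh" >> /etc/crontab
echo "/root/mysql-dump" > /root/server/backup.d/mysql
chmod +x /root/server/backup.d/mysql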
In a follow-up article, I will try to address the restore side of things.