One of the biggest issues in running a server is making sure that, if everything disappears, you can be back up and running as quickly as possible. So how do I do it?
The simple answer: a cron job runs every day, performs daily, weekly, and monthly database and file system backups, and pushes them to Amazon S3. I rolled my own bash script to do the work, and after a few months of testing and improving, it’s ready to be shown off.
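The cron side is just a single entry. A minimal sketch, assuming the script lives at /usr/local/bin/s3_backup.sh and runs at 3 a.m. (the path, time, and log location are my placeholders here, not the repo’s):

```
# Hypothetical crontab entry: run the backup script at 03:00 every day
0 3 * * * /usr/local/bin/s3_backup.sh >> /var/log/s3_backup.log 2>&1
```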
The script is extremely simple (condensed sketches of both halves follow the list):
- Import config settings from a file
- Dump MySQL databases, gzip them, and move the files to the backup folder
- Dump PostgreSQL databases, gzip them, and move the files to the backup folder
- Dump MongoDB databases, gzip them, and move the files to the backup folder
- Tar and gzip the local webroot and move the archive to the backup folder
- Delete daily backup files older than 7 days from the backup folder
- If it’s Monday:
  - Copy the just-created database and webroot backups to be the weekly backups
  - Delete weekly backup files older than 28 days from the backup folder
- If it’s the first of the month:
  - Copy the just-created database and webroot backups to be the monthly backups
  - Delete monthly backup files older than 365 days from the backup folder
- Use S3 Tools to essentially rsync the backup folder with an Amazon S3 bucket
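The dump-and-archive half could look something like the sketch below. This is not the actual S3_Backup code; the config file path, variable names, and dump flags are my assumptions, and the dump commands are the stock ones from each database’s own tooling:

```bash
#!/usr/bin/env bash
# Sketch of the dump-and-archive steps; all paths and variable names are
# placeholders, not the real script's.
set -euo pipefail

# Import config settings (assumed to define BACKUP_DIR, WEBROOT, S3_BUCKET,
# and the database credentials used below)
source /etc/s3_backup.conf
mkdir -p "$BACKUP_DIR"/{daily,weekly,monthly}
STAMP=$(date +%F)

# Dump MySQL, gzip, and land the file in the backup folder
mysqldump --all-databases -u "$MYSQL_USER" -p"$MYSQL_PASS" \
    | gzip > "$BACKUP_DIR/daily/mysql-$STAMP.sql.gz"

# Dump PostgreSQL the same way
pg_dumpall -U "$PG_USER" | gzip > "$BACKUP_DIR/daily/postgres-$STAMP.sql.gz"

# Dump MongoDB; --archive with --gzip streams a compressed archive to stdout
mongodump --quiet --archive --gzip > "$BACKUP_DIR/daily/mongo-$STAMP.archive.gz"

# Tar and gzip the local webroot
tar -czf "$BACKUP_DIR/daily/webroot-$STAMP.tar.gz" -C / "${WEBROOT#/}"
```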
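And the rotation-and-sync half, continuing with the same placeholder variables. Keeping dailies, weeklies, and monthlies in separate subfolders is my own assumption about the layout, as is using s3cmd sync from S3 Tools for the upload:

```bash
# Delete daily backup files older than 7 days
find "$BACKUP_DIR/daily" -type f -mtime +7 -delete

# On Mondays, promote today's backups to weeklies and prune after 28 days
if [ "$(date +%u)" -eq 1 ]; then
    cp "$BACKUP_DIR/daily/"*"$STAMP"* "$BACKUP_DIR/weekly/"
    find "$BACKUP_DIR/weekly" -type f -mtime +28 -delete
fi

# On the first of the month, do the same for monthlies, kept for a year
if [ "$(date +%d)" = "01" ]; then
    cp "$BACKUP_DIR/daily/"*"$STAMP"* "$BACKUP_DIR/monthly/"
    find "$BACKUP_DIR/monthly" -type f -mtime +365 -delete
fi

# Sync the whole backup folder to the S3 bucket; like rsync, only new or
# changed files are uploaded
s3cmd sync "$BACKUP_DIR/" "s3://$S3_BUCKET/"
```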
It’s clean, quick, and above all has worked without fail for several months now. The slowest part of the process is uploading the files to S3, and even that has never taken terribly long. It also repeats the mantra from my earlier post: “tar it then sync”.
This method is simple, and it works great for most single-server setups. I haven’t optimized the database dumps, mainly because that is highly dependent on how you use each one. If you have multiple servers or separate database and web servers, why are you taking sysadmin advice from me?
It’s available on GitHub: S3_Backup