If you have a database, you need to back it up regularly, and preferably not manually. Performing regular backups, however, is only half the battle; you also have to consider where to store them. Saving the backups on the same server where MySQL Server is installed isn’t safe: if the server crashes, you will lose everything.
Fifteen years ago, backups were transferred via FTP to a tape streamer for storage; life was beautiful and the drives were magnetic.
Now, however, the security requirements are different: it is good practice to store backups in encrypted form, on the other side of the planet, in a bunker under a mountain. Yes, yes, we are talking primarily about Amazon Simple Storage Service (AWS S3).
How to perform backups via the AWS Command Line Interface
Let’s take a look at how to perform backups of MySQL Server and send the backups to S3:
- First of all, you have to install the AWS Command Line Interface:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
You can find more information here.
- Then configure the AWS CLI. To do this, execute:

aws configure

This command will ask you for an access key and a secret key.

How to create access keys that grant access only to S3 can be found here. The list of supported regions can be checked here. It is not strictly necessary to specify the region, but we highly recommend that you do; otherwise, it can be hard to tell where your backups were uploaded.
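The same configuration can also be scripted, which is handy when provisioning servers automatically. A sketch using `aws configure set` (the key values and region below are placeholders you must replace with your own):

```shell
# Non-interactive AWS CLI configuration. All values below are
# placeholders -- substitute your own credentials and region.
aws configure set aws_access_key_id "AKIAEXAMPLEKEY"
aws configure set aws_secret_access_key "EXAMPLESECRET"
aws configure set region "eu-west-1"
aws configure set output "json"
```

These commands write to `~/.aws/credentials` and `~/.aws/config`, the same files the interactive prompt fills in.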
- Now it’s time to perform a backup. The easiest way to do it is via mysqldump:
sudo mysqldump sakila | gzip > sakila.gz
- To upload the backup to a bucket, use the utility you installed earlier:
aws s3 cp sakila.gz s3://<bucket-name>/mysql-backup/
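To verify that the upload succeeded, you can list the contents of the prefix (`<bucket-name>` is the same placeholder as above):

```shell
# List all backups stored under the mysql-backup/ prefix.
aws s3 ls s3://<bucket-name>/mysql-backup/
```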
- Finally, it is useful to create a cron job to perform backups regularly and upload them to S3:
sudo crontab -e
There you can add a line that performs a backup every day at midnight, giving each backup a unique name. Thus, every night, you’ll get a new backup on S3.
0 0 * * * mysqldump sakila | gzip > `date +"\%Y-\%m-\%d"-sakila.gz` && aws s3 cp `date +"\%Y-\%m-\%d"-sakila.gz` s3://<bucket-name>/mysql-backup/
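Because `%` has a special meaning in crontab entries and long pipelines are hard to read there, it is often cleaner to put the logic in a small script and point cron at it. A sketch, not a drop-in solution; `DB_NAME` and the bucket name are placeholders to adjust:

```shell
#!/bin/sh
# Nightly MySQL backup to S3 -- a sketch. Install it as e.g.
# /usr/local/bin/mysql-s3-backup.sh and schedule it from root's
# crontab with:  0 0 * * * /usr/local/bin/mysql-s3-backup.sh run
set -eu

DB_NAME="${DB_NAME:-sakila}"
BUCKET="${BUCKET:-my-backup-bucket}"   # placeholder bucket name
BACKUP_FILE="$(date +%Y-%m-%d)-${DB_NAME}.gz"

nightly_backup() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print the commands instead of running them.
        echo "mysqldump ${DB_NAME} | gzip > ${BACKUP_FILE}"
        echo "aws s3 cp ${BACKUP_FILE} s3://${BUCKET}/mysql-backup/"
        return 0
    fi
    mysqldump "${DB_NAME}" | gzip > "${BACKUP_FILE}"
    aws s3 cp "${BACKUP_FILE}" "s3://${BUCKET}/mysql-backup/"
    rm -f "${BACKUP_FILE}"   # don't let old dumps fill the server disk
}

# Only run when invoked with "run", so the file can also be sourced.
if [ "${1:-}" = "run" ]; then
    nightly_backup
fi
```

Setting `DRY_RUN=1` makes the script print what it would do, which is convenient for testing the cron setup before the first real run.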
After adding the above cron job, you can sleep peacefully. Until something unexpected happens: for example, a manager forgets to pay the AWS S3 bill, or the server runs out of disk space and old backups need to be deleted.
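On the S3 side, at least, outdated backups don’t have to be deleted by hand: a lifecycle rule can expire objects automatically. A sketch assuming the bucket and prefix from the examples above (`<bucket-name>` is a placeholder, and 30 days is an arbitrary retention period):

```shell
# Expire backups under the mysql-backup/ prefix after 30 days.
aws s3api put-bucket-lifecycle-configuration \
    --bucket <bucket-name> \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-old-mysql-backups",
        "Status": "Enabled",
        "Filter": { "Prefix": "mysql-backup/" },
        "Expiration": { "Days": 30 }
      }]
    }'
```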
But what if you have several databases, each requiring a different backup strategy? Or what if you would like logs so you can check what is happening and quickly resolve issues? Then a cron job will not be enough.
Third-party services can help with this.
How to perform backups via SqlBak
One of these services is SqlBak. To set up automatic backups of MySQL Server databases and send them to S3 via SqlBak, please follow these steps:
- First of all, create a SqlBak account
- Then install the SqlBak app on the server by executing the command specified on the download page. If the installation is successful, the terminal will display a confirmation message.
- Now set up the connection to your MySQL Server
- This completes the SqlBak app installation. Now you need to create a backup job on your dashboard page; just click the “Add new job” button.
- On the backup job settings page that opens, choose the databases that should be backed up in the “Select databases” section.
- Then, in the “Store backups in destinations” section, choose Amazon S3 as the destination and specify your access key and secret key.
- Set a schedule you need
- The last step is to click the “Run Now” button. The backup job will run, and a log of its execution and of the upload to S3 will be displayed.
In addition, SqlBak can send you email notifications if there are problems with a backup job, encrypt the backups, set up an automatic restore to another server, and delete outdated backups.
Finally, no matter how you protect your databases, always test your restore scenarios. If something goes wrong, you must be sure that you can restore your backups quickly and reliably.
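A restore drill can be scripted as well. A sketch that streams a dump from S3 straight into a scratch database, so the file never has to touch the local disk; the bucket, key, and database names are placeholders:

```shell
#!/bin/sh
# Restore-drill sketch: stream a backup from S3 into a scratch
# database. All names passed in are placeholders/examples.
set -eu

restore_backup() {
    bucket="$1"      # e.g. my-backup-bucket
    key="$2"         # e.g. mysql-backup/2024-01-15-sakila.gz
    target_db="$3"   # e.g. sakila_restore_test

    # "aws s3 cp ... -" writes the object to stdout, so the dump
    # is decompressed and loaded on the fly.
    aws s3 cp "s3://${bucket}/${key}" - | gunzip | mysql "${target_db}"
}
```

Running this against a throwaway database on a schedule is a cheap way to confirm that the backups you are paying to store can actually be restored.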