Backup best practices, a shortlist
This shortlist will hopefully be a great step forward for you and for your backups. If you find anything in this list that you’re already doing, then great!
But if you know of any best practices that are missing from this list, please leave a comment and we will consider adding them. This way we all contribute to the shared knowledge ‘database’ about backups, which might someday help someone else in your position.
Backups should not be on the same physical storage as your database files
So, as you might suspect, keeping the database backup on the same physical drive as the database itself is not a particularly good idea: if something bad happens to the drive, you lose both your database and your backup.
This is why one of the first tips is to put your backups on different storage, as far removed, in terms of risk, from the database they protect as possible.
It might look like a simple and obvious thing to do, but if you are not doing it yet, you should, because it can save you a lot of work.
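On SQL Server, one simple way to keep backups off the database host is to write them straight to a network share on another machine. A minimal sketch, where the share, path and database name are all hypothetical, and the SQL Server service account needs write permission on the share:

```sql
-- Back up directly to a UNC path on a different machine, not the local drive.
-- \\BackupServer\SQLBackups is a hypothetical share.
BACKUP DATABASE MainDB
TO DISK = N'\\BackupServer\SQLBackups\MainDB.bak'
WITH INIT;  -- INIT overwrites any previous backup set in this file
```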
Set up schedules to automate your backups
The next step toward backup safety and reliability is to automate the backup process with schedules.
Creating such schedules is important because, as time passes, your backups get older and out of date, and the risk of data loss grows.
Keep yourself protected and be sure that you always have at hand a way to restore your data up to the point where the database failed.
These schedules will give you an exact history of your data at the frequency you specify, depending on the business needs of your application or company, the importance of your data, and other particular factors.
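On SQL Server, schedules like these can be built with SQL Server Agent. A minimal sketch, where the job name, schedule name, database and backup path are all illustrative:

```sql
-- Create an Agent job that takes a full backup of MainDB every day at 01:00.
USE msdb;
GO
EXEC dbo.sp_add_job         @job_name = N'Nightly full backup - MainDB';
EXEC dbo.sp_add_jobstep     @job_name = N'Nightly full backup - MainDB',
                            @step_name = N'Full backup',
                            @subsystem = N'TSQL',
                            @command = N'BACKUP DATABASE MainDB TO DISK = N''D:\Backups\MainDB.bak'' WITH CHECKSUM;';
EXEC dbo.sp_add_schedule    @schedule_name = N'Daily at 01:00',
                            @freq_type = 4,        -- daily
                            @freq_interval = 1,    -- every day
                            @active_start_time = 010000;
EXEC dbo.sp_attach_schedule @job_name = N'Nightly full backup - MainDB',
                            @schedule_name = N'Daily at 01:00';
EXEC dbo.sp_add_jobserver   @job_name = N'Nightly full backup - MainDB';
```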
Use a test server to verify the restore procedure of your backups
You might think your data is safe once the backup process has finished successfully, but there is still another step to take before you can safely say your data is backed up: the restore process.
Without going through the entire process of recovering your data by actually restoring it, you cannot be sure that you can still access it. Other factors, such as media corruption, might leave you unable to get hold of your data after the backup.
Another item on the list of backup best practices is therefore to regularly run a restore test on a test server, with all of the options you would use in a real-life scenario. This will minimize the chances of being caught off guard in an unfortunate situation.
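On SQL Server, such a rehearsal might look like the sketch below. RESTORE VERIFYONLY only checks the backup media; the second command performs the actual test restore under a different name. The paths are illustrative, and the logical file names are assumptions — check the real ones with RESTORE FILELISTONLY first:

```sql
-- Quick media check (does not actually restore anything):
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MainDB.bak' WITH CHECKSUM;

-- Full rehearsal on a test server: restore under a different database name
-- and relocate the files so the original database is untouched.
RESTORE DATABASE MainDB_RestoreTest
FROM DISK = N'D:\Backups\MainDB.bak'
WITH MOVE N'MainDB'     TO N'D:\TestRestore\MainDB.mdf',
     MOVE N'MainDB_log' TO N'D:\TestRestore\MainDB_log.ldf',
     RECOVERY;
```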
Test your recovery strategies
The next bullet point in the backup best practices list concerns recovery strategies.
The success of a company is determined by its ability to quickly respond to changes. Although backup strategies are created to cover a full array of events, with time, they might become out of date.
This is why, periodically, you should take the time and go over all of the scenarios and see if there are any new elements that should be introduced in your specific strategy.
These scenarios should take into consideration both the impact on your individual database restores and also on the backups of your system databases.
Use all available verification options during backup process
One more tip we can give about backup best practices is that you should be using all of the available verification options when you are doing your backups. This will help you be sure that all of the backups that you create will be created correctly and will be consistent from a transactional point of view.
Now, depending on how you prefer to do your backups, with T-SQL, through a wizard or other applications, there are multiple options.
For example, if you are using T-SQL:
- Make sure that when you write your BACKUP command you also include the CHECKSUM option in the WITH clause. This forces the operation to verify each page of data written to the backup against a checksum, to ensure consistency on the backup media. A sample command looks like this:
BACKUP DATABASE MainDB TO DISK = 'X:\Backups\MainDB.bak' WITH CHECKSUM;
In the case you’re using a visual interface, for example SSMS (SQL Server Management Studio), make sure to have ticked the first two options in the Reliability section:
- The options Verify backup when finished and Perform checksum before writing to media act as an additional verification step in the backup workflow and help guarantee that your backup, along with the data in it, is consistent.
A different approach to making backups would be through SQL Server Maintenance Plans:
- Here you will have to tick a single checkbox, the option for Database Integrity.
If all of the above options seem out of reach for any reason, then we recommend another piece of software, SQLBackupAndFTP. It is a very simple tool to use and it has all of the most important options.
Now, we’re just over half-way through our list, and if you have started to incorporate any of these practices into your backup process, then you will surely have spared yourself a few headaches in the future.
Continuing on, the next items in our backup best practices list are:
Doing a FULL backup daily
This might sound like a rather resource-intensive operation, but it is the best way to prevent data loss.
Depending on the business needs of your organization, you should pick a recovery model (more details here) that will be your organization’s first line of protection against data loss. Keep in mind that the simple recovery model only lets you restore to the last full or differential backup; if your organization can afford to lose no more than 15, 30 or 60 minutes’ worth of data, you need the full recovery model with transaction log backups taken at that interval.
Most cases would require you to do this FULL backup on a daily basis, and despite the overhead in time and resources that such a process adds, the guarantee that your data is safe is, in our opinion, worth it.
If the organization you’re doing backups for has small databases with data that is easily recoverable, then a FULL backup is definitely the correct way to go.
Since we already mentioned SQLBackupAndFTP earlier: if you were curious and downloaded it, you may be pleased to see that, by default, its backup schedule is set to do a FULL backup every 24 hours.
This is very helpful, as it removes the risk of overlooking such an important detail.
If you incorporate just this one item from our list of backup best practices, you already have a great advantage, as recovery is easier when the backup is just one file and no database logs are needed.
There is a downside, though: data can only be recovered up to the moment the backup was created and, depending on your database size, the backup can occupy a lot of disk space.
Doing differential backups more frequently
We just said that you should consider doing FULL backups on a daily basis, but even if you do not do them every day, differential backups should be done even more frequently than FULL backups.
Differential backups are not as resource-hungry, and data loss is reduced even further when they are done frequently after a FULL backup.
In terms of advantages, differential backups will take less time to complete and also less disk space as they will contain only the data that has been created since the last FULL backup. No database logs are needed in this case either and like we’ve said before, data will be more up to date.
The disadvantage of this backup type is that data can be recovered only up to the moment the last differential backup was created, and the restore process becomes slightly more complicated, as there are now two files that have to be incorporated into it.
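In T-SQL, the differential backup and the two-file restore it implies might look like this sketch (the database name and paths are illustrative):

```sql
-- Differential backup: contains everything changed since the last FULL backup.
BACKUP DATABASE MainDB
TO DISK = N'D:\Backups\MainDB_diff.bak'
WITH DIFFERENTIAL, CHECKSUM;

-- Restoring takes two files: the full backup first, left in the restoring
-- (NORECOVERY) state, then the latest differential on top of it.
RESTORE DATABASE MainDB FROM DISK = N'D:\Backups\MainDB.bak'      WITH NORECOVERY;
RESTORE DATABASE MainDB FROM DISK = N'D:\Backups\MainDB_diff.bak' WITH RECOVERY;
```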
Do transaction log backups even more frequently
The following item in our list of backup best practices is one that beats both FULL and differential backups in terms of the frequency at which it should be performed.
Because a transaction log backup contains all the recent activity that has occurred in the database, it can be used to restore the database to a specific point in time, and this is its biggest advantage!
If the frequency of new data that is created in your database is very high, then we recommend doing a transaction log backup every 10 minutes, while for other databases which are less active doing such a backup every 30 or 60 minutes should be sufficient.
The downside of this backup type, though, is that the database has to use the full (or bulk-logged) recovery model, which increases the size of the transaction log, and restoring through a chain of log backups takes more effort.
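A sketch of a transaction log backup and the point-in-time restore it enables in T-SQL, where the database name, paths and timestamp are all illustrative:

```sql
-- Frequent transaction log backup (requires the full recovery model):
BACKUP LOG MainDB
TO DISK = N'D:\Backups\MainDB_log.trn'
WITH CHECKSUM;

-- Point-in-time restore: the full backup first, then the log chain,
-- stopping at the chosen moment.
RESTORE DATABASE MainDB FROM DISK = N'D:\Backups\MainDB.bak' WITH NORECOVERY;
RESTORE LOG MainDB
FROM DISK = N'D:\Backups\MainDB_log.trn'
WITH STOPAT = N'2024-01-15 10:35:00', RECOVERY;
```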
Back up system databases as well
You might think that having a backup strategy with all of the points mentioned above is more than enough to protect your organization from data loss, but it is not.
A backup strategy is incomplete if you do not have a backup plan for your SQL Server system databases, master, model and msdb.
These databases are absolutely essential because they contain system configuration and also the SQL Server job information which will need to be restored in case of a total system restore.
You should make a strict plan for backing up your system databases, at least as frequently as your FULL backups, and at least daily if your instance configuration changes often. If your installation is more stable, you can do this less frequently.
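In T-SQL, backing up the three system databases looks just like backing up a user database (the paths below are illustrative):

```sql
-- Full backups of the key system databases. model and msdb behave like
-- ordinary databases; master only ever supports full backups.
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH CHECKSUM;
BACKUP DATABASE model  TO DISK = N'D:\Backups\model.bak'  WITH CHECKSUM;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH CHECKSUM;
```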
If up until this point you still haven’t considered taking a look at SQLBackupAndFTP for your backup process, there is still some good news that might make you change your mind.
As you can see in the screenshot above, you can back up your system databases along with your user databases in just a few clicks. You can back up these databases on a schedule as well and send the backups to your specified location, either on the same disk or on a different machine.
Backup your most sensitive data to tape
This might not be a very common situation, but if your organization wants to be fully protected against data loss, then be sure to back up the data that it cannot afford to lose to tape.
Tape is much more resistant to physical shock than disk drives and due to this it has a major advantage. Just imagine a situation where you drop a box full of disk drives containing the last 3 months worth of data down the stairs. At that very second, you lose 3 months worth of data, but with tape drives you can pick the tapes up, and continue on.
Because a tape cartridge contains no motors and almost no electronics of its own, a tape drive uses less power than a conventional disk drive.
Tapes are also less sensitive to wear: general media life is around 20 to 30 years, and the lifespan of the drive itself is around 10 years.
So in conclusion, with tape you get:
- Very long media lifetime (20 – 30 years)
- Very easy to transport
- Lower power consumption
- Higher reliability
- A long lifetime for the drive (around 10 years)
Today’s fifth-generation tapes, LTO-5, reach speeds of around 140 MB/sec and can store around 1.5 TB of uncompressed data, which is still pretty much sufficient for today’s needs.

In conclusion, if you incorporate these backup best practices, if not all then at least a few, into your backup process, you will eventually find the combination of tools and tricks that makes your process more efficient and your data safer. We recommend using SQLBackupAndFTP, as it helps you cover most of the important steps above very quickly, so you can be sure you will not have problems with your backups in the future. However you choose to implement these tips, please share the knowledge: share this post to help others who might not be aware of what they could do to improve their backup process and get rid of future headaches.