13 March 2017
Richie Lee

Turbo LogShip 1.0 Released

A long long time ago, on a blog far far away, I wrote a long post about log shipping. One of the key points I mentioned was how slow restoring can be for a read-only log shipped database. If going and reading the whole thing is too much effort for you now, I'll save you the bother (you child of the internet you) and tell you it's because the database needs to be kept transactionally consistent in between restores when it is made available for read-only access. SQL Server creates a .tuf file (transaction undo file) to keep track of all the pages that have uncommitted transactions on them. The checking, creation, applying and re-applying of this file can take some time when there is a significant number of uncommitted transactions within a log backup.
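For context, here's what the two restore modes look like in plain T-SQL; the database name, backup paths and .tuf location are made up for illustration:

```sql
-- STANDBY: rolls uncommitted transactions back out into a .tuf file so the
-- database is readable, then has to reapply them before the next restore.
RESTORE LOG MyDB
FROM DISK = N'\\backupshare\logship\MyDB_20170313113000.trn'
WITH STANDBY = N'D:\LogShip\MyDB.tuf';

-- NORECOVERY: no undo file and no rollback work, so it is much quicker,
-- but the database stays inaccessible.
RESTORE LOG MyDB
FROM DISK = N'\\backupshare\logship\MyDB_20170313114500.trn'
WITH NORECOVERY;
```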

To resolve this, you can restore files under NORECOVERY and only switch to STANDBY at the end. When restoring a log backup you have two choices: NORECOVERY and STANDBY. Both allow further log restores, but STANDBY is the option to choose if you want the database to be read-only. NORECOVERY leaves the database in a transactionally inconsistent state: it does not roll back uncommitted transactions into a .tuf file, which is exactly what makes it quicker. So it is possible to restore the log files in NORECOVERY mode, and then restore a final log with the STANDBY option to enable the database to be read-only (it is pretty neat that you can switch between STANDBY and NORECOVERY in this way). We can do this because we honestly don't care about all those in-between restores being transactionally consistent. Sadly, this is not an out-of-the-box operation, so it requires writing a custom job to restore the log files. I've read a few methods online for achieving this, and I have written my own custom restore process (there's a sketch of it in code after the list). The steps that it follows are:

1. Get all files in the “copy to” location using master.sys.xp_dirtree
2. Compare these files to the last restored file in the msdb.dbo.log_shipping_secondary_databases table (and create a count of the total files)
3. Loop through the count for each file
4. Use the master.sys.sp_change_log_shipping_secondary_database sproc to alter the restore mode to NORECOVERY and to restore only one file at a time
5. Start the default restore job using msdb.dbo.sp_start_job
6. Check if any other files have been copied over, and add them to the count
7. Continue restoring one file at a time in NORECOVERY mode until there is one file left (COUNT = 1)
8. Use the master.sys.sp_change_log_shipping_secondary_database sproc to alter the restore mode back to STANDBY and to restore all remaining files (just to be sure)
9. Start the default restore job using msdb.dbo.sp_start_job
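To make the list concrete, here is a minimal T-SQL sketch of those steps. It is not the released sproc: the database and job names are placeholders, the file handling is simplified, and a fixed delay stands in for properly waiting on the restore job to finish. The tables and sprocs it leans on, though, are the ones named above.

```sql
DECLARE @secondary_db sysname = N'MyDB',                          -- placeholder
        @restore_job  sysname = N'LSRestore_PRIMARYSERVER_MyDB',  -- placeholder
        @copy_to_dir  nvarchar(500),
        @last_file    nvarchar(500),
        @last_name    nvarchar(500),
        @files_left   int;

-- Where the copy job drops the .trn files, and the last file already restored
SELECT @copy_to_dir = s.backup_destination_directory,
       @last_file   = d.last_restored_file
FROM   msdb.dbo.log_shipping_secondary           AS s
JOIN   msdb.dbo.log_shipping_secondary_databases AS d
       ON d.secondary_id = s.secondary_id
WHERE  d.secondary_database = @secondary_db;

-- last_restored_file holds a full path; keep just the file name for comparison
SET @last_name = ISNULL(
    RIGHT(@last_file, CHARINDEX(N'\', REVERSE(@last_file) + N'\') - 1), N'');

-- Steps 1 and 2: list the copy-to folder and count the files not yet restored
CREATE TABLE #dir (subdirectory nvarchar(500), depth int, isfile bit);
INSERT INTO #dir EXEC master.sys.xp_dirtree @copy_to_dir, 1, 1;

SELECT @files_left = COUNT(*)
FROM   #dir
WHERE  isfile = 1
AND    subdirectory > @last_name;  -- backup names sort chronologically

-- Steps 3 to 7: restore one file at a time under NORECOVERY until one is left
WHILE @files_left > 1
BEGIN
    EXEC sp_change_log_shipping_secondary_database
         @secondary_database = @secondary_db,
         @restore_mode       = 0,  -- 0 = NORECOVERY
         @restore_all        = 0;  -- stop after a single file

    EXEC msdb.dbo.sp_start_job @job_name = @restore_job;

    -- sp_start_job is asynchronous: the real sproc waits for the job to
    -- finish (e.g. by polling msdb) before looping; a delay stands in here.
    WAITFOR DELAY '00:00:10';

    -- Step 6 (simplified): the real process re-scans the folder at this point
    -- to pick up any files copied over while the restore was running.
    SET @files_left = @files_left - 1;
END;

-- Steps 8 and 9: the final restore goes on with STANDBY, restoring everything
-- left (just to be sure), so the database comes back readable.
EXEC sp_change_log_shipping_secondary_database
     @secondary_database = @secondary_db,
     @restore_mode       = 1,  -- 1 = STANDBY
     @restore_all        = 1;

EXEC msdb.dbo.sp_start_job @job_name = @restore_job;
DROP TABLE #dir;
```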

Neat, right? I shared this on SQL Server Central, got some feedback, and improved it; now, to make it more widely available, I have uploaded it to GitHub, with the corresponding release version available on NuGet. Essentially the NuGet package is just a dacpac of the GitHub release.

I'd recommend adding this dacpac as a reference to the database you want to use as a read-only log shipped database. That way the stored procedure is available to run on the secondary thanks to the magic of log shipping. Then create a SQL Agent job that executes the stored procedure at the frequency you usually have the default restore job run at, and disable the default job so that it is only ever executed by the custom job.
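A minimal sketch of that setup, run in msdb on the secondary. Every name here is a placeholder (I'm calling the sproc dbo.TurboLogShip for illustration; check the release for the real name), and the schedule assumes the default job ran every 15 minutes:

```sql
USE msdb;

-- Disable the default restore job's schedule; a disabled job can still be
-- started on demand with sp_start_job, which is how the sproc invokes it.
EXEC dbo.sp_update_job
     @job_name = N'LSRestore_PRIMARYSERVER_MyDB',
     @enabled  = 0;

-- Custom job that calls the stored procedure at the old restore frequency
EXEC dbo.sp_add_job
     @job_name = N'TurboLogShip restore - MyDB';

EXEC dbo.sp_add_jobstep
     @job_name  = N'TurboLogShip restore - MyDB',
     @step_name = N'Run custom restore',
     @subsystem = N'TSQL',
     @command   = N'EXEC MyDB.dbo.TurboLogShip
                         @SecondaryDatabase = N''MyDB'',
                         @RestoreJobName    = N''LSRestore_PRIMARYSERVER_MyDB'';';

EXEC dbo.sp_add_jobschedule
     @job_name             = N'TurboLogShip restore - MyDB',
     @name                 = N'Every 15 minutes',
     @freq_type            = 4,   -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,   -- subday units are minutes
     @freq_subday_interval = 15;

-- Attach the job to the local (secondary) server
EXEC dbo.sp_add_jobserver
     @job_name = N'TurboLogShip restore - MyDB';
```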

The stored procedure only requires two parameters: the name of the secondary database and the name of the default restore job that was created when log shipping was set up. All other work (such as log file locations and file names) is sorted out by the sproc itself. The sproc will execute the default restore job whenever it needs to restore a log file, so all customisations to the log shipping process sit outside the built-in process. The main benefit of doing this is that the custom job can be deleted and the default job used again without having to recreate log shipping. It also means the default alert job can continue to be used.
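So a manual run (outside the agent job) is just the following; again, the sproc and parameter names are my illustrative stand-ins, so check the release for the real signature:

```sql
-- Everything else (copy-to folder, file names, restore-mode switching) the
-- sproc works out from the msdb log shipping tables.
EXEC MyDB.dbo.TurboLogShip
     @SecondaryDatabase = N'MyDB',                          -- the read-only secondary
     @RestoreJobName    = N'LSRestore_PRIMARYSERVER_MyDB';  -- default restore job to invoke
```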

This process has been in active use in several different places for months and I have had no issues with it. As ever, though, it is a work in progress: there are a few minor changes I'd like to make to slim down the stored procedure, but this is a fully functional first release. Test, test, and test again before you go live with it. Changes can be submitted via GitHub. The project has a single branch, which is protected, so a pull request is required before changes are accepted.
