2017-05-09
Richie Lee

T-SQL Tuesday: Shipping Database Changes with SSDT

Hello!

Let’s see how this goes: this month’s topic for T-SQL Tuesday is shipping database changes, something we’re all familiar with here. So I thought I’d make some notes about a tool I’m very familiar with: SQL Server Data Tools (SSDT).

The Good

  1. It’s free! SSDT works with Visual Studio editions from Community up to Ultimate, and from Visual Studio 2015 onwards it comes with its own Visual Studio IDE.
  2. SSDT has a NuGet package available, so you don’t need to install Visual Studio to get builds running, and, crucially, you can control which version is used to compile at the sqlproj level. Tidy!
  3. The dev team are very accessible on Twitter and respond to Connect issues quickly.
  4. SSDT is still actively developed and supports new features. The release cycle is not quite monthly, but there are several releases per year. [1]
  5. They still have a version for Visual Studio 2013, though why you’re on an old version of Visual Studio I have no idea…
  6. You can use SSDT via sqlpackage.exe or the DacFx API to deploy database changes manually, or better yet you can integrate them into an automated deployment process.
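As a rough sketch of the manual route, a publish from the command line can look something like this (the file names below are made up for illustration):

    sqlpackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /Profile:"MyDatabase.publish.xml"

The same dacpac can then be pointed at a different server by overriding the target settings on the command line, or the whole call can be wrapped in whatever automation tooling you already use.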

The Bad

  1. Did somebody mention SSIS/SSRS/SSAS? All of these project types are supported in SSDT, but crucially they are not supported by any NuGet packaging for compiling. In fact, you have to rely on calling devenv from the command line to compile the solutions (there’s a sketch after this list). And guidance on best practices for deploying each of these is murky at best (there’s a whole wealth of rants about the BI stack and deploying it, but that’s for another day).
  2. NuGet isn’t supported. Right, there is a hack to get NuGet working with database solutions, but it is a hack, and so I’m not going to tell you how (but Google might).
  3. Pre-deploy scripts. These are run before the database diff script is executed, but after the diff script has been generated, which limits their usefulness; I have seen very little use for them.
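To illustrate the devenv workaround for the BI project types (the solution name here is invented), the build step ends up being something like:

    devenv.exe MyBISolution.sln /Rebuild Release

which means a full Visual Studio install on the build server, rather than a lightweight NuGet package.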

The Ugly

  1. You have to understand that a publish.xml file is a very useful, and yet very powerful, part of the deployment process. And if you have more than one publish.xml file (i.e. one for each environment) then you’re doing it wrong [2].
  2. SSDT isn’t going to solve all your problems, and will probably create a few new ones for you along the way. As an example, if you use “IN” to define the accepted values in a check constraint, it will be deployed as a chain of “OR” comparisons, which causes the constraint to be rebuilt on every deployment (see the sketch after this list). This is only a minor annoyance, but those unfamiliar with this nuance could be punished on a large table.
  3. Static data. This can be loaded via post-deploy scripts, but you may not want the scripts to run every time, and if they do run they need to be as efficient as possible. Again, another subject for another day.
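To sketch the “IN” nuance (table, column and constraint names are all invented here), the constraint as written in the project…

    -- constraint defined with IN in the SSDT project
    ALTER TABLE dbo.Orders ADD CONSTRAINT CK_Orders_Status
        CHECK (OrderStatus IN ('New', 'Shipped', 'Cancelled'));

…is stored by SQL Server as a chain of equality comparisons joined with OR, so the definition in the dacpac never matches the definition already in the database, and the constraint gets dropped and re-created on every deployment.

And for static data, one hedged approach to an idempotent post-deploy script (again, all names are invented) is a MERGE, so that re-running it on every deployment is cheap and safe:

    -- post-deploy script: keep the reference data in sync on every run
    MERGE INTO dbo.OrderStatus AS target
    USING (VALUES (1, 'New'), (2, 'Shipped'), (3, 'Cancelled'))
        AS source (StatusId, StatusName)
        ON target.StatusId = source.StatusId
    WHEN MATCHED AND target.StatusName <> source.StatusName THEN
        UPDATE SET StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;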

Database deployments are hard. Truly, they are. And automated database deployments are even harder. It’s not that you’re doing it wrong, after all, and the tools used to deploy changes are not perfect. But I genuinely feel that SSDT is the best database deployment tool out there, for the reasons outlined above.

 

[1] A case in point for both the Connect responsiveness and the active development: the Connect issues fixed in a given version of SSDT are listed in its release notes.

[2] Each deployment to each environment must follow exactly the same process, using the same dacpac and publish.xml file, because deployments to dev/test are a test of your deployment process itself, which is crucial to a smooth deployment come production.
