Moving a SQL Server from on-prem to the cloud isn't nearly as bad as it used to be, but I ran into an interesting problem restoring the production copy of a database to my local Docker container. Soooo I created a script to fully, completely restore a local database. I'll say this again for those in the back...

This script builds from nothing, each time. If you need to save stuff, modify it as necessary.

The only tool you'll need to download is SqlPackage. It's small, and if you already have the dotnet CLI installed, it's painless. Grab it here and pick your flavor of host OS.

Also, I'm assuming you already have Docker installed and a SQL database out in Azure.


Feel free to skip ahead and go right to the one-stop gist shop I created. Drop the files in a folder and the script will figure it out. On the first run it will throw an error but be completely fine; that's because you don't have a named container yet, but you will soon. I've also included it all down below (it's also from the gist).

Details and breakdown: a modern take on an old script, "DBNuke"

A while ago, the team I worked on had a script called "DBNuke". It was the script I ran when I had messed up my database so badly that I had to completely start over. It was a fantastic thing, I used it regularly, and this is sort of a modern take on that idea: start over, with a known set of data. You could use this script to do exactly that if you wish. Check a DB version you like into your code and point a curl command at it - maybe for another time.


Now for the details of each of these steps. Let's start with the bacpac file. I'll skip the parameters, because those are somewhat boring and boilerplate.

docker pull mcr.microsoft.com/mssql/server:2019-latest

Pull yourself a SQL Server container. I didn't test this out with 2019, but I did with prior versions, and the spin-up procedure used to be a bit different - I had to set an SA password in a separate step - so just be aware. At the time of this writing, the image size was around 1.8 GB.

docker kill ${containerName}
docker container rm ${containerName}

This freaks out on the first run because I didn't bother to check whether the container already exists and is running. After the first time, you might still see the docker kill command throw an error (because the container isn't running), but meh - it's good enough and doesn't hurt anything.
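If the first-run error bothers you, here's a sketch of the check I skipped (assumed, not in the gist): only kill/remove the container when `docker ps -a` actually lists it. It's shown as POSIX shell; the PowerShell version would be the same idea.

```shell
#!/bin/sh
# Hypothetical guard: skip kill/rm when the container doesn't exist yet.
containerName="${containerName:-sqlserver}"

# Succeeds when the exact name appears in a newline-separated list.
name_in_list() {
  printf '%s\n' "$2" | grep -Fqx "$1"
}

# Empty list if docker isn't reachable; the guard then safely no-ops.
existing=$(docker ps -a --format '{{.Names}}' 2>/dev/null || true)
if name_in_list "$containerName" "$existing"; then
  docker kill "$containerName" >/dev/null 2>&1
  docker container rm "$containerName"
fi
```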

$scriptpath = $MyInvocation.MyCommand.Path
$dir = Split-Path $scriptpath
cd $dir

Boilerplate to get the location the script is running from.

SqlPackage /Action:Export /TargetFile:"./${azureDbServer}-backup.bacpac" /SourceConnectionString:"Server=tcp:${azureDbServer},1433;Initial Catalog=${dbName};User ID=${azureSqlUser};Password=${azureSqlPassword};Persist Security Info=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"

Most, if not all, of this I pulled from an example on Microsoft's site and parameterized. It's entirely runnable independently, and I plan on creating a cron job with it to keep an offsite backup.
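For the cron-job idea, something like this crontab entry would do it (hypothetical paths; `backup.sh` would just wrap the SqlPackage export line above):

```shell
# Assumed crontab entry: run the export nightly at 2 AM and keep a log.
0 2 * * * /home/me/backups/backup.sh >> /home/me/backups/backup.log 2>&1
```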

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=${localSqlSAPassword}" --name ${containerName} -p 1433:1433 -v /var/opt/mssql/backups -d mcr.microsoft.com/mssql/server:2019-latest

#wait for the container to spin up
Start-Sleep -Seconds 10

It hurts me to use a wait state (they're a huge code smell, and this is no different), but I wasn't able to find a way to know when the docker container had fully spun up. When the docker run command fires off, it takes a little while to come up. My local machine has a very healthy amount of power and it still takes at least 7 seconds. Adjust this as necessary; your mileage may vary.
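If the fixed sleep bothers you too, one assumed alternative (not in the gist) is to poll until SQL Server answers a trivial query, up to a max number of attempts. Variable names mirror the script's own; the PowerShell version would be a similar until-loop.

```shell
#!/bin/sh
# Sketch: wait for SQL Server in the container to answer, instead of
# sleeping a fixed number of seconds.
wait_for_sql() {
  # $1 = container name, $2 = SA password, $3 = max attempts
  attempts=0
  until docker exec "$1" /opt/mssql-tools/bin/sqlcmd \
          -S localhost -U SA -P "$2" -Q "SELECT 1" >/dev/null 2>&1; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge "$3" ] && return 1   # gave up
    sleep 1
  done
  return 0
}
```

Usage in the script would be `wait_for_sql "${containerName}" "${localSqlSAPassword}" 30 || exit 1` in place of the `Start-Sleep`.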

SqlPackage /Action:Import /SourceFile:".\${azureDbServer}-backup.bacpac" /TargetConnectionString:"Server=localhost;Initial Catalog=${dbName};User ID=sa;Password=${localSqlSAPassword};TrustServerCertificate=True;"

Now we restore that database to the local (and ideally fully running) SQL Server! Go watch the log output during this step - it's kinda fun to see it doing its thing.

docker cp ./addUser.sql ${containerName}:/var/opt/mssql/backups
docker exec -it ${containerName} /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${localSqlSAPassword} -i "/var/opt/mssql/backups/addUser.sql"

These last two lines are more security-minded. The application I have doesn't use the SA account to connect to the server, so the script sets up a db user with datareader and datawriter. Again, your mileage may vary, but for most, this should be sufficient.
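For reference, here's a sketch of what an addUser.sql like the one above might contain, written out from shell with a heredoc. The login name (`appUser`) and password are hypothetical placeholders - swap in your own.

```shell
#!/bin/sh
# Sketch: generate an addUser.sql that creates a login and grants it
# datareader/datawriter in the target database.
dbName="${dbName:-MyDb}"
cat > addUser.sql <<SQL
USE [$dbName];
GO
CREATE LOGIN appUser WITH PASSWORD = 'Str0ng!Passw0rd';
GO
CREATE USER appUser FOR LOGIN appUser;
GO
ALTER ROLE db_datareader ADD MEMBER appUser;
ALTER ROLE db_datawriter ADD MEMBER appUser;
GO
SQL
```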


Most places I know of are still on-prem, with regular SQL backups that produce a .bak file. These are totally fine too! The beginning of the script is the same, up until the actual restore.
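Before we get there, here's a sketch (assumed, not the gist's actual file) of what a restore.sql for the .bak path might contain, again generated from shell. The logical file names are hypothetical - run `RESTORE FILELISTONLY` against your .bak to find the real ones.

```shell
#!/bin/sh
# Sketch: generate a restore.sql that restores the .bak copied into the
# container's backup folder, relocating the data/log files.
dbName="${dbName:-MyDb}"
backupFileName="${backupFileName:-MyDb.bak}"
cat > restore.sql <<SQL
RESTORE DATABASE [$dbName]
FROM DISK = '/var/opt/mssql/backups/$backupFileName'
WITH MOVE '$dbName' TO '/var/opt/mssql/data/$dbName.mdf',
     MOVE '${dbName}_log' TO '/var/opt/mssql/data/${dbName}_log.ldf',
     REPLACE;
SQL
```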

#copy files to container
docker cp ./${backupFileName} ${containerName}:/var/opt/mssql/backups
docker cp ./restore.sql ${containerName}:/var/opt/mssql/backups
docker cp ./addUser.sql ${containerName}:/var/opt/mssql/backups

docker exec -it ${containerName} /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${localSqlSAPassword} -i "/var/opt/mssql/backups/restore.sql"
docker exec -it ${containerName} /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${localSqlSAPassword} -i "/var/opt/mssql/backups/addUser.sql"

In this case, I copy the necessary files into the container and execute them from the inside. It's worth considering whether I should leave those files behind - I can hear the security team saying DO IT!!! Again, your mileage may vary (but yes, you should clean up those files).
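That cleanup step might look like this sketch (assumed, not in the gist): delete the copied .sql and .bak files from inside the container once the restore is done.

```shell
#!/bin/sh
# Hypothetical cleanup: remove the scripts and backup file that were
# copied into the container's backup folder.
containerName="${containerName:-sqlserver}"
cleanup_cmd='rm -f /var/opt/mssql/backups/*.sql /var/opt/mssql/backups/*.bak'
docker exec "$containerName" sh -c "$cleanup_cmd" 2>/dev/null \
  || echo "cleanup skipped: container not reachable"
```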

And that's it!

I really like being able to go from nothing to a fully operational database in a few minutes. It was something that was awesome years ago... except now, it's WAY faster!

And here's the code for those that want to save a click or two.