Yesterday, the domain controller in my Hyper-V setup went rogue, which is to say it went very wrong. I was lucky enough to have a copy of the virtual hard disk and was able to recover the setup. New as I am to Hyper-V, it took a while to figure out how to make it all happen. I don’t plan on going into that in detail, but suffice it to say it all works again and I’m a happy camper.
While setting up the SQL instances, I attached additional storage to each VM to replicate what might be found in a real SQL Server setup, adding separate data and log drives to make the distinction a real production environment would.
It’s been a few days since I’ve written anything about this; I was working on getting the VMs set up the way I wanted them. Despite the instructions in the study manual, the real world differs in some ways. Dedicated drives usually hold the data and log files, so I’ve configured S: and T: for data and log respectively. Doing this on the SQL-Core instance was a challenge on Server 2016. The firewall rules for Remote Volume Management had to be configured to allow remote management, and this had to be done locally, outside of Group Policy Objects. I found a way to do this using PowerShell: `Set-NetFirewallRule -DisplayGroup "Remote Volume Management" -Enabled True` worked like a charm, and then I could use Computer Management to configure the disks created via Hyper-V Manager.
All of that is aside from the SQL Server management this was supposed to be about, but in the real world these things need to be considered. Each VM now has an S: and a T: drive, and the data and log files will be distributed accordingly. Why is this important? Because it will matter later when we get to a discussion of SQL Server Availability Groups (HADR). For the time being, know that all this extra work was worth it.
The exercises in the manual use the AdventureWorks 2014 sample databases. A quick internet search will find them, and once downloaded the files will need to be unpacked.
Next, configure the SQL instance to use the S: and T: drives for data and log files. In SQL Server Management Studio, right-click the SQL Server instance node, select “Properties”, and go to the “Database Settings” page.
Resetting the Data and Log values to S:\Data and T:\Log means the files for any new database will be created in those locations by default.
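The same defaults can be inspected and set in T-SQL, which is handy on the SQL-Core instance where there is no local GUI. A sketch, assuming the S:\Data and T:\Log folders from above (the `xp_instance_regwrite` calls mirror what the Database Settings page writes; the service must be restarted before new defaults take effect):

```sql
-- Check the current default file locations.
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultData,
       SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLog;

-- Script the same change the Database Settings page makes,
-- by writing the instance registry values.
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'DefaultData', REG_SZ, N'S:\Data';
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'DefaultLog', REG_SZ, N'T:\Log';
```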
Once the backup files are unpacked, the first way to get the database in place is to use the database restore functionality. There are two options, T-SQL or SQL Server Management Studio; I used Management Studio in this case. But first, something else needs to be considered: the SQL Server service account doesn’t have access to the new S: and T: drives, so permissions must be changed. Grant the service account read/write/execute permissions on the data and log folders.
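If you’re not sure which account the Database Engine runs as, you can ask the instance itself before touching the folder ACLs. A quick check (requires VIEW SERVER STATE permission):

```sql
-- Find the account the SQL Server services run as, so the right
-- principal can be granted rights on the S:\Data and T:\Log folders.
SELECT servicename, service_account
FROM sys.dm_server_services;
```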
In Management Studio, right-click the desired SQL Server instance’s “Databases” node and select “Restore Database…”. Point it at the backup file, remap the data and log files to the new drives, and start the restore. It will churn for a few minutes, and then the database will be online.
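The same restore can be done in T-SQL. A minimal sketch, assuming the backup sits at a hypothetical C:\Backups path and uses the logical file names shipped with AdventureWorks2014 (always confirm yours with `RESTORE FILELISTONLY` first):

```sql
-- List the logical file names contained in the backup.
RESTORE FILELISTONLY
FROM DISK = N'C:\Backups\AdventureWorks2014.bak';

-- Restore, moving the data file to S: and the log file to T:.
RESTORE DATABASE AdventureWorks2014
FROM DISK = N'C:\Backups\AdventureWorks2014.bak'
WITH MOVE N'AdventureWorks2014_Data' TO N'S:\Data\AdventureWorks2014_Data.mdf',
     MOVE N'AdventureWorks2014_Log'  TO N'T:\Log\AdventureWorks2014_Log.ldf',
     STATS = 10;
```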
One thing to notice is the database owner: it’s the login of the person who restored the database. That isn’t the desired long-term owner, so I’ll change it to ‘sa’, which at this point has no password and cannot be used to log in. And then the database is in place, with the system administrator account as the owner.
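The owner change itself is a one-liner in T-SQL (assuming the restored database is named AdventureWorks2014):

```sql
-- Change the database owner to the built-in sa login.
ALTER AUTHORIZATION ON DATABASE::AdventureWorks2014 TO sa;

-- Verify the change.
SELECT name, SUSER_SNAME(owner_sid) AS owner
FROM sys.databases
WHERE name = N'AdventureWorks2014';
```

`ALTER AUTHORIZATION` is the current replacement for the older `sp_changedbowner` procedure.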
Next time, we will put the database on the SQL-Core instance, create a SQL Project in Visual Studio for the database and talk more about security.