Import missing VM into Hyper-V

I want to share a little story which once again proved to me how much easier IT gets when you learn a little PowerShell.

I recently had an outage on my Hyper-V server (Windows 10 server build 9841 btw) which hosted my lab environment at home. The server lost the connection to an SSD drive (E:\) containing almost 15 VMs, but luckily this was fixed by reattaching the SATA cable to the drive.

However, when the server booted and my E:\ drive had returned, all the VMs on the drive were missing, both in the Hyper-V management console and in PowerShell when I ran “Get-VM”. The files and VHDs were intact, so it was only a matter of importing them into Hyper-V.

So here I had two choices:

  1. Import the VMs one by one in a 5-click wizard
  2. Import the VMs with Powershell

After fiddling around with feeding the configuration files for each VM into a foreach loop, and still not making it work, I tried something simpler. All I needed was a one-liner which listed the config files and piped them into the Import-VM cmdlet. The following line imported all the VMs on my E: drive into Hyper-V, and I could start the VMs without changing any kind of configuration.

Get-ChildItem E:\Hyper-V -Recurse *.vmcx | Import-VM

Once again PowerShell proves to be an amazing tool.

Hyper-V backup using Windows Server Backup

A new feature in Windows Server 2012 is that Windows Server Backup (WSB) now has Hyper-V support, meaning you can use it to back up and restore virtual machines running on Hyper-V. This provides a complete backup and restore solution out of the box, which can prove to be good enough in some environments, particularly in the SMB market.

Install Windows Server Backup

To install Windows Server Backup you can use

  • Server Manager – Added as a feature under “Add Roles and Features”
  • Command Line – Run “start /w ocsetup WindowsServerBackup”
  • PowerShell (2012) – Run “Add-WindowsFeature Windows-Server-Backup”
  • PowerShell (2008 R2) – Run “Import-Module ServerManager” and then “Add-WindowsFeature -Name Backup-Features -IncludeAllSubFeature”

Nice 2 know about Windows Server Backup 

  1. WSB uses VSS (Volume Shadow Copy Service) to create a .VHDX file which contains a snapshot of the virtual machines that are backed up. This also enables WSB to take a full backup of, and flush the transaction logs of, VSS-aware databases like Active Directory and Microsoft Exchange when you select “VSS full backup” (not selected by default).
  2. WSB uses VSS to manage the backup versions, and since VSS is per-volume this makes WSB unable to maintain several versions of a backup job when you back up to a network share. A backup to a network share will overwrite the previous backup. If you back up to a locally connected drive you can have several versions.
  3. When you back up a VM you get a warning saying the VM will be put in saved state while the backup runs. This is not the case. The VM will continue to run uninterrupted and no one will notice you are backing it up.

Backing up a VM

“wbadmin start backup” is the primary command to back up your VMs. I won’t go through all the options and switches, but here are a few examples.

To back up a VM named “Server1” to the disk mapped as Y, run the following command:

wbadmin start backup -backuptarget:Y: -hyperv:Server1

To back up a VM named “Server1” to a shared folder, run the following command:

wbadmin start backup -backuptarget:\\server2\backup -hyperv:Server1

To back up a VM named “DC1” to the disk mapped as Y and flush the transaction logs of AD, run the following command:

wbadmin start backup -backuptarget:Y: -hyperv:DC1 -vssFull

Restore a VM

“wbadmin start recovery” is the primary command to recover a VM from backup. Recovering a VM is slightly trickier than backing it up, but I have never heard of a backup product where a restore is easier than taking a backup. The command has several options and switches, but I’ll stick to the basic ones in this post.

The restore procedure involves finding the version of your backups you want to restore, then the items within that version, before the restore itself. To begin with, you find your backup versions with the following command

wbadmin get versions


This will provide a list of the backups taken from the local machine. Look for the field “Version Identifier”, which you need in the next command. Then we take a look at what resides in this backup version with the next command

wbadmin get items -version:(version identifier)


Here you see I have a VM named “LAB2-PC2” that I am able to restore from this backup. To do so I have to grab the “VM identifier” value and the version identifier from before, and run the following command

wbadmin start recovery -itemtype:hyperv -version:(version identifier) -items:(VM identifier)


Notice the warnings that it will delete the VM if it still exists and restore the VM from the backup. Also, you have to verify the network settings of the VM after the restore. As mentioned, this command has numerous options for restoring to alternate locations and such, so I would suggest that you go exploring with “wbadmin -?” or have a look at the documentation.

Restore a single file or folder

WSB only provides a snapshot of the VM, so you have to restore the entire VM or nothing at all. But if you just need to restore a file or a folder, locate the .VHDX file in the backup, mount it in Disk Management and extract the files from there. Alternatively you can restore the VM on another Hyper-V host and boot it up there to extract the files.
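If the host has the Hyper-V PowerShell module (shipped with Windows Server 2012 and later), mounting the backed-up disk can also be scripted. A minimal sketch, where the path to the .VHDX inside the backup target is a made-up example you must adjust:

```powershell
# Example path to a virtual disk inside the backup target (adjust to your own)
$vhd = "Y:\WindowsImageBackup\LAB2-HOST\Backup 2012-05-01 120000\LAB2-PC2.vhdx"

# Mount the disk read-only so the backup copy is never modified
Mount-VHD -Path $vhd -ReadOnly

# ...copy the files you need from the newly mounted volume...

# Detach the disk again when you are done
Dismount-VHD -Path $vhd
```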

Final words

I’ll keep this short and straight to the point: TEST YOUR BACKUP!

Longer version: I’m convinced one of the more common failures in IT is that people do not attempt a proper restore until the day they need it the most. I can’t express how important it is that you test your backups and try a restore. Create a restore procedure and write it down! When the day comes that you need it, you will thank yourself that you did.

Thank you for reading, hopefully you have enjoyed it.

Time service on virtualized domain controllers

In Active Directory, the time configuration is an important topic, as the domain controllers have to be in sync with each other, the member servers and the clients. Using the default Kerberos settings, a time difference of more than 5 minutes will cause logon issues, as the logon tokens will be considered outdated.

The domain controllers

In an Active Directory environment the domain controller hosting the PDC emulator FSMO role is the master time server. All other DCs, member servers and clients should synchronize their time with this server. The PDC emulator should always retrieve its time from an external, reliable source. Personally I’ve used a public NTP source with no issues. To check your current configuration, run this line from an elevated command prompt:

w32tm /query /source

The output should display either the name/IP of your current time source, or “Local CMOS Clock” if you synchronize from the BIOS.

To set the time source, run this command from an elevated command prompt on the DC hosting the PDC emulator, replacing “&lt;your NTP source&gt;” with a source of your choice.

w32tm /config /syncfromflags:manual /manualpeerlist:"&lt;your NTP source&gt;" /update

But when the domain controller is virtualized, the virtualization platform has an agent installed in the virtual machine (VMware Tools, Hyper-V Integration Services etc.), and these often synchronize the time with the host server’s clock by default. You should avoid this setting on your DCs. If your DC is virtual on a Hyper-V server, the command above will display “VM IC Time Synchronization Provider”. If you run Hyper-V you can leave the time sync enabled under Integration Services and instead run this line from an elevated command prompt on the virtual domain controller. Select yes to overwrite if you are prompted for it.

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0

The domain members:

As for the rest of the domain, it should sync up automatically, but should you need to point a server or client back to the domain hierarchy for time sync, you can run these lines from an elevated command prompt:

w32tm /config /syncfromflags:domhier /update
net stop w32time
net start w32time

The network

Finally, don’t forget to open your firewalls. NTP requires UDP port 123 to be open and forwarded to the DC hosting the PDC emulator, and also remember the local firewall on the server.
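On the local firewall the rule can be added from an elevated command prompt. A sketch (the rule name is my own invention):

```shell
netsh advfirewall firewall add rule name="NTP inbound (UDP 123)" dir=in action=allow protocol=UDP localport=123
```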

Hyper-V limits

A quick list of the limitations to Hyper-V as of Windows 2008 R2:

Hyper-V host:

  • Virtual CPUs: 512
  • Maximum Memory:  32GB (standard), 2TB (enterprise and datacenter)
  • VMs per host: 384
  • Storage: Limited only by Windows
  • Network Interface Cards: Limited only by Windows

Virtual Machine:

  • Virtual CPUs: 4 (if the host has at least 4)
  • Maximum Memory: 64GB
  • IDE drives: 4 (remember the VM must boot from an IDE-attached VHD file)
  • SCSI drives: 256 (up to 4 scsi-controllers with up to 64 drives each)
  • VHD limit: 2 TB
  • Total storage (VHDs): 520 TB (260 drives at 2 TB each) + any pass-through drives
  • Pass-through drives: Number and size limited by Windows only
  • Virtual Network Interface Cards: 12 (8 synthetic and 4 legacy)

The magic of CSV

Ah, the lovely feeling inside when you start to plan a Hyper-V environment with servers, clusters, network and storage. 🙂 There are so many things to think about, to plan, to set up, to test and to fix. In the enterprise, Hyper-V in Windows 2008 fell really short in my honest opinion, since it was a pain to cluster. As you may know, cluster nodes are assigned storage in LUNs, and only one node can access any given LUN at a time. This means that when you store virtual machines and their data on a LUN, you have to migrate everything on the LUN from one node to another when you migrate a virtual machine. So to make the virtual machines independent of each other, you have to make a separate LUN for every single virtual machine in the cluster! At only 50 virtual machines the administrative tasks get a little out of hand.

But in Windows 2008 R2 we got CSV, which is short for Cluster Shared Volumes (not comma-separated values). This is a new cluster storage type which simply enables several virtual machines to access their VHD files on the same LUN and still operate on, or move between, different cluster nodes independently. That’s the basic description, and more detailed information is available in Microsoft’s documentation.

Some benefits include

  • You need far fewer LUNs (obviously)
  • Every LUN needs some space to allow for data growth in the virtual machines. With fewer LUNs you’ll make much better use of your available storage space
  • The path to the CSV is stored under the folder \ClusterStorage on the system drive, and it’s the same path regardless of which node you’re looking from. So tracking the paths is easier now
  • Virtual machines can still fail over and move between cluster nodes independently
  • You can set up some clever designs, for example where the system drives, data drives and page files of the virtual machines are placed on separate physical storage solutions (and LUNs) for performance gains, without having to implement a jungle of LUNs.

CSV is available in all editions of Windows Server 2008 R2 that include Failover Clustering, as well as Hyper-V Server 2008 R2. Also notice that putting other clustered resources on CSV is not supported; it’s for virtual machines only.

Hyper-V virtual hardware options

In the Hyper-V console you can find a few options regarding the virtual hardware of each VM, and there are several nice features you should all know about. Let’s begin with synthetic and emulated devices.

Emulated devices (aka legacy devices) are the more traditional virtual hardware which is supported by pretty much any OS out of the box. They use CPU cycles on the host to emulate hardware and transfer its I/O. Synthetic devices (aka enlightened devices) are new virtual hardware designed to be much more efficient in virtual machines than emulated devices. They perform far better as they transfer I/O directly to the VMBus instead of emulating a device in between, but the operating system needs the Hyper-V integration software to detect synthetic devices. In short, emulated gives better compatibility but synthetic gives better performance.

This explains why the hard drive containing the OS in a VM must be an IDE drive. IDE drives in Hyper-V are emulated at boot, but can transition seamlessly into synthetic devices. SCSI controllers are synthetic only and are therefore not visible to the guest OS until the Hyper-V drivers have been loaded. In performance there is no practical difference between an IDE drive and a SCSI drive in Hyper-V, but you can hot-add disks on the SCSI controller.
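The hot-add capability is easy to try from PowerShell on Windows Server 2012 and later, where Hyper-V ships with a PowerShell module. A sketch, with VM name and path made up for the example:

```powershell
# Create a new dynamically expanding data disk (example path and size)
New-VHD -Path "E:\Hyper-V\Server1-data.vhdx" -SizeBytes 100GB -Dynamic

# Attach it to the running VM's synthetic SCSI controller
Add-VMHardDiskDrive -VMName "Server1" -ControllerType SCSI -Path "E:\Hyper-V\Server1-data.vhdx"
```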

While we’re on the subject of virtual hard drives: the VHD files can be set up in three different ways in Hyper-V: fixed size, dynamically expanding, and differencing disks.

  1. A fixed size VHD will allocate all space when the file is created, so if you create a 50GB fixed size VHD the file will be 50GB in size even if it only contains “free space”. The advantage is that this file is unlikely to become fragmented and performs slightly better than dynamically expanding disks. Fixed size disks are recommended for production environments. The obvious downside is that you will always allocate space which isn’t used.
  2. Dynamically expanding disks on the other hand will only allocate the space used by the data inside them, which lets you utilize the storage capacity much better. These drives don’t perform as well as fixed drives, since they have to spend disk writes expanding the file when data is added. You will also likely end up with fragmented VHD files. But for testing purposes this is a very good option to increase what’s called the “VM density” (I smell a later blog post about that). Just be very careful that the VHD files have room to grow, or you risk bringing all your VMs down all of a sudden when your physical drive has no space left.
  3. With differencing disks you set up a “parent disk” and then create “child disks” that use the parent as a reference. All changes from the parent disk are stored in the children. This configuration is seldom used for anything other than testing, as failure of the parent disk will make all child disks unusable.
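Assuming the Hyper-V PowerShell module from Windows Server 2012 and later (paths and sizes are just examples), the three disk types above can be created like this:

```powershell
# 1. Fixed size: all 50GB is allocated up front
New-VHD -Path "E:\VHD\fixed.vhdx" -SizeBytes 50GB -Fixed

# 2. Dynamically expanding: the file grows on demand, up to 50GB
New-VHD -Path "E:\VHD\dynamic.vhdx" -SizeBytes 50GB -Dynamic

# 3. Differencing: a child disk that stores all changes from an existing parent
New-VHD -Path "E:\VHD\child.vhdx" -ParentPath "E:\VHD\parent.vhdx" -Differencing
```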

In addition we have pass-through disks, which connect a volume directly to the VM instead of through a VHD file. This gives by far the best performance, but the disk is completely assigned to that particular VM and can’t be accessed by any other VM or by the host.

RemoteFX is a new feature in Windows Server 2008 R2 Service Pack 1. It is a virtual display adapter that enables what Microsoft calls a “rich desktop experience”, meaning high-end graphics in a VM and over RDP. The graphics are virtualized in the display adapter. This requires support in the graphics driver, and you’ll usually need an enterprise-class display adapter from Nvidia or ATI.

The amount of memory assigned to a VM can either be static or, as of Windows Server 2008 R2 SP1, dynamic. By selecting dynamic memory you specify the amount of memory the VM gets at boot and the maximum amount it can allocate. There is also a slide bar used to determine the priority the VM has when several VMs have to divide the memory between themselves.
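With the Hyper-V PowerShell module (Windows Server 2012 and later), the same settings can be sketched like this; the VM name and values are examples:

```powershell
# Enable dynamic memory: 1GB at boot, up to 4GB maximum,
# with a higher-than-default memory priority (0-100 scale)
Set-VMMemory -VMName "Server1" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MaximumBytes 4GB -Priority 80
```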

CPU cores are pretty straightforward: it is the number of logical CPU cores the VM has installed. I’ll come back to the pros and cons of assigning a VM one or more CPU cores. Here you can also reserve CPU capacity for the VM so it’s guaranteed some resources, and you can limit the percentage of the host’s CPU power the VM is allowed to consume.
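As a sketch with the Hyper-V PowerShell module (Windows Server 2012 and later; VM name and numbers are examples):

```powershell
# Give the VM 2 virtual CPUs, guarantee it 10% of the host's CPU,
# and cap how much of the host's CPU it may consume at 75%
Set-VMProcessor -VMName "Server1" -Count 2 -Reserve 10 -Maximum 75
```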

Welcome to Hyper-V (and my blog)

Hello and welcome to my very first blog post ever. I’ve decided to start blogging about all the wondrous technology and solutions I bump into on a daily basis, and if you are reading this I hope it provides both entertainment and educational value to you. I’ll kick it off nice and easy with a quick introduction to Hyper-V and server virtualization.

Hyper-V is a type 1 hypervisor which Microsoft provides both as a free product and included in the 64-bit versions of the Windows Server 2008 (and later) server family, with some restrictions. Hyper-V provides an easy, cost-effective and secure way to implement server virtualization without having to purchase additional 3rd-party products. But why should we virtualize our servers?

In the traditional server room scenario we have dedicated server hardware for each server, which leads to a tremendous amount of hardware to purchase, power consumption, heat and maintenance. In addition these servers tend to be very underutilized, so the benefits of the powerful hardware are mostly wasted. Using virtualization you can create and run several virtual servers on one physical box and get a lot more computing power in return for your investment. The heat generated and the power consumption fall drastically, and there’s a lot less hardware that needs maintenance and replacement. One should be a little careful, however, and keep in mind that if one physical server breaks it can bring many virtual servers down, but there are several ways to provide redundancy in such a scenario. Once again, planning and designing is half the job.

Another clear advantage of virtualization is that servers can easily be provided with more disk, memory and CPU power when needed, for example by assigning more resources or even moving the virtual server over to a physical box with more resources. In addition, backup and restore of servers are significantly easier, as the entire server and its configuration is basically a small set of files.

I will wrap this up for now, but there will be many more posts about virtualization in different forms. I hope you all have enjoyed this post and I’ll go much deeper into this in my later posts. Feel free to add any comment you may have. Thank you