Time service on virtualized domain controllers

In Active Directory, time configuration is an important topic as the domain controllers have to be in sync with each other, with member servers and with clients. With the default Kerberos settings, a time difference of more than 5 minutes will cause logon failures, as the Kerberos tickets will be considered invalid.
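If you suspect clock skew on a machine, w32tm can measure its offset against another computer. A quick check might look like this, where dc01.contoso.com is just a placeholder for one of your own DCs:

w32tm /stripchart /computer:dc01.contoso.com /samples:5 /dataonly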

The domain controllers

In an Active Directory environment the domain controller hosting the PDC emulator FSMO role is the master time server. All other DCs, member servers and clients should synchronize their time with this server. The PDC emulator should always retrieve its time from an external, reliable source. Personally I’ve used no.pool.ntp.org as a time source with no issues. To check your current configuration, run this line from an elevated command prompt:

w32tm /query /source

The output should display the name/IP of your current time source. If you synchronize from the BIOS it will display “Local CMOS Clock”.
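For more detail, such as the stratum and the time of the last successful synchronization, you can also run:

w32tm /query /status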

To set the time source, run this command from an elevated command prompt on the DC hosting the PDC emulator, replacing “no.pool.ntp.org” with a source of your choice:

w32tm /config /syncfromflags:manual /manualpeerlist:no.pool.ntp.org /update
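After updating the configuration, restart the time service and force a resynchronization so the change takes effect immediately:

net stop w32time
net start w32time
w32tm /resync /rediscover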

But when the domain controller is virtualized, the virtualization platform has an agent installed in the virtual machine (VMware Tools, Hyper-V Integration Services etc.), and these often synchronize the time with the host server’s clock by default. You should avoid using this setting on your DCs. If your DC is virtual on a Hyper-V server, the command above will display “VM IC Time Synchronization Provider”. If you run Hyper-V you can leave time synchronization enabled under integration services and instead run this line from an elevated command prompt on the virtual domain controller. Select yes to overwrite if you are prompted for it.

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t REG_DWORD /d 0
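After the change, restart the time service and verify that the source now reads your NTP server rather than “VM IC Time Synchronization Provider”:

net stop w32time
net start w32time
w32tm /query /source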

The domain members

The rest of the domain should be able to sync up automatically, but should you need to explicitly configure a server or client to use the domain hierarchy for time synchronization, you can run these lines from an elevated command prompt:

w32tm /config /syncfromflags:domhier /update
net stop w32time
net start w32time
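You can also check the time offsets of all domain controllers in the domain from any member with:

w32tm /monitor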

The network

Finally, don’t forget to open your firewalls. NTP uses UDP port 123, which must be opened and forwarded to the DC hosting the PDC emulator, and remember the local firewall on the server as well.
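If the Windows firewall on the server is blocking NTP, a rule like this one opens it up (a sketch; adjust the rule name and scope to fit your environment):

netsh advfirewall firewall add rule name="NTP (UDP 123 in)" dir=in action=allow protocol=UDP localport=123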

Hyper-V virtual hardware options

In the Hyper-V console you can find a number of options regarding the virtual hardware of each VM, and there are several nice features you should know about. Let’s begin with synthetic and emulated devices.

Emulated devices (aka legacy devices) are the more traditional virtual hardware, supported by pretty much any OS out of the box. The host spends CPU cycles emulating the hardware and transferring its I/O. Synthetic devices (aka enlightened devices) are newer virtual hardware designed to be much more efficient in virtual machines than emulated devices. They perform far better since I/O is transferred directly over the VMBus instead of through an emulated device in between, but the operating system needs the Hyper-V integration software to detect synthetic devices. In short, emulated gives better compatibility while synthetic gives better performance.

This explains why the hard drive containing the OS in a VM must be an IDE drive. IDE drives in Hyper-V are emulated at boot, but transition seamlessly into synthetic devices once the integration components are loaded. SCSI controllers are synthetic only and are therefore not visible to the guest OS until the Hyper-V drivers have been loaded. Performance-wise there is no practical difference between an IDE drive and a SCSI drive in Hyper-V, but you can hot-add disks on the SCSI controller.

While we’re on the subject of virtual hard drives: the VHD files can be set up in three different ways in Hyper-V: fixed size, dynamically expanding, and differencing disks.

  1. A fixed size VHD allocates all its space when the file is created, so if you create a 50GB fixed size VHD the file will be 50GB in size even if it only contains “free space”. The advantage is that the file is unlikely to be fragmented and performs slightly better than a dynamically expanding disk. Fixed size disks are recommended for production environments. The obvious downside is that you always allocate space that isn’t used.
  2. A dynamically expanding VHD on the other hand only allocates the space used by the data inside it, and lets you utilize the storage capacity much better. These drives don’t perform as well as fixed disks since disk writes are spent expanding the file as data is added, and you will also likely end up with fragmented VHD files. But for testing purposes this is a very good option to increase what’s called the “VM density” (I smell a later blog post about that). Just be very careful that the VHD files have room to grow, or you risk bringing all your VMs down all of a sudden when the physical drive has no space left.
  3. With differencing disks you set up a “parent disk” and then create “child disks” that use the parent as a reference. All changes relative to the parent disk are stored in the children. This configuration is seldom used for anything other than testing, as a failure on the parent disk will make all child disks unusable. (A sketch of how each type is created follows below.)
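All three types can be created with diskpart from an elevated command prompt. The file paths here are just examples, and sizes are given in MB (51200 MB = 50GB):

diskpart
create vdisk file="D:\VHD\fixed.vhd" maximum=51200 type=fixed
create vdisk file="D:\VHD\dynamic.vhd" maximum=51200 type=expandable
create vdisk file="D:\VHD\child.vhd" parent="D:\VHD\parent.vhd"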

In addition we have pass-through disks, which connect a volume directly to the VM instead of through a VHD file. This gives by far the best performance, but the disk is completely assigned to that particular VM and can’t be accessed by any other VM or by the host.
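Before a physical disk can be passed through, it must be taken offline on the host. Assuming the disk shows up as disk 2 (run “list disk” inside diskpart to find yours), a sketch:

diskpart
select disk 2
offline disk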

RemoteFX is a new feature in Windows Server 2008 R2 Service Pack 1. It is a virtual display adapter that enables what Microsoft calls a “rich desktop experience”, meaning high-end graphics in a VM and over RDP. The graphics are virtualized on the host’s display adapter. This requires support in the graphics driver, and you’ll usually need an enterprise-class display adapter from Nvidia or ATI.

The amount of memory assigned to a VM can either be static, or, as of Windows Server 2008 R2 SP1, dynamic. With dynamic memory you assign the amount of memory the VM gets at boot and the maximum amount it can allocate. There is also a slider that determines the VM’s priority when several VMs compete for memory.

CPU cores are pretty straightforward: it is the number of logical CPU cores the VM has installed. I’ll come back to the pros and cons of assigning a VM one or more CPU cores. Here you can also reserve CPU capacity so the VM is guaranteed some resources, and you can limit the percentage of the host’s CPU the VM is allowed to consume.

Welcome to Hyper-V (and my blog)

Hello and welcome to my very first blog post ever. I’ve decided to start blogging about all the wondrous technology and solutions I bump into on a daily basis, and if you are reading this I hope it provides both entertainment and educational value. I’ll kick it off nice and easy with a quick introduction to Hyper-V and server virtualization.

Hyper-V is a type 1 hypervisor which Microsoft provides both as a free product and, with some restrictions, included in the 64-bit versions of the Windows Server 2008 (and later) family. Hyper-V provides an easy, cost-effective and secure way to implement server virtualization without having to purchase additional 3rd party products. But why should we virtualize our servers?

In the traditional server room scenario we have dedicated hardware for each server, which leads to a tremendous amount of hardware to purchase, power consumption, heat and maintenance. In addition these servers tend to be very underutilized, so most of the powerful hardware is wasted. Using virtualization you can create and run several virtual servers on one physical box and get a lot more computing power in return for your investment. The heat generated and the power consumed fall drastically, and there’s a lot less hardware that needs maintenance and replacement. One should be a little careful, however, and keep in mind that if one physical server breaks it can bring many virtual servers down, but there are several ways to provide redundancy in such a scenario. Once again, planning and designing is half the job.

Another clear advantage of virtualization is that servers can easily be provided with more disk, memory and CPU power when needed, for example by assigning more resources or even moving the virtual server over to a physical box with more capacity. In addition, backup and restore of servers are significantly easier, as the entire server and its configuration is basically a small set of files.

I will wrap this up for now, but there will be many more posts about virtualization in different forms. I hope you have enjoyed this post; I’ll go much deeper into these topics later. Feel free to add any comments you may have. Thank you.