My Hyper-V Beast

Because I now do so much work from my home office, I needed to ramp up my capacity for running up test servers and labs.  I have a couple of test machines, but when you're running up environments which need a few servers to support them (SCCM testing, for example), single machines don't really cut it.  Virtual servers are obviously the answer, but the host system needs quite a bit of grunt to make it all possible.

So, here’s what I ended up with.

Hardware

Processor – Intel Core i7 950

Nothing I run is particularly processor-intensive, so this CPU offers more than enough processing power for all the VMs with capacity to spare.

Motherboard – Gigabyte GA-X58A-UD5

Has pretty much everything onboard including a couple of RAID controllers for the system and data arrays.  Supports USB 3.0 and SATA3.

Memory – 2 x 12GB Kingston PC3-10600 DDR3 kits (6 x 4GB DIMMs)

Memory is becoming the bottleneck for running up loads of VMs (depending on their actual workload, of course). 24GB gives me lots of capacity, and Server 2008 R2 SP1 Dynamic Memory brings even greater opportunity for higher VM density.

HDD – 8 x 1TB WD Caviar Blue SATA-II 7200rpm 32MB cache

No, they're not 10K rpm drives, which is the minimum recommended standard for VMs running on Hyper-V (or any other enterprise-class virtualisation platform), but 7200rpm is adequate for lab environments.  Plus it keeps the cost down while still providing large amounts of storage.

RAID – Gigabyte GBB36X and Intel ICH10R

In an ideal world I would have purchased a dedicated 8-port SATA RAID adapter, but that is a pretty expensive item.  In the end I configured a 2-disk RAID-1 array on the Gigabyte controller for the operating system volume (around 900 GB), and a 6-disk RAID-5 array on the Intel controller for the data/VM volume (around 4700 GB).  Yes, there are a number of different ways of doing it and the motherboard is definitely a single point of failure, but mitigating these risks would cost more than I was able to spend on what is essentially a home server.
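
In case you're wondering how those volume sizes fall out of 1TB drives, the rough arithmetic is below – a quick PowerShell sketch, assuming the usual vendor maths where a 1TB drive is 10^12 bytes while Windows reports capacity in binary GB.

    # 1TB as the drive vendors count it
    $tb = 1000000000000
    # RAID-1: 2 x 1TB mirrored leaves one drive's worth of usable space
    ($tb * 1) / 1GB     # roughly 931 GB as Windows reports it
    # RAID-5: 6 x 1TB loses one drive's worth to parity, leaving five usable
    ($tb * 5) / 1GB     # roughly 4657 GB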

Fortunately, nothing I'm going to run up in this environment will generate enough disk I/O to bog down the arrays, but if I were planning on multiple SQL servers, for example, an argument could definitely be made for breaking the data array up into multiple smaller arrays rather than using one great big one.

Case – Fractal Design Define R3

Normally the case wouldn't be that important, but as this sits under my desk at home I needed something which would run cool and quiet.  And I have to say, I'm staggeringly impressed at how good this case is.  Bearing in mind that there are eight 3.5″ hard drives plus an extra fan to cool them down, the first time I powered the system on I thought there was something wrong with it because I couldn't hear anything.  In addition, the air venting out the back is always slightly warm, never hot.  Definitely going to insist on Fractal Design cases on all my systems from here on in.

Scorpion Technology put the case together for me, and did a sterling job as always – highly recommended.  The whole build came in at a fraction over AUD $2500 (including assembly, but no software); however, this was at the end of last year and the unit price of a number of the components will have come down since then, or been replaced by better ones at a comparable price point.

Software

Operating System – Windows Server 2008 R2 Enterprise SP1

Runs unbelievably quickly on a system with these sorts of specs.  I was briefly tempted to make it the gaming rig as well 🙂  The license came from my TechNet Plus subscription.

At the moment only the Hyper-V role is enabled.  The usual advice is to keep the parent partition as streamlined as possible and not enable other roles or features.  The reality is that you can enable as much as you like – it will all run fine, although you might start to impact performance if you install too many applications.  The future plan is to enable the File Services role and maybe a couple of others, but all application workloads like SQL and WDS/WSUS will reside in VMs.
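
For what it's worth, the same role management can be done from PowerShell rather than Server Manager.  A rough sketch on 2008 R2 follows – the Hyper-V role name is simply "Hyper-V", while the File Services role service name (FS-FileServer) is from memory, so verify both with Get-WindowsFeature first.

    Import-Module ServerManager
    # See which roles and features are already enabled on the parent partition
    Get-WindowsFeature | Where-Object { $_.Installed }
    # Enable the Hyper-V role (needs a reboot)
    Add-WindowsFeature Hyper-V -Restart
    # Later on, the File Services role for the parent partition:
    # Add-WindowsFeature FS-FileServer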

I’ve recently upgraded it to Service Pack 1, and have so far been very impressed with the memory usage of VMs with Dynamic Memory enabled – it’s a feature which will definitely enable me to run up more sophisticated Windows-based labs.  More blogs to come on my hands-on experiences with Dynamic Memory.

Additionally, I've installed Remote Desktop Services to start playing around with RemoteFX.  The graphics card is an Asus Radeon HD5450, which is a gaming-type GPU rather than a workstation GPU, and therefore it won't ever be supported for RemoteFX.  However, like so many other things in the Windows world, just because it isn't supported doesn't mean it doesn't work 🙂  I did have to manually install the AMD Catalyst drivers, but the card is working fine and RemoteFX is now available.  More on that to come.
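
For reference, the RDS role services can also be added from PowerShell.  This is only a sketch – "RDS-Virtualization" is the Remote Desktop Virtualization Host role service as far as I recall, so run the wildcard query first to confirm the exact name on your build.

    Import-Module ServerManager
    # List the Remote Desktop Services role services and their exact names
    Get-WindowsFeature *RDS*
    # Remote Desktop Virtualization Host, which RemoteFX-enabled VMs sit on (needs a reboot)
    Add-WindowsFeature RDS-Virtualization -Restart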

The entire system is running (or will be running) as a self-contained AD domain.  This has some interesting implications for DCs running on VMs as well as remote management of the system from non-domain workstations.  Expect plenty of blog posts about the various resources and workarounds for getting an environment like this operating smoothly.
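
As a small taste of those workarounds: managing the host from a non-domain workstation generally comes down to making WinRM on the workstation trust the host by name.  A minimal sketch – the hostname "hyperv-beast" is just a placeholder for whatever the host is actually called.

    # On the host: make sure WinRM is configured and listening
    winrm quickconfig
    # On the non-domain workstation: trust the Hyper-V host so remote PowerShell
    # and management connections will authenticate ("hyperv-beast" is a placeholder)
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "hyperv-beast" -Force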

Until next time 🙂

10 comments to My Hyper-V Beast

  • Nice one James. How much did it all cost?

    • James Bannan

      Hey Jeff – just over $2500. The next jump in cost would have been for better hard drives and RAID controllers – couldn't really justify the expense 🙂

  • Scott Brown

    What an awesome rig. This is something I am looking at doing up for my home machine to test virtualized environments.

  • stephen

    Very similar to my own beasts. The Gigabyte boards work decently; try the AMD 850+ south bridges for lower cost and better throughput, plus the AMD boards seem better equipped to support RemoteFX. One other suggestion: buy a cheap 60 GB SSD for either the OS or a dedicated SQL VHD store. I've been able to run the entire System Center suite, SharePoint, and a couple of VDI Win7 farm boxes on two 16 Gig boxes for the past week with no serious issues, other than lack of familiarity with the new betas. But so far, I'm really impressed with both the new SCCM 2012 beta and SCVMM, even running on my "gray box" Hyper-V beasts 🙂

  • Thorsten

    Hi,
    I have read your blog and bought the HD5450 too, because I couldn't find a valid list of compatible, affordable cards. I can't add the RemoteFX card to a VM – the button is greyed out. Which graphics driver did you install?

    I don't have the Enterprise version. It's Hyper-V R2 SP1.

    Greetings
    Thorsten

    • James Bannan

      I used the AMD Catalyst driver package. On Server 2008 R2 the installer may not detect the graphics hardware, so you might need to extract the files and use Device Manager to update the GPU driver. Also, did you enable RemoteFX via Server Manager?
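
      If you'd rather script the first step, pnputil can add the extracted driver package to the driver store before you update the GPU in Device Manager – the path below is just a placeholder for wherever you extracted the Catalyst files.

          # Add the extracted display driver to the driver store (placeholder path)
          pnputil -a "C:\ATI\Support\*.inf"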

  • Hi James

    I’m finally able to start work on creating my virtual lab now that the school upgrade has been completed but I’m having a little trouble designing the networking and I’m not sure if what I’m looking for is possible in Hyper-V.

    The server has two network connections. What I'd like to do is have one connected to the internet for WSUS updates and downloading new ISOs; the second would be connected to a separate switch that will allow me to deploy to physical machines. If possible I'd also like to make the host a member of the virtual domain. The problem is that the internet-connected NIC would be connected to a production network and I need to make sure that it's protected from things like the virtual DHCP server leaking out.

    The production NIC is connected on 10.6.1.x but would be a DHCP client so I can easily move the server to other networks; the internal network would be 192.168.10.x.

    Could you confirm if this is possible (preferably without a VLAN) and if so point me in the right direction to achieving this?

    Thanks

    Martyn

    • James Bannan

      Hi Martyn – apologies for the delayed response. It’s a little confusing, but I think I get what you’re trying to do. As a general rule, I would do the following: 1) unless you want your Hyper-V VMs to communicate with the production/physical network, always connect them to Internal networks; 2) you can install Routing and Remote Access Services (RRAS) on the Hyper-V host for it to act as a LAN router for the Internal Hyper-V networks; 3) You can make the host a member of the virtual domain, but then you have to manually configure IP/DNS which probably precludes using DHCP on the NIC connected to the production network.

      I think you’re going to have to use a static IP on the production NIC and set up RRAS so that the VMs can talk out over the production network, at least so that the DC/DNS server can do external name resolution. Hope that helps.
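
      If it helps, the RRAS components can be installed from PowerShell as well. The feature names below are from memory, so run the wildcard query first; the actual LAN routing configuration is then done in the Routing and Remote Access console.

          Import-Module ServerManager
          # List the Network Policy and Access Services components and their exact names
          Get-WindowsFeature *NPAS*
          # Routing and Remote Access Services (verify the name with the query above)
          Add-WindowsFeature NPAS-RRAS-Services -Restart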

  • motnis

    Hi there, great article. I plan to build a lab PC and I have a question: as far as I know, Hyper-V requires DEP and hardware-assisted virtualisation. How can I check whether the setup I want to buy has these features? I'm really confused about which CPU and motherboard to buy and I don't want to lose my money – is there any rule for the CPU or motherboard? I can't find this in the specifications. Which motherboards and PCs from AMD or Intel will support this?

    • James Bannan

      Hi – thanks for the comment. When planning a new Hyper-V setup, the motherboard isn't really that critical – it's the functionality of the CPU which is important. I haven't looked at setting up Hyper-V on AMD systems, so my experience is only with Intel CPUs. Intel has an online tool for assessing whether or not a CPU is VT-capable. As a general rule, if a CPU supports Intel VT (especially modern CPUs) then it supports hardware DEP as well.
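
      If you already have a machine handy to test on, Sysinternals Coreinfo is another quick check – for example:

          # Virtualisation-related features only (VT-x/EPT on Intel, SVM/NPT on AMD)
          Coreinfo.exe -v
          # With no switches it lists the standard CPU flags, including NX (hardware DEP)
          Coreinfo.exe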
