Because I now do so much work from my home office, I needed to ramp up my capacity for running up test servers and labs. I have a couple of test machines, but when you’re running up environments which need a few servers to support them (SCCM testing, for example), single machines don’t really cut it, and while virtual servers are obviously the answer, the host system really needs quite a bit of grunt to make it all possible.
So, here’s what I ended up with.
Processor – Intel Core i7 950
Nothing I run is particularly processor-intensive, so this CPU offers more than enough processing power for all the VMs with capacity to spare.
Motherboard – Gigabyte GA-X58A-UD5
Has pretty much everything onboard including a couple of RAID controllers for the system and data arrays. Supports USB 3.0 and SATA3.
Memory – 2 x 12GB Kingston PC3-10600 DDR3 kits (6 x 4GB DIMMs)
Memory is becoming the bottleneck for running up loads of VMs (depending on their actual workload, of course). 24GB gives me lots of capacity, and Server 2008 R2 SP1 Dynamic Memory brings even greater opportunity for higher VM density.
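As a rough sanity check on that capacity claim, here's the back-of-the-envelope arithmetic I'm working from. The per-VM figures and the parent-partition reserve are my own assumptions for typical idle-ish lab VMs, not measured numbers or Microsoft sizing guidance:

```python
# Back-of-the-envelope VM density estimate for a 24GB host.
# All figures are assumptions, not measurements.

HOST_RAM_GB = 24
PARENT_RESERVE_GB = 4   # assume the parent partition keeps ~4GB for itself


def max_vms(per_vm_gb):
    """How many VMs fit if each averages per_vm_gb of RAM?"""
    return int((HOST_RAM_GB - PARENT_RESERVE_GB) // per_vm_gb)


# Static assignment: every VM holds its full allocation all the time.
print(max_vms(2))   # 2GB each -> 10 VMs

# With Dynamic Memory, mostly-idle lab VMs might average closer to 1GB,
# roughly doubling density for the same set of workloads.
print(max_vms(1))   # -> 20 VMs
```

The point isn't the exact numbers, it's that Dynamic Memory shifts the bottleneck: density is driven by what the VMs actually use, not what you'd have to statically assign them.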
HDD – 8 x 1TB WD Caviar Blue SATA-II 7200rpm 32MB cache
No, they’re not 10K rpm drives, which is the minimum recommended standard for VMs running on Hyper-V (or any other enterprise-class hypervisor), but 7200rpm is adequate for lab environments. Plus it keeps the cost down while still providing large amounts of storage.
RAID – Gigabyte GBB36X and Intel ICH10R
In an ideal world I would have purchased a dedicated 8-port SATA RAID adapter, but that is a pretty expensive item. In the end I configured a 2-disk RAID-1 array on the Gigabyte controller for the operating system volume (around 900 GB), and a 6-disk RAID-5 array on the Intel controller for the data/VM volume (around 4700 GB). Yes, there are a number of different ways of doing it, and the motherboard is definitely a single point of failure, but mitigating these risks would cost more than I was able to spend on what is essentially a home server.
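Those usable sizes fall straight out of the standard RAID arithmetic, plus the fact that a "1 TB" drive on the box is 10^12 bytes but the OS reports capacity in binary units. A quick sketch of the sums:

```python
# Usable capacity of the two arrays, assuming standard RAID arithmetic
# and drives sold as 1 TB = 10**12 bytes (the OS reports binary GiB).

DRIVE_BYTES = 10**12   # one "1 TB" drive as marketed
GIB = 2**30


def raid1_gib():
    # RAID-1 mirrors everything: usable space = one disk, however many mirrors.
    return DRIVE_BYTES / GIB


def raid5_gib(disks):
    # RAID-5 spends one disk's worth of space on parity.
    return (disks - 1) * DRIVE_BYTES / GIB


print(round(raid1_gib()))    # ~931 GiB for the OS mirror
print(round(raid5_gib(6)))   # ~4657 GiB for the data array
```

So the mirror shows up as roughly 931 GB and the RAID-5 volume as a shade over 4.5 TB, which matches the "around 900 GB / around 4700 GB" figures above once rounding and controller overhead are taken into account.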
Fortunately, nothing I’m going to run up in this environment will generate enough disk I/O to bog down the arrays, but if I were planning on multiple SQL servers, for example, an argument could definitely be made for breaking the data array up into multiple smaller arrays rather than using one great big one.
Case – Fractal Design Define R3
Normally the case wouldn’t be that important, but as it sits under my desk at home I needed something which would run cool and quiet. And I have to say, I’m staggeringly impressed at how good this case is. Bearing in mind that there are eight 3.5″ hard drives plus an extra fan to cool them down, the first time I powered the system on I thought there was something wrong with it because I couldn’t hear anything. In addition, the air venting out the back is always slightly warm, never hot. Definitely going to insist on Fractal Design cases on all my systems from here on in.
Scorpion Technology put the case together for me, and did a sterling job as always – highly recommended. The whole build came in at a fraction over AUD $2500 (including the build, with no software). However, this was at the end of last year, and a number of the components will have come down in price since then or been replaced by better ones at a comparable price point.
Operating System – Windows Server 2008 R2 Enterprise SP1
Runs unbelievably quickly on a system with these sorts of specs. I was briefly tempted to make it the gaming rig as well 🙂 The license came from my TechNet Plus subscription.
At the moment only the Hyper-V role is enabled. The usual advice is to keep the parent partition as streamlined as possible and not enable other roles or features. The reality is that you can enable as much as you like – it will all run fine, although you might start to impact performance if you install too many applications. The future plan is to enable the File Services role and maybe a couple of others, but all application workloads like SQL and WDS/WSUS will reside in VMs.
I’ve recently upgraded it to Service Pack 1, and have so far been very impressed with the memory usage of VMs with Dynamic Memory enabled – it’s a feature which will definitely enable me to run up more sophisticated Windows-based labs. More blogs to come on my hands-on experiences with Dynamic Memory.
Additionally, I’ve installed Remote Desktop Services to start playing around with RemoteFX. The graphics card is an Asus Radeon HD5450, which is a gaming-type GPU rather than a workstation GPU, and therefore it won’t ever be supported for RemoteFX. However, like so many other things in the Windows world, just because it isn’t supported doesn’t mean it doesn’t work 🙂 I did have to manually install the AMD Catalyst drivers, but the card is working fine and RemoteFX is now available. More on that to come.
The entire system is running (or will be running) as a self-contained AD domain. This has some interesting implications for DCs running on VMs as well as remote management of the system from non-domain workstations. Expect plenty of blog posts about the various resources and workarounds for getting an environment like this operating smoothly.
Until next time 🙂