Saturday, 19 December 2009

Troubleshooting a dodgy PSU and finally building the servers

Between ordering the Crucial 2x2Gb mem and it arriving, the PSU in the box I'm planning to use as the VM host decided to start acting up - while playing BioShock the PC powered off completely, then powered back on a few seconds later. It continued to run for about a minute, then powered off and on again. This carried on, with the amount of time the PC stayed up shrinking each cycle, to the point where it would power off on the XP loading screen.

My initial thought was that it could be memory related, so I swapped the memory out, but the issue continued. Notably, the problem would go away if the PC was powered off for 5+ minutes, which made me think something was overheating - I checked the CPU and GPU coolers and these were fine. The overheating theory also fitted with the PC powering off after playing BioShock for a while, as the game is pretty demanding on the PC.

In the end I narrowed it down to the PSU - I took out the graphics card, used the onboard VGA, and rebuilt the PC on a spare HDD, and it powered off and on again during XP setup. I still had the old factory-fitted Dell PSU, so I swapped the OCZ 600W out for it, restarted the XP install, and it ran through without powering off once - bingo! Just got to ship the OCZ back to Ebuyer for an RMA now.

Back to the original task...

The 2x2Gb mem arrived and I made the schoolboy error of forgetting the 4Gb address-space limit on 32-bit XP, so I rebuilt the PC with XP 64-bit, which happily recognised the 4Gb and made all of it available. I also installed a spare second HDD with the aim of using it as the VM drive. I installed VMware Server 2.0 and built a DC running Windows Server 2008 64-bit. The lab at this point consisted of a single server with the following spec:


  • Windows Server 2008 64bit SP1

  • Active Directory

  • DNS (AD integrated)

  • 1Gb RAM initially, reduced to 512Mb to free up mem for the other servers

  • Single CPU core



I built the DC in the normal way - added the AD role then ran DCPROMO.
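For reference, the same promotion can be scripted rather than clicked through - a sketch of a dcpromo unattend file, with a placeholder domain name and DSRM password (not my lab's actual values):

```
; Hypothetical dcpromo answer file for a new forest root DC
[DCInstall]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=lab.local
DomainNetBiosName=LAB
InstallDNS=Yes
SafeModeAdminPassword=<DSRM password>
RebootOnCompletion=Yes
```

Kicked off with dcpromo /unattend:C:\dcpromo.txt - handy if I end up rebuilding the DC more than once.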

I decided that all VMs should use a single core, to leave some processing power for the host OS so the PC stays usable - possibly flawed logic, so I need to check this out in more detail.

Once the DC was fully updated and activated I started another Server 2008 64-bit build - this next VM was to be Exchange 2007. This time, however, after applying updates but before setting a static IP and joining the domain, I shut down the VM and took a copy of the folder containing all the VM files. My plan was to use sysprep to reset the SID, etc. on the VM to make provisioning of new VMs quicker - I'll talk about this in the next post.

Saturday, 5 December 2009

Home Lab Setup

I've been building OCS labs on and off for the past few years since I left IT and moved to presales (the Dark side). The infrastructure for this is virtualised on a number of servers in the pre-stage area in our offices. It is used for stuff like engineer training and customer PoCs where OCS is being integrated with another VoIP manufacturer's systems. As the driver for each lab build is usually testing for a specific customer project I have been finding that I'm not getting as much time as I want to check out all the new OCS features, especially since R2 has been released.

Soo, I've decided to build my own lab at home...

Home Lab Infrastructure Build

The brief for the lab is pretty straightforward:
  • Be able to run 32- and 64-bit guest OSes (OCS 2007 R2 - all roles, Exchange 2007 and 2010)
  • Low power consumption so I can have it running 24/7 and not be killing too many polar bears
  • With the above in mind, as many spindles as possible per host machine
  • As much RAM as poss per host machine
  • As cheap as possible
I'm starting the lab with my existing Dell E520, which I use for gaming and web browsing, and at this stage I want to continue using it as my home desktop PC. This means the decision on virtualisation software is already made:
  • Windows Server 2008 Hyper-V - Nope, would mean having to run Server 2008 as the host OS - not ideal for gaming
  • VMware ESXi - Nope, can't use the box for anything else
  • VMware Server - Yep, can continue to use the box for gaming and stuff

The Dell E520 has the following spec at the mo:

  • Intel Core 2 6300 1.86GHz
  • XFX GeForce 9800 GT (yep, not really critical for a virtual machine host)
  • OCZ StealthXStream 600W PSU (had to get this for the graphics card)
  • Hitachi DeskStar 7K160 7,200 RPM 160Gb SATA HDD
I'm planning to upgrade the box to the maximum 8Gb RAM and chuck in 2 more SATA HDDs that I have kicking around from previous upgrades, to give me more spindles to spread the VMs across. For the mem upgrade I've already ordered the following from eBuyer:

4Gb (2x2Gb) Crucial DDR2 800MHz PC-6400 Ballistix CL4 2.0V http://www.ebuyer.com/product/143844

This will be enough to get me started on building the guest machines and I'll order the second set of 4Gb RAM when I've hit capacity - Exchange UM is going to be the biggest killer on mem I reckon.

Once this is up and running and I have a few VMs live I plan to benchmark the power consumption with an energy monitor similar to this: http://www.maplin.co.uk/Module.aspx?moduleno=38343

Based on the results of this I can work out whether it would be more cost effective to rip out the 600W PSU and GeForce 9800 GT, put them into a new gaming machine, and run the E520 purely as a virtualisation platform.
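The sums themselves are simple enough to sketch in advance - the wattage and tariff below are made-up placeholders, to be swapped for the energy monitor's reading and my actual p/kWh rate:

```python
# Rough 24/7 running-cost estimate for the lab box.
# 150W and 12p/kWh are illustrative guesses, not measurements.

def annual_cost(watts, pence_per_kwh, hours=24 * 365):
    """Return annual electricity cost in pounds for a constant draw."""
    kwh = watts / 1000 * hours
    return kwh * pence_per_kwh / 100

print(f"{annual_cost(150, 12):.2f}")  # prints 157.68
```

Running the same calculation with and without the 9800 GT's idle draw should show whether moving it to a separate gaming box pays for itself.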