While there is a decent amount of info available on guidelines for storage, memory, etc. for virtualisation, I haven't seen much in the way of real-world data, so I thought I'd make some notes on the performance I've noticed in my lab virtualisation environment:
Not enough guest memory leads to high disk utilisation and poor performance

This is by far the most important resource for a virtual machine. You should never let the VM exceed its assigned physical memory, otherwise the performance of the VM in question, and potentially of other VMs sharing the same physical disk, will grind to a halt. I've noticed that if the guest VM starts to use virtual memory, the physical disk utilisation on the host machine shoots up. The VM will max out the disk utilisation, starving any other VMs of timely read/write access to the disk.
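As a quick way of catching this in a lab, here's a minimal Python sketch you could run inside a guest to flag when it starts paging. It assumes the third-party psutil package is installed in the guest; the 5% swap threshold is my own illustrative value, not something measured in these notes.

```python
# Minimal sketch: run inside a guest to spot when it starts hitting swap.
# Assumes the third-party psutil package is installed; the threshold is
# illustrative - any sustained swap use is already a bad sign.
import time

import psutil

POLL_SECONDS = 10
SWAP_WARN_PERCENT = 5  # assumed threshold, not from the notes above


def check_memory():
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM used: {mem.percent:.0f}%  swap used: {swap.percent:.0f}%")
    if swap.percent > SWAP_WARN_PERCENT:
        print("WARNING: guest is paging - expect heavy disk I/O on the host")


if __name__ == "__main__":
    while True:
        check_memory()
        time.sleep(POLL_SECONDS)
```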
Heavy disk utilisation outside of the VMs causes poor VM performance

If you carry out disk read/write operations outside of the VM environment, e.g. copying VMs to/from the disk your VMs are on in a Windows box running VMware Server, this can seriously affect the performance of the VMs on that disk. This is because a copy operation between two disks on the host system can grab the full bandwidth available between physical disks on the controller. Server 2008's Resource Monitor shows this quite nicely - go to the 'Disk' tab, expand the 'Disk Activity' section and look for the files you're copying. If you're copying between disks on the same system you might see read/write figures in the region of 80,000,000 bytes/sec - around 80MB a sec - which on my system is pretty close to the realistic full throughput the controller and disks can support. If this is affecting performance you can confirm it by checking the 'Storage' section and looking at the 'Disk Queue' value for your disk. I find this doesn't affect performance too much until it goes above 5-ish; once it gets to 10+ the VMs start to slow noticeably.
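If you'd rather watch the disk queue from a script than keep Resource Monitor open, here's a rough Python sketch for the host that polls the standard PhysicalDisk counter via the built-in typeperf tool and flags the 5-ish and 10+ levels mentioned above. It assumes a Windows host with typeperf available; the output parsing is deliberately simple.

```python
# Minimal sketch: poll the host's physical disk queue length and flag the
# rough thresholds noted above (around 5 = noticeable, 10+ = VMs crawl).
# Assumes a Windows host with the built-in typeperf tool; the counter path
# below is the standard PhysicalDisk _Total instance.
import subprocess
import time

COUNTER = r"\PhysicalDisk(_Total)\Current Disk Queue Length"


def current_disk_queue():
    """Return the latest disk queue length sample as a float."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", "1"], capture_output=True, text=True
    ).stdout
    for line in out.splitlines():
        # Skip the CSV header row and typeperf's status messages.
        if line.startswith('"') and "PDH-CSV" not in line:
            return float(line.split(",")[-1].strip('"'))
    raise RuntimeError("could not read counter - check the counter path")


if __name__ == "__main__":
    while True:
        queue = current_disk_queue()
        if queue >= 10:
            status = "VMs will be crawling"
        elif queue >= 5:
            status = "starting to hurt"
        else:
            status = "ok"
        print(f"disk queue: {queue:.1f} ({status})")
        time.sleep(5)
```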
Too little memory and too much contention for disk access across multiple VMs

This is a nightmare scenario for VMs, and when it happens the VMs will grind to a halt - sometimes even the host machine too, depending on where you store your VMs. If more than one of your VMs maxes out its physical memory and starts to use virtual memory, those machines will all be contending for very high read/write disk utilisation. This might be sustainable in the short term with just one machine, but with more than one the contention for access to the disk kills the VMs.
Suggestions for a usable virtualisation setup

First and foremost, enough physical memory assigned per VM that the VMs never go above it (a rough capacity check based on these rules of thumb is sketched after this list).
A maximum of 2-3 VMs per core seems to provide adequate performance, e.g. 6 VMs on a dual core run ok, and 8 on a quad run ok.
6-8 VMs per SATA 7200rpm disk seem to run ok, as long as that disk is only for those VMs, and any ISOs used by the VMs are on a separate disk.
Run your VMs on a dedicated disk - don't share this with anything else, especially the OS!
Avoid copy operations to/from the physical disk the VMs are on while the VMs are busy.
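To pull the rules of thumb above together, here's a rough Python sketch of a capacity check for a planned lab host. The 3-per-core, 8-per-disk and memory-headroom checks come from the suggestions above; the 2GB host OS reservation is my own assumption and will vary with your host.

```python
# Rough capacity check for a lab host, using the rules of thumb above:
# at most ~3 VMs per core, at most ~8 VMs per dedicated 7200rpm SATA disk,
# and enough physical memory that no guest ever has to swap. The 2GB
# host OS reservation is an assumption, not a figure from the notes.
HOST_OS_RESERVE_GB = 2
MAX_VMS_PER_CORE = 3
MAX_VMS_PER_DISK = 8


def check_layout(cores, vm_disks, host_mem_gb, vm_mem_gb):
    """vm_mem_gb is a list of per-VM memory assignments in GB."""
    problems = []
    vm_count = len(vm_mem_gb)
    if vm_count > cores * MAX_VMS_PER_CORE:
        problems.append(
            f"{vm_count} VMs on {cores} cores exceeds ~{MAX_VMS_PER_CORE} per core"
        )
    if vm_count > vm_disks * MAX_VMS_PER_DISK:
        problems.append(
            f"{vm_count} VMs on {vm_disks} disk(s) exceeds ~{MAX_VMS_PER_DISK} per disk"
        )
    if sum(vm_mem_gb) + HOST_OS_RESERVE_GB > host_mem_gb:
        problems.append("assigned VM memory leaves no headroom - guests will swap")
    return problems or ["layout looks ok for a lab setup"]


if __name__ == "__main__":
    # e.g. a quad core, one dedicated VM disk, 16GB RAM, 8 guests at 1.5GB each
    for line in check_layout(4, 1, 16, [1.5] * 8):
        print(line)
```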
These notes relate specifically to a lab environment, where occasional performance degradation isn't a major problem. For an operational environment you should really be using an enterprise-class virtualisation product such as VMware ESX with VMotion, or similar, together with a SAN. That said, the same ideas discussed here apply in an operational enterprise setup.