Over the last couple of weeks I’ve seen all these vSphere performance numbers (most not publicly available though), each one more impressive than the last. I think everyone will agree that the latest one is really impressive: 364,000 IOPS is just insane. There’s no load vSphere can’t handle, when correctly sized of course.
But something that even made a bigger impression on me, as a consolidation fanatic, is the following line from the latest performance study:
VMware’s new paravirtualized SCSI adapter (pvSCSI) offered 12% improvement in throughput at 18% less CPU cost compared to LSI virtual adapter
Now this may not sound like much, but when you are running 50 hosts it will make a difference. It will save you on cooling / rack space / power / hardware / maintenance; in other words, this will have its effect on your ROI and TCO. This is the kind of info I would love to see more of: where did we cut down on “overhead”? Which improvements will make our consolidation numbers go up?!
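To put a rough number on that (back-of-the-envelope, and the 20% figure below is purely an assumption for illustration):

    assume storage I/O accounts for ~20% of host CPU under load
    CPU freed per host: 0.20 x 0.18 = 0.036, i.e. ~3.6% of total host CPU
    across 50 hosts: 50 x 0.036 = 1.8 hosts' worth of CPU headroom

That is almost two “free” hosts from nothing more than a driver change, which is exactly the kind of overhead reduction that moves consolidation ratios.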
Etienne Pouliot says
Is using that new paravirtualized SCSI adapter just a matter of installing the new VMware Tools that come with vSphere?
pironet says
LSI and BusLogic are emulated SCSI-2 adapters. The pvscsi driver gives direct access to the HBA, hence the ‘pv’ for paravirtualization.
The performance gain certainly comes from the ‘pv’ part…
A serious caveat of this technology, though, is that the guest OS still has to boot from a non-PVSCSI adapter, LSI by default.
Now I’m not sure it increases the consolidation ratio, because in a shared hardware environment, giving priority or direct access to a particular guest just reduces the ‘shares’ for the other guests and in the end reduces the consolidation ratio.
Cheers,
Didier
JustinE says
Etienne,
It appears as a totally different hardware device in your VMs. There are 4 choices for storage controllers in vSphere: BusLogic, LSILogic (Fusion MPT), LSILogic SAS (useful for Server 2008 clustering), and PVSCSI.
Unfortunately booting from the PVSCSI adapter is not supported at this time (not saying it won’t work…) so you’re supposed to have a boot volume and then a data volume on the PVSCSI controller.
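For those wondering what that two-controller layout looks like in practice, here is a rough sketch of the relevant .vmx entries (disk file names are made up for the example). The boot disk sits on an LSI Logic controller (scsi0) and the data disk on the PVSCSI controller (scsi1):

    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "myvm-os.vmdk"
    scsi1.present = "TRUE"
    scsi1.virtualDev = "pvscsi"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "myvm-data.vmdk"

The vSphere Client does the equivalent when you add a second SCSI controller to the VM and set its type to “VMware Paravirtual”.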
NiTRo says
I heard the pvscsi adapter makes VMotion impossible. That would add a lot of “buts” for me…
Jason Boche says
pironet is correct in that you cannot boot from the paravirtualized SCSI controller, which means you’ll have a minimum of two SCSI controllers per VM running the paravirtualized controller. I’m not sure if I’d consider this a “serious” caveat, but it is something to be aware of anyway.
Not being able to VMotion is a serious enough caveat though.
jonatj says
The new performance gains are very impressive. I couldn’t believe my eyes when I saw the 320 powered-on VM limit for ESX/ESXi. Now that you can have 64 cores and 1TB of RAM in a server, the consolidation possibilities are insane. I just wonder about the best way to spec a server so that you don’t have a single point of failure. I’ve already had a Purple Screen of Death because of a bad DIMM. DRS is nice and all, but it’s still an outage for the 25 VMs on the physical host. With FT’s 1 vCPU limitation, it’s not quite the perfect solution yet. So the 320-to-1 consolidation that’s possible on one host is frightening.
How do you guys spec your hardware?
toha says
pvscsi is to disk I/O what vmxnet is to networking; it does not stop you from doing VMotion. You can boot Windows from a pvscsi controller if you copy the pvscsi driver to a floppy and feed it to Windows during installation. Linux will also boot fine from a pvscsi adapter if you know Linux well enough to fiddle with driver modules.
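To make the Linux part of that concrete, here is a minimal sketch for a Debian/Ubuntu-style guest, assuming the mainline vmw_pvscsi module (on older kernels the driver shipped with VMware Tools is simply called pvscsi):

    # make sure the pvscsi driver ends up in the initrd so the root disk is found at boot
    echo vmw_pvscsi >> /etc/initramfs-tools/modules
    update-initramfs -u

Do this (and verify the module actually loads) before changing the controller type, otherwise the guest will fail at boot because it cannot see its disks.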
daniel says
@jonatj – I certainly agree about the single point of failure that 4-way and larger machines represent. To trust one piece of hardware with more and more VMs, we need even more fault tolerance in the hardware/hypervisor; the loss of a CPU or DIMM should affect only the VMs using those resources, not the entire host. Until then you’re better off scaling out instead of up.
Duncan Epping says
As Toha already mentioned, PVSCSI doesn’t keep you from VMotioning. VMDirectPath I/O does.
http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R2.pdf
Eknath says
Hi, could anyone please tell me the difference between vSphere versions 3.5, 4, 4.1, and 5?
Duncan Epping says
That is a difficult question, @Eknath, as there are no direct performance reports that show all four versions next to each other. What exactly are you looking for?
Or Arnon says
Hi Duncan,
I was wondering if VMware released a new PVSCSI version with the latest ESXi 5.1 and hardware version 9.
The old (or current?) PVSCSI was not recommended for low-I/O VMs; has that changed?
Matt says
Arnon,
I just got off the phone with VMware, and they basically told me that around 2,000 IOPS is where you are going to be able to actually measure a performance gain with the paravirtual controller. Unless you are using DAS storage, VMware snapshots, or MSCS, their recommendation to me was to try the controller and, if it performs better, use it. The main reason they recommend the LSI Logic controller is that most operating systems support it within VMware.
The reason I called them about this was a 150 IOPS machine that was having some software lag. All performance metrics (CPU, memory, latency, etc.) showed that this machine was performing very well. As a last-ditch effort I switched the VM to the paravirtual controller and the lag went away. We are now going to look at switching more 2k8 R2 VMs over to this driver.
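For anyone wanting to check whether a VM is anywhere near that 2,000 IOPS mark before trying the switch, esxtop on the host gives a quick read; the CMDS/s column is a reasonable approximation of IOPS:

    # run on the ESX/ESXi host (or remotely via resxtop)
    esxtop
    # press 'v' for the virtual machine storage view and watch CMDS/s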