Yellow Bricks

by Duncan Epping


vSphere performance

Duncan Epping · May 19, 2009

The last couple of weeks I’ve seen all these performance numbers (most not publicly available though) for vSphere, one even more impressive than the other. I think everyone will agree that the latest one is really impressive: 364,000 IOPS is just insane. There’s no load vSphere can’t handle, when correctly sized of course.

But something that made an even bigger impression on me, as a consolidation fanatic, is the following line from the latest performance study:

VMware’s new paravirtualized SCSI adapter (pvSCSI) offered 12% improvement in throughput at 18% less CPU cost compared to LSI virtual adapter

Now this may not sound like much, but when you are running 50 hosts it will make a difference. It will save you on cooling, rack space, power, hardware, and maintenance; in other words, this will have its effect on your ROI and TCO. This is the kind of info I would love to see more of: where did we cut down on “overhead”? Which improvements will make our consolidation numbers go up?!
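
To make that concrete, here is a rough back-of-the-envelope sketch of what an 18% cut in storage-related CPU cost could mean across a 50-host farm. The utilization and storage-overhead fractions below are my own illustrative assumptions, not numbers from the study:

    # Back-of-the-envelope illustration only; the utilization and storage-overhead
    # fractions are assumptions, the 18% saving is the figure quoted above.
    hosts = 50
    avg_cpu_utilization = 0.60     # assumed average CPU utilization per host
    storage_share_of_cpu = 0.20    # assumed fraction of CPU spent servicing storage I/O
    pvscsi_cpu_saving = 0.18       # "18% less CPU cost" from the performance study

    freed_per_host = avg_cpu_utilization * storage_share_of_cpu * pvscsi_cpu_saving
    freed_total = freed_per_host * hosts  # expressed in whole-host equivalents

    print(f"CPU freed per host: {freed_per_host:.1%}")                # ~2.2%
    print(f"Across {hosts} hosts: ~{freed_total:.1f} hosts' worth of CPU capacity")

Under those assumptions the saving adds up to roughly one host’s worth of CPU across the farm, which is exactly the kind of overhead reduction that shows up in the ROI and TCO numbers.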


Server performance, Storage, vSphere

Comments

  1. Etienne Pouliot says

    19 May, 2009 at 16:38

    Is using that new paravirtualized SCSI adapter just a matter of installing the new VMware Tools that come with vSphere?

  2. pironet says

    19 May, 2009 at 17:00

    LSI and BusLogic are emulated SCSI-2 drivers. The pvscsi driver gives direct access to the HBA, hence the ‘pv’ for paravirtualization.
    The performance gain certainly comes from the ‘pv’ part…

    Although, a serious caveat of this technology is that the guest OS still has to boot from a non-PVSCSI adapter, LSI by default.

    Now I’m not sure it does increase the consolidation ratio, because in a shared hardware environment, giving priority or direct access to a particular guest just reduces the ‘shares’ for the other guests and, in the end, reduces the consolidation ratio.

    Cheers,
    Didier

  3. JustinE says

    19 May, 2009 at 19:36

    Etienne,
    It appears as a totally different hardware device in your VMs. There are four choices for storage controllers in vSphere: BusLogic, LSI Logic (Fusion-MPT), LSI Logic SAS (useful for Server 2008 clustering), and PVSCSI.
    Unfortunately booting from the PVSCSI adapter is not supported at this time (not saying it won’t work…), so you’re supposed to have a boot volume on one of the other controllers and then a data volume on the PVSCSI controller (see the configuration sketch after the comments).

  4. NiTRo says

    19 May, 2009 at 23:54

    I heard the pvscsi adapter makes VMotion impossible. That would be a big “but” for me…

  5. Jason Boche says

    20 May, 2009 at 02:14

    pironet is correct in that you cannot boot from the paravirtualized SCSI controller which means you’ll have a minimum of two SCSI controllers per VM running the paravirtualized controller. I’m not sure if I’d consider this a “serious” caveat, but it is something to be aware of anyway.

    Not being able to VMotion is a serious enough caveat though.

  6. jonatj says

    20 May, 2009 at 06:56

    The new performance gains are very impressive. I couldn’t believe my eyes when I saw the 320 powered-on VM limit for ESX/ESXi. Now that you can have 64 cores and 1TB of RAM in a server, the consolidation possibilities are insane. I just wonder about the best way to spec a server so that you don’t have a single point of failure. I’ve already had a Purple Screen of Death because of a bad DIMM. DRS is nice and all, but it’s still an outage for the 25 VMs on that physical host. With FT’s 1 vCPU limitation, it’s not quite the perfect solution yet. So the 320-to-1 consolidation that’s possible on one host is frightening.

    How do you guys spec your hardware?

  7. toha says

    20 May, 2009 at 11:55

    pvscsi is to disk I/O what vmxnet is to networking; it does not stop you from doing VMotion. You can boot Windows from the pvscsi controller if you copy the pvscsi driver to a floppy and feed it to Windows during installation. Linux will also boot fine from the pvscsi adapter if you know Linux well enough to fiddle with driver modules.

  8. daniel says

    20 May, 2009 at 12:14

    @jonatj – I certainly agree about the single point of failure that 4-way machines and larger represent. In order to trust one piece of hardware with more and more VMs, we need even more fault tolerance in the hardware/hypervisor; the loss of a CPU or DIMM should affect only the VMs using those resources, not the entire host. Until then you’re better off scaling out instead of up.

  9. Duncan Epping says

    20 May, 2009 at 22:33

    As Toha already mentioned, PVSCSI doesn’t keep you from VMotioning. VMDirectPath I/O does.

    http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R2.pdf

  10. Eknath says

    2 July, 2012 at 15:19

    Hi, could anyone please tell me the difference between vSphere versions 3.5, 4, 4.1, and 5?

  11. Duncan Epping says

    3 July, 2012 at 07:39

    That is a difficult question @Eknath, as there are no direct performance reports which show all four versions next to each other. What exactly are you looking for?

  12. Or Arnon says

    21 January, 2013 at 13:10

    Hi Duncan,
    I was wondering if VMware released a new PVSCSI version with the latest ESXi 5.1 and virtual hardware version 9.
    The old (or current?) PVSCSI was not recommended for low-I/O VMs; has that changed?

  13. Matt says

    7 March, 2013 at 16:29

    Arnon,

    I just got off the phone with VMware and they basically told me that around 2,000 IOPS is where you are going to be able to actually measure a performance gain with the paravirtual controller. Unless you are using DAS storage, VMware snapshots, or MSCS, VMware’s recommendation to me was to try the controller and, if it performed better, use it. The main reason they recommend the LSI Logic controller is that most operating systems support it within VMware.

    The reason I called them about this was that I had a 150 IOPS machine that was having some software lag. All performance metrics (CPU, memory, latency, etc.) showed that this machine was performing very well. As a last-ditch effort I switched the VM to the paravirtual controller and the lag went away. We are now going to look at switching more 2k8 R2 VMs over to this driver.
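
For anyone who wants to script the two-controller layout discussed in the comments above (the default LSI Logic controller for the boot disk, PVSCSI for a data disk), below is a minimal sketch using VMware’s pyVmomi Python SDK. The vCenter address, credentials, VM name, and disk size are placeholders, and this is an illustrative example under those assumptions rather than an official procedure:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; replace with your own environment.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "my-test-vm")  # placeholder VM name

        # New PVSCSI controller on SCSI bus 1; bus 0 keeps the default LSI Logic
        # controller so the guest still boots from a supported adapter.
        pvscsi = vim.vm.device.ParaVirtualSCSIController(
            key=-101,  # temporary negative key, resolved by vCenter on reconfigure
            busNumber=1,
            sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing)
        ctrl_spec = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=pvscsi)

        # Thin-provisioned 20 GB data disk attached to the new controller.
        disk = vim.vm.device.VirtualDisk(
            controllerKey=-101,
            unitNumber=0,
            capacityInKB=20 * 1024 * 1024,
            backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
                diskMode="persistent", thinProvisioned=True))
        disk_spec = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)

        # Submit the reconfiguration; the guest sees the new disk on the PVSCSI
        # controller once the pvscsi driver (VMware Tools) is installed.
        task = vm.ReconfigVM_Task(
            spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))
        print("Reconfigure task submitted:", task.info.key)
    finally:
        Disconnect(si)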
