
Yellow Bricks

by Duncan Epping


Storage vMotion performance difference?

Duncan Epping · Feb 24, 2011

Last week I wrote about the different datamovers that are used when a Storage vMotion is initiated and the destination VMFS volume has a different blocksize than the source VMFS volume. Not only does it make a difference in terms of reclaiming zero space, but as mentioned it also makes a difference in performance. The question that always arises is: how much difference does it make? Well, this week there was a question on the VMTN community regarding a SvMotion from FC to FATA and its slow performance. Of course within a second FATA was blamed, but that wasn’t actually the cause of the problem. The FATA disks were formatted with a different blocksize, and that caused the legacy datamover to be used. I asked Paul, who started the thread, if he could check what the difference would be when equal blocksizes were used. Today Paul ran his tests and blogged about them here, but I copied the table below, which shows the performance improvement the fs3dm datamover brought (please note that VAAI is not used… this is purely a different datamover):

From                            To                              Duration (mm:ss)
FC datastore, 1MB blocksize     FATA datastore, 4MB blocksize   08:01
FATA datastore, 4MB blocksize   FC datastore, 1MB blocksize     12:49
FC datastore, 4MB blocksize     FATA datastore, 4MB blocksize   02:36
FATA datastore, 4MB blocksize   FC datastore, 4MB blocksize     02:24
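
To put a number on it, here is a quick back-of-the-envelope calculation based purely on the durations in the table above (a minimal Python sketch, assuming the durations are minutes:seconds):

    def to_seconds(mmss):
        """Convert an 'mm:ss' duration string to seconds."""
        minutes, seconds = mmss.split(":")
        return int(minutes) * 60 + int(seconds)

    # Durations copied from the table above: (different blocksizes, equal blocksizes)
    comparisons = {
        "FC -> FATA": ("08:01", "02:36"),
        "FATA -> FC": ("12:49", "02:24"),
    }

    for direction, (legacy, fs3dm) in comparisons.items():
        speedup = to_seconds(legacy) / to_seconds(fs3dm)
        print(f"{direction}: {speedup:.1f}x faster with equal blocksizes")

    # Prints roughly:
    # FC -> FATA: 3.1x faster with equal blocksizes
    # FATA -> FC: 5.3x faster with equal blocksizes

In other words, simply matching the blocksizes made these particular Storage vMotions roughly three to five times faster.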

As I explained in my article about the datamover, the difference is caused by the fact that the data doesn’t travel all the way up the stack… and yes the difference is huge!
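
For those who like to see the rule spelled out, the sketch below models the datamover selection the way it is described here and in the earlier datamover article: mismatched VMFS blocksizes force the legacy fsdm, equal blocksizes allow fs3dm, and VAAI hardware offload only kicks in on top of fs3dm when the array supports it. The names and structure are purely illustrative, not VMware’s actual code:

    from enum import Enum

    class Datamover(Enum):
        FSDM = "fsdm (legacy, data travels all the way up the stack)"
        FS3DM = "fs3dm (data stays low in the stack)"
        FS3DM_HW = "fs3dm with VAAI hardware offload"

    def pick_datamover(src_blocksize_mb, dst_blocksize_mb, vaai_capable=False):
        """Illustrative only: mismatched VMFS blocksizes force the legacy
        fsdm datamover; equal blocksizes allow fs3dm, optionally offloaded
        to the array when VAAI is available."""
        if src_blocksize_mb != dst_blocksize_mb:
            return Datamover.FSDM
        if vaai_capable:
            return Datamover.FS3DM_HW
        return Datamover.FS3DM

    print(pick_datamover(1, 4))        # FSDM  -> the slow 08:01 and 12:49 runs
    print(pick_datamover(4, 4))        # FS3DM -> the fast 02:36 and 02:24 runs
    print(pick_datamover(4, 4, True))  # FS3DM_HW -> not tested here, this EVA did not have VAAI yet

The takeaway stays the same: if you care about SvMotion speed, keep the blocksizes of your source and destination VMFS volumes equal.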


Server, Various 4.1, performance, Storage, storage vmotion, vSphere


Comments

  1. Yuri Semenikhin says

    24 February, 2011 at 19:48

    Great examples!

    This is a good explanation for people who use VAAI but cannot see a performance improvement, as VAAI doesn’t support different block sizes.

    http://vmlab.ge/when-vaai-fail/

    and this is one additional reason to use the same block size everywhere, for example 8MB.

  2. Brandon says

    24 February, 2011 at 19:58

    I was going to ask whether hw offload was used or not, but I found it myself on the original thread. It looks like this was on an EVA, which is slated for VAAI later this year. Those time differences are impressive without hw offload; with hw offload the gap would be even larger. Wow.

    • Duncan Epping says

      24 February, 2011 at 21:46

      Indeed, this is not even VAAI just a different datamover. With VAAI it would be even better.

  3. Yuri Semenikhin says

    24 February, 2011 at 20:15

    Agreed, the difference with VAAI will be much bigger, and this is only for one VM.

  4. ALESSANDRO DI FENZA says

    25 February, 2011 at 17:16

    What about a SvMotion from or to an NFS datastore? What kind of datamover is used?

    • Duncan Epping says

      25 February, 2011 at 19:15

      The legacy datamover fsdm

  5. cwjking says

    26 February, 2011 at 00:03

    Interesting. We use all NetApp arrays currently and usually keep all our VMs on the same datastore. Is there a place that explains this in much greater detail? I’d be interested in how we could improve svmotions on our current platform.

  6. ryder0707 says

    28 February, 2011 at 16:18

    Just wondering, how long does it take to SvMotion from a FATA datastore to a FATA datastore with the same blocksize, or from an FC datastore to an FC datastore with the same blocksize?

  7. shdadmin says

    3 March, 2011 at 02:31

    Do the same datamover concepts apply to cloning? In our infrastructure I notice that cloning a VM or deploying a template is far slower if the source has a snapshot, even if the associated deltas are tiny. With our iSCSI datastores we see cloning speeds on the order of 1GB/min for a ‘clean’ source VM/template, but adding a snapshot cuts the speed in half or worse.

    • Duncan Epping says

      3 March, 2011 at 08:45

      Yes they do.

  8. Robert Kloosterhuis says

    10 May, 2011 at 15:22

    Until VMware gives us a bit more control over this, or at least gives us a supported way to shrink the volumes, we have chosen to maintain a group of LUNs with different blocksizes (4MB and 8MB), just so we have the flexibility in place if we need to shrink something.
    We will have to accept the performance hit for now.

  9. Rawley Burbridge says

    12 May, 2011 at 17:29

    Found this post after some questions came up while performing some VAAI testing on the IBM Storwize V7000.

    What is the data transfer size that ESX uses when performing a storage vmotion task? I know the transfer sizes for VMs differ based on OS/Application but I am not sure about transfers done by the host.

    Is it directly related to the VMFS block size being used?

  10. VM says

    5 August, 2011 at 14:22

    Nice doc. But why didn’t you test with 8MB?

    • Duncan Epping says

      5 August, 2011 at 20:26

      Because that won’t make much difference from a performance perspective…
