
Yellow Bricks

by Duncan Epping


vstorage

Real life RAID penalty example added to the IOps article

Duncan Epping · Jan 11, 2010 ·

I just added a real-life RAID penalty example to the IOps article. I know sys admins are lazy, so here's the info I just added:

I have two IX4-200Ds at home, which are capable of doing RAID-0, RAID-10 and RAID-5. As I was rebuilding my homelab I thought I would see what changing RAID levels would do on these homelab / s(m)b devices. Keep in mind this is by no means an extensive test. I used IOmeter with 100% Sequential Write and 100% Sequential Read. Read throughput was consistent at 111MB/s for every single RAID level. For Write I/O, however, there was a clear difference, as expected. I ran all tests 4 times to get an average and used a block size of 64KB, as Gabe's testing showed this was the optimal setting for the IX4.

In other words, we are seeing exactly what we expected to see. RAID-0 had an average write throughput of 44MB/s, RAID-10 still managed to reach 39MB/s, but RAID-5 dropped to 31MB/s, which is roughly 21% less than RAID-10.
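The back-of-the-envelope math behind these differences is the RAID write penalty described in the IOps article: every front-end write turns into multiple back-end operations, depending on the RAID level. Below is a minimal sketch of that calculation; the disk count and per-disk IOPS numbers are made-up inputs for illustration, not IX4-200D measurements.

    # Rough RAID write-penalty calculation as described in the IOps article.
    # Disk count and per-disk IOPS below are hypothetical example inputs.

    WRITE_PENALTY = {"RAID-0": 1, "RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

    def functional_iops(disks, iops_per_disk, read_pct, raid_level):
        """Front-end IOPS an array can sustain for a given read/write mix."""
        raw = disks * iops_per_disk          # total back-end IOPS
        write_pct = 1.0 - read_pct
        # Reads cost 1 back-end IO, writes cost 'penalty' back-end IOs.
        return raw * read_pct + (raw * write_pct) / WRITE_PENALTY[raid_level]

    for level in ("RAID-0", "RAID-10", "RAID-5"):
        print(level, round(functional_iops(disks=4, iops_per_disk=80,
                                           read_pct=0.0, raid_level=level)))

For a 100% write workload this prints 320, 160 and 80 IOps respectively. The measured gap on the IX4 is far smaller than the theoretical 4x RAID-5 penalty because these were large sequential writes, where the controller can coalesce full-stripe writes and avoid most of the read-modify-write overhead.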

I hope I can do the “same” tests on one of the arrays or preferably both (EMC NS20 or NetApp FAS2050) we have in our lab in Frimley!

Changed block tracking?

Duncan Epping · Dec 21, 2009 ·

I was reading Eric Siebert’s excellent article on Changed Block Tracking (CBT) and the article on Punching Cloud about this new feature, which is part of vSphere. CBT enables incremental backups of full VMDKs. Something that isn’t covered is what the “block” in Changed Block Tracking actually refers to.

Someone asked me about it on the VMTN Communities and it’s something I had not looked into yet. The question was about VMFS block sizes and how they could potentially affect the size of a backup that uses CBT. The assumption was that CBT on a VMFS volume with a 1MB block size uses 1MB blocks and on a volume with an 8MB block size uses 8MB blocks. This is not the case.

So what’s the size of the block that CBT refers to? Good question. I’ve asked around, and the answer is that it’s not a fixed size but a variable one. The block size starts at 64KB, and the bigger the VMDK becomes, the bigger the blocks become.

Just for the sake of it:

  • CBT is on a per VMDK level and not on a VMFS level.
  • CBT has variable block sizes which are dictated by the size of the VMDK.
  • CBT is a feature that lives within the VMKernel and not within VMFS.
  • CBT is a FS filter, as shown in the VMworld slide.
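For those who want to see what CBT exposes, the changed areas are surfaced through the vSphere API via QueryChangedDiskAreas, which is what backup products call to figure out which ranges of a VMDK to copy. Below is a minimal pyVmomi sketch, assuming you already have a connected session, a VM object with CBT enabled, a snapshot and the disk’s deviceKey; every name in it is a placeholder, not code from any particular backup product.

    def changed_extents(vm, snapshot, device_key, capacity_bytes, change_id="*"):
        """Collect (start, length) byte extents changed since 'change_id'.

        vm, snapshot and device_key are assumed to come from an existing
        pyVmomi session, the VM must have CBT enabled, and a change_id
        of '*' asks for all allocated areas of the disk.
        """
        extents = []
        offset = 0
        while offset < capacity_bytes:
            info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                            deviceKey=device_key,
                                            startOffset=offset,
                                            changeId=change_id)
            extents.extend((area.start, area.length) for area in info.changedArea)
            if info.length == 0:
                break
            offset = info.startOffset + info.length
        return extents

Note that the extents returned here have nothing to do with the VMFS block size, which is exactly the point of the list above.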

vscsiStats output in esxtop format?

Duncan Epping · Dec 17, 2009 ·

This week we (Frank Denneman and I) played around with vscsiStats. It’s a peculiar command and hard to get used to when you normally dive straight into esxtop when there are performance issues. While asking around for more info on the metrics and values, someone emailed us nfstop. I assumed it was NDA or at least not suitable for publication yet, but William Lam pointed me to a topic on the VMTN Communities which contains this great script. Definitely worth checking out. The tool parses vscsiStats output into an esxtop-like format. Below is a screenshot of what that looks like.
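If you want to generate the raw data to feed a script like that yourself, the usual vscsiStats workflow on the ESX console looks roughly like this (the world group ID below is just an example; use the IDs returned by the first command):

    # list the running VMs (world group IDs) that vscsiStats can monitor
    vscsiStats -l
    # start collecting statistics for a specific VM
    vscsiStats -s -w 12345
    # print a histogram, for instance I/O length, for that VM
    vscsiStats -p ioLength -w 12345
    # stop the collection when you are done
    vscsiStats -x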

SPC-2 set or not?

Duncan Epping · Dec 8, 2009 ·

For those who, like me, see different types of arrays on a daily basis, it is hard to keep up with all the specific settings that need to be configured. Especially when we are talking about enterprise-level storage, there are several dependencies and requirements.

One of the settings that is often overlooked on EMC DMX storage is the SPC-2 bit. I noticed a while back what kind of impact it can have on your environment, and I witnessed it again today.

During the creation of a VMFS volume we received an error which basically stated that it was impossible to create the volume. The error message was a bit misleading, but I noticed in the details section that the LUN was identified as “sym.<identifier string>”. This should normally read “naa.<identifier string>”, and that triggered me to check the documentation for the array.

When an additional front-end port is zoned to an ESX host, to provide further connectivity to devices, the SPC-2 bit must be set; otherwise, the Symmetrix devices will not be properly identified. Instead of identifying each device with its proper Network Address Authority (NAA) identifier, the devices will show up with a SYM identification number. Any device provisioned to the non-SPC-2-compliant port will be identified as a new device by the ESX host system.
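A quick way to spot this from the VMware side is to look at how the host identifies its LUNs: anything showing up with a “sym.” canonical name instead of “naa.” is a candidate for the missing SPC-2 bit. Below is a minimal pyVmomi sketch, assuming an existing connection and a HostSystem object called host; PowerCLI’s Get-ScsiLun shows you the same CanonicalName property.

    def suspicious_luns(host):
        """Return LUNs that are not identified by an naa. identifier.

        'host' is assumed to be a vim.HostSystem object obtained through
        an existing pyVmomi connection. LUNs presented through a front-end
        port without the SPC-2 bit set show up with a 'sym.' name instead.
        """
        luns = host.config.storageDevice.scsiLun
        return [(lun.canonicalName, lun.displayName)
                for lun in luns
                if not lun.canonicalName.startswith("naa.")]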

Again, it is hard to keep up with every single vendor out there, let alone all the different types of arrays and all the different settings. Luckily EMC acknowledged that and created the “EMC Storage Viewer for vSphere”. The EMC Storage Viewer actually shows you whether “SPC-2” (amongst other settings) is enabled or not… This will save you a lot of pain and discussion with the storage team when push comes to shove. Definitely one of the reasons I would recommend using this plugin.

For those facing SPC-2 bit issues, make sure to read “H4116-enabling-spc2-compl-emc-symmetrix-dmx-vmware-envnmt-wp.pdf” (available via EMC’s Powerlink).

Performance: Thin Provisioning

Duncan Epping · Nov 15, 2009 ·

I had a discussion about Thin Provisioning with a colleague last week. One of the reasons for me not to recommend it yet for high-I/O VMs was performance. I had not seen a whitepaper or test yet that showed there was little impact from growing the VMDK. Eric Gray of Vcritical.com had the scoop: VMware just published an excellent whitepaper called “Performance study of VMware vStorage Thin Provisioning”. I highly recommend it!

Surprisingly enough there is no performance penalty for writing to a Thin Provisioned VMDK when it comes to locking. I expected that due to SCSI reservations there would at least be some sort of hit, but there isn’t. (Except for zeroing of course, see the paragraph below.) The key takeaway for me is still: operational procedures.

Make sure you set the correct alarms when thin provisioning a VMDK. You need to regularly check the level of “overcommitment”, the total capacity and the percentage of disk space still available.

Another key takeaway is around performance though:

The figure shows that the aggregate throughput of the workload is around 180MBps in the post-zeroing phase of both thin and thick disks, and around 60MBps when the disks are in zeroing phase.

In other words, when the disk is zeroed out while writing, there’s a HUGE, and I mean HUGE, performance hit. To avoid this for thick disks there’s an option called “eager zeroed thick”. Although this type is currently only available from the command line and takes longer to provision, as it zeroes out the disk on creation, it can lead to a substantial performance increase. This would only be beneficial for write-intensive VMs of course, but it definitely is something that needs to be taken into account.
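For reference, creating such a disk from the command line is done with vmkfstools; something along these lines, where the size and datastore path are obviously just examples:

    # create a 10GB eager zeroed thick VMDK on an existing VMFS volume
    vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm.vmdk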

Please note: On page two, bottom, it states that VMDKs on NFS are thin by default. This is not the case. It’s the NFS server that dictates the type of disks used. (Source: page 99)
