
Yellow Bricks

by Duncan Epping



HA/DRS Deepdive now available on Amazon.co.uk and Amazon.de

Duncan Epping · Mar 13, 2011 ·

After 14 emails with absolutely no reply whatsoever, our book, vSphere 4.1 HA and DRS Technical Deepdive, popped up on both the German and UK versions of Amazon. For those who haven’t ordered it yet through comcol.nl, you can also get it here:

  • Amazon.de
  • Amazon.co.uk

Sorry about the delay, and I hope they will continue selling it for a very long time. (It seems they don’t have it in stock currently, so delivery might take a while.)

<edit – 15/03>

I just noticed it is available in France as well through Amazon.

</edit>

VMware vSphere 4.1 HA and DRS Technical Deepdive for only $17.96 on Amazon.com

Duncan Epping · Mar 10, 2011 ·

Not sure where this discount is coming from, but I figured I would share it with you. You can pick up the vSphere 4.1 HA and DRS Tech Deepdive for $17.96 on Amazon.com. That is about as cheap as it can ever get in my opinion. Pick it up.

By the way, Frank and I are working on a major update of the book. If you spotted any glitches or have any comments, drop ’em here so we can incorporate them. Before you ask: yes, we will be reconsidering an ebook version!

Thin provisioned disks and VMFS fragmentation, do I really need to worry?

Duncan Epping · Mar 8, 2011 ·

I’ve seen this myth floating around from time to time, and as I never publicly wrote about it, I figured it was time to write an article to debunk it. The question that is often posed is whether thin disks will hurt performance due to fragmentation of the blocks allocated on the VMFS volume. I guess we need to rehash some basics first around thin disks and VMFS volumes (do a search on VMFS for more info)…

When you format a VMFS volume you can select the blocksize (1MB, 2MB, 4MB or 8MB). This blocksize is used when the hypervisor allocates storage for the VMDKs. So when you create a VMDK on an 8MB formatted VMFS volume, it will create that VMDK out of 8MB blocks, and indeed in the case of a 1MB formatted VMFS volume it will use 1MB blocks. Now, this blocksize also happens to be the size of the extent that is used for thin disks. In other words, every time your thin disk needs to expand, it will grow in extents of the blocksize, 1MB in this example. (Related to that, with a lazy-zeroed thick disk the zero-out also uses the blocksize. So when something needs to be written to an untouched part of the VMDK, it will zero out using the blocksize of the VMFS volume.)
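To make that allocation granularity concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not anything VMware ships) of how many extent allocations a thin VMDK triggers as data is written, given the blocksize chosen at format time:

```python
def extents_needed(written_bytes: int, vmfs_block_size_mb: int) -> int:
    """Number of VMFS blocks (thin-disk extents) backing the data written so far."""
    block = vmfs_block_size_mb * 1024 * 1024
    return -(-written_bytes // block)  # ceiling division

# A guest that has written 10GB into a thin VMDK:
written = 10 * 1024**3
for bs in (1, 2, 4, 8):
    print(f"{bs}MB blocksize -> {extents_needed(written, bs)} extent allocations")
# 1MB blocksize -> 10240 extent allocations
# 2MB blocksize -> 5120 extent allocations
# 4MB blocksize -> 2560 extent allocations
# 8MB blocksize -> 1280 extent allocations
```

A smaller blocksize simply means more, smaller growth operations for the same amount of data written.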

So does using a thin disk in combination with a small blocksize cause more fragmentation? Yes, it quite possibly will. However, the real question is whether it will hurt your performance. The answer to that is: no, it won’t. The reason is that the VMFS blocksize is totally irrelevant when it comes to Guest OS I/O. Let’s assume you have a regular Windows VM and this VM is issuing 8KB writes and reads to a 1MB blocksize formatted volume. The hypervisor won’t fetch 1MB, as that could cause substantial overhead… no, it requests from the array exactly what the OS requested, and the array serves up exactly that. I guess what people are worried about the most is sequential I/O, but think about that for a second or two. How sequential is your I/O when you are looking at it from the array’s perspective? You have multiple hosts running dozens of VMs accessing who knows how many volumes and subsequently who knows how many spindles. That sequential I/O isn’t so sequential anymore all of a sudden, is it?!

<edit> As pointed out, many arrays recognize sequential I/O and prefetch, which is correct. However, this doesn’t mean that contiguous blocks are automatically faster, as fragmented blocks can also mean more spindles are involved, etc. </edit>
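For those who want to see the fragmentation argument spelled out, below is a small hypothetical Python simulation (an assumed layout purely for illustration, not the actual VMFS allocator) of two thin VMDKs growing on the same 1MB-blocksize volume. Their blocks end up interleaved on the volume, yet an 8KB guest I/O is still issued as an 8KB request; the blocksize only determines which block the offset happens to live in.

```python
BLOCK_MB = 1  # VMFS blocksize chosen at format time (assumption for this example)

# Two thin VMDKs growing alternately on the same volume: their blocks interleave.
allocation_order = ["vm1", "vm2", "vm1", "vm1", "vm2", "vm1", "vm2", "vm2"]
layout = {}  # per-VMDK list of volume block numbers
for volume_block, owner in enumerate(allocation_order):
    layout.setdefault(owner, []).append(volume_block)
print(layout)  # {'vm1': [0, 2, 3, 5], 'vm2': [1, 4, 6, 7]} -> fragmented on the volume

def guest_io(vmdk, offset_kb, size_kb):
    """Map a guest I/O to (volume block, KB issued); the size stays what the guest asked for."""
    block_kb = BLOCK_MB * 1024
    vmdk_block = offset_kb // block_kb        # which block of the VMDK the offset falls into
    volume_block = layout[vmdk][vmdk_block]   # where that block happens to live on the volume
    return volume_block, size_kb

print(guest_io("vm1", offset_kb=2560, size_kb=8))  # (3, 8): an 8KB read is still an 8KB read
```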

I guess the main takeaway here is: stop worrying about VMFS. It is rock solid and it will get the job done.

No one likes queues

Duncan Epping · Mar 4, 2011 ·

Well, depending on what type of queues we are talking about of course, but in general no one likes queues. We are, however, confronted with queues on a daily basis, especially in compute environments. I was having a discussion with an engineer around storage queues, and he sent me the following, which I thought was worth sharing as it gives a good overview of how traffic flows from queue to queue with the default limits on the VMware side:

From top to bottom:

  • Guest device driver queue depth (LSI=32, PVSCSI=64)
  • vHBA (hard-coded limit: LSI=128, PVSCSI=255)
  • Disk.SchedNumReqOutstanding=32 (VMkernel)
  • VMkernel device driver (FC=32, iSCSI=128, NFS=256, local disk=32)
  • Multiple SAN/array queues (check Chad’s article for more details, but it includes port buffers, port queues, disk queues, etc.; might be different for other storage vendors)

The following is probably worth repeating or clarifying:

The PVSCSI default queue depth is 64. You can increase it to 255 if required, but please note that it is a per-device queue depth, and keep in mind that this is only truly useful when it is increased all the way down the stack and the array controller supports it. There is no point in increasing the queue depth at a single layer when the other layers cannot handle it, as that would only push the delay down one layer. As explained in an article a year or three ago, Disk.SchedNumReqOutstanding is enforced when multiple VMs issue I/Os on the same physical LUN; when it is a single VM it doesn’t apply, and it will be the device driver queue that limits it.
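A simple way to reason about the list above: the number of I/Os a single VM can keep in flight is bounded by the smallest queue it has to pass through. The sketch below is a simplification using the example defaults from this post (not an official formula), but it shows why raising the queue depth at one layer in isolation doesn’t buy you anything:

```python
def effective_depth(guest_driver, vhba, dsnro, device_driver, vms_on_lun):
    """Outstanding I/Os a single VM can realistically keep in flight (simplified model)."""
    limits = [guest_driver, vhba, device_driver]
    if vms_on_lun > 1:
        # Disk.SchedNumReqOutstanding is only enforced when multiple VMs share the LUN.
        limits.append(dsnro)
    return min(limits)

# Defaults from the list above: LSI guest driver 32, vHBA 128, DSNRO 32, FC device driver 32.
print(effective_depth(32, 128, 32, 32, vms_on_lun=1))   # 32
# Raising the PVSCSI queue depth to 255 changes nothing while the rest of the stack stays at 32:
print(effective_depth(255, 255, 32, 32, vms_on_lun=4))  # 32
```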

I hope this provides a bit more insight into how the traffic flows. And by the way, if you are worried that a single VM will flood one of those queues, there is an answer for that: it is called Storage IO Control!

Managing availability through vCenter Alarms

Duncan Epping · Mar 3, 2011 ·

Last week a customer asked me how to respond to, for instance, a partial failure in their SAN environment. A while back I had a similar question from one of my other customers, so I more or less knew where to look, and I had actually already blogged about this over a year ago when I was showing some of the new vSphere features. Although this is fairly obvious, I hardly ever see people using it, hence the reason I wanted to document one of the obvious things that you can implement… Alarms.

Alarms can be used to trigger an alert, and that is of course the default behavior of predefined alarms. However, you can also create your own alarms and associate an action with them. I am showing the possibilities here, not saying that this is a best practice, but the following two screenshots show that it is possible to place a host in maintenance mode based on degraded storage redundancy.

First you define the alarm:

And then you define the action:

Again, this action could have a severe impact when a switch fails and I wouldn’t recommend it, but I wanted to ensure everyone understands the type of combinations that are possible. I would generally recommend sending an SNMP trap or even a notification email (see the sketch after the list)… and I would recommend defining at least the following alarms:

  • Degraded Storage Path Redundancy
  • Duplicate IP Detected
  • HA Agent Error
  • Host connection lost
  • Host error
  • Host warning
  • Host WWN changed
  • Host WWN conflict
  • Lost Network Connectivity
  • Lost Network Redundancy
  • Lost Storage Connectivity
  • Lost Storage Path Redundancy

Many of these deal with hardware issues and you might already be monitoring for them; if not, make sure you monitor them through vCenter and take appropriate action when needed.
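For those who prefer scripting over clicking through the vCenter client, here is a rough sketch of how one of these alarms could be defined through the vSphere API with pyVmomi. Treat it as an illustration only: the eventTypeId, recipient address, and credentials are assumptions you would need to verify for your own environment, and vCenter’s mail settings must be configured for the email action to actually do anything.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter and credentials.
si = SmartConnect(host="vcenter.local", user="administrator", pwd="password")
try:
    # Trigger on the degraded storage path redundancy event; verify this
    # eventTypeId against the events logged in your own environment.
    expression = vim.alarm.EventAlarmExpression(
        eventType=vim.event.EventEx,
        eventTypeId="esx.problem.storage.redundancy.degraded",
        objectType=vim.HostSystem,
        status="red",
    )
    # Notify rather than doing something drastic like entering maintenance mode.
    email_action = vim.alarm.AlarmTriggeringAction(
        action=vim.action.SendEmailAction(
            toList="storage-team@example.com",
            ccList="",
            subject="Degraded storage path redundancy",
            body="Check the fabric before taking any automated action.",
        ),
        green2yellow=False, yellow2red=True, red2yellow=False, yellow2green=False,
    )
    spec = vim.alarm.AlarmSpec(
        name="Degraded Storage Path Redundancy (email)",
        description="Mail the storage team when storage path redundancy degrades",
        enabled=True,
        expression=expression,
        action=vim.alarm.GroupAlarmAction(action=[email_action]),
    )
    # Define the alarm at the root folder so it applies to every host in vCenter.
    si.content.alarmManager.CreateAlarm(entity=si.content.rootFolder, spec=spec)
finally:
    Disconnect(si)
```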

