Yellow Bricks

by Duncan Epping

performance

Single initiator zoning

Duncan Epping · Oct 28, 2008 ·

I’ve been doing VMware Design Reviews lately, and so have my colleagues in the PSO department. A Design Review is a quick scan of your design documentation by a VMware consultant. The consultant checks your documents against best practices and proposes changes to the design.

One of the things we encounter on a regular basis is that admins took the easy path for their storage design zoning. So what’s zoning? In short: a way to partition your fabric into smaller subsets. These subsets give you better security and less interference.

You can do zoning in two ways: soft and hard. With “soft zoning” you use the device WWN in a zone, without any restriction on which port that WWN is attached to. With “hard zoning” you put the physical switch port into a specific zone. So what do I prefer? “Hard zoning”, because it forces you to know how your devices are connected, and that makes troubleshooting a lot easier.

So now that I’ve chosen a way to zone, I can just write down all my port numbers, create a zone, drop them in and I’m done… Well, not so fast; there’s another choice to make before you start: single initiator zoning or multi initiator zoning? A single initiator zone contains a single HBA together with the target device(s). A multi initiator zone puts all initiators that need to communicate with the device(s) into one zone. As you can imagine, multi initiator zones are really easy to set up, but they are definitely not my first choice.

Single initiator zones are the way to go. If there’s no need for initiators to be able to communicate with each other, and for ESX there isn’t, then they shouldn’t be able to. Not only is this more secure, it also cuts out a lot of rubbish on your fibre, such as Registered State Change Notifications (RSCNs). Although RSCN storms don’t occur as often as they used to, they are still a contention risk and should be avoided when possible. So if you’re doing a design, or preparing for one, keep this in mind: single initiator zones are the way to go!
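
To make this concrete, here’s a minimal sketch of what a single initiator zone could look like, assuming a Brocade fabric and the standard Fabric OS zoning commands; the alias names, zone name, config name and domain,port numbers are all made up for the example:

alicreate "esx01_hba1", "1,4"
alicreate "array_spa_p0", "1,0"
zonecreate "z_esx01_hba1_spa0", "esx01_hba1; array_spa_p0"
cfgadd "fabric_cfg", "z_esx01_hba1_spa0"
cfgenable "fabric_cfg"

The zone contains exactly one initiator (the first HBA of esx01) plus the target port, and gets added to an existing zone configuration; repeat this for every HBA in every host.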

There are a whole bunch of good articles on the net about zoning; read them, you might learn a thing or two:

  • TechTarget.com: part1, part2, part3
  • Storage Networking 101: Understanding Fibre Channel Zones
  • Single HBA Zoning

Have fun,

Queuedepth, how and when

Duncan Epping · Oct 27, 2008 ·

You’ve probably heard this from a few dozen people by now when you aren’t getting the expected SAN performance: set your queue depth to a larger size.

So how do you set this queue depth? First find out which module you need to set this option for:

vmkload_mod -l | grep qla

Now set it to a depth of 64 for module qla2300_707:

esxcfg-module -s ql2xmaxqdepth=64 qla2300_707
esxcfg-boot -b

So now you’ve set the queue depth to 64 for your HBA cards, but why? Well, I hope the answer is: “because I monitored my system with esxtop and I noticed that the QUED value was high”.

So there’s your when: you need to change this setting if you notice a high QUED value in esxtop. Take a look at the following example, borrowed from a great blog on this subject:

As you can see in the example, ACTV has a value of 32. Indeed, 32 active commands, because that’s the default queue depth for QLogic cards, and 31 commands queued (QUED). In other words, if we bump the queue depth up to 64, all the commands should be processed instead of queued in the VMkernel.
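
Once the host has rebooted you can check whether the option actually stuck; a quick sanity check, assuming the ESX 3.x esxcfg-module syntax used above:

esxcfg-module -g qla2300_707

This should print the module’s configured option string, including ql2xmaxqdepth=64. Then run esxtop again, switch to the disk view and verify that QUED drops to 0.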

What will this result in?

Virtual Machine tweaks for better performance

Duncan Epping · Jun 20, 2008 ·

Over the last couple of months I gathered the following tweaks for better performance inside the virtual machine, besides disabling or uninstalling useless services and devices. The first two can be scripted; see the sketch right after the list.

  1. Disable the pre-logon screensaver:
    Open Regedit
    HKEY_USERS\.DEFAULT\Control Panel\Desktop
    Change the value of “ScreenSaveActive” to 0.
  2. Disable updates of the last access time attribute on your NTFS filesystem; especially for I/O intensive VMs this is a real boost:
    Open CMD
    fsutil behavior set disablelastaccess 1
  3. Disable all visual effects:
    Right-click your desktop and choose Properties
    Appearance -> Effects
    Disable all options.
  4. Disable mouse pointer shadow:
    Control Panel -> Mouse
    Click on the tab “pointers” and switch “enable pointer shadow” off.
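
For what it’s worth, the first two tweaks are easy to script; a minimal sketch using the standard reg and fsutil tools, to be run inside the guest as an administrator:

rem Disable the pre-logon screensaver
reg add "HKU\.DEFAULT\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 0 /f
rem Stop NTFS from updating the last access time attribute
fsutil behavior set disablelastaccess 1

The visual effects and pointer shadow settings are per-user, so those are easier to handle through the GUI or a group policy.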

So if you’ve got an addition, please post it and I’ll keep updating this blog post!

vscsi stats

Duncan Epping · Jun 19, 2008 ·

Via the Dutch VMUG site I landed on a new blog; well, new for me anyway. The blog is maintained by Toni Verbeiren, and he wrote an excellent article about monitoring performance stats for the SCSI controllers inside a VM:

A tool that is available on ESX 3.5 and creates histograms by default (and complete traces if wanted) is VscsiStats. As options, one provides the vSCSI handle ID and the VM world ID. In order to get any statistics at all, one first needs to start the monitoring:
./vscsiStats -s

After some time, the relevant statistics can be fetched by issuing a command like:
./vscsiStats -i 8260 -w 1438 -p ioLength

Read more at the source…

There also appears to be a PDF on the VMware website which contains good information on the subject.

EDIT: You can find the command here: /usr/lib/vmware/bin
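
In case you’re wondering where the two IDs in the example come from: assuming the standard vscsiStats options on ESX 3.5, you can list the world group and handle IDs of all running VMs, and stop the collection when you’re done:

./vscsiStats -l
./vscsiStats -x

The -l output maps to the -w (world group ID) and -i (handle ID) parameters used above.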

Good read: how many vm’s on 1 ESX host

Duncan Epping · May 25, 2008 ·

Check out this topic on the VMTN forum, started by Gabrie. It’s a good read about how many VMs one would dare to run on a single ESX host.

TexiWill:
This really depends. I know companies that are doing no more than a 10:1 or 20:1 compression, but there are other companies with 50+ VMs running on one box (at the time it was a DL760 with 8 CPUs and 64GB of memory). I do know that the max vCPUs you can put on a system is still 8 * pCores, and the largest box I have seen is the DL580G4 with 4 quad cores (16 cores) and 512GB of memory… so at most 128 vCPUs…

Ken.Cline:
I make this decision based on a couple of things:

* How important are the VMs in question?
    * If they’re truly “mission critical”, then I keep the number small, on the order of 10:1
    * If they’re “important”, then let’s look at 20:1
    * If they’re “who cares if they’re up”, then load ’em up!
* How large is the environment? I like to deploy a minimum of two hosts (three makes me happier)
    * 20 systems @ 2 hosts = 10:1, @ 3 hosts = 7:1
    * 100 systems @ 2 hosts = I wouldn’t do it, @ 3 hosts = 34:1
    * 1,000 systems: now you’re talking! @ 20 hosts = 50:1, @ 30 hosts = 34:1, @ 10 hosts = 100:1
    * 10,000 systems: you can bet I’m going to have a few hosts with 50 to 60 (or more) VMs and some hosts with 10 (or less) VMs!

So, there’s no single “right” answer (other than “it depends”).
