It must have been Virtual Geek week this week! I guess most of you already know Virtual Geek, and if you didn’t, you’ve been missing out on the good stuff. Virtual Geek is maintained by Chad Sakac of EMC, and let’s say there’s a reason his blog is called “Virtual Geek”. Chad posted a series of blog articles that are a must-read for anyone interested in storage in relation to VMware, and in storage and VMware in general.
It started out with the “VMFS best practices and counter FUD” article, where he sets the facts straight and debunks several myths, like the maximum number of VMs per VMFS volume and the use of extents. Besides countering this FUD, the article also contains some very valuable tips, for instance the advanced setting “Disk.SchedNumReqOutstanding” and the why/where/when of changing it.
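As a rough sketch of how you would inspect and change that setting on a classic ESX host (assuming the service-console `esxcfg-advcfg` tool; the value shown is illustrative, so check Chad's post and your storage vendor's guidance for what suits your array):

```shell
# View the current per-LUN number of outstanding requests ESX will
# issue when multiple VMs share the LUN (classic ESX service console):
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding

# Set it, e.g. to 32 so it matches a typical HBA queue depth
# (illustrative value -- do not change this blindly in production):
esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding
```

A reboot is not needed, but the value applies host-wide, so test it before rolling it out across a cluster.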
In his second post this week he revealed that the upcoming release of ESX/vCenter (vSphere) will include the counterpart of the EMC Storage Viewer (a vCenter plugin; a YouTube demo can be found here). For all CLARiiON/Celerra customers who are planning to upgrade to vSphere, a nice “little” extra!
The third one was the one I had personally been waiting for: the brand-new version of the Celerra VSA. If you want to run a virtual “virtual environment”, this virtual storage appliance is a must-have. Especially if you want to test SRM, this VSA will come in handy. Be sure to also download the how-to guide that Chad provided in the “HOWTO 401” article.
Posts four and five deal with multipathing and MRU behavior. I fully agree that understanding how MRU works is essential if you are using that policy. Post five contains the script that is demoed in post four. The script load-balances the LUNs on the back end of the array (across the storage processors) and, of course, makes sure this is reflected on ESX for optimal performance.
Let’s hope there’s more to come over the next few weeks…
Jason Boche says
I read Chad’s mythbusting blog post earlier this week and learned 2 very important things regarding VMFS-3 extents:
1. Extent allocation is not a spill-and-fill pattern; VMware randomly distributes VM placement among the extents.
2. If you lose an extent, your VMs on the other extents are still alive (I already knew this), UNLESS you lose the 1st/parent/master extent, in which case all data is lost (I didn’t know this, and it is more like the native extent behavior in VMFS-2, I believe).