My colleague Rene Jorissen is blogging daily about the Cisco Live convention and the sessions he visits. Check out his blog if you're interested in this convention and his findings, or in any other networking-related articles!
Deleting snapshots when everything else fails…
The common misperception of the term “snapshot” in a VMware context can cause huge problems. I’ve spent a lot of time over the last few years solving snapshot problems. Once and for all: a snapshot isn’t a static copy the way a clone is. A snapshot can best be compared to a redo log, although technically it isn’t one, because it just tracks which disk blocks have changed. When you create a snapshot you only create a small “differences” file (*-delta.vmdk), which will collect all changes until you delete or revert the snapshot. Please remember that reverting (“Go to”) does not delete the differences file! And this file can grow very fast, depending on how many changes are made to the disk.
Another thing many people don’t know is how “Delete all” works, but I’ve already outlined that in an earlier blog post.
When you’ve got, for instance, a snapshot tree nested 10 levels deep with a very large last snapshot, pressing “Delete all” can be almost impossible because the operation temporarily consumes a lot of extra disk space. Deleting the snapshots one by one instead takes a lot of time, and still requires additional disk space.
Another way to remove the snapshots is simply to clone the VM to another datastore. That way you don’t need the extra disk space on the same datastore, and it might also be a good moment to consider rebalancing your LUNs.
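From the ESX service console, such a clone can be done with vmkfstools. A hedged sketch (the datastore and VM names are examples, and the VM should be powered off first):

```shell
# Clone the disk the VM is currently running on -- with snapshots present
# that is the latest delta descriptor, not the base disk. Reading through
# the snapshot chain consolidates all deltas into one snapshot-free VMDK.
# (Paths and the "-000010" snapshot level are examples; adjust to your setup.)
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000010.vmdk \
           /vmfs/volumes/datastore2/myvm/myvm.vmdk
```

Afterwards you would point the VM’s configuration at the new disk (or register a new VM around it) and, once verified, remove the old files.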
Virtual Machine tweaks for better performance
Over the last couple of months I gathered the following tweaks for better performance inside the virtual machine, besides disabling or uninstalling useless services and devices:
- Disable the pre-logon screensaver:
Open Regedit
Navigate to HKEY_USERS\.DEFAULT\Control Panel\Desktop
Change the value of “ScreenSaveActive” to 0.
- Disable updates of the last-access-time attribute on your NTFS filesystem; especially for I/O-intensive VMs this is a real boost:
Open CMD
fsutil behavior set disablelastaccess 1
- Disable all visual effects:
Right-click your desktop and choose Properties
Appearance -> Effects
Disable all options.
- Disable the mouse pointer shadow:
Control Panel -> Mouse
Click on the “Pointers” tab and switch “Enable pointer shadow” off.
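The registry and NTFS tweaks above can also be scripted instead of clicking through the GUI. A sketch for an elevated command prompt in the guest (standard Windows commands, but test on a non-production VM first):

```shell
rem Disable the pre-logon screensaver for the default profile.
reg add "HKU\.DEFAULT\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 0 /f

rem Stop NTFS from updating the last-access timestamp on every read.
fsutil behavior set disablelastaccess 1
```

The fsutil change takes effect after a reboot of the guest.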
So if you’ve got an addition, please post it and I’ll keep updating this blog post!
Update your bookmarks
Update your bookmarks: EMC’s Chad Sakac recently started blogging and has already written some cool articles. Check out his blog and add it to your RSS reader and/or bookmarks.
A couple of outtakes:
I’ve been working with 10 joint VMware/EMC customers this week in NY, NJ and Houston (phew!), and was in Australia the week before last where there were 2 more. Out of those 12, 4 asked me questions about the applicability of “stretching” their ESX clusters across geographic distances – that’s 33%, and absolutely above the “man, I should write a blog on the topic” threshold.
So, what are we talking about?
A stretched cluster is the practice of having ESX member servers in a cluster that are geographically separated. The reason this is generally done is to provide the ability to dynamically move workloads from one datacenter to another. Often, the customer is also considering it for disaster recovery purposes (“I’ll just VMotion in case of a disaster”). Can this be done – ABSOLUTELY – but not considered lightly.
I guess it was inevitable, but it’s still depressing. Traveling around the world means I read a LOT of magazines – there’s that 15 minutes of airplane ascent and descent where my usual toys (PSP, iPod, DS) are verboten. Some stuff (like the Economist) I read to expand my horizons, some stuff (like Maximum PC) I read as the nerd equivalent of Maxim (completely vacuous brain mush).
I couldn’t resist the headline of this month’s Windows IT Pro: “Virtualization Wars: Hyper-V vs. ESX Server ”
I am so not into protocol and transport wars – BUT that still doesn’t change the fact that the future is Ethernet-connected. So, then what about protocol? iSCSI, NFS, or FCoE? Well – NFS will continue to do well – it works well, there’s nothing wrong with it – and it will always have the strengths that it has in the VMware context (so easy to create massive datastores that span ESX clusters or even sites). iSCSI will continue to grow wildly (it is the fastest growing in the market at large, and in EMC’s portfolio) and is (IMHO – I’m still in love) the future of the block storage market en masse. BUT, I’m starting to come around on FCoE.
Scalable Storage Performance PDF
I was just reading through the PDFs I gathered over the last couple of weeks and found the Scalable Storage Performance PDF extremely useful. It contains a good explanation of the queue depth setting and much more…
To reduce latency, ensure that the sum of active commands from all virtual machines does not consistently exceed the LUN queue depth. Either increase the queue depth as shown in the VMware Infrastructure 3 Fibre Channel SAN Configuration Guide (the maximum recommended queue depth is 64) or move the virtual disks of some virtual machines to a different VMFS volume. You can find the guide at
http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_san_cfg.pdf.
Also make sure to set the Disk.SchedNumReqOutstanding parameter to the same value as the queue depth. If this parameter is given a higher value than the queue depth, it is still capped at the queue depth. However, if this parameter is given a lower value than the queue depth, only that many outstanding commands are issued from the ESX kernel to the LUN from all virtual machines. The Disk.SchedNumReqOutstanding setting has no effect when there is only one virtual machine issuing I/O to the LUN.
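The interaction between the two limits is easy to get wrong, so here is a small model of the capping behavior described above (a sketch for illustration only; the function name and structure are mine, not VMware’s):

```python
def effective_outstanding(active_commands, lun_queue_depth, sched_num_req):
    """Model how many outstanding commands reach the LUN.

    active_commands: list with one entry per VM, each the number of
    active commands that VM is issuing to the LUN.
    """
    # Disk.SchedNumReqOutstanding is silently capped at the LUN queue depth.
    limit = min(sched_num_req, lun_queue_depth)
    if len(active_commands) <= 1:
        # With a single VM issuing I/O, the parameter has no effect;
        # only the LUN queue depth applies.
        limit = lun_queue_depth
    return min(sum(active_commands), limit)

# Two VMs with 40 active commands each, queue depth 64, parameter set to 32:
print(effective_outstanding([40, 40], 64, 32))  # -> 32

# A single VM: only the queue depth (64) caps its 80 active commands.
print(effective_outstanding([80], 64, 32))  # -> 64
```

This makes the guide’s recommendation concrete: keeping the parameter equal to the queue depth avoids an unintended lower ceiling when multiple VMs share the LUN.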