
Yellow Bricks

by Duncan Epping


VMware Innovate magazine edition available for download!

Duncan Epping · Nov 19, 2012 ·

Internally at VMware we have this cool magazine called “Innovate”. I am part of the team which is responsible for VMware Innovate. I noticed this tweet from Julia Austin and figured I would share it with all of you. This specific edition is about RADIO 2012, which is a VMware R&D innovation offsite. (So looking forward to RADIO 2013!)

Check out #VMware’s Innovate Magazine. Usually internal only, but we wanted to share this one with our community! ow.ly/fijfP

— Julia Austin (@austinfish) November 14, 2012

There is some cool stuff to be found in this magazine in my opinion. Just one of the many nuggets: did you know VMware was already exploring vSphere FT in 2001? It is a nice reminder of how long typical engineering efforts can take. Download the magazine now!

Ganesh Venkitachalam presented “Hardware Fault Tolerance with Virtual Machines” (or Fault Tolerance, for short) at the “Engineering Offsite 2001.” This was released as a feature called Fault Tolerance for vSphere 4.0.

vSphere Metro Storage Cluster – Uniform vs Non-Uniform

Duncan Epping · Nov 13, 2012 ·

Last week I presented in Belgium at the quarterly VMUG event in Brussels. We did a Q&A and got some excellent questions. One of them was about vSphere Metro Storage Cluster (vMSC) solutions and more explicitly about Uniform vs Non-Uniform architectures. I have written extensively about this in the vSphere Metro Storage Cluster whitepaper but realized I never blogged that part. So although this is largely a repeat of what I wrote in the white paper I hope it is still useful for some of you.

<update>As of 2013 the official required bandwidth is 250Mbps per concurrent vMotion</update>

Uniform Versus Nonuniform Configurations

VMware vMSC solutions are classified in two distinct categories, based on a fundamental difference in how hosts access storage. It is important to understand the different types of stretched storage solutions because this will impact your design and operational considerations. Most storage vendors have a preference for one of these solutions, so depending on your preferred vendor it could be that you have no choice. The following two main categories are described on the VMware Hardware Compatibility List:

  • Uniform host access configuration – When ESXi hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to ESXi hosts are stretched across distance.
  • Nonuniform host access configuration – ESXi hosts in each site are connected only to storage node(s) in the same site. Paths presented to ESXi hosts from storage nodes are limited to the local site.

We will describe the two categories in depth to fully clarify what both mean from an architecture/implementation perspective.

With the Uniform Configuration, hosts in Datacenter A and Datacenter B have access to the storage systems in both datacenters. In effect, the storage-area network is stretched between the sites, and all hosts can access all LUNs. NetApp MetroCluster is an example of this. In this configuration, read/write access to a LUN takes place on one of the two arrays, and a synchronous mirror is maintained in a hidden, read-only state on the second array. For example, if a LUN containing a datastore is read/write on the array at Datacenter A, all ESXi hosts access that datastore via the array in Datacenter A. For ESXi hosts in Datacenter A, this is local access. ESXi hosts in Datacenter B that are running virtual machines hosted on this datastore send read/write traffic across the network between datacenters. In case of an outage, or operator-controlled shift of control of the LUN to Datacenter B, all ESXi hosts continue to detect the identical LUN being presented, except that it is now accessed via the array in Datacenter B.
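
If you want to see this difference from an ESXi host itself, a quick check from the ESXi Shell is to list the paths for the device backing a datastore. A minimal sketch, where the naa identifier is just a placeholder for your own device:

esxcli storage core path list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

In a uniform configuration you would expect to see paths terminating on storage nodes in both sites, while in a nonuniform configuration only paths to the local site show up.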

The notion of “site affinity”—sometimes referred to as “site bias” or “LUN locality”—for a virtual machine is dictated by the read/write copy of the datastore. For example, when a virtual machine has site affinity with Datacenter A, its read/write copy of the datastore is located in Datacenter A.

The ideal situation is one in which virtual machines access a datastore that is controlled (read/write) by the array in the same datacenter. This minimizes traffic between datacenters and avoids the performance impact of reads going across the interconnect. It also minimizes unnecessary downtime in case of a network outage between sites. If your virtual machine is hosted in Datacenter B but its storage is in Datacenter A, you can imagine the virtual machine won’t be able to do I/O when there is a site partition.

With the Non-uniform Configuration, hosts in Datacenter A have access only to the array in Datacenter A. Nonuniform configurations typically leverage the concept of a “virtual LUN.” This enables ESXi hosts in each datacenter to read and write to the same datastore/LUN. The clustering solution maintains the cache state on each array, so an ESXi host in either datacenter detects the LUN as local. Even when two virtual machines reside on the same datastore but are located in different datacenters, they write locally without any performance impact on either of them.

Note that even in this configuration each of the LUNs/datastores has “site affinity” defined. In other words, if anything happens to the link between the sites, the storage system on the preferred site for a given datastore is the only remaining one that has read/write access to it, thereby preventing any data corruption in the case of a failure scenario. This also means that it is recommended to align virtual machine – host affinity with datastore affinity to avoid any unnecessary disruption caused by a site isolation.

I hope this helps in understanding the differences between Uniform and Non-Uniform configurations. Many more details about vSphere Metro Storage Cluster solutions, including design and operational considerations, can be found in the vSphere Metro Storage Cluster whitepaper. Make sure to read it if you are considering, or have implemented, a stretched storage solution!

Warning: Latest OS X updates cause issues with Fusion 5.0.x!

Duncan Epping · Nov 12, 2012 ·

On the VMware VMTN Forums it is reported that VMware Fusion 5.0.x in combination with the latest OS X updates (MacBook Air and MacBook Pro Update 2.0) causes virtual machines to crash. This is reported in the following threads:

  • http://communities.vmware.com/thread/425117
  • http://communities.vmware.com/thread/425153
  • http://communities.vmware.com/thread/425010

The error encountered is:

VMware Fusion has encountered an error and has shut down the virtual machine.

You can easily work around this, as mentioned by Darius:

In the meantime, please try this: With your VM powered off, go into the Virtual Machine > Settings, then choose Display, and turn off the Accelerate 3D Graphics option.  Then close the Settings window and try to power on your VM.
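
For those who prefer to change this outside of the Fusion UI: as far as I know the Accelerate 3D Graphics toggle corresponds to the mks.enable3d option in the virtual machine’s .vmx configuration file. A minimal sketch of the line to change (with the VM powered off, and the location of the .vmx file depending on where you store your virtual machines):

mks.enable3d = "FALSE"

Setting it back to "TRUE" (or re-enabling the option in the Settings window) restores 3D acceleration once a fix is available.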

vSphere HA compatibility list, how do I check it?

Duncan Epping · Nov 8, 2012 ·

Someone reported that in their environment VMs could not be restarted because there were no compatible hosts available. The relevant part of the error message was:

N3Vim5Fault16NoCompatibleHostE

I don’t know why it happened in this case, as the log files unfortunately don’t provide these details. This person had manually restarted all of his VMs and that actually worked okay. This could mean that somehow the “compatibility list” that vSphere HA maintains was incomplete or incorrect. So the question would be: how do you validate that if you ever end up in a scenario like this?

First of all, before I forget: create a support dump. That way VMware Global Support Services can help pinpoint your problem and provide tips on how to prevent it from occurring.
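
If you are not sure how to create one, you can generate a support bundle straight from the ESXi Shell with vm-support (the bundle includes the vSphere HA / fdm log files), or export it through the vSphere Client. A minimal example from the shell:

vm-support

Depending on your environment you may want to point the output at a datastore with sufficient free space before uploading the resulting bundle to VMware Global Support Services.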

On a host (you will have to SSH in to one) you can run a script that provides you with some nice details around this. Let’s go through the options of the script and explain what you can get out of it. The script is called “prettyPrint.sh” and can be found in “/opt/vmware/fdm/fdm/”.
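
Getting to the script looks roughly like this (the hostname is just an example, and SSH needs to be enabled on the host first):

ssh root@esxi-01.local
cd /opt/vmware/fdm/fdm/
ls prettyPrint.sh

From there you can run it with the options below.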

./prettyPrint.sh hostlist

The hostlist option provides all relevant details about the hosts that are part of this cluster, including “hostId”, host name, IP address, etc.

./prettyPrint.sh clusterconfig

The clusterconfig option provides all configuration info of your cluster, such as admission control and isolation response settings.

./prettyPrint.sh compatlist

The compatlist option provides the list of VMs and the hosts they are compatible with; this option only applies to vSphere 5.0.

./prettyPrint.sh vmmetadata

The vmmetadata option provides the list of VMs and the hosts they are compatible with; this option only applies to vSphere 5.1.

So in this case “vmmetadata” was the important one, as it lists which hosts each VM is compatible with. In the output below, “<index>0</index>” refers to a VM and “<compatMask>0,1,2,3</compatMask>” refers to the hosts it is compatible with. Nice right?!

   <compatMatrix>
      <restartCompat>
         <index>0</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
      <restartCompat>
         <index>1</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
      <restartCompat>
         <index>2</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
   </compatMatrix>

** Update: Added Portgroup Test **

On VMTN someone asked if HA also takes networking into account when restarting VMs. If a given portgroup is not available on specific hosts, will HA place VMs smartly? In my test I removed the “VM Network” portgroup from one of my hosts (the host with ID 2). When listing the compatibility list the following shows up:

<restartCompat>
       <index>0</index>
       <compatMask>0,1,3</compatMask>
</restartCompat>

As you can see, the host with ID 2 is missing.

Resizing an IDE virtual disk, part two

Duncan Epping · Nov 7, 2012 ·

A long time ago I wrote this article about resizing an IDE virtual disk. I just ran out of disk space on my Windows 7 VM and needed to increase the disk. Unfortunately the Windows 7 VM had an IDE disk and the Web Client didn’t allow me to simply increase the size of the VMDK. So this is what I had to do, and yes, I agree it should be easier than this.

  1. Remove the IDE vmdk from the VM
  2. Edit the “vmdk” descriptor file (it can be found under /vmfs/volumes/<datastore_name>/<vm_name>/)
  3. Change ddb.adapterType from “ide” to “lsilogic” (see the descriptor snippet after this list)
  4. Add the IDE vmdk to the VM
  5. Change the size of the disk
  6. Remove the IDE vmdk from the VM
  7. Edit the “vmdk” descriptor file again
  8. Change ddb.adapterType from “lsilogic” back to “ide”
  9. Add the IDE vmdk to the VM
  10. Power on the VM and “extend” the partition within Windows 7
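
To give you an idea of what steps 2/3 and 7/8 boil down to: the descriptor is a small text file, and the only line you touch sits in the “Disk Data Base” section at the bottom. A minimal, partial example (the other ddb lines in your descriptor will differ and should be left alone):

# The Disk Data Base
#DDB
ddb.adapterType = "ide"

In step 3 you change “ide” to “lsilogic”, and in step 8 you change it back.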

There might be an easier way of doing this, and I guess using “vmkfstools -X” would also work; I just preferred to take this route as I knew it would work.
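
For completeness, a minimal vmkfstools sketch (assuming you want to grow the disk to 60GB, with the same placeholder path as above):

vmkfstools -X 60G /vmfs/volumes/<datastore_name>/<vm_name>/<vm_name>.vmdk

This grows the VMDK in place, but you would still need to extend the partition within Windows afterwards (step 10).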

** note to self, don’t import W7 VMs with an IDE disk, it sucks **

