
Yellow Bricks

by Duncan Epping



RE: VMFS 3 versions – maybe you should upgrade your VMFS?

Duncan Epping · Feb 25, 2011 ·

I was just answering some questions on the VMTN forum when someone asked the following question:

Should I upgrade our VMFS LUNs from 3.21 (some on 3.31) to 3.46? What benefits will we get?

This person was referred to an article by Frank Brix Pedersen, who states the following:

Ever since ESX 3.0 we have used the VMFS3 filesystem, and we are still using it on vSphere. What most people don’t know is that there actually are sub-versions of VMFS:

  • ESX 3.0 – VMFS 3.21
  • ESX 3.5 – VMFS 3.31, key new feature: optimistic locking
  • ESX 4.0 – VMFS 3.33, key new feature: optimistic IO

The good thing about it is that you can use all features on all versions. In ESX 4 thin provisioning was introduced, but it does not need the VMFS to be 3.33; it will still work on 3.21. The changes in VMFS primarily concern the handling of SCSI reservations. SCSI reservations happen a lot: creating a new VM, growing a snapshot delta file, growing a thin-provisioned disk, and so on.

I want to make sure everyone realizes that this is actually not true. All the enhancements made in 3.5, 4.0 and even 4.1 are not implemented at the filesystem level, but rather at the VMFS driver level, or through the addition of specific filters or even a new datamover.

Just to give an extreme example: you can leverage VAAI capabilities on a VMFS volume with VMFS filesystem version 3.21; however, in order to invoke VAAI you will need the VMFS 3.46 driver. In other words, a migration to a new datastore is not required to leverage new features!
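
If you are unsure which on-disk filesystem version a given datastore was formatted with, vmkfstools can report it. A minimal sketch, assuming a datastore named datastore1 (a placeholder; substitute your own datastore name):

# Query the filesystem details of the datastore; the first line of the
# output reports the on-disk VMFS version, for example:
#   "VMFS-3.21 file system spanning 1 partitions."
vmkfstools -P /vmfs/volumes/datastore1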

Storage vMotion performance difference?

Duncan Epping · Feb 24, 2011 ·

Last week I wrote about the different datamovers being used when a Storage vMotion is initiated and the destination VMFS volume has a different blocksize than the source VMFS volume. Not only does it make a difference in terms of reclaiming zeroed space, but as mentioned it also makes a difference in performance. The question that always arises is: how much difference does it make? Well, this week there was a question on the VMTN community regarding a Storage vMotion from FC to FATA and its slow performance. Of course within a second FATA was blamed, but that wasn’t actually the cause of the problem. The FATA disks were formatted with a different blocksize, and that caused the legacy datamover to be used. I asked Paul, who started the thread, if he could check what the difference would be when equal blocksizes were used. Today Paul did his tests and blogged about it here; I copied the table below, which shows the performance improvement the fs3dm datamover brought (please note that VAAI is not used… this is purely a different datamover):

From                             To                               Duration (minutes)
FC datastore (1MB blocksize)     FATA datastore (4MB blocksize)   08:01
FATA datastore (4MB blocksize)   FC datastore (1MB blocksize)     12:49
FC datastore (4MB blocksize)     FATA datastore (4MB blocksize)   02:36
FATA datastore (4MB blocksize)   FC datastore (4MB blocksize)     02:24

As I explained in my article about the datamover, the difference is caused by the fact that the data doesn’t travel all the way up the stack… and yes the difference is huge!
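
Since the datamover selection hinges on the source and destination blocksizes matching, it is worth verifying them before kicking off a Storage vMotion. A quick sketch, assuming hypothetical datastore names FC_datastore and FATA_datastore:

# Print the filesystem details for both datastores and compare the
# "file block size" values; if they differ, the legacy datamover is used.
vmkfstools -Ph /vmfs/volumes/FC_datastore | grep -i "block size"
vmkfstools -Ph /vmfs/volumes/FATA_datastore | grep -i "block size"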

Installing drivers during a scripted install of ESXi

Duncan Epping · Feb 23, 2011 ·

As you hopefully have read, I have been busy over the last weeks with a new project. This project is all about enabling migrations to ESXi. I wrote two articles for the ESXi Chronicles blog, of which the first describes the scripted install procedure and the second gives some additional advanced examples of what is possible in these scripts. Based on those articles I started receiving some questions from the field, and last week I had a conference call with a customer who had issues “injecting” a driver into ESXi during the install. Normally installing a driver is done by using a simple “esxupdate” command and a reboot, but in this case the situation was slightly different, so let me explain the problem first.

This customer implemented a script that would run during the %firstboot section. The name of the section already explains what it does: it runs after the reboot of the scripted install has been completed. The way this works is that ESXi creates a script in /etc/vmware/init/init.d with the prefix 999. This script runs as the last script during the boot and is generally used to configure the host. After a final reboot this script is automatically deleted and the host is ready to be added to vCenter.

The challenge this customer was facing, however, was that they were using a Xsigo network environment. In this scenario a server that needed to be reinstalled would get a temporary network configuration that would only work during the first boot, meaning that before the %firstboot section would even run, the original network configuration would be restored. The problem with that is that the original network configuration requires the drivers to be installed before it can be used. In other words, after the reboot done by the installer you will not be able to access the network unless you have loaded the drivers. This rules out downloading the drivers during the %firstboot section. Now how do we solve this?

The scripted installation has multiple sections that can contain your commands. The first section is called %post. The ESXi setup guide describes this section as follows:

Executes the specified script after package installation has been completed. If you specify multiple %post sections, they are executed in the order they appear in the installation script.

This means that in the case of this customer we would be able to use this section to download any required driver package with, for instance, “wget”, and that is what I did. I issued the command below during the %post section and rebooted the server.

wget http://192.168.1.100/xsigo.zip

The problem, however, was that the package wasn’t persisted after a reboot, which brought me back to right where I began: without a network after the restart. Then I figured that during the install a local datastore is created, and I could use that as persistent storage. So I issued the following command in the %post section of the script:

wget http://192.168.1.100/xsigo.zip -O /vmfs/volumes/datastore1/xsigo.zip

I rebooted the installer and checked after the install whether the driver bundle was there or not, and yes it was. The only thing left to do was to install the driver in the %firstboot section. This by itself is a fairly simple task:

esxupdate --bundle=/vmfs/volumes/datastore1/xsigo.zip update

As the host will need to reboot before the drivers are loaded, I also added a “reboot” command at the end of the %firstboot section. This ensures the drivers are loaded and the network is accessible.
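
Putting it all together, the relevant sections of the kickstart file would look roughly like the outline below. This is a minimal sketch, assuming the same hypothetical web server address (192.168.1.100) and default local datastore name (datastore1) used in the commands above:

# %post runs within the installer environment, while the temporary
# network configuration still works: stage the driver bundle on the
# freshly created local datastore so it survives the reboot.
%post --interpreter=busybox
wget http://192.168.1.100/xsigo.zip -O /vmfs/volumes/datastore1/xsigo.zip

# %firstboot runs after the reboot of the scripted install: install the
# staged bundle, then reboot once more so the drivers are loaded.
%firstboot --interpreter=busybox
esxupdate --bundle=/vmfs/volumes/datastore1/xsigo.zip update
reboot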

I guess this demonstrates how easy a solution can be. When I first started looking into this issue and started brainstorming I was making things way too complex. By using the standard capabilities of the scripted install mechanism and a simple “wget” command you can do almost everything you need to do during the install. This also removes the need to fiddle around with injecting drivers straight into the ISO itself.

Go ESXi 🙂

Management Cluster / vShield Resiliency?

Duncan Epping · Feb 14, 2011 ·

I was reading Scott’s article about using dedicated clusters for management applications, which was quickly followed by a bunch of quotes turned into an article by Beth P. from TechTarget. Scott mentions that he had posed the original question on Twitter: were people doing dedicated management clusters, and if so, why?

As he mentioned, only a few responded, and the reason for that is simple: hardly anyone is doing dedicated management clusters these days. The few environments that I have seen doing it were large enterprise environments or service providers where this was part of an internal policy. Basically, in those cases a policy would state that “management applications cannot be hosted on the platform they are managing”, and some even went a step further, where these management applications were not even allowed to be hosted in the same physical datacenter. Scott’s article was quickly turned into an “availability concerns” article by TechTarget, to which I want to respond. I am by no means a vShield expert, but I do know a thing or two about the product and the platform it is hosted on.

I’ll use vShield Edge and vShield Manager as an example, as Scott’s article mentions vCloud Director, which leverages vShield Edge. This means that vShield Manager needs to be deployed in order to manage the edge devices. I was part of the team responsible for the vCloud Reference Architecture, and also part of the team that designed and deployed the first vCloud environment in EMEA. Our customer had their worries as well about the resiliency of vShield Manager and vShield Edge, but as they are virtual they can easily be “protected” by leveraging vSphere features. One thing I want to point out though: if vShield Manager is down, vShield Edge will continue to function, so no need to worry there. I created the following table to display how vShield Manager and vShield Edge can be “protected”.

Product           vShield Manager   VMware HA   VM Monitoring   VMware FT
vShield Manager   Yes (*)           Yes         Yes             Yes
vShield Edge      Yes (*)           Yes         Yes             Yes

Not only can you leverage these standard vSphere technologies, there is more that can be leveraged:

  • Scheduled live clone of vShield Manager through vCenter
  • Scheduled configuration backup of vShield Manager (*)

Please don’t get me wrong here, there are always ways to get locked out, but as Edward Haletky stated, “In fact, the way vShield Manager locks down the infrastructure upon failure is in keeping with longstanding security best practices” (quote from Beth P.’s article). I also would not want my door to open up automatically when there is something wrong with my lock. The trick, though, is to prevent a “broken lock” situation from occurring, and to utilize vSphere capabilities in such a way that the last known state can be safely recovered if one does occur.

As always, an architect/consultant will need to work with all the requirements and constraints and, based on the capabilities of a product, come up with a solution that offers maximum resiliency. With the options mentioned above, you can’t tell me that VMware doesn’t provide these.

Want a free HA/DRS Technical Deepdive Book?

Duncan Epping · Feb 10, 2011 ·

Want a free HA/DRS Technical Deepdive Book? Watch vChat 15!

In Episode 15 of our vChat series, we have a couple of special guests with us whom I’m sure you will have heard of, if not met before: Frank Denneman and Duncan Epping. These guys embody almost all things deep-dive when it comes to vSphere, and with the recent release of their new book, the VMware HA/DRS Deepdive, we take the opportunity to ask about the background behind the book, whether an electronic version is in the pipeline, and their plans for any future publications. We also discuss VMware Partner Exchange (PEX) 2011. Other topics, as you’d imagine, cover the VMware iPad app (and the potential security issues) and their home vSphere labs.

Watch it here!

