
Yellow Bricks

by Duncan Epping


5.5

vCenter 5.5 Update 1b with OpenSSL and SPBM fix!

Duncan Epping · Jun 13, 2014 ·

For those not monitoring the VMware website like a hawk… VMware just released vCenter 5.5 Update 1b. This update contains a couple of fixes that are critical in my opinion, so make sure to upgrade vCenter as quickly as possible:

  • Update to OpenSSL library addresses security issues
    OpenSSL libraries have been updated to versions openssl-0.9.8za, openssl-1.0.0m, and openssl-1.0.1h to address CVE-2014-0224.
  • Under certain conditions, Virtual SAN storage providers might not be created automatically after you enable Virtual SAN on a cluster
    When you enable Virtual SAN on a cluster, Virtual SAN might fail to automatically configure and register storage providers for the hosts in the cluster, even after you perform a resynchronization operation. This issue is resolved in this release. You can view the Virtual SAN storage providers after resynchronization. To resynchronize, click the synchronize icon in the Storage Providers tab.

You can download the bits here.

Disconnect a host from VSAN cluster doesn’t change capacity?

Duncan Epping · Jun 13, 2014 ·

Someone asked this question on VMTN this week, and I received a similar question from another user… If you disconnect a host from a VSAN cluster, the total amount of available capacity does not change. The customer was wondering why that is. Well, the answer is simple: you are not disconnecting the host from your VSAN cluster, you are disconnecting it from vCenter Server (in contrast to HA and DRS, by the way). In other words: your VSAN host is still providing storage to the VSAN datastore when it is disconnected.

If you want a host to leave a VSAN cluster you have two options in my opinion:

  • Place it in maintenance mode with full data migration and remove it from the cluster
  • Run the following command from the ESXi command line:
    esxcli vsan cluster leave

Please keep that in mind when you do maintenance… Do not use “disconnect” but actually remove the host from the cluster if you do not want it to participate in VSAN any longer.
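
If you want to quickly verify from the ESXi command line whether a host is (still) part of a VSAN cluster, something along these lines should do the trick. This is just a minimal sketch; the exact output of the commands can differ per build, and of course you should only run "leave" after the data has been evacuated:

# Check whether this host is currently a member of a VSAN cluster
esxcli vsan cluster get

# Leave the VSAN cluster (only after a full data migration!)
esxcli vsan cluster leave

# Running "get" again should now show the host is no longer in the cluster
esxcli vsan cluster get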

vSphere 5.5 U1 patch released for NFS APD problem!

Duncan Epping · Jun 11, 2014 ·

On April 19th I wrote about an issue with vSphere 5.5 U1 and NFS-based datastores going into an APD (All Paths Down) state. People internally at VMware have worked very hard to root cause the issue and fix it. The log entries witnessed for this issue look as follows:

YYYY-04-01T14:35:08.075Z: [APDCorrelator] 9414268686us: [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
YYYY-04-01T14:36:55.274Z: No correlator for vob.vmfs.nfs.server.disconnect
YYYY-04-01T14:36:55.274Z: [vmfsCorrelator] 9521467867us: [esx.problem.vmfs.nfs.server.disconnect] 192.168.1.1/NFS-DS1 12345678-abcdefg0-0000-000000000000 NFS-DS1
YYYY-04-01T14:37:28.081Z: [APDCorrelator] 9553899639us: [vob.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed. 

More details on the fix can be found here: http://kb.vmware.com/kb/2077360

New beta of the vSphere 5.5 U1 Hardening Guide released

Duncan Epping · May 28, 2014 ·

Mike Foley just announced the new release of the vSphere 5.5 U1 Hardening Guide. Note that it is still labeled as a “beta” as Mike is still gathering feedback; however, the document should be finalized in the first week of June.

For those concerned about security, this is an absolute must-read! As always, before implementing ANY of these recommendations, make sure to try them on a test cluster first and verify the expected functionality of both the vSphere platform and the virtual machines and applications running on top of it.

Nice work Mike!

One versus multiple VSAN disk groups per host

Duncan Epping · May 22, 2014 ·

I received two questions on the same topic this week, so I figured it would make sense to write something up quickly. The questions were about an architectural decision for VSAN: one versus multiple VSAN disk groups per host. I have explained the concept of disk groups already in various posts, but in short this is what a disk group is and what the requirements are:

A disk group is a logical container for the disks used by VSAN. Each disk group needs at a minimum 1 magnetic disk and can have a maximum of 7 magnetic disks. Each disk group also requires exactly 1 flash device.

[Figure: VSAN disk groups]

Now, when designing your VSAN cluster, at some point the question will arise: should I have one or multiple disk groups per host? Can and will it impact performance? Can it impact availability?

There are a couple of things to keep in mind when it comes to VSAN, if you ask me. The flash device that is part of each disk group is the caching/buffering layer for the disks in that group; without the flash device those disks are also unavailable. As such, a disk group can be seen as a “failure domain”: if the flash device fails, the whole disk group is unavailable for that period of time. (Don’t worry, VSAN will automatically rebuild all the components that are impacted.) Another thing to keep in mind is performance. Each flash device will provide a certain number of IOPS. A higher total number of IOPS could (and probably will) change performance drastically; however, it should be noted that capacity could still be a constraint. If this all sounds a bit fluffy, let’s run through an example!

  • Total capacity required: 20TB
  • Total flash capacity: 2TB
  • Total number of hosts: 5

This means that per host we will require:

  • 4TB of disk capacity (20TB/5 hosts)
  • 400GB of flash capacity (2TB/5 hosts)

This could simply result in each host having 2 x 2TB NL-SAS drives and 1 x 400GB flash device. Let’s assume your flash device is capable of delivering 36000 IOPS… You can see where I am going, right? What if I had 2 x 200GB flash devices and 4 x 1TB magnetic disks instead? Typically the lower-capacity drives will do fewer write IOPS, but for the Intel S3700, for instance, that is only 4000 less. So instead of 1 x 36000 IOPS it would result in 2 x 32000 IOPS. Yes, that could have a nice impact indeed…

But not just that: we also end up with more disk groups and, as a result, smaller fault domains. On top of that we end up with more magnetic disks, which means more IOPS per GB of capacity in general. (If a 2TB NL-SAS drive does 80 IOPS, then two 1TB NL-SAS drives will do 160 IOPS: the same capacity in TB, but twice the IOPS if you need it.)
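
To make the math from the example a bit more tangible, here is a quick back-of-the-envelope sketch. The numbers are the ones used above; the IOPS figures are purely illustrative and will differ per drive model:

#!/bin/sh
# Per-host sizing: 20TB capacity and 2TB flash divided over 5 hosts
hosts=5
echo "Disk capacity per host:  $(( 20 / hosts )) TB"      # 4 TB
echo "Flash capacity per host: $(( 2000 / hosts )) GB"    # 400 GB

# Option A: 1 disk group  = 1 x 400GB flash + 2 x 2TB NL-SAS
echo "Option A flash IOPS:    $(( 1 * 36000 ))"           # 36000
echo "Option A magnetic IOPS: $(( 2 * 80 ))"              # 160

# Option B: 2 disk groups = 2 x 200GB flash + 4 x 1TB NL-SAS
echo "Option B flash IOPS:    $(( 2 * 32000 ))"           # 64000
echo "Option B magnetic IOPS: $(( 4 * 80 ))"              # 320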

In summary: yes, there is a benefit in having multiple disk groups per host, and as such more flash devices…

