Yellow Bricks

by Duncan Epping


VSAN Design Consideration: Booting ESXi from USB with VSAN?

Duncan Epping · Mar 10, 2014 ·

** Update: as of November 21st, SD/USB boot is also supported with higher memory configurations when the core dump partition size is increased. Also read this KB article on the topic of increasing the ESXi diagnostic partition size. **

One thing most people probably won’t realize is that there is a design consideration with VSAN when it comes to installing ESXi. Many of you have probably been installing ESXi on USB or SD, and this is still supported with VSAN. There is one caveat, however, and that caveat is around the total number of GBs of memory in a host. The design consideration is fairly straightforward and is also documented in the VSAN Design and Sizing Guide. Just to make it a bit easier to find, I copied/pasted it here for your convenience:

  • Use SD, USB, or hard disk devices as the installation media whenever ESXi hosts are configured with as much as 512GB memory. Minimum size for USB/SD card is 4GB, recommended 8GB.
  • Use a separate magnetic disk or solid-state disk as the installation device whenever ESXi hosts are configured with more than 512GB memory.

You may wonder what the reason is. The reason is that VSAN uses the core dump partition to store VSAN traces, which can be used by VMware Global Support Services and the VMware Engineering team for root cause analysis when needed. So keep this in mind when configuring hosts with more than 512GB of memory.
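
To make the guideline concrete, here is a minimal sketch in plain Python (my own naming, not a VMware tool) that encodes the two bullets above:

```python
def recommended_boot_device(host_memory_gb):
    """Return the recommended ESXi installation device for a VSAN host.

    Encodes the rule above: hosts with up to 512GB of memory may boot from
    SD/USB (4GB minimum, 8GB recommended); hosts with more than 512GB should
    install ESXi on a dedicated magnetic or solid-state disk so there is room
    for the core dump partition that holds the VSAN traces.
    """
    if host_memory_gb <= 512:
        return "SD/USB card (min 4GB, recommended 8GB) or local disk"
    return "dedicated magnetic or solid-state disk"


print(recommended_boot_device(256))  # SD/USB is fine
print(recommended_boot_device(768))  # use a local disk instead
```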

Please note that this is what has been tested by VMware and will be supported, so this is not just any recommendation but could have an impact on support!

Virtual SAN GA update: Flash vs Magnetic Disk ratio

Duncan Epping · Mar 7, 2014 ·

With the announcement of Virtual SAN 1.0, a change in the recommended SSD to magnetic disk capacity ratio was also introduced (page 7). (If you had not spotted it yet, the Design and Sizing Guide for VSAN was updated!) The old rule was straightforward: a recommended SSD to magnetic disk capacity ratio of 1:10. This recommendation has been changed to the following:

The general recommendation for sizing flash capacity for Virtual SAN is to use 10 percent of the anticipated consumed storage capacity before the number of failures to tolerate is considered.

Let’s use an example to show the difference:

  • 100 virtual machines
  • Average size: 50GB
  • Projected VM disk utilization: 50%
  • Average memory: 5GB
  • Failures to tolerate = 1

Now this results in the following from a disk capacity perspective:

100 VMs * 50GB * 2 = 10,000GB
100 VMs * 5GB swap space * 2 = 1,000GB
(We multiplied by two because FTT was set to 1, which means two copies of every object.)

This means we will need 11TB to run all virtual machines. As explained in an earlier post, I prefer to add additional capacity for slack space (snapshots etc.) and metadata overhead, so I suggest adding 10% at a minimum. This results in ~12TB of total capacity.
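
For those who like to see the arithmetic spelled out, here is the calculation above as a small Python sketch (the 10% slack figure is my suggestion from this post, not a VMware requirement):

```python
vm_count = 100
avg_vmdk_gb = 50
avg_memory_gb = 5   # the VM swap file equals configured memory
ftt = 1             # failures to tolerate -> ftt + 1 copies of every object

copies = ftt + 1
vmdk_capacity_gb = vm_count * avg_vmdk_gb * copies    # 10,000GB
swap_capacity_gb = vm_count * avg_memory_gb * copies  # 1,000GB

raw_capacity_gb = vmdk_capacity_gb + swap_capacity_gb  # 11,000GB (~11TB)
with_slack_gb = raw_capacity_gb * 1.10                 # ~12,100GB (~12TB)

print(f"raw: {raw_capacity_gb}GB, with 10% slack: {with_slack_gb:.0f}GB")
```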

From an SSD point of view this results in:

  • Old rule of thumb: 10% of 12TB = 1.2TB of cache. Assuming 4 hosts, this is 300GB of SSD per host.
  • New rule of thumb: 10% of ((50% of (100 * 50GB)) + 100 * 5GB) = 300GB. Assuming 4 hosts, this is 75GB of SSD per host.

Now let’s dig a bit deeper: the flash capacity is split into 70% read cache and 30% write buffer. On a per-host basis that means:

  • Old rule: 210GB Read Cache, 90GB Write Buffer
  • New rule: 52.5GB Read Cache, 22.5GB Write Buffer
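
The same comparison as a quick Python sketch of the arithmetic in this post, not an official sizing tool:

```python
hosts = 4
total_capacity_gb = 12_000                 # sized datastore capacity (~12TB)
consumed_gb = 0.5 * (100 * 50) + 100 * 5   # anticipated consumed capacity before FTT = 3,000GB

flash_rules = {
    "old rule": 0.10 * total_capacity_gb,  # 1,200GB cluster-wide
    "new rule": 0.10 * consumed_gb,        # 300GB cluster-wide
}

for name, cluster_flash_gb in flash_rules.items():
    per_host = cluster_flash_gb / hosts
    print(f"{name}: {per_host:.1f}GB flash per host "
          f"({per_host * 0.7:.1f}GB read cache, {per_host * 0.3:.1f}GB write buffer)")
```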

Sure, from a budgetary perspective this is great: only 75GB versus 300GB per host. Personally, I would prefer to spend more money and make sure you have a larger read cache and write buffer… Nevertheless, with this new recommendation you have the option to go lower without violating any of the recommendations.

Note, this is my opinion and doesn’t reflect VMware’s statement. Read the Design and Sizing Guide for more specifics around VSAN sizing.

VMware Virtual SAN launch and book pre-announcement!

Duncan Epping · Mar 6, 2014 ·

Today is the day, finally… the Virtual SAN (VSAN) launch. Many people have been waiting for this one. With 12,000-plus beta participants this was one of the biggest projects I have ever seen within VMware. It is truly impressive to see how the product has grown and what the team has done. Before I provide you with some of the details of the announcement, I want to share something else that all of you should look out for:

Cormac Hogan and I decided it was time for a book on Virtual SAN. Both of us have published many articles about VSAN over the last 9 months and have been working with the product for over a year now, so it only made sense. We have decided, and this wasn’t an easy decision for me, to go with VMware Press. When I say “not an easy decision” I don’t want to sound negative about using a publisher; it is just that I have had a great experience (and great results) with self-publishing. It was time for a new experience though, time to try something different. As we speak we are working hard to get the final set of chapters in for review/editing, and we are hoping to have the book available before VMworld. I am guessing that the rough cuts will be available through Safari in the upcoming weeks; if so, I will let you know via a blog post.

Now let’s get back to the topic of the day: the Virtual SAN launch… So what was announced today?

  • General Availability of Virtual SAN 1.0 the week of the 10th of March
  • vSphere 5.5 Update 1 will support VSAN GA
  • Support for 32 hosts in a Virtual SAN cluster
  • Support for 3200 VMs in a Virtual SAN cluster
    • Note, due to HA restrictions only 2048 VMs can be HA protected!
  • Full support for VMware Horizon / View
  • Elastic and Linear Scalability for both capacity and performance
  • VSAN is not a VSA. Performance is much better than any VSA!
  • 2 Million IOPS validated in a 32 host Virtual SAN cluster
  • ~ 4.5PB in a 32 host cluster
  • 13 different VSAN Ready Node configurations from Cisco, IBM, Fujitsu, and Dell available at GA, more coming soon!

Once again, great work by the VSAN team. Version 1.0 just got released, and I can barely wait for the next release to become available!

Don’t create a Frankencluster just because you can…

Duncan Epping · Feb 19, 2014 ·

In the last couple of weeks I have had various discussions around creating imbalanced clusters: imbalanced from a CPU, memory, or even a storage point of view. This typically comes up when someone wants to bring larger scale to their cluster by adding hosts with more resources of any of the aforementioned types, or when licensing costs need to be limited and people want to restrict certain VMs to run on a specific set of hosts. The latter comes up often when people are starting to look at virtualizing Oracle. (Andrew Mitchell published this excellent article on the topic of Oracle licensing and soft vs hard partitioning which is worth reading!)

Why am I not a fan of imbalanced clusters when it comes to compute or storage resources? Why am I not a fan of purposely crippling your environment to ensure your VMs will only run on a subset of vSphere hosts? The reason is simple: the problems I have seen and experienced, and the inefficiency in certain scenarios. Let’s look at some examples:

Let’s assume I have 4 hosts, each with 128GB of memory. I need more memory in my cluster and I add a host with 256GB of memory. You just went from 512GB to 768GB, which is a huge increase. However, this is only true when you don’t do any form of admission control or resource management. When you do proper resource management or admission control, you need to make sure that all of your virtual machines can run in the case of a failure, and preferably run with equal performance before and after the failure has occurred. If the 256GB of memory is being used and the host containing it goes down, your virtual machines could be impacted: they might not restart, and if they do restart they may not get the same amount of resources as they received before the failure. This scenario also applies to CPU if you create an imbalance there.
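
To put some numbers on that, here is a back-of-the-envelope sketch (not how HA admission control is literally implemented): if you want every VM to be restartable with the same resources after the largest host fails, the extra 128GB in the big host buys you nothing.

```python
original = [128, 128, 128, 128]   # 4 hosts, 512GB total
add_128  = original + [128]       # balanced growth, 640GB total
add_256  = original + [256]       # imbalanced growth, 768GB total

def safe_capacity_gb(cluster):
    # Memory still available when the largest host in the cluster fails.
    return sum(cluster) - max(cluster)

for name, cluster in [("original", original),
                      ("add a 128GB host", add_128),
                      ("add a 256GB host", add_256)]:
    print(f"{name}: {sum(cluster)}GB total, "
          f"{safe_capacity_gb(cluster)}GB usable after a host failure")
# Both additions end up at 512GB of "safe" capacity, even though the
# imbalanced cluster looks 128GB bigger on paper.
```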

Another one I encountered recently was presenting a LUN to a limited set of hosts; in this case a LUN was only presented to 2 hosts out of the 20 hosts in that cluster… Guess what, when those two hosts die, so do your VMs. Not optimal when they are running an Oracle database, for instance. On top of that, I have seen people pitching a VSAN cluster of 16 nodes with only 3 hosts contributing storage. Yes, you can do that, but again… when things go bad, they will go horribly bad. Just imagine one host fails: how will you rebuild the components that were impacted? What is the performance impact? It is very difficult to predict how it will impact your workload, so just keep it simple. Sure, there is a cost overhead associated with separating workloads and creating dedicated clusters, but it will be easier to manage and more predictable in failure scenarios.

I guess in summary: If you want predictability in terms of availability and recoverability of your virtual machines go for a balanced environment, don’t create a Frankencluster!

What is Virtual SAN really about?

Duncan Epping · Feb 18, 2014 ·

When talking about Virtual SAN you hear a lot of people talking about the benefits, about what Virtual SAN is essentially about. You see the same with various other so-called Software Defined Storage solutions. When discussing these solutions, people typically mention things like “enabled within 2 clicks”, or how easy it is to scale out (or scale up, for that matter), how much performance you get because of the way they use flash drives, or some of the advanced data services they offer.

While all of these are important, when it comes to Virtual SAN I don’t think that is the true strength. Sure, it is great to be able to provide a well-performing, easy-to-install, scale-out storage solution… but the true strength in my opinion is: Policy Based Management & Integration. After having worked with VSAN for months, that is probably what stood out the most: policy based management.

What do this deep integration and these policies allow you to do?

  • It provides the ability to specify both Performance and Availability characteristics using the UI (Web Client) or through the API.
    • Number of replicas
    • Stripe width
    • Cache reservations
    • Space reservations
  • It allows you to apply policies to your workload in an easy way through the UI (or API).
  • It provides the ability to do this in a granular way, per VMDK and not per datastore.
  • It allows you to apply policies to a group of VMs, or even all VMs, in a programmatic way when needed.
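
Purely as an illustration (hypothetical field names, not the actual Storage Policy Based Management API), a VSAN policy boils down to a handful of capability/value pairs that you attach per VM or per VMDK:

```python
from dataclasses import dataclass

@dataclass
class VsanPolicy:
    # Illustrative names only; the real capabilities are exposed through
    # the Web Client and the vSphere/SPBM APIs.
    failures_to_tolerate: int = 1   # number of replicas = FTT + 1
    stripe_width: int = 1           # disks each replica is striped across
    read_cache_reservation_pct: float = 0.0
    space_reservation_pct: float = 0.0

# A stricter policy for database VMDKs next to a cluster-wide default.
db_policy = VsanPolicy(failures_to_tolerate=1, stripe_width=2,
                       read_cache_reservation_pct=5.0)
default_policy = VsanPolicy()
print(db_policy)
```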

Over the last couple of months I have played extensively with this feature of VSAN and vCenter, and in my opinion it is by far the biggest benefit of a hypervisor-converged storage solution. Deep integration with the platform, exposed in a simple, VM-centric way through the Web Client and/or the vSphere APIs.

