
Yellow Bricks

by Duncan Epping


VSAN and the AHCI controller (hint: not supported!)

Duncan Epping · Mar 17, 2014 ·

I have seen multiple people reporting this already, so I figured I would write a quick blog post. Several folks are moving from the Beta to the GA release of VSAN, and so far most have been very successful, except for those using disk controllers which are not on the HCL, like the on-board AHCI controller. For whatever reason it appeared on the HCL for a short time during the beta, but it is not supported (and not listed) today. I have had similar issues in my lab, and as far as I am aware there is no workaround at the moment. The errors that appear in the various logfiles contain the keywords "APD", "PDL", "Path lost" or "NMP device <xyz> is blocked".
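If you want to check quickly whether a host is running into this, below is a minimal Python sketch that scans a couple of ESXi log files for those keywords. The log locations and the keyword list are just my assumptions based on the errors described above, not an official diagnostic tool.

import re
from pathlib import Path

# Keywords taken from the errors described above; extend as needed.
KEYWORDS = ["APD", "PDL", "Path lost", "NMP device", "is blocked"]
# Assumed log locations on an ESXi host; adjust for your environment.
LOG_FILES = [Path("/var/log/vmkernel.log"), Path("/var/log/vobd.log")]

pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS))
for log in LOG_FILES:
    if not log.exists():
        continue
    for lineno, line in enumerate(log.read_text(errors="ignore").splitlines(), 1):
        if pattern.search(line):
            print(f"{log}:{lineno}: {line.strip()}")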

Before you install / configure Virtual SAN I highly recommend validating the HCL: http://vmwa.re/vsanhcl (I figured I will need this URL a couple of times in the future, so I created this nice short URL.)

Update: with 5.5 U2 it is reported that AHCI works, however it is still not supported!

Rebuilding your Virtual SAN Lab? Wipe the disks first!

Duncan Epping · Mar 12, 2014 ·

** You can do this in the vSphere UI these days, read William’s article on how to! **

Are you ready to start rebuilding your Virtual SAN lab, going from the beta builds to the GA code, vSphere 5.5 U1? One thing I noticed is that the installer is extremely slow when there are Virtual SAN partitions on the disks. It sits at “VSAN: successfully initialized” for a long time, and the “scanning disks” part takes equally long. Eventually I succeeded, but it just took a long time. That could be because I am running with an uncertified disk controller of course; either way, if you are stuck at that screen there is a simple solution.

Just wipe ALL disks first before doing the installation. I used the GParted live ISO to wipe all my disks clean: just delete all partitions and select “apply”. It takes a couple of minutes, but it saved me at least 30 minutes of waiting during the installation.
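If you prefer scripting it over booting the GParted ISO, here is a rough Python sketch that removes all partitions from a list of disks using ESXi’s partedUtil. The device names are placeholders, the partition table output format is assumed from my own hosts, and this is obviously destructive, so double-check the list before running it.

import subprocess

# Placeholder device names; replace with your own disks. This wipes them!
DISKS = ["/vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX"]

def wipe(disk):
    # partedUtil getptbl prints the label, a geometry line, then one line per partition.
    out = subprocess.run(["partedUtil", "getptbl", disk],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines()[2:]:
        fields = line.split()
        if fields and fields[0].isdigit():
            subprocess.run(["partedUtil", "delete", disk, fields[0]], check=True)

for d in DISKS:
    wipe(d)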

VSAN Design Consideration: Booting ESXi from USB with VSAN?

Duncan Epping · Mar 10, 2014 ·

** Update: as of November 21st we also support SD/USB boot with higher memory configurations when the core dump partition is increased. Also read this KB article on the topic of increasing the ESXi diagnostic partition size **

One thing most people probably won’t realize is that there is a design consideration with VSAN when it comes to installing ESXi. Many of you have probably been installing ESXi on USB or SD, and this is still supported with VSAN. There is one caveat however, and this caveat is around the total number of GBs of memory in a host. The design consideration is fairly straightforward and also documented in the VSAN Design and Sizing Guide. Just to make it a bit easier to find, I copied/pasted it here for your convenience:

  • Use SD, USB, or hard disk devices as the installation media whenever ESXi hosts are configured with up to 512GB of memory. The minimum size for a USB/SD card is 4GB; 8GB is recommended.
  • Use a separate magnetic disk or solid-state disk as the installation device whenever ESXi hosts are configured with more than 512GB memory.

You may wonder what the reason is: VSAN will use the core dump partition to store VSAN traces that can be used by VMware Global Support Services and the VMware Engineering team for root cause analysis when needed. So keep this in mind when configuring hosts with more than 512GB of memory.
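To make the rule above a bit more concrete, here is a tiny Python sketch of the decision. The 512GB threshold and the 4GB/8GB sizes come straight from the guideline quoted above; the function itself is just illustrative.

def recommended_boot_device(host_memory_gb):
    # Guideline from the VSAN Design and Sizing Guide quoted above.
    if host_memory_gb <= 512:
        return "SD/USB (4GB minimum, 8GB recommended) or hard disk"
    return "separate magnetic or solid-state disk (room for core dump / VSAN traces)"

for mem in (128, 512, 768):
    print(f"{mem}GB host -> {recommended_boot_device(mem)}")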

Please note that this is what has been tested by VMware and will be supported, so this is not just any recommendation; it could have an impact on support!

Re: VMware VSAN VS the simplicity of hyperconvergence

Duncan Epping · Dec 11, 2013 ·

I was reading this awesome article by “the other” Scott Lowe. (That is what he calls himself on Twitter.) I really enjoyed the article and think it is a pretty fair write-up, although I’m not sure I really agree with some of the statements or conclusions drawn. Again, do not get me wrong… I really like the article and the effort Scott has put in, and I hope everyone takes the time to read it!

A couple of things I want to comment on:

VMware VSAN VS the simplicity of hyperconvergence

I guess I should start with the title… Just like for companies such as SimpliVity (hey guys, congrats on winning the well-deserved award for best converged solution) and Nutanix, their software is the enabler of their hyper-converged solution. Virtual SAN could be that as well, if you buy a certain type of hardware of course.

Hyper-converged infrastructure takes an appliance-based approach to convergence using, in general, commodity x86-based hardware and internal storage rather than traditional storage array architectures. Hyper-converged appliances are purpose-built hardware devices.

The keyword in this sentence, if you ask me, is “purpose-built”. In most cases there is nothing purpose-built about the hardware. (Except for SimpliVity, as they use a purpose-built component for deduplication.) In May of 2011 I wrote about these HPC servers that SuperMicro was selling and how they could be a nice platform for virtualization; I even asked in my article which company would be the first to start using them in a different way. Funny, as I didn’t know back then that Nutanix was planning on leveraging them, which was something I found out in August of 2011. The servers used by most of the hyper-converged players today are those HPC servers, and they are very much generic hardware devices. The magic is not the hardware being used; the magic is the software, if you ask me, and I am guessing vendors like Nutanix will agree with me on that.

Due to its VMware-centric nature and the fact that VSAN doesn’t present typical storage constructs, such as LUNs and volumes, some describe it as a VMDK storage server.

Not sure I agree with this statement. What I personally like about VSAN is that it does present a “typical storage construct”, namely a (Virtual SAN) datastore. From a UI point of view it just looks like a regular datastore. When you deploy a virtual machine the only difference is that you will be picking a VM Storage Policy on top of that; other than that it is just business as usual. For users, there is nothing new or confusing about it!

As is the case in some hybrid storage systems, VSAN can accelerate the I/O operations destined for the hard disk tier, providing many of the benefits of flash storage without all of the costs. This kind of configuration is particularly well-suited for VDI scenarios with a high degree of duplication among virtual machines where the caching layer can provide maximum benefit. Further, in organizations that run many virtual machines with the same operating system, this breakdown can achieve similar performance goals. However, in organizations in which there isn’t much benefit from cached data — highly heterogeneous, very mixed workloads — the overall benefit would be much less.

VSAN can accelerate ANY type of I/O if you ask me. It has a write buffer and a read cache. Depending on the size of your working set (active data), the size of the cache and the type of policy used, you should always benefit regardless of the type of workload. From a write perspective, as mentioned, I/O will always go to the buffer, and from a read perspective your working set should be in cache. Of course there are always types of workloads where this will not apply, but for the majority it should.
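To illustrate that working-set argument, here is a small Python sketch that checks whether the active data in front of a disk group fits in the read cache. The 70/30 split of flash between read cache and write buffer is the hybrid VSAN default as I understand it, and the example numbers are made up.

def read_cache_fits(flash_capacity_gb, working_set_gb, read_cache_fraction=0.70):
    # Assumed hybrid split: 70% of the flash device acts as read cache,
    # 30% as write buffer.
    return working_set_gb <= flash_capacity_gb * read_cache_fraction

# Made-up example: a 400GB SSD fronting a disk group.
print(read_cache_fits(400, 250))  # True: the working set should stay in cache
print(read_cache_fits(400, 350))  # False: expect more reads from magnetic disk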

VSAN is very much a “build your own” approach to the storage layer and will, theoretically, work with any hardware on VMware Hardware Compatibility list. However, not every hardware combination is tested and validated. This will be one of the primary drawbacks to VSAN…

This is not entirely true. VMware is working on a program called Virtual SAN Ready Nodes. These Virtual SAN Ready Nodes will be pre-configured, certified and tested configurations which are optimized for things like performance, capacity, etc. I haven’t seen the final list yet, but I can imagine certain vendors, like for instance Dell and HP, will want to list specific types of servers with a specific number of disks and specific SSD types to ensure an optimal user experience. So VSAN is indeed a “bring your own hardware” solution, but I think that is the great thing about VSAN… you have the flexibility to use the hardware you want to use. There is no need to change your operational procedures because you are introducing a new type of hardware; just use what you are familiar with.

PS: I want to point out there are some technical inaccuracies in Scott’s post. I’ve pointed these out and am guessing they will be corrected soon.
