
Yellow Bricks

by Duncan Epping


Virtual SAN

Awesome VSAN contest results

Duncan Epping · Jan 24, 2014 ·

Every once in a while someone asks me how VSAN scales. Typically the question is triggered by a conversation with another storage vendor and either a misunderstanding or pure FUD being spread. It is nice to see some of our beta testers showing what is possible with VSAN. How about this result from the Miami VMUG? These guys managed to get their hands on 3 awesome Dell R720xd hosts and found someone willing to host them. Then they started building, and it didn’t take them long to figure out what the limit was on a 3 node cluster… They managed to run 2250 virtual machines on 3 hosts. Yes, that is indeed more than two thousand virtual machines.

Thanks Miami VMUG for going all out.

Building a hyper-converged platform using VMware technology part 2

Duncan Epping · Jan 23, 2014 ·

In part 1 of “Building a hyper-converged platform using VMware technology” I went through the sizing and scaling exercise. To quickly recap, in order to run 100 VMs we would need the following resources (a quick back-of-the-envelope sketch follows the list):

  • 100 x 1.5 vCPUs = ~30 cores
  • 100 x 5 GB = 500GB of memory
  • 100 x 50 GB (plus FTT etc) = 11.8 TB of disk space
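For those who like to see the math, here is a minimal sketch of that sizing exercise in Python. The 5:1 vCPU-to-core ratio is implied by the ~30 cores figure above; the FTT=1 mirroring (two copies of each object) and the swap/slack overheads are my own placeholders, so treat the capacity number as a ballpark rather than the exact figure from part 1.

```python
# Back-of-the-envelope sizing for the 100-VM example above.
# The consolidation ratio is inferred from the "~30 cores" figure;
# FTT=1 mirroring and the swap/slack overheads are assumptions.

VM_COUNT       = 100
VCPUS_PER_VM   = 1.5
VCPUS_PER_CORE = 5        # consolidation ratio implied by ~30 cores
MEM_PER_VM_GB  = 5
DISK_PER_VM_GB = 50
FTT            = 1        # failures to tolerate -> FTT + 1 copies of each object
SLACK          = 1.07     # placeholder for slack / metadata overhead

cores  = VM_COUNT * VCPUS_PER_VM / VCPUS_PER_CORE
mem_gb = VM_COUNT * MEM_PER_VM_GB
# Disk plus VM swap files, mirrored FTT + 1 times, plus slack:
cap_tb = VM_COUNT * (DISK_PER_VM_GB + MEM_PER_VM_GB) * (FTT + 1) * SLACK / 1000

print(f"CPU cores : ~{cores:.0f}")      # ~30
print(f"Memory    : {mem_gb} GB")       # 500 GB
print(f"Capacity  : ~{cap_tb:.1f} TB")  # ~11.8 TB with these assumptions
```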

From a storage perspective 11.8 TB is not a huge amount, 500 GB of memory can easily fit in a single host today, and 30 cores… well, maybe not so easily in a single host, but it is not a huge requirement either. What are our options? Let's look at some example server models that fall into the category we are discussing:

  • SuperMicro Twin Pro – 2U chassis with 4 nodes. Per node: capable of handling 6 * 2.5″ drives and on-board 10GbE. Supports the Intel Xeon E5-2600 family and up to 1TB of memory
    • SuperMicro is often used by startups, especially in the hyper-converged space, but hybrid storage vendors like Tintri also use their hardware. Hey SuperMicro marketing team, this is something to be proud of… SuperMicro probably powers more infrastructure startups than anyone else!
    • Note that you can select from 3 different disk controller types: the LSI 3108, the LSI 3008 and the Intel C600. I highly recommend the LSI controllers!
  • HP SL2500t – 2U chassis with 4 nodes. Per node: capable of handling 6 * 2.5″ or 3 * 3.5″ drives, and FlexibleLOM 10GbE can be included. Supports the Intel Xeon E5-2600 family and up to 512GB of memory
    • You can select from the various disk controllers HP offers, but do note that today only a limited number of controllers are certified.
    • Many probably don’t care, but the HP kit just looks awesome 🙂
  • Dell C6000 series – 2U chassis with 4 nodes. Per node: capable of handling 6 * 2.5″ or 3 * 3.5″ drives. Supports the Intel Xeon E5-2600 family and up to 512GB of memory
    • Note there is no on-board 10GbE or “LOM” type of solution; you will need to add a 10GbE PCIe card.
    • Dell offers 3 different disk controllers, including the LSI 2008 series. Make sure to check the HCL.
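Before digging into specific configurations, a quick sanity check is useful: does a single one of these 4-node 2U chassis cover the part 1 requirements? Below is a minimal sketch; the per-node figures are hypothetical placeholders, not actual SuperMicro, HP or Dell configurations.

```python
# Quick check: does a single 4-node 2U chassis cover the 100-VM example?
# The per-node numbers below are illustrative placeholders only.

NODES = 4

node = {
    "cores":       2 * 8,     # e.g. dual 8-core CPUs (hypothetical)
    "memory_gb":   256,       # hypothetical DIMM configuration
    "raw_disk_tb": 5 * 1.2,   # e.g. 5 x 1.2TB capacity drives, leaving 1 slot for a caching SSD
}

required = {"cores": 30, "memory_gb": 500, "raw_disk_tb": 11.8}

for resource, needed in required.items():
    available = node[resource] * NODES
    status = "OK" if available >= needed else "SHORT"
    print(f"{resource:12} need {needed:>6} have {available:>6.1f} -> {status}")
```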

The first thing to note here is that all of the configurations above come with 4 nodes by default; yes, you can order them with fewer, but personally I wouldn't recommend that. The strange thing is that in order to get configuration details for the Dell and HP you need to phone them up, so let's take a look at the SuperMicro Twin Pro as its details can be found online. What are our configuration options? Plenty, I can tell you that. CPUs ranging from low-end quad-core 1.8GHz up to twelve-core 2.7GHz Intel CPUs. Memory configurations ranging from 2GB DIMMs to 32GB DIMMs, including the various speeds. Physical disks ranging from 250GB 7200 RPM SATA Seagate drives to 1.2TB 10K RPM SAS Hitachi drives. Unlimited possibilities, and that is probably where it tends to get more complicated. [Read more…] about Building a hyper-converged platform using VMware technology part 2

Building a hyper-converged platform using VMware technology part 1

Duncan Epping · Jan 22, 2014 ·

I have been working on a slide deck lately that explains how to build a hyper-converged platform using VMware technology. Of course it focuses heavily on Virtual SAN, as that is one of the core components in the stack. I created the slide deck based on discussions I have had with various customers and partners who were looking to architect a platform for their datacenters that they could easily repeat. A platform with a nice form factor that allowed them to scale out and up. Something that could be used in a full size datacenter, but also in smaller SMB type environments or even ROBO deployments.

I guess it makes sense to start with explaining what hyper-converged means to me, although I already wrote various articles on this topic a while back. I explicitly say “to me” as I am sure many folks will have a different opinion on this. A hyper-converged platform is an appliance type of solution where a single box provides a platform for virtual machines. This box typically contains multiple generic x86 hosts (trying to avoid using the word commodity) on which a hypervisor is installed, local storage which is aggregated into a large shared pool, and network ports. Note that network switching is typically not included in the box itself, well except for virtual switches. In order to aggregate storage into a large shared pool an additional piece of software is required. Typical examples of hyper-converged platforms out there today are Nutanix, SimpliVity and Pivot3.

The question then arises: if these are “just” x86 boxes with a hypervisor and storage software installed, what are the benefits over a regular environment? Those benefits, in my opinion, are:

  • Time to market is short, < 4hrs to install / deploy (probably much faster for the majority)
  • Ease of management and integration
  • Scale out, both capacity and performance wise
  • Typically more affordable (results will vary)

Sounds like a done deal right? Easier, cheaper and faster… It is fair to say that these are great solutions for many companies as they provide you with one throat to choke. By that I mean it is a single SKU offering, which in most cases includes a single point of contact for support. The only downside I have heard from some partners and customers is that these solutions are typically tied to specific hardware and configurations, which is not always the same as your preferred supplier and probably not the exact configuration you prefer. This could lead to operational challenges when it comes to updating / patching, which probably doesn’t make the operational team happy. On top of that there is the “trust” issue. Some people swear by HP and would never ever want to touch any other brand, while others won’t come near it. That is a matter of experience and personal taste I guess. Where is all of this leading? Well this is, in my opinion, where Virtual SAN / VSAN comes in. [Read more…] about Building a hyper-converged platform using VMware technology part 1

Virtual SAN Datastore Calculator

Duncan Epping · Jan 17, 2014 ·

Over a week ago I wrote an article on how to size your Virtual SAN datastore. I included an equation to help everyone who is going through the same exercise. I figured I could make life even easier by creating a simple Virtual SAN Datastore calculator based on this equation.

<EDIT> VMware has launched a great calculator. My tool served as a temporary solution while there was no calculator available; now that there is, I highly recommend using the official tool: http://virtualsansizing.vmware.com/ </EDIT>
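For anyone curious about what such a calculator boils down to, below is a minimal sketch. The formula is a deliberately simplified FTT-based version with placeholder overheads, not the exact equation from the sizing article, and the official VMware tool above is the one to use for real designs.

```python
def vsan_datastore_capacity_gb(vm_count, vmdk_gb, ftt=1, slack=0.30):
    """Rough Virtual SAN datastore capacity estimate (illustration only).

    Simplified assumptions: every object is stored FTT + 1 times, and
    'slack' adds headroom for snapshots, swap files and rebalancing.
    The exact equation from the original sizing article accounts for
    different overheads, so use the official calculator for real designs.
    """
    raw_gb = vm_count * vmdk_gb * (ftt + 1)
    return raw_gb * (1 + slack)

# Example: 100 VMs with a 50 GB disk each, tolerating one host failure.
print(f"~{vsan_datastore_capacity_gb(100, 50) / 1000:.1f} TB")
```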

Virtual SAN Read IO – cache / buffer / spindles

Duncan Epping · Jan 15, 2014 ·

I had a question about how Virtual SAN read IO is handled when the data can be anywhere: read cache, write buffer or disk. One of the engineers recently explained this on VMTN. I figured I would create a quick diagram to illustrate it. Basically how it works is that VSAN will check the read cache first; if the block that needs to be read is not available in the read cache, it will check whether the block is in the write buffer or on disk. Simple right?

In the scenario I drew below, two blocks need to be read. Block 1 is actively served by ESXi-01 and Block 2 is actively served by ESXi-03. In the case of ESXi-01 the block resides in the read cache, so it is read from the cache. In the case of ESXi-03 it is neither in the read cache nor in the write buffer, hence it is read from the magnetic disks. Do note that this is a single virtual machine, so reads are being served from 2 hosts, and depending on which host is actively serving IO for a given block, the block can reside in that host's read cache. The host which is not actively serving IO for that block will not place the block in its read cache! (Of course, if the host which is actively serving IO for a block fails, the other host will take over.)
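To make the lookup order a bit more concrete, below is a minimal sketch of the logic described above. The classes and helper functions are hypothetical stand-ins, not actual VSAN internals; the real code path is obviously far more involved.

```python
# Simplified model of the VSAN read path described above. The tiers are
# hypothetical stand-ins for the flash read cache, the flash write
# buffer and the magnetic capacity disks on the host serving the IO.

class Tier:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks          # block_id -> data

    def get(self, block_id):
        return self.blocks.get(block_id)

def read_block(block_id, read_cache, write_buffer, magnetic_disks):
    """Return (data, location), checking the tiers in order."""
    for tier in (read_cache, write_buffer, magnetic_disks):
        data = tier.get(block_id)
        if data is not None:
            return data, tier.name
    raise KeyError(f"block {block_id} not found")

# Only the host actively serving IO for a block may hold it in its read
# cache; the host holding the other mirror copy does not cache it.
esxi01 = (Tier("read cache", {1: "block-1"}),
          Tier("write buffer", {}),
          Tier("magnetic disk", {1: "block-1"}))
esxi03 = (Tier("read cache", {}),
          Tier("write buffer", {}),
          Tier("magnetic disk", {2: "block-2"}))

print(read_block(1, *esxi01))   # ('block-1', 'read cache')
print(read_block(2, *esxi03))   # ('block-2', 'magnetic disk')
```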

I hope that helps.

