
Yellow Bricks

by Duncan Epping


All-Flash HCI is taking over fast…

Duncan Epping · Nov 23, 2016

Two weeks ago I tweeted about All-Flash HCI taking over fast. Maybe I should have said All-Flash vSAN, as I am not sure every vendor is seeing the same trend. The reason, of course, is that the price of flash keeps dropping while capacity keeps going up. At the same time, with vSAN 6.5 we introduced “all-flash for everyone” by moving the all-flash option down into the vSAN Standard license.

I love getting these emails about huge vSAN environments… this week alone 900TB and 2PB raw capacity in a single all-flash vSAN cluster

— Duncan Epping (@DuncanYB) November 10, 2016

So the question naturally came: can you share what these customers are deploying and using? I shared some of that later via tweets, but I figured it would make sense to share it here as well. When it comes to vSAN there are two layers of flash: one for capacity and one for caching (a write buffer, to be more precise). For the write buffer I am starting to see a trend: the 800GB and 1600GB NVMe devices are becoming more and more popular. Write-intensive SAS-connected SSDs are also often used. Which one you pick largely depends on budget; needless to say, NVMe has my preference when the budget allows for it.
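To make the two layers a bit more tangible, here is a minimal sketch (plain Python, not a VMware tool) of how they map to disk groups: each disk group pairs one cache device, used purely as a write buffer in all-flash, with up to seven capacity devices, and only the capacity tier counts toward datastore capacity. The device sizes in the example are simply the ones mentioned above, not a recommendation.

```python
from dataclasses import dataclass
from typing import List

TB = 1000  # GB per TB, decimal units as drive vendors advertise them


@dataclass
class DiskGroup:
    """One vSAN disk group: a single cache (write-buffer) device plus 1-7 capacity devices."""
    cache_gb: int
    capacity_gb: List[int]

    def __post_init__(self):
        if not 1 <= len(self.capacity_gb) <= 7:
            raise ValueError("a vSAN disk group holds 1 to 7 capacity devices")

    @property
    def raw_capacity_gb(self) -> int:
        # Only the capacity tier counts toward datastore capacity;
        # the cache device is a write buffer and adds nothing to it.
        return sum(self.capacity_gb)


@dataclass
class Host:
    disk_groups: List[DiskGroup]

    @property
    def raw_capacity_tb(self) -> float:
        return sum(dg.raw_capacity_gb for dg in self.disk_groups) / TB


# Example host: two disk groups, each 1x 800GB NVMe cache + 4x 1.92TB capacity SSDs
host = Host([DiskGroup(800, [1920] * 4), DiskGroup(800, [1920] * 4)])
print(f"Raw capacity per host: {host.raw_capacity_tb:.2f} TB")  # -> 15.36 TB
```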

For the capacity tier there are many different options. Most people I talk to are looking at the read-intensive 1.92TB and 3.84TB SSDs. SAS-connected drives are a typical choice for these environments, but that does come at a price. The SATA-connected S3510 1.6TB (available at under 1 euro per GB even) seems to be the choice for many people on a tighter budget, as these devices are relatively cheap compared to the SAS-connected ones. The downside is the shallower queue depth, but if you are planning on having multiple devices per server then this probably isn’t a problem. (Something I would like to see at some point is a comparison between SAS- and SATA-connected drives with similar performance capabilities under real-life workloads, to see if there actually is an impact.)
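Since these discussions usually come down to price per GB and raw capacity, here is a quick back-of-the-envelope sketch. The prices are placeholders I made up for illustration, not quotes, and the 900TB figure refers to the cluster mentioned in the tweet above.

```python
def price_per_gb(price_eur: float, size_gb: float) -> float:
    """Street price divided by advertised (decimal) capacity."""
    return price_eur / size_gb


# Hypothetical list prices, purely for illustration -- plug in your reseller's quotes.
print(f"SATA 1.6TB : {price_per_gb(1500.0, 1600):.2f} EUR/GB")   # ~0.94 EUR/GB
print(f"SAS  1.92TB: {price_per_gb(2400.0, 1920):.2f} EUR/GB")   # ~1.25 EUR/GB

# How many 1.6TB capacity devices a 900TB-raw cluster (as in the tweet above) would need,
# assuming two disk groups of seven capacity devices per host.
devices = 900_000 / 1600
hosts = devices / (2 * 7)
print(f"~{devices:.0f} capacity devices across ~{hosts:.0f} hosts")
```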

With prices still coming down and capacity still going up, it will be interesting to see how the market shifts in the upcoming 12-18 months. If you ask me, hybrid is almost dead. Of course there are still situations where it may make sense (low $-per-GB requirements), but in most cases all-flash just makes more sense these days.

I would be interested in hearing from you as well: if you are doing all-flash HCI/vSAN, what are the specs, and why did you select those specific devices/controllers/types?


Server, Software Defined, Storage, vSAN 6.2, 6.5, software defined storage, virtual san, VMware, vsan, vSphere

Comments

  1. Greg says

    23 November, 2016 at 22:23

    1.9tb capacity drives and 800gb+ cache drives are certainly where I am seeing a lot of folks land.

  2. Kate says

    24 November, 2016 at 09:38

    I have 2 clusters… Both AllFlash. We use:
    Cache – Intel with 10DWPD about 800GB
    Capacity – SanDisk with 3 DWPD about 900GB

  3. Cristiano Cumer (@sappomanno) says

    24 November, 2016 at 17:18

    Small VDI implementation here

    2 disk groups per server.
    Each DG made of:
    – 1x Intel S3710 – 400 GB – Cache
    – 3 x Intel S3610 – 800 GB – Capacity

  4. MrTaliz says

    24 November, 2016 at 20:17

    The difficult thing is finding a balance between CPU, memory & disk.
    We have VMs in the five figures in eight different datacenters, and now I am tasked with replacing the underlying hardware with HCI.
    Currently we’re looking at something that somewhat mirrors the VSAN ready nodes, albeit with slight adjustments.

    As we are a Cisco shop we use UCS, and we will probably go with something like this when it comes to disk, per server(C240):
    2x 800GB Intel NVMe for cache
    14x 1.6TB Intel SATA for capacity

    Then we will try to figure out what kind of CPU & memory would be appropriate for that. Probably 2x 14-18 core CPUs and 512-768GB RAM.

    The next generation UCS servers, M5, will be able to hold 10x hot swap NVMe drives btw. So soon SATA/SAS will be dead too..
