
All-flash VSAN configuration example

Duncan Epping · Mar 31, 2015

I was talking to a customer this week who was looking to deploy various 4-node VSAN configurations. They needed a solution that would provide performance while minimizing moving components, due to the location and environmental aspects of the deployment; all-flash VSAN is definitely a great choice for this scenario. I looked at various server vendors and, based on their requirements (and budget), put together a nice configuration (in my opinion) which comes in at slightly less than $45K.

What I found interesting is the price of the SSDs, especially for the “capacity tier”, as it is very close to that of 10K RPM SAS. I selected the Intel S3500 for the capacity tier as it was one of the cheapest devices listed on the VMware VSAN HCL. It will be good to track GB/$ for the new HCL entries that will be coming soon; so far the S3500 seems to be the sweet spot, and from a price point of view the 800GB devices are the most cost-effective at the moment. The S3500 also seems to perform well, as demonstrated in this paper by VMware on VSAN scaling and performance.

This is what the bill of materials looked like, and I can’t wait to see it deployed:

  • Supermicro SuperServer 2028TP-HC0TR – 2U TwinPro2
  • Each node comes with:
    • 2 x Eight-Core Intel Xeon Processor E5-2630 v3 2.40GHz 20MB Cache (85W)
    • 256 GB in 8 DIMMs at 2133 MHz (32GB DIMMs)
    • 2 x 10GbE NIC ports (dual 10-Gigabit Ethernet)
    • 1 x 400GB Intel S3700 (caching tier)
    • 5 x 800GB Intel S3500 (capacity tier)
    • LSI 3008 12G SAS controller

That is a total of 16TB of flash-based storage capacity, 1TB of memory and 64 cores in a mere 2U. The price above is based on a simple online configurator and does not include any licenses; a very compelling solution if you ask me.
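
For anyone who wants to sanity-check those totals, here is a minimal sketch in Python. The node count and per-node CPU/memory figures come from the BOM above; the drive count (five 800GB capacity SSDs per node, caching SSDs excluded) is my reading of the configuration:

  # Quick sanity check of the cluster totals quoted above.
  NODES = 4
  CORES_PER_NODE = 2 * 8       # two eight-core E5-2630 v3 CPUs per node
  RAM_GB_PER_NODE = 256        # 8 x 32GB DIMMs
  CAPACITY_SSDS_PER_NODE = 5   # assumed: 5 x 800GB Intel S3500 per node
  SSD_GB = 800

  total_cores = NODES * CORES_PER_NODE                             # 64 cores
  total_ram_tb = NODES * RAM_GB_PER_NODE / 1024                    # 1 TB
  total_flash_tb = NODES * CAPACITY_SSDS_PER_NODE * SSD_GB / 1000  # 16 TB
  print(f"{total_cores} cores, {total_ram_tb:.0f} TB RAM, {total_flash_tb:.0f} TB raw flash")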


Comments

  1. brent says

    1 April, 2015 at 05:27

    You had me until the ‘2u’ at the end. Aren’t we talking about 8u here? Cool idea though. I’d love to benchmark next to an AFA.

    • Duncan Epping says

      1 April, 2015 at 07:48

      It is a 2U / 4-node solution by SuperMicro.

  2. Johnny cash says

    1 April, 2015 at 17:10

    Interesting server design.

  3. John Nicholson says

    1 April, 2015 at 18:20

    The total BOM, assuming vSphere Standard and VSAN all-flash pricing, would approach $100K.

    The storage piece of this BOM ($32K in VSAN licensing, $3K for the S3700’s, $14K for the S3500’s) would be $49K. That is $6 per usable GB (worked out in the sketch below). This is not very compelling with the per-socket pricing, IMHO, until the 3D NAND foundries come online later this year and data reduction (or other fun efficiency tricks) gets thrown in.

    Now for VDI or VSPP it should be fine (paying per desktop or per GB is a lot nicer, and scales down well).

    In the meantime, unless there is a compelling case for it, I think we’ll be largely focusing on hybrid deployments for 2015.
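
    To make the arithmetic behind that $6 per GB figure explicit, here is a minimal sketch in Python; it assumes the license and drive costs quoted above and 50% usable capacity after FTT=1 mirroring:

      # Rough cost-per-usable-GB estimate, using the figures from this comment.
      vsan_licensing = 32_000  # USD
      s3700_cost = 3_000       # caching tier SSDs
      s3500_cost = 14_000      # capacity tier SSDs

      storage_total = vsan_licensing + s3700_cost + s3500_cost  # $49K
      usable_gb = 16_000 * 0.5  # 16TB raw, halved by FTT=1 mirroring
      print(f"${storage_total / usable_gb:.2f} per usable GB")  # ~$6.12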

    • Duncan Epping says

      1 April, 2015 at 21:04

      Not an optimal design from a TCO point of view, but to be honest that wasn’t the goal here. There was a set of requirements, and minimum resources were needed.

      • John Nicholson. says

        7 April, 2015 at 08:22

        Larger drive-count configurations (2RU with, say, 24 drives) could offer nice value. On top of that, the value is in the policy management and integration.

        • Duncan Epping says

          7 April, 2015 at 10:44

          That is indeed what most customers end up going for: 2U servers and many drives…

  4. Martin Gavanda says

    1 April, 2015 at 18:57

    It is a nice design. What we are doing is something similar, but with a few key differences compared to EVO:RAIL.

    The product is focused primarily on SMB customers, so we use 2 compute nodes with HP VSA converged storage and 1 storage node for backups with Veeam. Basically, in 5U (3U for servers and 2U for dual 10G EX3300 switches) you get a complete solution for your on-premises virtual datacenter. The nice thing is that you can use the Essentials Plus kit, which is quite attractive for SMB.

    • Duncan Epping says

      7 April, 2015 at 10:44

      Sounds cool. I want to point out that you can use Essentials as well with VSAN…

      • Neil says

        9 September, 2015 at 23:34

        Hello Duncan,
        Can you elaborate more on using “Essentials with VSAN”?

        What are the pros and cons?

        • Duncan Epping says

          10 September, 2015 at 10:16

          When you use Essentials, things like DRS are not available in your environment. For VSAN that is not a big deal, but it could be for your workloads. Personally, I would prefer to use a higher license SKU.

  5. Scott says

    1 April, 2015 at 19:14

    This is 16TB of RAW capacity; I would be curious what the actual ‘real world’ capacity of the cluster is when you start factoring in the copies of data across the VSAN. Would I be correct in thinking the actual usable capacity would be near half? Any good formulas for this?

    • John Nicholson. says

      7 April, 2015 at 08:24

      Scott, the default policy is mirroring (so 50% efficiency); however, if you use thin provisioning you generally get another 20% back. If something is not critical (like a lab system) you could have a policy that doesn’t mirror those systems. So, mileage may vary 🙂

    • BrandinB says

      9 April, 2015 at 02:47

      Well, you are supposed to only fill disks to 70%, then take 50% of that for mirroring, then account for a node failure (-25%). I think you are left with about 4.2TB of space you should actually use (see the sketch below). Someone correct me if I am wrong.
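
      A minimal sketch of that calculation in Python, assuming the rules of thumb from this thread (70% fill limit, FTT=1 mirroring, and one node out of four reserved for rebuilds):

        # Rough usable-capacity estimate for the 4-node all-flash cluster.
        RAW_TB = 16.0           # 4 nodes x 5 x 800GB capacity SSDs
        MIRROR_FACTOR = 0.5     # FTT=1 keeps two copies of every object
        FILL_LIMIT = 0.70       # don't fill the datastore beyond ~70%
        REBUILD_RESERVE = 0.25  # headroom to re-protect after losing 1 of 4 nodes

        usable_tb = RAW_TB * MIRROR_FACTOR * FILL_LIMIT * (1 - REBUILD_RESERVE)
        print(f"~{usable_tb:.1f} TB of comfortably usable capacity")  # ~4.2 TB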

  6. Robert Rizzi says

    11 October, 2015 at 17:50

    Just purchased one of these with some very minor differences. Instead of using 400GB Intel DC S3700 SSDs, we chose the 200GB model because we plan to use less than 1TB of capacity across the DC S3500 SSDs. If we need more, we will re-purpose the 200GB SSDs and replace them with something larger down the road. At the very least, that means the endurance flash SSD is at least 20% of the capacity SSD. For capacity, we chose the same drive, but only purchased 1 for each node for the time being. Our total RAW capacity (not including the endurance SSDs, of course) is only 3.2TB.

    We also backed off the total RAM shown in the original post because we won’t need that much for our use case. Our server was provisioned with only 128GB, which cost $1,112.00. We will probably double this sometime next year.

    This will be used as our new vSphere management cluster for reliability. We are a small but growing VMware Service Provider.

    More importantly (and licensing excluded because we are a service provider, aka VSPP aka vCAN), total cost including shipping was $16,108.00 for the entire system. All things considered, I think this setup makes a compelling argument for a management cluster use-case such as ours.

    I’ll post a follow-up reply in the near future to remark on test results with only one SSD per Disk Group.
