Yesterday I was informed by the EVO:RAIL team that the HP ConvergedSystem 200–HC EVO:RAIL is available (shipping) as of this week. I haven't seen much detail on the additional pieces HP is including, but I was told that they are planning to integrate HP OneView. HP OneView is a management/monitoring solution that gives you a great high-level overview of the state of your systems, while at the same time enabling you to dive deep when required. Depending on the version included, HP OneView can also do things like firmware management, which is very useful in a Virtual SAN environment if you ask me. I know that many people have been waiting for HP to start shipping, as it appears to be a preferred vendor for many customers. In terms of configuration, the HP solution is very similar to what we have already seen out there:
- 4 nodes in 2U, each containing:
  - 2 x Intel® E5-2620 v2 six-core CPUs
  - 192 GB memory
  - 1 x SAS 300 GB 10k rpm drive (ESXi boot device)
  - 3 x SAS 1.2 TB 10k rpm drives (VSAN capacity tier)
  - 1 x 400 GB MLC enterprise-grade SSD (VSAN performance tier)
  - 1 x H220 host bus adapter (HBA) pass-through controller
  - 2 x 10GbE NIC ports
  - 1 x 1GbE IPMI port for remote (out-of-band) management
As soon as I find out more about the integration of other components I will let you folks know.

That is a nice long list indeed. Let me discuss some of these features a bit more in depth. First of all, "all-flash" configurations, as that is a request I have had many, many times. In this new version of VSAN you can designate which devices should be used for caching and which will serve as the capacity tier. This means that you can use your enterprise-grade flash device as a write cache (still a requirement) and then use your regular MLC devices as the capacity tier. Note that of course the devices will need to be on the HCL, and the capacity-tier devices will need to be capable of supporting 0.2 TBW per day (terabytes written) over a period of 5 years. In other words, over those 5 years such a drive needs to be able to sustain 365 TB of writes (0.2 TB x 365 days x 5 years). So far tests have shown that you should be able to hit ~90K IOPS per host; that is some serious horsepower in a big cluster indeed.
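As a quick sanity check on that endurance arithmetic, here is a short Python sketch using only the numbers quoted above (the variable names are mine, just for illustration):

```python
# Endurance requirement for VSAN all-flash capacity-tier devices:
# the drive must sustain 0.2 TB written per day, every day, for 5 years.
DAILY_WRITES_TB = 0.2
DAYS_PER_YEAR = 365
YEARS = 5

total_writes_tb = DAILY_WRITES_TB * DAYS_PER_YEAR * YEARS
print(f"Required endurance over {YEARS} years: {total_writes_tb:.0f} TB written")
```

Running this confirms the 365 TB figure, which is the number you would compare against the TBW rating on a drive's spec sheet before putting it on the capacity tier.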
