
Yellow Bricks

by Duncan Epping



EMC VSPEX Blue aka EVO:RAIL going GA

Duncan Epping · Feb 3, 2015 ·

EMC just announced the general availability of VSPEX Blue. VSPEX Blue is basically EMC's version of EVO:RAIL, and EMC wouldn't be EMC if they didn't do something special with it. The first thing that stands out from a hardware perspective is that EMC will offer two models: a standard model with the Intel E5-2620 V2 processor and 128GB of memory, and a performance model which will hold 192GB of memory. That is the first time I have seen an EVO:RAIL offering with different specs. But that by itself isn't too exciting…

When reading the spec sheet the following bit stood out to me:

EMC VSPEX BLUE data protection incorporates EMC RecoverPoint for VMs and VMware vSphere Data Protection Advanced. EMC RecoverPoint for VMs offers operational and disaster recovery, replication and continuous data protection at the VM level. VMware vSphere Data Protection Advanced provides centralized backup and recovery and is based on EMC Avamar technology. Further, with the EMC CloudArray gateway, you can securely expand storage capacity without limits. EMC CloudArray works seamlessly with your existing infrastructure to efficiently access all the on-demand public cloud storage and backup resources you desire. EMC VSPEX BLUE is backed by a single point of support from EMC 24×7 for both hardware and software.

EMC is including various additional pieces of software: vSphere Data Protection Advanced for backup and recovery, EMC RecoverPoint for VMs for disaster recovery, the EMC CloudArray gateway, the EMC VSPEX BLUE management software, and EMC Secure Remote Services, which allows for monitoring, diagnostics and repair services. This will of course differ per support offering, and there are currently 3 support offerings (basic, enhanced, premium). Premium is where you get all the bells and whistles, with full 24x7x4 support.

What is special about the management / support software in this case is that EMC took a different approach than usual. The VSPEX BLUE interface will allow you to chat directly with support folks, dig up knowledge base articles, and even access the community from within the interface. The management layer will also monitor the system, and if something fails EMC will contact you, also known as "phone home". Besides the fact that the UI is a couple of steps ahead of anything I have seen so far, it looks like EMC will tie in directly with Log Insight, which will provide deep insights from the hardware up through the software stack. What also impressed me were the demos they provided and how they managed to create the same look and feel as the EVO:RAIL interface.

EMC also mentioned that they are working on a marketplace. This marketplace will allow you to deploy certain additional services; in the example shown you could see CloudArray, RecoverPoint and VDPA, but more should be added soon! It will be interesting to see what kind of services end up in the marketplace. I do feel that this is a great way of adding value on top of EVO:RAIL.

One of the services in the marketplace that stood out to me was CloudArray. So what about that EMC CloudArray gateway solution, what can you do with it? The CloudArray solution allows you to connect external offsite storage to the appliance as iSCSI or NFS. It can be used for anything, but what I find most compelling is that it will allow you to replicate your backup data off-site. CloudArray will come with 1TB of local cache and 10TB of cloud storage!

I have to say that EMC did a great job packing the EVO:RAIL offering with additional pieces of software, and I believe they are going to do well with VSPEX BLUE; in fact, I would not be surprised if they quickly become the number 1 qualified partner in terms of sales. If you are interested: the offering will start shipping on the 16th of February, but it can be ordered today!

What is new for Virtual SAN 6.0?

Duncan Epping · Feb 3, 2015 ·

vSphere 6.0 was just announced and with it a new version of Virtual SAN. I don't think Virtual SAN needs an introduction, as I have written many, many articles about it over the last 2 years. Personally I am very excited about this release, as it adds some really cool functionality if you ask me. So what is new for Virtual SAN 6.0?

  • Support for All-Flash configurations
  • Fault Domains configuration
  • Support for hardware encryption and checksum (See HCL)
  • New on-disk format
    • High performance snapshots / clones
    • 32 snapshots per VM
  • Scale
    • 64 host cluster support
    • 40K IOPS per host for hybrid configurations
    • 90K IOPS per host for all-flash configurations
    • 200 VMs per host
    • 8000 VMs per Cluster
    • up to 62TB VMDKs
  • Default SPBM Policy
  • Disk / Disk Group serviceability
  • Support for blade systems with direct attached storage (See HCL)
  • Virtual SAN Health Service plugin

That is a nice long list indeed. Let me discuss some of these features a bit more in-depth. First of all, "all-flash" configurations, as that is a request I have heard many, many times. In this new version of VSAN you can designate which devices should be used for caching and which will serve as the capacity tier. This means that you can use your enterprise-grade flash device as a write cache (still a requirement) and then use your regular MLC devices as the capacity tier. Note that the devices will of course need to be on the HCL, and that they will need to be capable of sustaining 0.2 TBW (terabytes written) per day over a period of 5 years, which works out to 365TB of writes over those 5 years. So far tests have shown that you should be able to hit ~90K IOPS per host, which is some serious horsepower in a big cluster indeed.
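To make the endurance math explicit, here it is as a quick illustrative Python snippet; the 0.2 TBW per day figure is the HCL requirement mentioned above, the variable names are just mine:

  # Endurance requirement for the capacity tier flash devices (figures from the text above).
  tbw_per_day = 0.2              # terabytes written per day
  days = 365 * 5                 # 5-year period
  total_writes_tb = tbw_per_day * days
  print(total_writes_tb)         # 365.0 TB of writes over 5 years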

Fault Domains is also something that has come up on a regular basis and something I have advocated many times. I was pleased to see how fast the VSAN team could get it into the product. To be clear, no, this is not a stretched cluster solution… but I would see it as a first step in that direction; that is my opinion though, and not VMware's. The Fault Domains feature will allow you to define fault domains, for instance per rack, and when you provision a new virtual machine VSAN will make sure that the components of its objects are placed in different fault domains.

When you define the fault domains per rack, even a full rack failure will not impact your virtual machine availability. Very cool indeed. The nice thing about the fault domain feature is also that it is very simple to configure: literally a couple of clicks in the UI, but you can also use RVC or host profiles to configure it if you want to. Do note that you will need 6 hosts at a minimum for Fault Domains to make sense.
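To illustrate the idea, here is my own simplified sketch (not VSAN's actual placement logic): with failures to tolerate = 1, an object has two data replicas and a witness, and each of those components lands in a different fault domain, which is exactly why you want at least three fault domains (racks):

  # Simplified illustration of fault-domain-aware placement (not VSAN's real algorithm).
  import random

  def place_components(components, fault_domains):
      """Assign each component of a single object to a distinct fault domain."""
      if len(components) > len(fault_domains):
          raise ValueError("need at least as many fault domains as components")
      return dict(zip(components, random.sample(fault_domains, len(components))))

  # FTT=1 object: two replicas plus a witness spread over three racks.
  print(place_components(["replica-1", "replica-2", "witness"],
                         ["rack-A", "rack-B", "rack-C"]))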

Then of course there is the scalability. Not just the 64-host cluster support, but also the 200 VMs per host is a great improvement. Of course there are also the improvements around snapshotting and cloning, which can be attributed to the new on-disk format and the different snapshotting mechanism that is being used; less than 2% performance impact when going up to 32 levels deep is what we have been waiting for. Fair to say that this is where the acquisition of Virsto comes into play, and I think we can expect to see more. Also, the component count has gone up: the maximum number of components per host used to be 3000 and has now been increased to 9000.
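For what it is worth, here is a quick back-of-the-envelope check of how those limits interact (the numbers come from the list above, the arithmetic and naming are mine):

  # How the 6.0 scale limits quoted above interact (illustrative only).
  hosts_per_cluster = 64
  vms_per_host = 200
  vms_per_cluster = 8000

  print(hosts_per_cluster * vms_per_host)      # 12800: what the per-host limit alone would allow
  print(vms_per_cluster // hosts_per_cluster)  # 125: VMs per host in a full 8000 VM cluster
  # In other words, in a full 64-host cluster the 8000 VM cluster maximum is the binding
  # limit, not the 200 VMs per host.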

Then there is the support for blade systems with direct attached storage… this is very welcome, as I have had many customers asking for it. Note that as always the HCL is leading, so make sure to check the HCL before you decide to purchase equipment to implement VSAN in a blade environment. The same applies to hardware encryption and checksums: they are fully supported, but make sure your components are listed on the HCL with support for this functionality! As far as I know the initial release will have 2 supported systems on there, one IBM system and I believe the Dell FX platform.

All of the operational improvements that were introduced around disk serviceability and being able to tag a device as "local / remote / SSD" are the direct result of feedback from customers and passionate VSAN evangelists internally at VMware. Proactive rebalancing, for instance, is now possible through RVC: if you add or remove a host and want to even out the nodes from a capacity point of view, a simple RVC command will allow you to do this. The "resync" details can now also be found in the UI, something I am very happy about, as it will help people during PoCs avoid the scenario where they introduce new failures while VSAN is still recovering from previous failures.
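For reference, the relevant RVC commands look roughly like the lines below; this is my own example, so verify the exact command names and options against the 6.0 documentation, and replace ~cluster with the path to (or a mark for) your own cluster object. The first line starts a proactive rebalance, the second reports its status, and the third shows resync/rebuild progress:

  vsan.proactive_rebalance --start ~cluster
  vsan.proactive_rebalance_info ~cluster
  vsan.resync_dashboard ~cluster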

Last one I want to mention is the Virtual SAN Health Service plugin. This is a separately developed Web Client plugin that provides in-depth information about Virtual SAN. I gave it a try a couple of weeks ago and now have it running in my environment; I am impressed with what is in there, and it is great to see this type of detail straight in the UI. I expect that we will see various iterations of it in the upcoming year.

EZT Disks with VSAN, why would you?

Duncan Epping · Jan 26, 2015 ·

I noticed a tweet today which made a statement around the use of eager zeroed thick disks in a VSAN setup for running applications like SQL Server. The reason this user felt this was needed was to avoid the performance hit on the first write to a block in the VMDK. It is not the first time I have heard this, and I have even seen some FUD around it, so I figured I would write something up. On a traditional storage system, or at least in some cases, this first write to a new block takes a performance penalty. The main reason is that when the VMDK is thin or lazy zeroed thick, the hypervisor needs to allocate the new block being written to and zero it out first.

First of all, this was indeed true with a lot of the older storage system architectures (non-VAAI). However, the notion that this forms a huge problem was already dispelled back in 2009, and with the arrival of all-flash arrays the problem disappeared completely. Admittedly, VSAN isn't an all-flash solution (yet), but for VSAN there is something different to take into consideration. I want to point out that by default, when you deploy a VM on VSAN, you typically do not even touch the disk format; it will get deployed as "thin", potentially with a space reservation setting that comes from the storage policy! But what if you use an old template which has a zeroed-out disk, deploy that, and compare it to a regular VSAN VM: will it make a difference? For VSAN, eager zeroed thick vs thin will (typically) make no difference to your workload at all. You may wonder why; well, it is fairly simple… just look at this diagram:

If you look at the diagram you will see that the write is acknowledged to the application as soon as it has landed on flash. So in the case of thick vs thin you can imagine that it makes no difference, as the allocation (and zeroing out) of that new block would happen minutes (or longer) after the application has received the acknowledgement. A person paying attention would now come back and say: hey, you said "typically", what does that mean? Well, that means the above is based on the assumption that your working set fits in cache. Of course there are ways to manipulate performance tests to prove that the above is not always the case, but having seen customer data I can tell you that such a scenario is not typical… in fact it is extremely unlikely.
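In pseudo-code form, and very much simplified (this is my own toy model, not actual VSAN code), the point is that the acknowledgement goes back once the write has been committed to the flash write buffer, while destaging to the capacity tier, and any allocation a thin disk needs, happens asynchronously afterwards:

  # Toy model of the write path: the guest is acknowledged at the flash write buffer,
  # destaging (and allocation, if thin) to the capacity tier happens later, off the I/O path.
  def handle_write(block, write_buffer, destage_queue):
      write_buffer.append(block)    # commit to flash write buffer
      destage_queue.append(block)   # destaged and, if needed, allocated asynchronously later
      return "ACK"                  # acknowledgement returned to the application here

  print(handle_write("block-42", write_buffer=[], destage_queue=[]))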

So if you deploy Virtual SAN and have "old" templates with EZT disks floating around, I would recommend overhauling them, as EZT doesn't add much… well, besides a longer wait during deployment.

Two logical PCIe flash devices for VSAN

Duncan Epping · Jan 5, 2015 ·

A couple of days ago I was asked whether I would recommend using two logical PCIe flash devices carved out of a single physical PCIe flash device. The reason for the question was the recommendation from VMware to have two Virtual SAN disk groups instead of (just) one disk group.

First of all, I want to make it clear that this is a recommended practice but definitely not a requirement. The reason people have started recommending it is "failure domains". As some of you may know, when a flash device that is used for read caching / write buffering and fronts a given set of disks becomes unavailable, all the disks in the disk group associated with that flash device become unavailable. As such, a disk group can be considered a failure domain, and when it comes to availability it is typically best to spread risk, so having multiple failure domains is desirable.

When it comes to PCIe devices, would it make sense to carve up a single physical device into multiple logical ones? From a failure point of view I personally think it doesn't add much value: if the physical device fails, it is likely that both logical devices fail. So from an availability point of view two logical devices don't add much; however, it could be beneficial to have multiple logical devices if you have more than 7 disks per server.

As most of you will know, each host can have at most 7 disks per disk group and 5 disk groups per server. If the server needs to have more than 7 disks, then multiple flash devices are required; in that scenario creating multiple logical devices would do the job, although from a failure-tolerance perspective I would still prefer multiple physical devices over multiple logical ones. But I guess it all depends on what type of devices you use, whether you have sufficient PCIe slots available, etc. In the end the decision is up to you, but do make sure you understand the impact of your choice.
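To put some numbers on that, here is a simple helper of my own, using only the limits mentioned above:

  # How many disk groups, and therefore flash devices (logical or physical), a given
  # number of capacity disks per host requires (limits as mentioned above).
  import math

  MAX_DISKS_PER_DISK_GROUP = 7
  MAX_DISK_GROUPS_PER_HOST = 5

  def disk_groups_needed(capacity_disks):
      groups = math.ceil(capacity_disks / MAX_DISKS_PER_DISK_GROUP)
      if groups > MAX_DISK_GROUPS_PER_HOST:
          raise ValueError("more capacity disks than a single host supports (max 35)")
      return groups   # each disk group needs its own flash device in front of it

  print(disk_groups_needed(7))    # 1 -> a single flash device is enough
  print(disk_groups_needed(10))   # 2 -> two flash devices (logical or physical) are required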

Slow backup of VM on VSAN Datastore

Duncan Epping · Nov 14, 2014 ·

Someone at our internal field conference asked me why doing a full backup of a virtual machine on a VSAN datastore is slower than doing the same exercise for that virtual machine on a traditional storage array. Note that the test that was conducted here was done with a single virtual machine. The best way to explain this is by taking a look at the architecture of VSAN. First, let me mention that the full backup of the VM on the traditional array was done on a storage system that had many disks backing the datastore on which the virtual machine was located.

Virtual SAN, as hopefully all of you know, creates a shared datastore out of host-local resources. This datastore is formed out of disk and flash. Another thing to understand is that Virtual SAN is an object store. Each object is typically stored in a resilient fashion, and as such on two hosts; hence 3 hosts is the minimum. Now, by default the components of an object are not striped, which means that each component is in most cases stored on a single physical spindle. For the disk object in the diagram below, this means it has two components and, without striping, is stored on 2 physical disks.

Now let's get back to the original question: why did the backup on VSAN take longer than on a traditional storage system? It is fairly simple to explain with the above info. In the case of the traditional storage array you are reading from many disks (10+), but with VSAN you are only reading from 2 disks. As you can imagine, read performance and throughput will differ depending on the total number of disks (resources) serving the reads. In this test, as there is just a single virtual machine being backed up, the VSAN result will be different because it has fewer disks at its disposal; on top of that, the VM is new, so no data is cached and the flash layer is not used. Depending on your workload you can of course decide to stripe the components, but when it comes to backups you can also decide to increase the number of concurrent backups: if you do, the results will get closer, as more disks are being leveraged across all VMs. I hope that helps explain why results can differ, but hopefully everyone understands that when you test things like this, parallelism is important, as is providing the right level of stripe width.
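To make the reasoning a bit more concrete, here is a simplified model; all of the throughput numbers below are made up by me purely for illustration:

  # Single-stream backup throughput roughly scales with the number of spindles serving the reads.
  PER_DISK_MBPS = 100   # assumed sequential read throughput of one spindle (illustrative)

  def read_throughput(spindles_serving_reads):
      return spindles_serving_reads * PER_DISK_MBPS

  print(read_throughput(10))   # traditional array datastore backed by 10+ disks -> 1000 MB/s
  print(read_throughput(2))    # one VSAN VM, default policy: 2 components on 2 disks -> 200 MB/s
  # Backing up, say, 5 VMs concurrently (or raising the stripe width per object) spreads the
  # reads over ~10 spindles again and closes the gap.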
