I’ve been prepping a presentation for upcoming VMUGs, but I wanted to share this with my readers as well. The session is all about vSphere futures: what is coming soon? Before anyone says I am breaking NDA: I’ve harvested all of this info from public VMworld sessions, except for the VSAN details, which were announced to the press at VMworld EMEA. Let’s start with Virtual SAN…
The Virtual SAN details were posted in this Computer Weekly article, and by the looks of it they interviewed VMware’s CEO Pat Gelsinger and Alberto Farronato from the VSAN product team. So what is coming soon?
- All Flash Virtual SAN support
Considering the price of MLC flash has dropped to roughly the same price per GB as SAS HDDs, I think this is a great new feature to have. Being able to build all-flash configurations at the price point of a regular configuration, and probably with many supported configurations, is a huge advantage for VSAN. I would expect VSAN to support various types of flash as the “capacity” layer, so this is an architect’s dream… designing your own all-flash storage system!
- Virsto integration
I played with Virsto when it was first released and was impressed by the performance and the scalability. Functionality that was part of Virsto, such as snapshots and clones, has been built into VSAN, and it will bring VSAN to the next level!
- JBOD support
Something many have requested, primarily to be able to use VSAN in blade environments… With the JBOD support announced, this will be a lot easier. I don’t know the exact details, but just the “JBOD” part got me excited.
- 64 host VSAN cluster support
VSAN doesn’t scale? Here you go.
That is a nice list by itself, and I am sure there is plenty more coming for VSAN. At VMworld, for instance, Wade Holmes also spoke about support for disk-controller-based encryption. Cool, right?! So what about vSphere? Considering even the version number that was dropped during the keynote hints at a major release, you would expect some big functionality to be introduced. Once again, all the items below were harvested from various public VMworld sessions:
- VMFork aka Project Fargo – discussed here…
- Increased scale!
- 64 host HA/DRS cluster. I know a handful of customers who have asked for 64-host clusters, so here it is… or better said: soon you will have it!
- SMP vCPU FT – up to 4 vCPU support
- I like FT from an innovation point of view, but it isn’t a feature I would personally use much, as I feel “fault tolerance” from an app perspective needs to be solved by the app. That said, I do realize that there are MANY legacy applications out there, and if you have a scale-up application that needs to be highly available, then SMP FT is very useful. Do note that with this release the architecture of FT has changed: for instance, primary and secondary used to share the same “VMDK”, but that is no longer the case.
- vMotion across anything
- vMotion across vCenter instances (see the API sketch after this list)
- vMotion across Distributed Switch
- vMotion across very large distances, with support for up to 100ms of latency
- vMotion to vCloud Air datacenter
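For those curious how the cross-vCenter piece surfaces in the API: it is essentially a regular RelocateVM_Task that carries a ServiceLocator pointing at the destination vCenter. Below is a minimal pyVmomi sketch of that call; the hostnames, credentials, and thumbprint are placeholders, and task waiting and error handling are omitted.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
src = SmartConnect(host="src-vc.example.com", user="administrator@vsphere.local",
                   pwd="***", sslContext=ctx)
dst = SmartConnect(host="dst-vc.example.com", user="administrator@vsphere.local",
                   pwd="***", sslContext=ctx)

def find(content, vimtype, name):
    # Naive inventory lookup by name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

src_content, dst_content = src.RetrieveContent(), dst.RetrieveContent()
vm = find(src_content, vim.VirtualMachine, "my-vm")
host = find(dst_content, vim.HostSystem, "esx01.example.com")

spec = vim.vm.RelocateSpec()
spec.host = host
spec.pool = host.parent.resourcePool            # destination resource pool
spec.datastore = find(dst_content, vim.Datastore, "datastore1")
spec.service = vim.ServiceLocator(              # points at the destination vCenter
    url="https://dst-vc.example.com",
    instanceUuid=dst_content.about.instanceUuid,
    credential=vim.ServiceLocator.NamePassword(
        username="administrator@vsphere.local", password="***"),
    sslThumbprint="AA:BB:...",                  # destination vCenter SSL thumbprint
)
task = vm.RelocateVM_Task(spec=spec,
                          priority=vim.VirtualMachine.MovePriority.defaultPriority)
```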
- Introduction of Virtual Datacenter concept in vCenter
- Enhanced “policy-driven” experience within vCenter. A Virtual Datacenter aggregates compute clusters, storage clusters, networks, and policies!
- Content Library
- Content Library provides storage and versioning of files, including VM templates, ISOs, and OVFs
- Includes powerful publish-and-subscribe features to replicate content (a rough sketch of the API follows below)
- Backed by vSphere datastores or NFS
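Since the Content Library is new, here is a rough sketch of what creating and publishing a library could look like through the vSphere Automation REST API as it later shipped; the endpoint paths and field names are from memory and may differ by version, and vc.example.com, the datastore ID, and the credentials are placeholders.

```python
import requests

VC = "https://vc.example.com"   # placeholder vCenter
s = requests.Session()
s.verify = False                # lab only; use proper certificates in production

# Log in and attach the session token to subsequent calls
tok = s.post(f"{VC}/rest/com/vmware/cis/session",
             auth=("administrator@vsphere.local", "***")).json()["value"]
s.headers["vmware-api-session-id"] = tok

spec = {
    "create_spec": {
        "name": "templates",
        "type": "LOCAL",
        # Back the library with a datastore (NFS is the other option)
        "storage_backings": [{"type": "DATASTORE", "datastore_id": "datastore-11"}],
        # Publish it so other vCenter instances can subscribe and replicate the content
        "publish_info": {"published": True, "authentication_method": "NONE"},
    }
}
r = s.post(f"{VC}/rest/com/vmware/content/local-library", json=spec)
print("new library id:", r.json()["value"])
```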
- Web Client performance / enhancements
- Recent tasks pane drops to the bottom instead of on the right
- Performance vastly improved
- Menus flattened
- DRS placement “network aware”
- Hosts with high network contention can still show low CPU and memory usage; network-aware DRS will take that contention into account when placing VMs
- Provides network bandwidth reservations for VMs and migrates VMs in response to reservation violations! (A configuration sketch follows below.)
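To make the reservation part concrete: the bandwidth guarantee is set per vNIC on the VM itself. A rough pyVmomi sketch, assuming “vm” is a vim.VirtualMachine looked up as in the vMotion example above, and using the property names as they appear in the vSphere 6.0 API:

```python
from pyVmomi import vim

# Pick the VM's first network adapter
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))

# Guarantee 100 Mbit/s to this vNIC; DRS can then factor the reservation
# into placement and migrate the VM if the reservation is violated
nic.resourceAllocation = vim.vm.device.VirtualEthernetCard.ResourceAllocation(
    reservation=100,   # Mbit/s reserved
    limit=-1,          # no upper limit
    share=vim.SharesInfo(level=vim.SharesLevel.normal, shares=50),
)

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```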
- vSphere HA component protection
- Helps when hitting “all paths down” situations by allowing HA to take action on the impacted virtual machines (a configuration sketch follows below)
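For reference, this is roughly how the component protection responses are toggled at the cluster level through the API. A minimal sketch, assuming “cluster” is a vim.ClusterComputeResource and using the vSphere 6.0-era object names:

```python
from pyVmomi import vim

# Define what HA should do for PDL ("permanent device loss") and APD
# ("all paths down") conditions
vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForPDL="restartAggressive",    # restart VMs on PDL
    vmStorageProtectionForAPD="restartConservative",  # restart VMs after the APD timeout
)
das = vim.cluster.DasConfigInfo(
    enabled=True,                     # vSphere HA itself
    vmComponentProtecting="enabled",  # master switch for component protection
    defaultVmSettings=vim.cluster.DasVmSettings(
        vmComponentProtectionSettings=vmcp),
)
task = cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)
```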
- Virtual Volumes, bringing the VSAN “policy goodness” to traditional storage systems
Of course there is more, but these are the features that were discussed at VMworld… For the remainder you will have to wait until the next version of vSphere is released, or, I believe, you can still sign up for the beta!
Tom Howarth says
Duncan, I may have missed this but does the VSAN beta support more than one Datastore being attached to it?
Duncan Epping says
I cannot comment on what has not been discussed publicly, but I do wonder what your use case would be. Considering VSAN provides VM-level granularity, I can’t see why you would want this?
Ser says
Good work boys
Edy says
Duncan, thank you for the exciting news. Do you know of any backup solution, either from VMware or a third party, that does not use VMware snapshot functionality when backing up/replicating VMware VMs?
I had a bad experience with Veeam whereby, after taking the snapshot … a couple of our VMs started showing the “snapshot consolidation is needed” error …
ibeerens1 says
VSAN and support for stretched clusters would be great….
Mark Burgess says
Hi Duncan,
We have been having an interesting debate about VSAs vs. VSAN over at http://blog.nigelpoulton.com/vsan-is-no-better-than-a-hw-array/
It would be interesting to get your comments on the subject.
I have also been following your blogs on EVO:RAIL since VMworld so I would be keen to understand more about this.
I have been looking at it quite closely and even did a couple of blogs on the subject (http://blog.snsltd.co.uk/an-introduction-to-vmware-virtual-san-software-defined-storage-technology/ and http://blog.snsltd.co.uk/what-are-the-pros-and-cons-of-software-defined-storage/) and I am struggling with the following:
1. Why can we not specify a CPU and memory quantity (6 cores seems a bit behind the times today)?
2. Why so little storage and why can the customer not specify the SSD and HDD configuration?
3. Why can we not start with 3 nodes and then add nodes one at a time (purchasing 4 nodes at a time does not seem ideal)?
4. Why can we not bring our own vSphere and VSAN licenses (Enterprise Plus may not be the most appropriate edition for the customer’s needs)?
5. Can the VMware licenses live on beyond the hardware (i.e. can they be transferred to another EVO:RAIL box or standard server)?
I get VSAN; it is a truly software-defined solution. But unless I have missed something, EVO:RAIL is about as far away from software-defined as you can get.
Your comments would be much appreciated.
Best regards
Mark
Duncan Epping says
I saw the post by Nigel and, to be honest, I fully agree with Chuck and see no point in a debate with someone who is probably not open to changing his opinion. If you feel that “VSAN” brings lock-in but a VSA doesn’t, then what more can I say? I mean, at some point you will always be locked in to something. As you stated in your article, when you go with Nutanix you are locked in to SuperMicro or Dell, despite the fact it is a VSA. So I don’t see the point in the argument 🙂
1) The CPU was selected based on the limited disk capacity we had to work with. All 2U/4-node servers support a limited number of disks today, and as such capacity is limited; 1.2TB 10K RPM drives are the maximum most will support. Hence a bigger CPU would more than likely have been overkill, as the limited storage capacity would not allow you to use it.
2) The SSD and HDD config is set so that we know what kind of performance a customer gets. Yes, in theory you could add other components, but we have strict guidelines (not a specific model), and each vendor is free to use other parts as long as they stick to the requirements.
3) That decision was made from a licensing point of view, I am guessing.
4) Different licensing models are being discussed for future releases, not sure if/when/how.
5) No
In the end you as a customer have a choice: go with a set configuration in EVO:RAIL, take a VSAN Ready Node, or build your own VSAN box… I think that is great, as EVO:RAIL will not fit all. It is about simplicity, and there are some constraints because of that.
nigelpoulton says
Hi Duncan. Nigel here. Disclaimer: I work for no vendor and am taking no money from any storage or hypervisor vendor. I guess we could say I’m independent.
I like your blog, and am a fan of your opinions in all technical discussions, and am genuinely a bit gutted that you think I wouldn’t change my mind on something. For the record, I change my mind an embarrassing number of times – seriously :-S And I love discussion.
Anyway… that said… yes, I do think VSAN leads to *hypervisor* lock-in. It’s adding value to the hypervisor, which IMHO (an opinion which may have changed by tomorrow) is similar to adding value to HW. I think it’s about time the hypervisor was commoditized more than it currently is. Maybe hypervisors should have something like Intel VT, where we have a few hooks and offloads into the hypervisor for storage et al. And it would be great if those hooks/APIs were open and common across hypervisors. Just thinking out loud…
Also, it’s hard for an outsider not to think that a non-technical goal of VSAN is to lock customers in to vSphere. Maybe not a huge goal, but a goal nonetheless. Now I know the term “lock-in” sends some folks crazy, and I also know the kind of lock-in I’m talking about here isn’t the kind of lock-in we had in the mainframe days, but I do *honestly* still see it as driving lock-in. I’m sure at VMware you call it “adding value”, but from the non-VMware side it can easily be seen as adding lock-in.
Yes, I know VMs can be migrated to other hypervisors etc. And I know there’s a fine line between value-add and lock-in; I don’t doubt the techies at VMware never even give lock-in a thought, they’re just crazy excited about developing features for a great hypervisor platform.
Anyway, I have my opinions, and I’d personally rather have a ScaleIO based solution. Might have changed my mind by the end of the week though 😉
PS. It made me smile once when the CTO of a hypervisor vendor accused me of probably not being open to change my opinion 😀 I’m yet to see a CTO who has a different opinion than their employer 😉
duncan says
“with someone who is probably not open to changing his opinion”… that was based on the Twitter discussion we had; considering it didn’t go anywhere, you can understand where I am coming from.
To me “lock-in”… well, I think it is all irrelevant: you will be locked in to something at some point, whether it is a hypervisor, hardware, networking, backup, storage… Yeah, what about backup: ever tried changing a backup solution? Those archives are typically gone 🙂
Anil Sedha says
Thanks for sharing this information. I like what I just read.
banc says
Hi Duncan, I was wondering what an all-flash configuration would look like. Since a flash drive is used for caching in today’s architecture, will the caching mechanism be different considering the capacity “tier” will be flash as well (if you can comment on that)?
duncan says
I cannot comment on implementation details just yet…
dedwards says
What we’ve found @micronstorage with the current VSAN 5.5 U2 (YouTube – http://ow.ly/COxJt) is that even without optimizations specific to SSDs, the IO response times are vastly improved for overall VSAN performance. We’ve also found that the nature of the optimizations for spinning media as primary storage (large block writes) vastly improves the endurance of the SSDs being used as main storage, due to reduced write amplification and more efficient garbage collection.
banc says
Hi Duncan, sorry to post this question here (the FAQ post doesn’t accept comments):
I have a hard time finding more details on the “distributed read cache” mechanism of VSAN. In the FAQ you mentioned that VSAN doesn’t do deduplication but that the SSD-based read cache never caches the same block twice. In case two (or more) VMs are provisioned on different hosts, generating their own components, does that mean that if they have written the same blocks (same checksum), those blocks (distributed over several components and hosts) will only be held in SSD cache on a single host? Is there any documentation or blog post explaining the “distributed read cache”?
banc says
Hi Duncan, thanks for the answer above! Any chance of getting a reply to this question (how does the “distributed read cache” work)? It’s very much appreciated!
duncan says
No, it means that per VM each block is only cached once. If two VMs happen to store the exact same block, then the current version of VSAN will keep two copies of that block in cache when they are read.
banc says
Thanks for your reply!
So if a VM has written two identical blocks (same checksum) and both blocks are read again independently, will this be recognized so that the block is cached only once? Or, said differently, is VSAN calculating checksums to be able to tell if two blocks (of the same VM) are identical?
David says
Hi Duncan, since you post in such detail about the features and benefits of VMware VSAN… how do you see it in comparison with HP VSA (which is established on the market)?
There is a comparison of these two products by the Taneja Group ( http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA5-5091ENW )
What is your opinion about it?
Thanks,
David
Duncan Epping says
Not sure I am the right person to make competitive statements, to be honest. HP VSA has indeed been around for a while and seems to be a stable solution. Depending on what you are looking for, one or the other may be a better fit. VSAN continues to evolve, and based on my experience with the product and conversations with customers it has been stable and solid.
I don’t have an opinion on the sponsored paper by Taneja; as a blogger, though, what strikes me is the fact that an established player feels the need to get this published so soon after the launch. That alone makes me believe the concerns should be taken with a grain of salt 🙂