Startup News Flash part 4

This is already the fourth part of the Startup News Flash. We are in the middle of VMworld and of course there were many, many announcements. I tried to filter out those which are interesting; as mentioned in one of the other posts, if you feel one is missing, leave a comment.

Nutanix announced version 3.5 of their OS last week. The 3.5 release contains a bunch of new features, one of them being what they call the “Nutanix Elastic Deduplication Engine”. I think it is great they added this feature, as ultimately it will allow you to utilize your flash and RAM tier more efficiently. The more you can cache the better, right?! I am sure this will result in a performance improvement in many environments; you can imagine that especially for VDI, or environments where most VMs are based on the same template, this will be the case. What might be worth knowing is that Nutanix dedupe is inline for the RAM and flash tiers, while for the magnetic disk tier it happens in the background. Nutanix also announced that besides supporting vSphere and KVM they now also support Hyper-V, which is great for customers as it offers you choice. On top of all that, they managed to develop a new simplified UI and a REST-based API, allowing customers to build a software defined datacenter! Also worth noting is that they have been working on their DR story. They have developed a Storage Replication Adapter, which is one of the components needed to implement Site Recovery Manager with array based replication. They also optimized their replication technology by extending their compression technology to that layer. (Disclaimer: the SRA is not listed on the VMware website, as such it is not supported by VMware. Please validate the SRM section of the VMware website before implementing.)

Of course an update from a flash caching vendor; this time it is Proximal Data, who announced the 2.0 version of their software. AutoCache 2.0 includes role-based administration features and multi-hypervisor support to meet the specific needs of cloud service providers. Good to see that multi-hypervisor and cloud will soon be part of the Proximal story. I like Proximal’s aggressive price point: it starts at $999 per host for flash caches smaller than 500GB, which is unique for a solution that does both block and file caching. Not sure I agree with Proximal’s stance with regards to write-back caching and “down-playing” 1.0 solutions, especially not when you don’t offer that functionality yourself or were a 1.0 version yesterday.

I just noticed this article published by Silicon Angle which mentions the announcement of the SMB Edition of FVP. Priced at a flat $9,999, it supports up to 100 VMs across a maximum of four hosts, with two processors and one flash drive each. More details can be found in this press release by PernixData.

Also something which might interest people is Violin Memory filing for an IPO. It had been rumored numerous times, but this time it seems to be happening for real. The Register has an interesting view on it, by the way. I hope it will be a huge success for everyone involved!

Also I want to point people to some of the cool announcements VMware did in the storage space. Although VMware is far from being a startup, I do feel these are worth listing here again: the introduction to vSphere Flash Read Cache and the introduction to Virtual SAN.

vSphere 5.5 nuggets: vCenter Server Appliance limitations lifted!

For those who haven’t seen it… the vCenter Server Appliance limitations around the number of virtual machines and hosts have been lifted. Where the vCenter Server Appliance with the embedded internal database used to be limited to a maximum of 5 hosts and 50 virtual machines, with vSphere 5.5 this has been increased to 100 hosts and 3000 virtual machines when you use the embedded database; with an external Oracle database the limits are similar to those of the Windows version of vCenter Server! If you ask me, this means that the vCenter Server Appliance with the embedded database can be used in almost every scenario! That makes life easier indeed.

Couple of other awesome enhancements when it comes to vCenter Server:

  • Drag and drop functionality added! So you can simply drag and drop a VM onto a host again, or a host into a cluster
  • OS X support, I know many of you have been waiting for this one.
  • Support for Database Clustering solutions, finally!

By themselves these appear to be minor things, but if you ask me… this is a huge step forward for the vCenter Server Appliance! Some more details can be found in the What’s New in vSphere 5.5 Platform whitepaper.


2013 VMware Fling Contest – Join in on the fun!

Last year VMware organized the very first VMware Open Innovation Contest, which was very successful and resulted in an awesome fling called “pro-active DRS“. The Open Innovation Contest is back again in 2013, now called the 2013 VMware Fling Contest.

Now let’s get those creative juices flowing again: think about the challenges / problems you are facing every day and how these could potentially be solved, then head over to the 2013 VMware Fling Contest website and submit: https://flingcontest.vmware.com/. Do note there is no need to rush to get your idea in, take your time and think about it, but make sure to submit it before Nov 15th.

Of course there is an awesome prize again: the winner gets a free pass to VMworld 2014, and on top of that the VMware engineering team will execute on your idea and a fling will be released. How cool is that? If you need more info, stop by the VMworld Innovation Booth at the Solutions Exchange.

Introduction to vSphere Flash Read Cache aka vFlash

vSphere 5.5 was just announced and of course there are a bunch of new features in there. One of the features which I think people will appreciate is vSphere Flash Read Cache (vFRC), formerly known as vFlash. vFlash was tech previewed last year at VMworld and I recall it being a very popular session. In the last 6-12 months host local caching solutions have definitely become more popular, as SSD prices keep dropping and thus investing in local SSD drives to offload IO gets more and more interesting. Before anyone asks, I am not going to do a comparison with any of the other host local caching solutions out there. I don’t think I am the right person for that as I am obviously biased.

As stated, vSphere Flash Read Cache is a brand new feature which is part of vSphere 5.5. It allows you to leverage host local SSDs and turn them into a caching layer for your virtual machines. The biggest benefit of using host local SSDs of course is the offload of IO from the SAN to the local SSD. Every read IO that doesn’t need to go to your storage system means resources can be used for other things, like for instance write IO. That is probably the one caveat I will need to call out: it is “write through” caching only at this point, so essentially a read cache system. Now, by offloading reads it could potentially help improve write performance… This is not a given, but could be a nice side effect.

Just a couple of things before we get into configuring it. vFlash aggregates local flash devices into a pool; this pool is referred to as a “virtual flash resource” in our documentation. So in other words, if you have 4 x 200GB SSDs you end up with an 800GB virtual flash resource. This virtual flash resource has a filesystem sitting on top of it called “VFFS” aka “Virtual Flash File System”. As far as I know it is a heavily flash optimized version of VMFS, but don’t pin me down on this one as I haven’t broken it down yet.

So now that we know what it is and what it does, how do we enable it, and what are the requirements and limitations? Well, let’s start with the requirements and limitations first.

Requirements and limitations:

  • vSphere 5.5 (both ESXi and vCenter)
  • SSD Drive / Flash PCIe card
  • Maximum of 8 SSDs per VFFS
  • Maximum of 4TB physical Flash-based device size
  • Maximum of 32TB virtual Flash resource total size (8x4TB)
  • Cumulative 2TB VMDK read cache limit
  • Maximum of 400GB of virtual Flash Read Cache per Virtual Machine Disk (VMDK) file

So now that we know the requirements, how do you enable / configure it? Well, as with most vSphere features these days the setup is fairly straightforward and simple. Here we go:

  • Open the vSphere Web Client
  • Go to your Host object
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Flash Read Cache Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Now you have a cache created, repeat for other hosts in your cluster. Below is what your screen will look like after you have added the SSD.

Now you will see another option below “Flash Read Cache Resource Management” called “Cache Configuration”. This is for the “Swap to host cache” / “Swap to SSD” functionality that was introduced with vSphere 5.0.
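
For those who prefer the command line: vSphere 5.5 also exposes an esxcli namespace for virtual flash, so something along these lines should let you inspect the flash devices backing the virtual flash resource. Consider this a sketch; I have not validated the exact output on every build.

  # list local flash devices that are eligible for, or already used by, the virtual flash resource
  esxcli storage vflash device list
  # list the virtual flash modules that are loaded
  esxcli storage vflash module list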

Now that you have enabled vFlash on your host, what is next? Well, you enable it on your virtual machine. Yes, I agree it would have been nice to be able to enable it for a full cluster or for a datastore as well, but unfortunately this is not part of the 5.5 release. It is something that will be added at some point in the future though. Anyway, here is how you enable it on a virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the amount of GB you want to use as a cache
    • Note that there is an advanced option where you can also select the block size
    • The block size could be important when you want to optimize for a particular application

Not too complex, right? You enable it on your host and then on a per virtual machine level, and that is it… It is included with Enterprise Plus from a licensing perspective, so those who are at the right licensing level get it “for free”.
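
If you want to double check from the command line that a cache was actually created for the VMDK, the same esxcli namespace should help; again a sketch, assuming the vflash cache sub-namespace behaves as I remember:

  # list the Flash Read Cache files that exist for VMDKs on this host
  esxcli storage vflash cache list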

PS: Rawlinson created this awesome demo, check it out:

Introduction to VMware vSphere Virtual SAN

Many of you have seen the announcements by now and I am guessing that you are as excited as I am about the public beta of Virtual SAN with vSphere 5.5. What is Virtual SAN, formerly known as “VSAN” or “vCloud Distributed Storage”, all about?

Virtual SAN (VSAN from now on in this article) is a software based distributed storage solution which is built directly into the hypervisor. No, this is not a virtual appliance like many of the other solutions out there; this sits right inside your ESXi layer. VSAN is about simplicity, and when I say simple I do mean simple. Want to play around with VSAN? Create a VMkernel NIC for VSAN and enable it on a cluster level. Yes, that is it!

vSphere Virtual SAN
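
To give you an idea of how thin that configuration layer is: once VSAN is enabled on the cluster you should be able to verify it straight from the host. A quick sketch, based on the esxcli vsan namespace as it appears in the beta:

  # show whether this host has joined a VSAN cluster, and its role / cluster UUID
  esxcli vsan cluster get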

Before we get a bit more into the weeds, what are the benefits of a solution like VSAN? What are the key selling points?

  • Software defined – Use industry standard hardware, as long as it is on the HCL you are good to go!
  • Flexible – Scale as needed and when needed. Just add more disks or add more hosts, yes both scale-up and scale-out are possible.
  • Simple – Ridiculously easy to manage! Ever tried implementing or managing some of the storage solutions out there? If you did, you know what I am getting at!
  • Automated – Per virtual machine policy based management. Yes, virtual machine level granularity. No more policies defined on a per LUN/Datastore level, but at the level where you want it to be!
  • Converged – It allows you to create dense / building block style solutions!

Okay, that sounds great, right? But where does it fit in? What are the use cases for VSAN when it is released?

  • Virtual desktops
    • Scale out model, using predictive (performance etc) repeatable infrastructure blocks lowers costs and simplifies operations
  • Test & Dev
    • Avoids acquisition of expensive storage (lowers TCO), fast time to provision
  • Big Data
    • Scale out model with high bandwidth capabilities
  • Disaster recovery target
    • Cheap DR solution, enabled through a feature like vSphere Replication that allows you to replicate to any storage platform

So let’s get a bit more technical. Just a bit, as this is an introduction, right…

When VSAN is enabled a single shared datastore is presented to all hosts which are part of the VSAN enabled cluster. Typically all hosts will contribute performance (SSD) and capacity (magnetic disks) to this shared datastore. This means that when your cluster grows, your datastore will grow with it. (Not a requirement, there can be hosts in the cluster which just consume the datastore!) Note that there are some requirements for hosts which want to contribute storage. Each host will require at least one SSD and one magnetic disk. Also good to know is that with this beta release the limit on a VSAN enabled cluster is 8 hosts. (Total cluster size 8 hosts, including hosts not contributing storage to your VSAN datastore.)
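
If you are curious which disks a host is actually contributing, the beta exposes that through esxcli as well; a sketch, and the output format may of course still change before release:

  # list the local SSDs and magnetic disks this host has claimed for VSAN
  esxcli vsan storage list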

As expected, VSAN heavily relies on SSD for performance. Every write I/O will go to SSD first, and eventually it will go to the magnetic disks (SATA). As mentioned, you can set policies on a per virtual machine level. This will also dictate, for instance, what percentage of your read I/O you can expect to come from SSD. On top of that you can use these policies to define the availability of your virtual machines. Yes, you read that right: you can have different availability policies for virtual machines sitting on the same datastore. For resiliency, “objects” will be replicated across multiple hosts; across how many hosts/disks thus depends on the profile.

VSAN does not require a local RAID set, just a bunch of local disks. Now, whether you defined 1 host failure to tolerate, or for instance 3 host failures to tolerate, VSAN will ensure enough replicas of your objects are created. Is this awesome or what? So let’s take a simple example to illustrate that. We have configured 1 host failure to tolerate and create a new virtual disk. This means that VSAN will create 2 identical objects and a witness. The witness is there just in case something happens to your cluster, to help decide who will take control in case of a failure; the witness is not a copy of your object, let that be clear! Note that the number of hosts in your cluster could potentially limit the number of “host failures to tolerate”. In other words, in a 3 node cluster you cannot create an object that is configured with 2 “host failures to tolerate”. Difficult to visualize? Well, this is what it would look like at a high level for a virtual disk which tolerates 1 host failure:
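
As a rule of thumb: for n “host failures to tolerate” VSAN creates n+1 copies of the object, and because of the witnesses you need at least 2n+1 hosts in the cluster. So:

  • 1 host failure to tolerate = 2 copies + a witness, so a minimum of 3 hosts
  • 2 host failures to tolerate = 3 copies + witnesses, so a minimum of 5 hosts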

With all this replication going on, are there requirements for networking? At a minimum VSAN will require a dedicated 1Gbps NIC port. Needless to say, 10Gbps would be preferred with solutions like these, and you should always have an additional NIC port available for resiliency purposes. There is no requirement from a virtual switch perspective; you can use either the Distributed Switch or the plain old vSwitch, both will work fine.
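
For completeness: tagging a VMkernel interface for VSAN traffic can also be done from the command line in the beta. A sketch, assuming vmk1 is the VMkernel interface you created for VSAN:

  # tag an existing VMkernel interface for VSAN traffic and verify the result
  esxcli vsan network ipv4 add --interface-name=vmk1
  esxcli vsan network list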

To conclude, vSphere Virtual SAN aka VSAN is a brand new hypervisor based distributed storage platform that enables convergence of compute and storage resources. It provides virtual machine level granularity through policy based management. It allows you to control availability and performance in a way I have never seen before: simple and efficient. I am hoping that everyone will be pounding away on the public beta; sign up today: http://www.vmware.com/vsan-beta-register!