vSphere 5.5: platform scalability

One of the things that always keeps people busy is the "max config" numbers for a release. I figured I would dedicate a blog post to vSphere 5.5 platform scalability now that it has been released. A couple of things stand out, if you ask me, when it comes to platform scalability:

  • 320 Logical CPUs per host
  • 4TB of memory per host
  • 16 NUMA nodes per host
  • 4096 vCPUs maximum per host
  • 40 GbE NIC support
  • 62TB VMDK and Virtual RDM support
  • 16Gb Fibre Channel end-to-end support
  • VMFS heap increase: up to 64TB of open VMDKs per host

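For those who like to sanity-check a design against these numbers, here is a minimal sketch in plain Python with the per-host maximums hard-coded from the list above; the design dictionary and its keys are made up for illustration, so adjust them (and verify the numbers against the official configuration maximums document) before relying on it.

```python
# Hypothetical sanity check against the vSphere 5.5 per-host maximums listed above.
VSPHERE_55_HOST_MAX = {
    "logical_cpus": 320,
    "memory_tb": 4,
    "numa_nodes": 16,
    "vcpus": 4096,
    "open_vmdk_tb": 64,   # VMFS heap increase: open VMDK capacity per host
}

def check_host_design(design: dict) -> list:
    """Return warnings for any value that exceeds a vSphere 5.5 per-host maximum."""
    warnings = []
    for key, maximum in VSPHERE_55_HOST_MAX.items():
        value = design.get(key, 0)
        if value > maximum:
            warnings.append(f"{key}: {value} exceeds the vSphere 5.5 maximum of {maximum}")
    return warnings

# Example: a (very) large host design that goes over the logical CPU limit.
print(check_host_design({"logical_cpus": 480, "memory_tb": 4, "numa_nodes": 16, "vcpus": 4096}))
```
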
Some nice improvements when it comes to platform scalability, right? I think so! Not that I expect to have the need for a host with 4TB of memory and 320 pCPUs in the near future, but you never know, right? Some more details can be found in the vSphere 5.5 What's New whitepaper for the platform.

Startup News Flash part 4

This is already the fourth part of the Startup News Flash. We are in the middle of VMworld, and of course there were many, many announcements. I tried to filter out those which are interesting; as mentioned in one of the other posts, if you feel one is missing, leave a comment.

Nutanix announced version 3.5 of their OS last week. The 3.5 release contains a bunch of new features, one of them being what they call the "Nutanix Elastic Deduplication Engine". I think it is great they added this feature, as ultimately it will allow you to utilize your flash and RAM tier more efficiently. The more you can cache the better, right?! I am sure this will result in a performance improvement in many environments; you can imagine that especially for VDI, or environments where most VMs are based on the same template, this will be the case. What might be worth knowing is that Nutanix dedupe is inline for the RAM and flash tiers, while for the magnetic disk tier it happens in the background. Nutanix also announced that besides supporting vSphere and KVM they now also support Hyper-V, which is great for customers as it offers you choice. On top of all that, they managed to develop a new simplified UI and a REST-based API, allowing customers to build a software-defined datacenter! Also worth noting is that they have been working on their DR story. They have developed a Storage Replication Adapter, which is one of the components needed to implement Site Recovery Manager with array-based replication. They also optimized their replication technology by extending their compression technology to that layer. (Disclaimer: the SRA is not listed on the VMware website, and as such it is not supported by VMware. Please validate the SRM section of the VMware website before implementing.)

Of course an update from a flash caching vendor as well; this time it is Proximal Data, who announced the 2.0 version of their software. AutoCache 2.0 includes role-based administration features and multi-hypervisor support to meet the specific needs of cloud service providers. Good to see that multi-hypervisor and cloud support will soon be part of the Proximal story. I like Proximal's aggressive price point: it starts at $999 per host for flash caches smaller than 500GB, which is unique for a solution that does both block and file caching. I am not sure I agree with Proximal's stance with regards to write-back caching and the "down-playing" of 1.0 solutions, especially not when you don't offer that functionality yourself or were a 1.0 version yesterday.

I just noticed this article published by Silicon Angle which mentions the announcement of the SMB Edition of FVP: priced at a flat $9,999, it supports up to 100 VMs across a maximum of four hosts with two processors and one flash drive each. More details can be found in this press release by PernixData.

Also something which might interest people: Violin Memory filing for an IPO. It had been rumored numerous times, but this time it seems to be happening for real. The Register has an interesting view on it, by the way. I hope it will be a huge success for everyone involved!

Also, I want to point people again to some of the cool announcements VMware made in the storage space; although VMware is far from being a startup, I do feel these are worth listing here again: the introduction to vSphere Flash Read Cache and the introduction to Virtual SAN.

vSphere 5.5 nuggets: vCenter Server Appliance limitations lifted!

For those who haven’t seen it… the vCenter Server Appliance limitations that there were around the number of virtual machines and hosts are lifted. Where the vCenter Server Appliance with the embedded ternal database used to be limited to a maximum of 5 hosts and 50 virtual machines this has been increased with vSphere 5.5 to 100 hosts and 3000 virtual machines when you use the embedded database, with an external Oracle database the limits are similar to that of the Windows version of vCenter Server! If you ask me, this means that the vCenter Server Appliance with the embedded database can be used in almost every scenario! That makes life easier indeed.

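To make those numbers concrete, here is a tiny hypothetical helper in plain Python, with the limits taken straight from the paragraph above, that tells you whether the appliance's embedded database would suffice for a given environment size.

```python
# vCenter Server Appliance 5.5 embedded database limits (from the text above).
EMBEDDED_DB_MAX_HOSTS = 100
EMBEDDED_DB_MAX_VMS = 3000

def vcsa_embedded_db_ok(hosts: int, vms: int) -> bool:
    """True if the environment fits within the vCSA 5.5 embedded database limits."""
    return hosts <= EMBEDDED_DB_MAX_HOSTS and vms <= EMBEDDED_DB_MAX_VMS

# Example: 64 hosts / 1500 VMs fits; 120 hosts would need an external Oracle database.
print(vcsa_embedded_db_ok(64, 1500))   # True
print(vcsa_embedded_db_ok(120, 1500))  # False
```
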
Couple of other awesome enhancements when it comes to vCenter Server:

  • Drag and drop functionality added! So you can simply drag and drop a VM onto a host again, or a host into a cluster.
  • OS X support; I know many of you have been waiting for this one.
  • Support for Database Clustering solutions, finally!

By themselves these appear to be minor things, but if you ask me… this is a huge step forward for the vCenter Server Appliance! Some more details can be found in the vSphere 5.5 What's New whitepaper for the platform.


2013 VMware Fling Contest – Join in on the fun!

Last year VMware organized the very first VMware Open Innovation Contest, and it was a very successful contest which resulted in an awesome fling called "pro-active DRS". The Open Innovation Contest is back again in 2013, but it is now called the 2013 VMware Fling Contest.

Now let's get those creative juices flowing again: think about the challenges and problems you are facing every day and how these could potentially be solved, then head over to the 2013 VMware Fling Contest website and submit your idea: https://flingcontest.vmware.com/. Do note there is no need to rush to get your idea in; take your time and think about it, but make sure to submit it before Nov 15th.

Of course there is an awesome prize again: the winner gets a free pass to VMworld 2014, and on top of that the VMware engineering team will execute on your idea and release it as a fling. How cool is that? If you need more info, stop by the VMworld Innovation Booth at the Solutions Exchange.

Introduction to vSphere Flash Read Cache aka vFlash

vSphere 5.5 was just announced and of course there are a bunch of new features in there. One of the features which I think people will appreciate is vSphere Flash Read Cache (vFRC), formerly known as vFlash. vFlash was tech previewed last year at VMworld, and I recall it being a very popular session. In the last 6-12 months host-local caching solutions have definitely become more popular: SSD prices keep dropping, and thus investing in local SSD drives to offload IO becomes more and more attractive. Before anyone asks, I am not going to do a comparison with any of the other host-local caching solutions out there. I don't think I am the right person for that as I am obviously biased.

As stated, vSphere Flash Read Cache is a brand new feature which is part of vSphere 5.5. It allows you to leverage host-local SSDs and turn them into a caching layer for your virtual machines. The biggest benefit of using host-local SSDs of course is the offloading of IO from the SAN to the local SSD. Every read IO that doesn't need to go to your storage system means resources can be used for other things, like for instance write IO. That is probably the one caveat I will need to call out: it is "write through" caching only at this point, so essentially a read caching system. Now, by offloading reads it could potentially help improve write performance… This is not a given, but it could be a nice side effect.

Just a couple of things before we get into configuring it. vFlash aggregates local flash devices into a pool; this pool is referred to as a "virtual flash resource" in the documentation. In other words, if you have 4 x 200GB SSDs you end up with an 800GB virtual flash resource. This virtual flash resource has a filesystem sitting on top of it called "VFFS", aka the "Virtual Flash File System". As far as I know it is a heavily flash-optimized version of VMFS, but don't pin me on this one as I haven't broken it down yet.

So now that we know what it is and what it does, how do you install it, and what are the requirements and limitations? Well, let's start with the requirements and limitations first.

Requirements and limitations:

  • vSphere 5.5 (both ESXi and vCenter)
  • SSD Drive / Flash PCIe card
  • Maximum of 8 SSDs per VFFS
  • Maximum of 4TB physical Flash-based device size
  • Maximum of 32TB virtual Flash resource total size (8x4TB)
  • Cumulative 2TB VMDK read cache limit
  • Maximum of 400GB of virtual Flash Read Cache per Virtual Machine Disk (VMDK) file

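To tie these numbers together, below is a minimal sketch in plain Python that validates a planned virtual flash resource and the per-VMDK cache reservations against the limits listed above; the limits are hard-coded from that list and the example sizes are made up for illustration.

```python
# Hypothetical validation of a planned vFRC layout against the vSphere 5.5 limits listed above.
MAX_SSDS_PER_VFFS = 8
MAX_DEVICE_TB = 4
MAX_VFFS_TB = 32
MAX_CACHE_PER_VMDK_GB = 400
MAX_CUMULATIVE_CACHE_TB = 2

def validate_vfrc_plan(ssd_sizes_tb, vmdk_cache_gb):
    """ssd_sizes_tb: sizes of the local flash devices; vmdk_cache_gb: planned cache per VMDK."""
    issues = []
    if len(ssd_sizes_tb) > MAX_SSDS_PER_VFFS:
        issues.append(f"{len(ssd_sizes_tb)} devices exceeds the {MAX_SSDS_PER_VFFS} SSDs per VFFS limit")
    if any(size > MAX_DEVICE_TB for size in ssd_sizes_tb):
        issues.append(f"a device exceeds the {MAX_DEVICE_TB}TB per-device limit")
    if sum(ssd_sizes_tb) > MAX_VFFS_TB:
        issues.append(f"virtual flash resource of {sum(ssd_sizes_tb)}TB exceeds {MAX_VFFS_TB}TB")
    if any(gb > MAX_CACHE_PER_VMDK_GB for gb in vmdk_cache_gb):
        issues.append(f"a VMDK cache exceeds the {MAX_CACHE_PER_VMDK_GB}GB per-VMDK limit")
    if sum(vmdk_cache_gb) > MAX_CUMULATIVE_CACHE_TB * 1024:
        issues.append(f"cumulative cache of {sum(vmdk_cache_gb)}GB exceeds {MAX_CUMULATIVE_CACHE_TB}TB")
    return issues

# Example: 4 x 200GB SSDs (0.2TB each) gives an 800GB virtual flash resource, well within the limits.
print(validate_vfrc_plan([0.2, 0.2, 0.2, 0.2], [50, 100, 200]))  # []
```
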
So now that we know the requirements, how do you enable and configure it? Well, as with most vSphere features these days, the setup is fairly straightforward and simple. Here we go:

  • Open the vSphere Web Client
  • Go to your Host object
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Flash Read Cache Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Now you have a cache created; repeat this for the other hosts in your cluster. Below is what your screen will look like after you have added the SSD, followed by a quick scripted way to list the SSD devices your hosts report.

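If you prefer to check from a script which devices would show up as candidates when you click "Add Capacity", something along these lines could work with pyVmomi. This is just a sketch, not an official procedure: the vCenter hostname and credentials are placeholders, and it assumes the host's disks expose the ssd flag in the storage device list.

```python
# Sketch: list local flash devices per host using pyVmomi (hostname/credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            # Only disks flagged as SSD are candidates for a virtual flash resource.
            if isinstance(lun, vim.host.ScsiDisk) and getattr(lun, "ssd", False):
                size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
                print(f"{host.name}: {lun.canonicalName} ({size_gb:.0f}GB SSD)")
finally:
    Disconnect(si)
```
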
Now you will also see another option below "Flash Read Cache Resource Management" called "Cache Configuration"; this is for the "Swap to host cache" / "Swap to SSD" functionality that was introduced with vSphere 5.0.

Now that you have enabled vFlash on your host, what is next? Well, you enable it on your virtual machine. Yes, I agree it would have been nice to be able to enable it for a full cluster or for a datastore as well, but unfortunately that is not part of the 5.5 release. It is something that will be added at some point in the future though. Anyway, here is how you enable it on a virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the amount of GB you want to use as a cache
    • Note there is an advanced option; in this section you can also select the block size
    • The block size could be important when you want to optimize for a particular application

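If you want to verify the per-VMDK settings afterwards from a script, a sketch like the following could do it with pyVmomi. Note the assumptions: the VM name and connection details are placeholders, and it relies on the 5.5 API exposing a vFlashCacheConfigInfo property (with reservationInMB and blockSizeInKB fields) on each virtual disk.

```python
# Sketch: print the Flash Read Cache reservation per virtual disk of one VM (placeholders used).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next((v for v in view.view if v.name == "my-test-vm"), None)  # hypothetical VM name
    if vm is not None:
        for device in vm.config.hardware.device:
            if isinstance(device, vim.vm.device.VirtualDisk):
                cache = device.vFlashCacheConfigInfo  # assumed property, set when vFRC is configured
                if cache and cache.reservationInMB:
                    print(f"{device.deviceInfo.label}: {cache.reservationInMB}MB cache, "
                          f"block size {cache.blockSizeInKB}KB")
finally:
    Disconnect(si)
```
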
Not too complex, right? You enable it on your host and then on a per-virtual-machine level, and that is it… It is included with Enterprise Plus from a licensing perspective, so those who are at the right licensing level get it "for free".

PS: Rawlinson created this awesome demo, check it out: