(Inter-VM) TPS Disabled by default, what should you do?

We’ve all probably seen the announcement around inter-VM(!!) TPS (transparent page sharing) being disabled by default in future releases of vSphere, and the recommendation to disable it in current versions. The reason for this is a published research paper which demonstrates how it is possible to access data under certain highly controlled conditions. As the KB article describes:

Published academic papers have demonstrated that by forcing a flush and reload of cache memory, it is possible to measure memory timings to determine an AES encryption key in use on another virtual machine running on the same physical processor of the host server if Transparent Page Sharing is enabled. This technique works only in a highly controlled environment using a non-standard configuration.

Many people have blogged about the potential impact on your environment or designs. Typically, in the past people would take 20 to 30% memory sharing into account when sizing their environment. With inter-VM TPS disabled this of course goes out of the window. Frank described this nicely in this post. However, as Frank also described and as I mentioned in previous articles, when large pages are being used (usually the case) TPS is not used by default and only kicks in when the host is under memory pressure…
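
To put a purely illustrative number on that: say you run 100 VMs with 8 GB of configured memory each, or 800 GB in total. Assuming 25% inter-VM sharing you would have sized the cluster for roughly 600 GB of physical memory; with inter-VM TPS disabled you would size for the full 800 GB plus overhead. The percentages here are assumptions for the sake of the example, not sizing guidance.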

The “under pressure” part is important if you ask me, as TPS is the first memory reclamation technique used when a host is under pressure. If TPS cannot sufficiently reduce the memory pressure then ballooning is leveraged, followed by compression and ultimately swapping. Personally I would like to avoid swapping at all costs, and preferably compression as well. Ballooning typically doesn’t result in a huge performance degradation so it could be acceptable, but TPS is something I prefer as it simply breaks up large pages into small pages and collapses those where possible. Performance loss is hardly measurable in that case. Of course TPS is far more effective when pages can be shared between VMs rather than just within a VM.

Anyway, the question remains: should you have (inter-VM) TPS disabled or not? When you assess the risk you first need to ask yourself who has access to your virtual machines, as the technique requires you to log in to a virtual machine. Before we look at the scenarios, note that I mentioned “inter-VM” a couple of times now; TPS is not completely disabled in future versions. It will be disabled for inter-VM sharing by default, but it can be enabled. More on that can be found in this article on the vSphere blog.
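
For those wondering where this is actually controlled: below is a minimal sketch of how you could inspect and change the behaviour on a single host with esxcli, assuming the salting mechanism described in the vSphere blog article referenced above (the Mem.ShareForceSalting advanced setting, where the new default disables inter-VM sharing and a value of 0 restores the old behaviour). Double-check the values against the official documentation and try this in a lab before touching production hosts.

    # Check the current page sharing salting setting on this host
    esxcli system settings advanced list -o /Mem/ShareForceSalting

    # Re-enable inter-VM page sharing for all VMs on this host (0 = no salting)
    esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0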

Let’s explore three scenarios:

  1. Server virtualisation (private)
  2. Public cloud
  3. Virtual Desktops

In the case of “Server virtualisation”, in most scenarios I would expect that only the system administrators and/or application owners have access to the virtual machines. The question then is: why would they go to these lengths when they have access to the virtual machines anyway? So in the scenario where server virtualisation is your use case, and access to your virtual machines is restricted to a limited number of people, I would definitely consider re-enabling inter-VM TPS.

In a public cloud environment this is of course different. You can imagine that an attacker could buy a virtual machine and try to retrieve the AES encryption key. What they would do with it next is, even then, still the question. Hopefully the cloud provider ensures that the tenants are isolated from each other from a security/networking point of view. If that is the case there shouldn’t be much they could do with it. Then again, it could be just one of the many steps they have to take to break into a system, so even though the risk is low I would probably not want to take it. This is one of the scenarios where I would leave inter-VM TPS disabled.

The third and last scenario is virtual desktops. In the case of a virtual desktop many different users have access to virtual machines… The question, though, is whether you are running or accessing any applications that leverage AES encryption. I cannot answer that for you, so I will leave that up in the air… you will need to assess that risk yourself.

I guess the answer to whether you should or should not disable (inter-VM) TPS is as always: it depends. I understand why inter-VM TPS was disabled, but if the risk is low I would definitely consider enabling it.

What is coming for vSphere and VSAN? VMworld reveals…

I’ve been prepping a presentation for upcoming VMUGs, but wanted to also share this with my readers. The session is all about vSphere futures: what is coming soon? Before anyone says I am breaking NDA: I’ve harvested all of this info from public VMworld sessions, except for the VSAN details, which were announced to the press at VMworld EMEA. Let’s start with Virtual SAN…

The Virtual SAN details were posted in this Computer Weekly article, and by the looks of it they interviewed VMware’s CEO Pat Gelsinger and Alberto Farronato from the VSAN product team. So what is coming soon?

  • All Flash Virtual SAN support
    Considering the price of MLC flash has dropped to roughly the same price per GB as SAS HDDs, I think this is a great new feature to have. Being able to build all-flash configurations at the price point of a regular configuration, and probably with many supported configurations, is a huge advantage of VSAN. I would expect VSAN to support various types of flash as the “capacity” layer, so this is an architect’s dream… designing your own all-flash storage system!
  • Virsto integration
    I played with Virsto when it was just released and was impressed by its performance and scalability. Functions that were part of Virsto, such as snapshots and clones, have been built into VSAN, and this will bring VSAN to the next level!
  • JBOD support
    Something many have requested, primarily to be able to use VSAN in blade environments… Well, with the JBOD support announced this will become a lot easier. I don’t know the exact details, but just the “JBOD” part got me excited.
  • 64 host VSAN cluster support
    VSAN doesn’t scale? Here you go.

That is a nice list by itself, and I am sure there is plenty more coming for VSAN. At VMworld, for instance, Wade Holmes also spoke about support for disk controller based encryption. Cool, right?! So what about vSphere? Considering that even the version number was dropped during the keynote, hinting at a major release, you would expect some big functionality to be introduced. Once again, all the items below are harvested from various public VMworld sessions:

  • VMFork aka Project Fargo – discussed here…
  • Increased scale!
    • 64 host HA/DRS cluster support. I know a handful of customers who asked for 64 host clusters, so here it is, guys… or better said: soon you will have it!
  • SMP vCPU FT – up to 4 vCPU support
    • I like FT from an innovation point of view, but it isn’t a feature I would personally use much, as I feel “fault tolerance” from an app perspective needs to be solved by the app. Now, I do realize that there are MANY legacy applications out there, and if you have a scale-up application which needs to be highly available then SMP FT is very useful. Do note that with this release the architecture of FT has changed; for instance, the primary and secondary used to share the same “VMDK”, but that is no longer the case.
  • vMotion across anything
    • vMotion across vCenter instances
    • vMotion across Distributed Switch
    • vMotion across very large distance, support up to 100ms latency
    • vMotion to vCloud Air datacenter
  • Introduction of Virtual Datacenter concept in vCenter
    • Enhances the “policy driven” experience within vCenter. A Virtual Datacenter aggregates compute clusters, storage clusters, networks, and policies!
  • Content Library
    • Content Library provides storage and versioning of files, including VM templates, ISOs, and OVFs.
    • Includes powerful publish and subscribe features to replicate content.
    • Backed by vSphere Datastores or NFS.
  • Web Client performance / enhancement
    • Recent tasks pane moves to the bottom instead of the right
    • Performance vastly improved
    • Menus flattened
  • DRS placement “network aware”
    • Hosts with high network contention can still show low CPU and memory usage; DRS will now take network load into account when looking for VM placements
    • Provide network bandwidth reservation for VMs and migrate VMs in response to reservation violations!
  • vSphere HA component protection
    • Helps when hitting “all paths down” situations by allowing HA to take action on impacted virtual machines
  • Virtual Volumes, bringing the VSAN “policy goodness” to traditional storage systems

Of course there is more, but these are the items that were discussed at VMworld… for the remainder you will have to wait until the next version of vSphere is released, or, I believe, you can still sign up for the beta!

VMware EVO:RAIL demos

I just bumped into a bunch of new VMware EVO:RAIL demos which I wanted to share, especially the third demo, which shows how EVO:RAIL scales out with just a couple of clicks.

General overview:

Customer Testimonial:

Clustering appliances:

Management experience:

Configuration experience:


EVO:RAIL engineering interview with Dave Shanley (Lead Dev)

A couple of weeks ago we launched EVO:RAIL, a new VMware solution. I have been part of this since the very beginning; the project started with just Dave and myself on the prototype team, with Mornay van der Walt as the executive sponsor (an interview with Mornay will follow shortly, as this project involves many different disciplines). After Dave developed the initial UI mock-ups and we worked on the conceptual architecture, Dave started developing what then became known internally as MARVIN. If my memory serves me correctly it was our director at Integration Engineering (Adam Z.) who came up with the name and acronym (Modular Automated Rackable Virtual Infrastructure Node). All was done under the umbrella of Integration Engineering, in stealth mode, with a very small team. I guess something not a lot of people know is that William Lam, for instance, was very instrumental when it came to figuring out in which order to configure what (a lot of dependencies, as you can imagine) and which API calls to use for what. After a couple of months things really started to take shape, the prototype was demoed at C level, and before we realized it a new team was formed and gears shifted.

Personally, whenever I talk to start-ups I like to know where they came from, what they’ve done in the past, and how things came about… as that gives me a better understanding of why the product is what it is. The same applies to EVO:RAIL, and there is no better start than with the lead developer and founding team member, Dave Shanley.

Good morning Dave. As not all of my readers will know who you are and what you did before joining the EVO:RAIL team, can you please introduce yourself?
I’m the lead engineer, designer and software architect of the EVO:RAIL platform. I joined VMware about two and a half years ago. I started out in Integration Engineering, where I got to see and experience a lot of the frustration that often comes with trying to install, configure and integrate our technology. I’ve pretty much worked in web application engineering my entire career, which has given me really broad experience across consumer and enterprise technology. Before VMware I was the CTO of a really cool VC-funded start-up in the UK, as well as the lead engineer over at McCann Erickson’s EMEA HQ.

VSAN with the AHCI controller on vSphere 5.5 U2

I’ve been closely following a thread on the community forums around the AHCI disk controller. This is an on-board disk controller which caused some problems when used in conjunction with VSAN because of a driver issue. Note that this disk controller is not on the HCL and is not recommended for use in a production environment or ANY environment where reasonable performance is expected and endurance / availability are key. Many home labbers use this controller, however, and I am happy to say that Philzy reported that the fix mentioned in KB 2079729 appears to have solved the issues experienced.
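
For those who want to verify which AHCI driver a host is actually running before and after applying the patch, something along these lines should do the trick (a rough sketch; the exact VIB and module names may differ per image, so treat those as assumptions and check the KB for the authoritative details):

    # Show the storage adapters and the driver module each one uses (look for ahci)
    esxcli storage core adapter list

    # Check the version of the installed AHCI driver VIB
    esxcli software vib list | grep -i ahci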

For all those wanting to use VSAN in their homelabs… Game on!