Yellow Bricks

by Duncan Epping


vSphere 6.0 finally announced!

Duncan Epping · Feb 3, 2015 ·

Today Pat Gelsinger and Ben Fathi announced vSphere 6.0. (If you missed it, you can still sign up for one of the other events.) I know many of you have been waiting on this and are ready to start your download engines, but please note that this is just the announcement of GA… the bits will follow shortly. I figured I would do a quick post which details what is in vSphere 6.0 and what is new. There were a lot of announcements today, but I am just going to cover vSphere 6.0 and VSAN. I have some more detailed posts to come, so I am not going to go into a lot of depth here; I just figured I would post a list of all the stuff that is in the release… or at least the stuff that I am aware of, as some of it wasn’t broadly announced.

  • vSphere 6
    • Virtual Volumes
      • Want “Virtual SAN”-like policy-based management for your traditional storage systems? That is what Virtual Volumes brings in vSphere 6.0. If you ask me, this is the flagship feature of this release.
    • Long Distance vMotion
    • Cross vSwitch and vCenter vMotion
    • vMotion of MSCS VMs using pRDMs
    • vMotion L2 adjacency restrictions are lifted!
    • vSMP Fault Tolerance
    • Content Library
    • NFS 4.1 support
    • Instant Clone aka VMFork
    • vSphere HA Component Protection
    • Storage DRS and SRM support
    • Storage DRS deep integration with VASA to understand thin provisioned, deduplicated, replicated or compressed datastores!
    • Network IO Control per VM reservations
    • Storage IOPS reservations
    • Introduction of Platform Services Controller architecture for vCenter
      • SSO, licensing, and certificate authority services are grouped and can be centralized for multiple vCenter Server instances
    • Linked Mode support for vCenter Server Appliance
    • Web Client performance and usability improvements
    • Max Config:
      • 64 hosts per cluster
      • 8000 VMs per cluster
      • 480 CPUs per host
      • 12TB of memory
      • 1000 VMs per host
      • 128 vCPUs per VM
      • 4TB RAM per VM
    • vSphere Replication
      • Compression of replication traffic configurable per VM
      • Isolation of vSphere Replication host traffic
    • vSphere Data Protection now includes all vSphere Data Protection Advanced functionality
      • Up to 8TB of deduped data per VDP Appliance
      • Up to 800 VMs per VDP Appliance
      • Application level backup and restore of SQL Server, Exchange, SharePoint
      • Replication to other VDP Appliances and EMC Avamar
      • Data Domain support
  • Virtual SAN 6
    • All flash configurations
    • Blade enablement through certified JBOD configurations
    • Fault Domain aka “Rack Awareness”
    • Capacity planning / “What if scenarios”
    • Support for hardware-based checksumming / encryption
    • Disk serviceability (light LED on failure, turn LED on/off manually, etc.)
    • Disk / Diskgroup maintenance mode aka evacuation
    • Virtual SAN Health Services plugin
    • Greater scale
      • 64 hosts per cluster
      • 200 VMs per host
      • 62TB max VMDK size
      • New on-disk format enables fast cloning and snapshotting
      • 32 VM snapshots
      • From 20K IOPS to 40K IOPS in hybrid configuration per host (2x)
      • 90K IOPS with All-Flash per host

As you can see, it is a long list of features and products that have been added or improved. I can’t wait until the GA release is available. In the upcoming days I will post some more details on some of the features listed above, as there is no point in flooding the blogosphere even more with similar info.

New fling released: VM Resource and Availability Service

Duncan Epping · Feb 2, 2015 ·

I have the pleasure of announcing a brand new fling that was released today. This fling is called “VM Resource and Availability Service” and is something that I came up with during a flight to Palo Alto while talking to Frank Denneman. When it comes to HA Admission Control, the one thing that always bugged me was why it was all based on static values. Yes, it is great to know my VMs will restart, but I would also like to know if they will receive the resources they were receiving before the fail-over. In other words, will my user experience be the same or not? After going back and forth with engineering, we agreed that this was worth exploring further and decided to create a fling. I want to thank Rahul (DRS team), Manoj and Keith (HA team) for taking the time and going to this extent to explore this concept.

Something which I think is also unique is that this is a SaaS-based solution: it allows you to upload a DRM dump, simulate the failure of one or more hosts from a (vSphere) cluster, and identify how many:

  • VMs would be safely restarted on different hosts
  • VMs would fail to be restarted on different hosts
  • VMs would experience performance degradation after being restarted on a different host

With this information, you can better plan the placement and configuration of your infrastructure to reduce downtime of your VMs/services in case of host failures. Is that useful or what? I would like to ask everyone to go through the motions, and of course to provide feedback on whether you feel this is useful information or not. You can leave feedback on this blog post or on the fling website; we aim to monitor both.
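To make the distinction between “will restart” and “will get the same resources” a bit more concrete, here is a toy sketch in Python. To be clear: this is not the fling’s actual algorithm, and the host capacities and per-VM demands are made-up numbers; it only illustrates the three buckets listed above.

```python
# Toy illustration only (not the fling's actual algorithm): given per-host capacity and
# per-VM demand, classify the VMs on a failed host as restarted / degraded / failed.
hosts = {"esx01": 100, "esx02": 100, "esx03": 100}   # usable capacity per host (made-up units)
vms = {                                              # vm name: (current host, demand)
    "vm1": ("esx01", 40), "vm2": ("esx01", 50),
    "vm3": ("esx02", 60), "vm4": ("esx03", 30),
}

def simulate_failure(failed_host):
    # capacity left on the surviving hosts after their own VMs are accounted for
    free = {h: cap for h, cap in hosts.items() if h != failed_host}
    for host, demand in vms.values():
        if host in free:
            free[host] -= demand

    restarted, degraded, failed = [], [], []
    for name, (host, demand) in vms.items():
        if host != failed_host:
            continue
        target = max(free, key=free.get)             # surviving host with the most headroom
        if free[target] >= demand:
            free[target] -= demand
            restarted.append(name)                   # restarts and gets its full demand
        elif free[target] > 0:
            free[target] = 0
            degraded.append(name)                    # restarts, but with less than before
        else:
            failed.append(name)                      # no capacity left to restart it
    return restarted, degraded, failed

print(simulate_failure("esx01"))                     # e.g. (['vm1'], ['vm2'], [])
```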

For those who don’t know where to find the DRM dump, Frank described it in his article on the drmdiagnose fling, which I also recommend trying out! There is also a readme file with a bit more in-depth info. The default locations are listed below, followed by a small sketch for picking the most recent dump to upload.

  • vCenter server appliance: /var/log/vmware/vpx/drmdump/clusterX/
  • vCenter server Windows 2003: %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\drmdump\clusterX\
  • vCenter server Windows 2008: %ALLUSERSPROFILE%\VMware\VMware VirtualCenter\Logs\drmdump\clusterX\
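If you have copied one of these directories off vCenter, a small Python sketch like the one below can help pick the newest dump to upload. This is purely illustrative: the local directory name and the *.gz extension are assumptions, so adjust them to whatever you find in your drmdump folder.

```python
# Minimal sketch: pick the most recently written DRM dump from a local copy of the
# drmdump directory. The directory name and the *.gz extension are assumptions.
from pathlib import Path

dump_dir = Path("drmdump/clusterX")                  # local copy of the vCenter drmdump folder
candidates = list(dump_dir.glob("*.gz")) or list(dump_dir.glob("*"))
latest = max(candidates, key=lambda p: p.stat().st_mtime)
print(f"Newest dump to upload to hasimulator.vmware.com: {latest.name}")
```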

So where can you find the service itself? Well, that is really easy; there are no downloads, as I said… it runs fully as a service:

  1. Open hasimulator.vmware.com to access the web service.
  2. Click on “Simulate Now” to accept the EULA terms, upload the DRM dump file and start the simulation process.
  3. Click on the help icon (at the top right corner) for a detailed description on how to use this service.

Congratulations Virtual SAN team, more than 1000 customers reached in 2014!

Duncan Epping · Jan 28, 2015 ·

I want to congratulate the Virtual SAN team on their huge success in 2014. I was listening to the Q4 earnings call yesterday and was amazed by what was achieved. Of course I knew that Virtual SAN was doing well, but I didn’t know that they had already reached 1000 customers utilizing the Virtual SAN platform in 2014 (page 6). I am sure these numbers will keep growing strongly in 2015 and that Virtual SAN is unstoppable, especially knowing what 2015 has to offer in terms of features and functionality.

I know many of you must also be interested in what is coming in the near future. If you haven’t registered yet for the launch event on February 2nd, 3rd, or 5th (depending on your region), make sure you do so now. It is going to be an interesting event with some great announcements! Besides that, simply by registering you will have the chance to win a VMworld 2015 ticket, and who wouldn’t want that? Register now!

EZT Disks with VSAN, why would you?

Duncan Epping · Jan 26, 2015 ·

I noticed a tweet today which made a statement about the use of eager zero thick disks in a VSAN setup for running applications like SQL Server. The reason this user felt this was needed was to avoid the hit on the “first write to a block on a VMDK”. It is not the first time I have heard this, and I have even seen some FUD around it, so I figured I would write something up. On a traditional storage system, or at least in some cases, this first write to a new block takes a performance penalty. The main reason is that when the VMDK is thin, or lazy zero thick, the hypervisor needs to allocate the new block that is being written to and zero it out before the write can complete.

First of all, this was indeed true with a lot of the older storage system architectures (non-VAAI). However, even back in 2009 this was already dispelled as being a huge problem, and with the arrival of all-flash arrays the problem disappeared completely. Granted, VSAN isn’t an all-flash solution (yet), but for VSAN there is something different to take into consideration. I want to point out that, by default, when you deploy a VM on VSAN you typically do not even touch the disk format; it will get deployed as “thin”, potentially with a space reservation setting that comes from the storage policy. But what if you use an old template which has a zeroed-out disk, deploy that, and compare it to a regular VSAN VM: will it make a difference? For VSAN, eager zero thick vs thin will (typically) make no difference to your workload at all. You may wonder why; well, it is fairly simple… just look at this diagram:

If you look at the diagram, you will see that the application receives the acknowledgement as soon as the write to flash has happened. So in the case of thick vs thin, you can imagine that it makes no difference, as the allocation (and zeroing out) of that new block would happen minutes (or longer) after the application has received the acknowledgement. A person paying attention would now come back and say: hey, you said “typically”, what does that mean? Well, it means the above is based on the assumption that your working set fits in cache. Of course there are ways to manipulate performance tests to prove that the above is not always the case, but having seen customer data I can tell you that that is not a typical scenario… it is extremely unlikely.

So if you deploy Virtual SAN and have “old” templates with “EZT” disks floating around, I would recommend overhauling them, as the EZT format doesn’t add much… well, besides a longer wait during deployment.
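If you want to check whether any existing VMs or templates still carry eager zeroed thick disks, a quick inventory sweep with pyVmomi can help. This is a minimal sketch, not a polished tool: the vCenter hostname and credentials are placeholders, the unverified SSL context is for lab use only, and the script only reports, it does not change anything.

```python
# Minimal sketch using pyVmomi: list VMs/templates that still have eager zeroed thick disks.
# Hostname and credentials are placeholders; adjust SSL handling for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()           # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:                        # skip inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            backing = dev.backing
            # Flat backings expose thinProvisioned and eagerlyScrub flags
            if getattr(backing, "thinProvisioned", False):
                continue
            if getattr(backing, "eagerlyScrub", False):
                print(f"{vm.name}: {dev.deviceInfo.label} is eager zeroed thick")
finally:
    Disconnect(si)
```

Note that this only reports the current format; actually changing the format of an existing disk would typically be done through a Storage vMotion or by redeploying from an updated template.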

Lego VSAN EVO:RACK

Duncan Epping · Jan 24, 2015 ·

I know a lot of you guys have home labs and are always looking for that next cool thing. Every once in a while you see something cool floating by on Twitter, and in this case it was so cool I needed to share it with you. Someone posted a picture of his version of “EVO:RACK” leveraging Intel NUCs, a small switch, and Lego… How awesome is a Lego VSAN EVO:RACK?! It is admittedly difficult to see in the pics below, but if you look at this picture you will see how the top-of-rack switch was included.

Lego VSAN EVO Rack NUC style… Version 2.. Note top of rack switch!!@pdxvmug @vmwarevsan @IntelNUC @vExpert pic.twitter.com/SYFa6leLxX

— Nicholas Farmer (@vmnick0) January 9, 2015

Besides the awesome tweet, Nick also shared how he built his lab in a couple of blog posts, which are worth reading for sure!

  • VSAN Cluster Running On Three Intel NUCs – Part 1 (The Build)
  • VSAN Cluster Running On Three Intel NUCs – Part 2 (vCenter Deploy)

Enjoy,

