Public vSphere Beta, sign up and provide feedback now!

I am very pleased to see VMware just announced the beta of vSphere. I think it is great that everyone has the chance to sign up, download it, test it, and provide feedback on such a critical part of your environment! Who doesn’t love to play with cutting-edge technology? I know I do! Especially for all the bloggers and book authors out there, this is an excellent opportunity to start working on articles (or a book) ahead of the launch time frame, whenever that will be. I have my engines fired up, downloading the bits as I write this…

How do you join?

  • Navigate to https://communities.vmware.com/community/vmtn/vsphere-beta and click the “JOIN NOW!” button on the right-hand side!
  • Log in with your My VMware account. (Please register for an account if you don’t have one.)
  • Once you have an account and are logged in, accept the Master Software Beta Test Agreement (MSBTA) and Program Rules screens if you have not already done so in the past.
  • After doing this you should be in the vSphere Beta 2 community.

There are two webinars coming up which I would recommend attending:

  • Introduction / Overview – Tuesday, July 8, 2014
  • Installation & Upgrade – Thursday, July 10, 2014

One of the features in the beta that I am excited about is Virtual Volumes. I have written about this concept a bunch of times (here and here) and I hope you folks will appreciate this feature as much as I do. If you are interested, take a look at the VVOL Beta page. You may wonder why there is a separate page for the VVOL beta? Well, that is because you will need a VVOL-capable storage solution…

Reminder: Before anyone forgets, the vSphere Beta is open to the public, but it is NOT a public beta. It is still a private beta and the NDA applies!

vSwitch Traffic Shaping, what is what?

I was troubleshooting an issue where vMotion would constantly time out. I had no clue where it was coming from, so I started digging. The environment was using a regular vSwitch and 10GbE networking, as unfortunately the Distributed vSwitch was not an option for this environment. When I took a closer look I noticed that some form of traffic shaping was applied: traffic shaping was enabled, the peak value was specified, and the rest was left at the default values… and unfortunately this is exactly what caused the problem.

So when it comes to vSwitch Traffic Shaping, what is what? There are three settings you can set per portgroup:

  • Average Bandwidth – specified in Kbps
  • Peak Bandwidth – specified in Kbps
  • Burst Size – specified in KB

So if you have a 10Gbps NIC port for your traffic, this means you have a total of 10,485,760 Kbps. When you enable vSwitch Traffic Shaping, by default “Average Bandwidth” is set to 100,000 Kbps, “Peak Bandwidth” to 100,000 Kbps, and “Burst Size” to 102,400 KB. So what does that mean? Well, it means that if you enable it and do not change the values, the traffic is limited to 100,000 Kbps. And 100,000 Kbps is… yes, roughly 100Mbps; even less to be precise: 97.6Mbps. Which is not a lot indeed, and not even a supported configuration for vMotion.
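Those conversions are easy to double-check. Here is a quick back-of-the-envelope sketch, assuming the binary convention (1 Gbps = 1024 Mbps = 1024 × 1024 Kbps), which is what produces the 97.6 figure:

```python
# Sanity-checking the numbers above, using 1 Gbps = 1024 Mbps = 1024 * 1024 Kbps.

nic_kbps = 10 * 1024 * 1024        # a 10Gbps NIC port expressed in Kbps
default_avg_kbps = 100_000         # default Average (and Peak) Bandwidth

print(nic_kbps)                    # 10485760 Kbps available on the port
print(default_avg_kbps / 1024)     # 97.65625 -> the default cap in Mbps
```

In other words, the out-of-the-box settings cap the portgroup at less than 1% of what the NIC can actually push.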

So what if I simply bump up the Peak Bandwidth to, let’s say, 5Gbps, as I do not want vMotion to ever consume more than half of the NIC port? (Note that vSwitch traffic shaping applies to egress, aka outbound, traffic only.) Well, setting the peak bandwidth sounds like it may do something, but probably not what you would hope for, as this is how the settings are applied:

By default the traffic stream gets what is specified by “Average Bandwidth”. However, it is possible to exceed this when needed by specifying a higher “Peak Bandwidth” value: your traffic will be allowed to burst until the “Burst Size” amount of data has been sent. In other words, when only Peak Bandwidth is increased in the above example, this leads to the following: by default the traffic is limited to 100Mbps; it can peak to 5Gbps, but only for 100MB worth of data traffic. As you can imagine, in the case of vMotion, where the full memory content of a VM is transferred, that 100MB is hit within a second, after which the vMotion process is throttled back to 100Mbps. The remainder of the VM memory then takes ages to copy, and the vMotion eventually times out.
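To see how badly this hurts a large transfer, here is a minimal Python model of the behaviour described above. It is a deliberate simplification (the first Burst Size worth of data goes out at the peak rate, everything after that at the average rate, and bucket refill during the burst is ignored), and the 16GB VM is just an illustrative size I picked:

```python
# Simplified model of vSwitch traffic shaping: the first Burst Size worth of
# data is sent at Peak Bandwidth, everything after that at Average Bandwidth.
# (Token-bucket refill during the burst is ignored, so these are rough estimates.)

def transfer_time(data_kb, avg_kbps, peak_kbps, burst_kb):
    """Estimated seconds to push data_kb KB through a shaped portgroup."""
    burst_kb = min(burst_kb, data_kb)
    t_burst = burst_kb / (peak_kbps / 8)             # Kbps -> KB/s
    t_rest = (data_kb - burst_kb) / (avg_kbps / 8)
    return t_burst + t_rest

VM_MEMORY_KB = 16 * 1024 * 1024                      # e.g. a VM with 16GB of memory

# Only Peak raised to 5Gbps, Average left at the 100,000 Kbps default:
slow = transfer_time(VM_MEMORY_KB, avg_kbps=100_000,
                     peak_kbps=5 * 1024 * 1024, burst_kb=102_400)

# Average and Peak both set to 5Gbps:
fast = transfer_time(VM_MEMORY_KB, avg_kbps=5 * 1024 * 1024,
                     peak_kbps=5 * 1024 * 1024, burst_kb=102_400)

print(f"peak only raised: ~{slow / 60:.0f} minutes")
print(f"average and peak raised: ~{fast:.0f} seconds")
```

With only the peak raised, the 16GB memory copy takes roughly 22 minutes; with both Average and Peak at 5Gbps it drops to under half a minute, which is why vMotion stops timing out.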

So if you apply traffic shaping on your vSwitch, make sure to think the numbers through. In the above scenario, for instance, specifying 5Gbps for both Average and Peak would have achieved what was desired.

Result of the Vietnam volunteering experience…

Before I forget, once again I would like to thank everyone who has made all of this possible. To all the individuals and corporations who stepped up and made a donation, thank you on behalf of Orphan Impact and, of course, all of the children! (Donations are always welcome and help is always needed; look here for more details.) Some of you reached out to me personally and asked what the result was of the volunteering and the donations to Orphan Impact. Well, the result was huge, if I do say so myself. With the money raised and the help provided, Orphan Impact is on its way to providing computer classes at multiple additional orphanages! I just received two cool videos that I wanted to share with all of you. In these videos the results of the trip are explained both from the Orphan Impact side and from the VMware side, in terms of the volunteering experience.

Before I do, for those who missed the original blog posts on my volunteering experience:

Orphan Impact Story:

VMware Foundation Members share experience:


Quality of components in Hybrid / All flash storage

Today I was answering some questions on the VMTN forums and one of the questions was around the quality of components in some of the all flash / hybrid arrays. This person kept coming back to the type of flash used (eMLC vs MLC, SATA vs NL-SAS vs SAS). One of the comments he made was the following:

I talked to Pure Storage but they want $$$ for 11TB of consumer grade MLC.

I am guessing he did a quick search on the internet, found a price for some SSDs, multiplied it, and figured that Pure Storage was asking way too much… And even compared to some more traditional arrays filled with SSDs, they could sound more expensive. I guess this also applies to other solutions, so I am not calling out Pure Storage here. One thing some people seem to forget when it comes to these new storage architectures is that they are built with flash in mind.

What does that mean? Well, everyone has heard the horror stories about consumer-grade flash wearing out extremely fast and blowing up in your face. Fortunately that is only true to a certain extent, as some consumer-grade SSDs easily reach 1PB of writes these days. On top of that, there are a couple of things I think you should know and consider before making statements like these, or before being influenced by a sales team that says “well, we offer SLC versus MLC so we are better than them”.

For instance (as Pure Storage lists on their website), there are many more MLC drives shipped than any other type at this point, which means MLC has been tested inside and out by consumers. And who can break devices in more ways than you or your QA team ever could? Right, the consumer! More importantly, if you ask me, ALL of these new storage architectures have in-depth knowledge of the type of flash they are using. That is how their systems were architected! They know how to leverage flash, they know how to write to flash, and they know how to avoid fast wear-out. They developed an architecture that was not only designed but also highly optimized for flash… This is what you pay for. You pay for the “total package”, meaning the whole solution, not just the flash devices that are leveraged. The flash devices are a part of the solution, and a relatively small part if you ask me. You pay for total capacity with low latency and for functionality like deduplication, compression and replication (in some cases). You pay for the ease of deployment and management (operational efficiency), meaning you get to spend your time on stuff that matters to your customers… their applications.

You can summarize all of it in a single sentence: the physical components are just a small part of any of these solutions, and whenever someone tries to sell you the “hardware”, that is when you need to be worried!

CloudPhysics Storage Analytics and new round of funding

When I woke up I saw the news was out… a new round of funding for CloudPhysics! CloudPhysics raised $15 million in a Series C investment round, bringing the company’s total funding to $27.5 million! Congratulations folks, I can’t wait to see what this new injection will result in. One of the things CloudPhysics has heavily invested in over the past 12 months is the storage side of the house. In their SaaS-based solution, one of the major pillars today is Storage Analytics, alongside General Health Checks and Simulations.

The Storage Analytics section is available as of today to everyone out there! It will allow you to monitor things like “datastore contention” and “unused VMs”, plus everything there is to know about capacity savings, ranging from inside the guest down to datastore-level details. If you ever wondered how “big data” could be of use to you, I am sure you will understand once you start using CloudPhysics. Not just their monitoring and simulation cards are brilliant; the Card Builder is definitely one of their hidden gems. If you need to convince your management, then all you have to do is show the above screenshot: savings opportunity!

Of course there is a lot more to it than I will be able to write about in this short post. In my opinion if you truly want to understand what they bring to the table, just try it out for free for 30 days here!

PS: How about this brilliant Infographic… the people who taught you how to fight the noisy neighbour now show you how to defeat that bully!

** Disclaimer: I am an advisor to CloudPhysics **