Awesome fling: ESXi Embedded Host Client

A long, long time ago I stumbled across a project within VMware which allowed you to manage ESXi through a client running on ESXi itself. Basically it presented an HTML interface for ESXi, not unlike the MUI we had in the old days. It was one of those pet projects done in spare time by a couple of engineers which, for various reasons, was never completed at the time. Fortunately the concept did not die. Some very clever engineers felt it was time to have that “embedded host client” for ESXi and started developing something in their spare time, and this is the result.

I am not going to describe it in detail as William Lam has an excellent post on this great fling already. The installation is fairly straightforward: basically a vib you need to install, no rocket science (see the scripted example after the list below). When installed you can manage various aspects of your hosts and VMs, including:

  • VM operations (power on, power off, reset, suspend, etc.)
  • Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
  • Configuring NTP on a host
  • Displaying summaries, events, tasks and notifications/alerts
  • Providing a console to VMs
  • Configuring host networking
  • Configuring host advanced settings
  • Configuring host services
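
For those who want to script the installation, here is a minimal Python sketch that pushes the vib onto a host over SSH using paramiko. Treat everything in it as a placeholder assumption: the host name, the credentials and the vib URL (grab the real URL from the Fling download page), and it assumes SSH is enabled on the host.

    # Minimal sketch: install the Embedded Host Client vib over SSH.
    # Host name, credentials and vib URL are placeholders.
    import paramiko

    HOST = "esxi01.lab.local"                  # hypothetical ESXi host
    VIB_URL = "http://example.com/esxui.vib"   # placeholder, use the Fling URL

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username="root", password="VMware1!")

    # esxcli can install a vib straight from a URL; --no-sig-check may be
    # required depending on the host acceptance level.
    stdin, stdout, stderr = client.exec_command(
        "esxcli software vib install -v {}".format(VIB_URL))
    print(stdout.read().decode())
    client.close()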

Is that cool or what? Head over to the Fling website and test it. Make sure to provide feedback, as the engineers are very receptive and always looking to improve their fling. Personally I hope that this fling will graduate and be added to ESXi by default, or at a minimum be fully supported! Excellent work Etienne Le Sueur and George Estebe!

Get your download engines running, vSphere 6.0 is here!

Yes, the day is finally here: vSphere 6.0 / SRM / VSAN (and more) are finally available. So where do you find it? Well, that is simple… here:

Have fun!

Virtualization networking strategies…

I was asked a question on LinkedIn about the different virtualization networking strategies from a host point of view. The question came from someone who recently had 10GbE infrastructure introduced into his data center; the network was originally architected with 6 x 1Gbps carved up into three bundles of 2 x 1Gbps, where each of the three types of traffic had its own pair of NICs: Management, vMotion and VM. With 10GbE added to the existing infrastructure, the question that came up was: should I use 10GbE while keeping my 1Gbps links for things like management? The classic model has a nice separation of network traffic, right?

Well, I guess from a visual point of view the classic model is nice, as it provides a lot of clarity around which type of traffic uses which NIC and which physical switch port. In the end, however, you typically still end up leveraging VLANs, so on top of the physical separation you also provide a logical separation. That logical separation is the most important part if you ask me. Especially when you leverage Distributed Switches and Network IO Control, you can create a simple architecture that is easy to maintain and implement, both from a physical and a virtual point of view. Yes, from a visual perspective it may be a bit more complex, but the flexibility and simplicity you get in return definitely outweigh that. I would recommend, in almost all cases, keeping it simple: converge physically, separate logically, as sketched below.
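
To make “converge physically, separate logically” concrete, here is an illustrative Python sketch of such a design: two 10GbE uplinks shared by all traffic types, VLANs for the logical separation, and Network IO Control shares to protect each traffic type under contention. All names, VLAN IDs and share values are hypothetical examples.

    # "Converge physically, separate logically": one pair of 10GbE uplinks,
    # logical separation through VLANs, NIOC shares as the safety net.
    # VLAN IDs and share values are made-up examples.
    design = {
        "uplinks": ["vmnic0", "vmnic1"],   # 2 x 10GbE, both active
        "portgroups": {
            "Management": {"vlan": 10},
            "vMotion":    {"vlan": 20},
            "VM":         {"vlan": 30},
        },
        # NIOC shares only come into play when there is contention
        "nioc_shares": {"Management": 20, "vMotion": 50, "VM": 100},
    }

    for name, pg in design["portgroups"].items():
        print("{} -> VLAN {} over {}".format(
            name, pg["vlan"], "/".join(design["uplinks"])))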

vSwitch Traffic Shaping, what is what?

I was troubleshooting an issue where vMotion would constantly time out. I had no clue where it was coming from, so I started digging. The environment was using a regular vSwitch and 10GbE networking, as unfortunately the Distributed vSwitch was not an option there. When I took a closer look I noticed that some form of traffic shaping was applied: it was enabled with a peak value specified, while the rest was left at the default values… and unfortunately that is exactly what caused the problem.

So when it comes to vSwitch Traffic Shaping, what is what? There are three settings you can configure per portgroup:

  • Average Bandwidth – specified in Kbps
  • Peak Bandwidth – specified in Kbps
  • Burst Size – specified in KB

So if you have a 10Gbps NIC port for your traffic, this means you have a total of 10,485,760 Kbps. When you enable vSwitch Traffic Shaping, by default “Average Bandwidth” is set to 100,000 Kbps, “Peak Bandwidth” to 100,000 Kbps and “Burst Size” to 102,400 KB. So what does that mean? Well, it means that if you enable it and do not change the values, the traffic is limited to 100,000 Kbps. And 100,000 Kbps is roughly 100Mbps; slightly less to be precise: 97.6Mbps. That is not a lot, and not even a supported configuration for vMotion.
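
A quick sanity check on those numbers (note that the vSwitch UI counts 1 Mbps as 1,024 Kbps):

    # Sanity check on the traffic shaping defaults.
    nic_kbps = 10 * 1024 * 1024       # a 10 Gbps NIC port expressed in Kbps
    print(nic_kbps)                   # 10485760 Kbps

    default_avg_kbps = 100000         # default Average/Peak Bandwidth
    print(default_avg_kbps / 1024.0)  # ~97.66 Mbps, the effective cap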

So what if I simply bump up the Peak Bandwidth to, let’s say, 5Gbps, as I do not want vMotion to ever consume more than half of the NIC port? (Note that vSwitch traffic shaping only applies to egress, i.e. outbound traffic.) Well, setting the peak bandwidth sounds like it may do something, but probably not what you would hope for, as this is how the settings are applied:

By default the traffic stream will get what is specified by “Average Bandwidth”. However, it is possible to exceed this when needed by specifying a higher “Peak Bandwidth” value: your traffic will be allowed to burst until the value of “Burst Size” has been exceeded. In other words, in the above example, increasing only the Peak Bandwidth leads to the following: by default the traffic is limited to 100Mbps, but it can peak to 5Gbps for only 100MB worth of data traffic. As you can imagine, in the case of a vMotion where the full memory content of a VM is transferred, that 100MB is hit within a second, after which the vMotion process is throttled back to 100Mbps; copying the remainder of the VM memory then takes so long that the operation eventually times out.
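
The arithmetic makes the failure mode obvious. A back-of-the-envelope calculation (the 16 GB VM is a hypothetical example):

    # How long does the default burst last, and what does the throttle mean?
    burst_kb = 102400                 # default Burst Size, roughly 100 MB
    peak_kbps = 5 * 1024 * 1024       # bumped Peak Bandwidth: 5 Gbps
    avg_kbps = 100000                 # Average Bandwidth left at the default

    # Burst Size is in KB (kilobytes), bandwidth in Kbps (kilobits), hence * 8.
    print(burst_kb * 8.0 / peak_kbps)   # ~0.16 s of 5 Gbps before throttling

    # Copying a (hypothetical) 16 GB VM memory image at the throttled rate:
    vm_kb = 16 * 1024 * 1024
    print(vm_kb * 8.0 / avg_kbps / 60)  # ~22 minutes -> vMotion times out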

So if you apply traffic shaping on your vSwitch, make sure to think the numbers through. In the above scenario, for instance, specifying a 5Gbps Average and Peak would achieve what was desired, along the lines of the configuration sketch below.
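
For those automating this, here is a hedged pyVmomi sketch of how such a portgroup shaping policy could be set. The host name, credentials and portgroup name are assumptions, it assumes a simple standalone-host inventory, and note that the API expects bits per second and bytes rather than the Kbps/KB the UI shows (double-check against the vSphere API reference).

    # Sketch: set Average and Peak Bandwidth to 5 Gbps on a standard vSwitch
    # portgroup via pyVmomi. Host, credentials and portgroup name are made up.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only: skips cert validation
    si = SmartConnect(host="esxi01.lab.local", user="root",
                      pwd="VMware1!", sslContext=ctx)

    # Assumes a single datacenter with a single (standalone) host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing portgroup spec so only the shaping policy changes.
    pg = next(p for p in host.config.network.portgroup
              if p.spec.name == "vMotion")
    spec = pg.spec
    spec.policy.shapingPolicy = vim.host.NetworkPolicy.TrafficShapingPolicy(
        enabled=True,
        averageBandwidth=5 * 1024 ** 3,   # 5 Gbps, in bits per second
        peakBandwidth=5 * 1024 ** 3,
        burstSize=102400 * 1024)          # keep the ~100 MB default, in bytes

    net_sys.UpdatePortGroup(pgName="vMotion", portgrp=spec)
    Disconnect(si)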

New beta of the vSphere 5.5 U1 Hardening Guide released

Mike Foley just announced the release of the new vSphere 5.5 U1 Hardening Guide. Note that it is still labeled a “beta” as Mike is still gathering feedback; the document should be finalized in the first week of June.

For those concerned about security, this is an absolute must-read! As always, before implementing ANY of these recommendations, make sure to try them on a test cluster and validate the expected functionality of both the vSphere platform and the virtual machines and applications running on top of it.

Nice work Mike!