CloudPhysics adds functionality: VM reservations/limits and Snapshots

CloudPhysics just announced two new cards. One card is titled “Snapshots Gone Wild Card“, the other is titled “VM Reservations & Limits Card“. This is a direct result of the contest that CloudPhysics held right before VMworld US. I guess that is the nice thing about being a start-up: being able to respond to community / customer requests quickly. However, it is also due to the nature of the CloudPhysics solution.

All the cards CloudPhysics offers are objects by themselves, making it easy to add new cards or change existing ones based on customer requests without the need to QA the whole platform. Flexibility and agility right there.

So what exactly was added? The first card, “Snapshots Gone Wild“, is all about… yes, you guessed it: VMware snapshots. Which virtual machines have snapshots? How many snapshots? How old are the snapshots? That is the kind of data it reveals. Considering the many problems I have seen out in the field with snapshots, I would say this is one you will want to check regularly.
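CloudPhysics does the heavy lifting for you, of course, but the kind of check this card automates is easy to sketch. Below is a minimal illustration in Python; the VM names, snapshot names and dates are all made up, and in a real environment this data would come from the vSphere API rather than a hard-coded list:

```python
from datetime import datetime, timedelta

# Hypothetical inventory data; in reality this comes from the vSphere API.
snapshots = [
    {"vm": "web01", "name": "pre-upgrade", "created": datetime(2012, 6, 1)},
    {"vm": "db02", "name": "backup-temp", "created": datetime(2012, 9, 20)},
]

def stale_snapshots(snapshots, now, max_age_days=7):
    """Return snapshots older than max_age_days, oldest first."""
    cutoff = now - timedelta(days=max_age_days)
    stale = [s for s in snapshots if s["created"] < cutoff]
    return sorted(stale, key=lambda s: s["created"])

now = datetime(2012, 9, 25)
for s in stale_snapshots(snapshots, now):
    age = (now - s["created"]).days
    print("%s: snapshot '%s' is %d days old" % (s["vm"], s["name"], age))
```

The seven-day threshold is just an example; pick whatever your backup and change-management processes allow.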

The second card is all about VM reservations and limits. Frank and I have written about this many times and warned people about the impact. I guess most of you are aware of the impact by now, but you would be surprised to see what comes up when you run this card in your environment. I have done many health checks in the past, and VM limits always kept popping up randomly. I definitely recommend taking a look at it.

Of course, besides these two new cards there are various other very useful ones, like the Cluster Health card or the VMware Tools card. I suggest you head over to CloudPhysics.com, sign up, and give it a try.

VXLAN requirements

When I was writing my “Configuring VXLAN” post I was trying to dig up some details around VXLAN requirements and recommendations for a full “VMware” implementation. Unfortunately I couldn’t find much, or at least not a single place with all the details. I figured I would gather all I could find and throw it into a single post to make it easier for everyone.

Virtual:

  • vSphere 5.1
  • vShield Manager 5.1
  • vSphere Distributed Switch 5.1.0
  • Portgroups will be configured by vShield Manager; it is recommended to use either “LACP Active Mode”, “LACP Passive Mode” or “Static EtherChannel”
    • When “LACP” or “Static EtherChannel” (Cisco only) is configured, note that a port channel / EtherChannel will need to be created on the physical side
    • “Fail Over” is supported, but not recommended
  • You cannot configure the portgroup with “Virtual Port ID” or “Load Based Teaming”; these are not supported
  • Requirement for MTU size of 1600 (Kamau explains why here)

Physical:

  • It is recommended to have DHCP available on the VXLAN transport VLANs; fixed IP addresses also work though!
  • VXLAN port (UDP 8472) is opened on firewalls (if applicable)
  • Port 80 is opened from vShield Manager to the Hosts (used to download the “vib / agent”)
  • For Link Aggregation Control Protocol (LACP), 5-tuple hash distribution is highly recommended but not a hard requirement
  • MTU size requirement is 1600
  • Strongly recommended to have IGMP snooping enabled on L2 switches to which VXLAN participating hosts are attached. IGMP Querier must be enabled on router or L3 switch with connectivity to the multicast enabled networks when IGMP snooping is enabled.
  • If VXLAN traffic is traversing routers –> multicast routing must be enabled
    • The recommended Multicast protocol to deploy for this scenario is Bidirectional Protocol Independent Multicast (PIM-BIDIR), since the Hosts act as both multicast speakers and receivers at the same time.
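The 1600 byte MTU requirement in both lists above comes down to simple arithmetic: VXLAN wraps each guest frame in an outer Ethernet, IP, UDP and VXLAN header, which (for IPv4, without an outer VLAN tag) adds 50 bytes. A quick sketch:

```python
# VXLAN encapsulation overhead per frame (IPv4, no outer VLAN tag):
outer_ethernet = 14  # outer MAC header
outer_ip = 20        # outer IPv4 header
outer_udp = 8        # outer UDP header (destination port 8472)
vxlan = 8            # VXLAN header: flags + 24-bit segment ID (VNI)
overhead = outer_ethernet + outer_ip + outer_udp + vxlan

guest_mtu = 1500                 # standard MTU inside the VM
required = guest_mtu + overhead  # minimum MTU needed on the physical network
print(overhead, required)        # 50 1550
```

1500 + 50 = 1550 is the bare minimum, so 1600 simply gives some comfortable headroom without requiring full jumbo frames.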

That should capture most requirements and recommendations. If anyone has any additions please leave a comment and I will add it.

** Please note, proxy ARP is not a requirement for a VXLAN / VDS implementation; it is only a requirement when the Cisco Nexus 1000v is used **

References:
VXLAN Primer by Kamau
vShield Administration Guide
Internal training ppt
KB 2050697 (note my article was used as the basis for this KB)

Configuring VXLAN…

Yesterday I got an email about configuring VXLAN. I was in the middle of re-doing my lab so I figured this would be a nice exercise. First I downloaded vShield Manager and migrated from regular virtual switches to a Distributed Switch environment. I am not going to go into any depth on how to do this, as it is fairly straightforward: just right-click the Distributed Switch, select “Add and Manage Hosts” and follow the steps. If you are wondering what the use case for VXLAN would be, I recommend reading Massimo’s post.

VXLAN is an overlay technique that encapsulates layer 2 in layer 3. If you want to know how this works technically you can find the specs here. I wanted to create a virtual wire in my cluster. Just assume this is a large environment with many clusters and many virtual machines. In order to provide some form of isolation I would need to create a lot of VLANs and make sure these are all plumbed to the respective hosts… As you can imagine, that is not as flexible as one would hope. In order to solve this problem VMware (and partners) introduced VXLAN. VXLAN enables you to create a virtual network, aka a virtual wire. This virtual wire is a layer 2 segment, and while the hosts might be in different networks the VMs can still belong to the same layer 2 segment.

I deployed the vShield virtual appliance as this is a requirement for using VXLAN. After deploying it you will need to configure the network. This is fairly simple:

  • Login to the console of the vShield Manager (admin / default)
  • type “enable” (password is “default”)
  • type “setup” and provide all the required details
  • log out

Now the vShield Manager virtual appliance is configured and you can go to “https://&lt;ip address of vsm&gt;/”. You can log in using admin / default. Now you will need to link this vShield Manager to vCenter Server:

  • Click “Settings & Reports” in the left pane
  • Now you should be on the “Configuration” tab in the “General” section
  • Click edit on the “vCenter Server” section and fill out the details (ip or hostname / username / password)

Now you should see some new shiny bright objects in the left pane when you start unfolding:

Now let’s get VXLAN’ing:

  • Click your “datacenter object” (in my case that is “Cork”)
  • Click the “Network virtualization” tab
  • Click “Preparation” –> “Connectivity”
  • Click “Edit”, tick your cluster(s) and click “Next”
  • I changed the teaming policy to “failover” as I have no port channels configured on my physical switches; depending on your infrastructure, make the changes required and click “Finish”

An agent will now be installed on the hosts in your cluster. This is a “vib” package that handles VXLAN traffic, and a new vmknic is created. This vmknic is created with DHCP enabled; if needed in your environment you can change this to a static address. Let’s continue with finalizing the preparation.

  • Click “Segment ID”
  • Enter a pool of segment IDs. Note that if you have multiple vShield Managers this pool will need to be unique, as a segment ID will be assigned to each virtual wire and you don’t want virtual wires with the same ID. I used “5000 – 5900”
  • Fill out the “Multicast address range”; I used 225.1.1.1 – 225.1.4.254
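For context: vShield Manager hands each virtual wire a segment ID and a multicast address from these pools. The exact assignment logic is internal to vSM; the sketch below just illustrates the idea using the ranges from my lab, with a hypothetical round-robin mapping:

```python
import ipaddress

def expand_range(start, end):
    """List all IPv4 addresses from start to end inclusive."""
    a = int(ipaddress.IPv4Address(start))
    b = int(ipaddress.IPv4Address(end))
    return [str(ipaddress.IPv4Address(i)) for i in range(a, b + 1)]

segment_ids = list(range(5000, 5901))                  # the pool "5000 - 5900"
mcast_pool = expand_range("225.1.1.1", "225.1.4.254")  # 1022 addresses

# Hypothetical round-robin assignment as virtual wires are created
# (NOT VMware's actual algorithm, just an illustration):
def multicast_for(segment_id):
    return mcast_pool[(segment_id - 5000) % len(mcast_pool)]

print(multicast_for(5000))  # 225.1.1.1
```

The main takeaway: size the multicast range in proportion to the number of virtual wires you expect, and keep both pools unique per vShield Manager.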

Now that we have prepped the hosts we can begin creating a virtual wire. First we will create a network scope; the scope is the boundary of your virtual network. If you have 5 clusters and want them to have access to the same virtual wires, you will need to make them part of the same network scope.

  • Click “network scopes”
  • Click the “green plus” symbol to “add a network scope”
  • Give the scope a name and select the clusters you want to add to this network scope
  • Click “OK”

Now that we have defined our virtual network boundaries, aka the “network scope”, we can create a virtual wire. The virtual wire is what it is all about: a “layer 2” segment.

  • Click “networks”
  • Click the “green plus” symbol to “create a VXLAN network”
  • Give it a name
  • Select the “network scope”

In the example below you see two virtual wires…

Now you have created a new virtual wire aka VXLAN network. You can add virtual machines to it by simply selecting the network in the NIC config section. The question of course remains: how do you get in / out of the network? You will need a vShield Edge device. So let’s add one…

  • Click “Edges”
  • Click the “green plus” symbol to “add an Edge”
  • Give it a name
  • I would suggest, if you have this functionality, ticking the “HA” tickbox so that the Edge is deployed in an “active/passive” fashion
  • Provide credentials for the Edge device
  • Select the uplink interface for this Edge
  • Specify the default gateway
  • Add the HA options; I would leave these set to the defaults
  • And finish the config

Now, if you have a virtual wire and it needs to be connected to an Edge (more than likely), make sure to connect the virtual wire to the Edge by going back to “Networks”. Select the wire, then the “actions dial”, click “Connect to Edge” and select the correct Edge device.

Now that you have a couple of wires you can start provisioning VMs or migrating VMs to them. Simply add them to the right network during the provisioning process.

Limit the amount of eggs in a single basket through vSphere 5.1 DRS

A while back I had a discussion with someone who asked me if it was possible to limit the amount of eggs in a single basket, in other words limit the number of VMs per host. The reason this customer wanted to do this was to limit the impact of a host failure. They had roughly 1500 VMs in their cluster; some hosts carried 50 VMs while others had 20 or 80. This is the nature of DRS though, and totally expected.

If one of these hosts were to fail, and let’s say it carried 80 VMs, the impact would be substantial. To minimize the risk they wanted to limit the number of VMs per host. I had thought about this before and had already asked the HA and DRS team if they could do anything around this. The DRS team started looking into it and to my surprise they managed to get it in quickly.

In the VMworld 2012 session “VSP2825: DRS: Advanced Concepts, Best Practices and Future Directions” by Ajay Gulati and Aashish Parikh a solution is presented. (You can watch this session for free on YouTube, highly recommended!) The solution is a new vSphere DRS advanced setting introduced in vSphere 5.1:

 LimitVMsPerESXHost

Note that when you configure this setting it might impact the performance of your virtual machines, as it could limit the load balancing mechanism of your cluster. If you have no requirement to limit the number of VMs per ESXi host, don’t do it. When this setting is configured, vSphere DRS will not allow migrations to a host that has reached the threshold, and will also not admit new VMs to a host that has reached the threshold.
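The behavior described above boils down to a simple admission check. This is not DRS source code, just a sketch of the logic, with hypothetical host names and VM counts:

```python
# Sketch of the admission check LimitVMsPerESXHost implies (not DRS source code).
def can_place_vm(current_vm_count, limit_vms_per_host):
    """DRS may only migrate or admit a VM to a host below the limit."""
    return current_vm_count < limit_vms_per_host

hosts = {"esx01": 48, "esx02": 50, "esx03": 21}  # hypothetical VM counts
limit = 50

eligible = [name for name, count in hosts.items() if can_place_vm(count, limit)]
print(eligible)  # esx02 has reached the threshold, so it is excluded
```

This also illustrates the side effect mentioned above: with esx02 excluded, DRS has fewer placement options, which is exactly why you should leave the setting alone unless you really need it.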

Cool Tool update: RVTools 3.4

It has been a while since I blogged about RVTools, but I just received an email from Rob saying that there is an update out, so I figured it was about time. RVTools is in my opinion THE best free and independent tool out there for a vSphere environment. This is a must-have tool for every virtualization admin / consultant!

I have used it many times in the past, and I can tell you that it helped me dig up some nasty inconsistencies and misconfigured VMs in environments. I am surprised that none of the monitoring/reporting vendors have approached Rob to sponsor the tool… especially considering RVTools has been downloaded over 150,000 times so far.

What’s new for RVTools 3.4?

  • Overall performance improvements and better end user experience
  • VI SDK reference changed from 4.0 to 5.0
  • Added reference to Log4net (Apache Logging Framework) for debugging purpose
  • Fixed a SSO problem
  • CSV export trailing separator removed to fix PowerShell read problem
  • On vDisk tabpage new fields: Eagerly Scrub and Write Through
  • On vHost tabpage new field: vRAM = total amount of virtual RAM allocated to all running VMs
  • On vHost tabpage new fields: Used memory by VMs, Swapped memory by VMs and Ballooned memory by VMs
  • Bugfix: snapshot size was displayed as zero when smaller than 1 MB
  • Added a new preferences screen. Here you can disable / enable some performance killers. By default they are disabled

Go and download it and give it a try, I am certain it will discover things you did not know about…

Call for speakers for Lightning and NotSupported talks at VMworld Barcelona

At VMworld San Francisco the vBrownBag crew and Randy Keener held a series of excellent talks at the community lounge. Randy was responsible for the “NotSupported” talks and the vBrownBag crew ran the “lightning” talks. Both types of sessions were typically around 10-15 minutes tops, and technical…

The vBrownBag crew is organizing these talks again for Barcelona and they are looking for people to present. Did you submit a session for VMworld but get rejected? Have you always wanted to do a lightning talk? Got something cool but totally unsupported that you want to share?

S I G N – U P – T O D A Y !

I will be there for sure; this is Europe… let’s show them how it is done. 10 minutes, who can’t spare 10 minutes… Go for it, I say!