
Yellow Bricks

by Duncan Epping



Using F5 to balance load between your vCloud Director cells

Duncan Epping · Feb 16, 2012 ·

** I want to thank Christian Elsen and Clair Roberts for providing me with the content for this article **

A while back Clair contacted me and asked whether I was interested in getting the info to write an article about how to set up F5's Big IP LTM VE in front of a couple of vCloud Director cells. As you know, I used to be part of the VMware Cloud Practice and was responsible for architecting vCloud environments in Europe. Although I did design an environment where F5 was used, I was never actually part of the team that implemented it, as it is usually the network/security team that takes on this part. Clair was responsible for setting this up for the VMworld Labs environment and couldn't find many details about it on the internet, hence this article.

This post will therefore outline how to set up the scenario below: distributing user requests across multiple vCloud Director cells.

figure 1:

For this article we will assume that the basic setup of the F5 Big IP load balancers has already been completed. Besides the management and HA interfaces, one interface will reside on the external – end-user facing – part of the infrastructure and another interface on the internal – vCloud Director facing – part of the infrastructure.

Configuring an F5 Big IP load balancer to front a web application usually requires a common set of configuration steps:

  • Creating a health monitor
  • Creating a member pool to distribute requests among
  • Creating the virtual server accessible by end-users

Let's get started with configuring the health monitor. A monitor is used to "monitor" the health of the service. Go to the Local Traffic page, then go to Monitors and add a monitor for vCD_https. This is unique to vCD: we recommend using the following string "http://<cell.hostname>/cloud/server_status" (figure 3). Everything else can be left at its default. A minimal sketch of what this monitor effectively checks is included after the figures below.

figure 2:

figure 3:

figure 4:
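For reference, here is a minimal Python sketch of what this health monitor effectively does: an HTTP(S) GET against /cloud/server_status on each cell, treating an HTTP 200 response as healthy. This is not part of the F5 configuration itself, and the cell hostnames below are placeholders.

```python
# Minimal sketch of the health check the F5 monitor performs against each cell.
# The cell hostnames are placeholders; adjust them to your environment.
import requests

CELLS = [
    "vcd-cell01.example.local",
    "vcd-cell02.example.local",
    "vcd-cell03.example.local",
]

def cell_is_healthy(hostname, timeout=5):
    """Return True when the cell answers /cloud/server_status with HTTP 200."""
    url = "https://{0}/cloud/server_status".format(hostname)
    try:
        # verify=False mirrors a lab setup with self-signed certificates
        response = requests.get(url, timeout=timeout, verify=False)
        return response.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    for cell in CELLS:
        print("{0}: {1}".format(cell, "UP" if cell_is_healthy(cell) else "DOWN"))
```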

Next you will need to define the vCloud Director cells as nodes of a member pool. The F5 Big IP will then distribute the load across these nodes. For each node you type in the IP address and add the name and other details. We suggest using a minimum of three vCloud Director cells. Go to Nodes and check your node list; you should have three defined, as shown in figures 5 and 6. You can create these by simply clicking "Create" and defining the IP address and the name of the vCD cell.

figure 5:

figure 6:

figure 7:

Now that you have defined the cells, you will need to add them to a pool (Pools menu). If vCloud Director needs to respond to both HTTP and HTTPS (figures 8 and 9), you will need to configure two pools, each with the three cells added. Mostly default settings are used, but don't forget to attach the health monitor.

figure 8:

figure 9:

Now validate that the health monitor has been able to communicate successfully with the vCD cells; you should see a green dot. The green dot means the appliance can reach the cells and the health monitor is getting results for its query.

Next you will need to create a Virtual IP (VIP) for each service. In this case the two "virtual servers" (as the F5 appliance calls them, figure 10) will have the same IP address but different ports: HTTP and HTTPS. These can be created by simply clicking "Create" and defining the IP address that will be used to access the cells (figure 11). A quick connectivity check is sketched after the figures below.

figure 10:

figure 11:
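As a quick sanity check, which is not part of the original walkthrough, the short Python sketch below simply confirms that the VIP answers on both ports before you point vCloud Director at it. The VIP address is a placeholder.

```python
# Quick sanity check: confirm the virtual server answers on both ports.
# The VIP address below is a placeholder; use the VIP you created on the F5.
import socket

VIP = "192.0.2.10"

for port in (80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        result = s.connect_ex((VIP, port))
        print("{0}:{1} -> {2}".format(VIP, port, "open" if result == 0 else "closed/filtered"))
```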

Repeat the above steps for the console proxy IP address of your vCD setup.

Finally, you will need to specify these newly created VIPs in the vCD environment.

See Hany's post on how to do the vCloud Director part of it; it is fairly straightforward. (I'll give you a hint: Administration –> System Settings –> Public Addresses.)

 

Setting the default affinity rule for Storage DRS

Duncan Epping · Feb 7, 2012 ·

In the comments on yesterday's blog article, "Rob M" noted that the default affinity rule for Storage DRS (SDRS), keep VM files together, did not make sense to him. One of the reasons this affinity rule is the default is that customers indicated that, from an operational perspective, it is easier if all files of a given VM (the vmx and vmdks) reside in the same folder. Troubleshooting in particular was one of the main reasons, as this lowers complexity. I have to say that I fully agree with this; I have been in the situation where I needed to recover virtual machines, and having them spread across multiple datastores really complicates things.

But, just like Rob, you might not agree with this and would rather have SDRS balance on a file-by-file basis. That is possible, and we documented the procedure in our book. I was under the impression that I had blogged this, but just noticed that somehow I never did. Here is how you change the affinity rule for the currently provisioned VMs in a datastore cluster (a scripted equivalent is sketched after the list):

  1. Go to Datastores and Datastore Clusters
  2. Right click a datastore cluster and select “edit settings”
  3. Click “Virtual machine settings”
  4. Deselect “Keep VMDKs together”
    1. For virtual machines that need to stick together you can override the default by ticking the tick box next to the VM
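For those who prefer to script this, below is a rough pyVmomi sketch of the same change. It assumes the StorageDrsConfigSpec/StorageDrsPodConfigSpec objects with the defaultIntraVmAffinity property and the ConfigureStorageDrsForPod_Task method; the vCenter details and datastore cluster name are placeholders, so verify this against your own environment before using it.

```python
# Rough pyVmomi sketch: disable the "Keep VMDKs together" default on a
# datastore cluster. vCenter details and the cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Find the datastore cluster (StoragePod) by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "MyDatastoreCluster")
view.DestroyView()

# Build a spec that turns off the default intra-VM affinity ("Keep VMDKs together")
spec = vim.storageDrs.ConfigSpec()
spec.podConfigSpec = vim.storageDrs.PodConfigSpec()
spec.podConfigSpec.defaultIntraVmAffinity = False

# Apply the change to the datastore cluster
content.storageResourceManager.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)

Disconnect(si)
```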


Also check out this article by Frank about DRS/SDRS affinity rules, useful to know!

How cool and useful is Storage DRS?!

Duncan Epping · Feb 6, 2012 ·

I was just playing around in my lab and had created a whole bunch of VMs when I needed to deploy two large virtual machines. Both of them had 500GB disks. The first one deployed without a hassle, but the second one seemed impossible to deploy; not impossible for Storage DRS, though. Just imagine you had to figure this out yourself! Frank wrote a great article about the logic behind this and there is no reason for me to repeat it; just head over to Frank's blog if you want to know more.

And the actual migrations being spawned:

Yes, this is the true value of Storage DRS… initial placement recommendations!

Top VMware/virtualization blogs 2012 voting starts today

Duncan Epping · Jan 24, 2012 ·

Yes, it is that time of the year again… vSphere-land.com’s voting for the Top 25 Blogs worldwide has started again. I had the honor of placing 1st four consecutive times, but the competition is huge this year with excellent newcomers like Chris Colotti, scripting warriors like William Lam and Alan Renouf and of course my long time rival/friend Chad Sakac.

I am hoping each of you will select the top 10 blogs based on quality, longevity and frequency. (I personally find the length of an article irrelevant; content is king!) I did want to list my top 10 articles of the last 12 months:

  1. The vSphere 5.0 – HA Deepdive
  2. Using vSphere Auto-Deploy in your home lab
  3. Multiple-NIC vMotion in vSphere 5…
  4. esxtop
  5. vSphere 5.0: Storage vMotion and the Mirror Driver
  6. vSphere 5.0: What has changed for VMFS?
  7. HA Architecture Series (1 – 5)
  8. “Hacking” Site Recovery Manager (SRM) / a Storage Array Adapter
  9. ESXi 5.0 and Scripted Installs
  10. vSphere 5.0 vMotion Enhancements

The voting is very straightforward and will only take 2 minutes of your time. All you have to do is select your top 10 favourite VMware-related virtualization blog sites and then sort them in your order of preference (i.e. 1–10); it's as easy as that! Don't wait any longer, cast your vote now!

New session added for PEX

Duncan Epping · Jan 24, 2012 ·

A couple of weeks back I posted my session details for PEX. I just had a session added to my schedule which I wanted to inform you about. This session was originally hosted by none other than Mike DiPetrillo; Chris Colotti and I have been asked to take over the session.

Session 1262 (Wednesday 2/12 @ 12:30pm): DR of the Cloud and to the Cloud

This session will look at DR and the cloud. Two different DR scenarios will be presented in depth – DR of the cloud and DR to the cloud. DR to the cloud is how end consumers fail over resources to a cloud provider. DR of the cloud is how you fail over cloud resources from one site to another. This session will go in depth on the consumer and provider side of the architecture. We’ll look at how to replicate the data, what applications are primary targets, how to size environments, how to maintain multi-tenancy, and what to avoid when architecting these solutions. This session is a must for anyone considering tier 1 applications for the cloud.

Presenters: Chris Colotti and Duncan Epping

Don’t forget to add it to your schedule, it is going to be a really cool session!

