How cool is this video about Distributed Power Management? It demonstrates how DPM works and what the possible utility savings could be. Just watch it:
John Troyer says
Digg it http://digg.com/software/VMware_Demo_of_Distributed_Power_Management
Eric Siebert says
My understanding is that it is still considered experimental as of ESX 3.5 U2 and not supported for production use. It will be nice when VMware fully supports this. I think they mentioned at VMworld that it will be fully supported in ESX 4.
Jason Boche says
Artist and song title please?
Duncan says
Don’t know when it will be supported, Eric, but it’s indeed still experimental.
Erik Bussink says
I’m using DPM in two different configs at client sites.
The first is 12x IBM HS21 XM blades in two different BladeCenter H chassis. In this config, 3 ESX hosts per chassis are always on, and the rest are set to automated DPM. At the second client I use HP BL680 blades, and I keep 2 out of 6 always on.
My minimum number of always-on ESX servers would not be 1 but 2, so that if one of the two running ESX hosts crashes, the other can still send the appropriate Wake-on-LAN magic packet to the dormant ESX hosts.
Duncan says
Could be me, but isn’t it VC that’s actually waking them up? I would say you need one extra to provide HA in case of an isolation event, etc.
Erik Bussink says
Hiya Duncan,
This topic of the ‘Magic Packet’ caught my attention at the start of the year, when I was implementing 12 ESX servers in two IBM BladeCenter H chassis.
I read up on the information (not very clear, mind you), but I think there were a few slides on it in the VMware VTSP Infrastructure coursework.
In the meantime, I have a lab with 2x ESXi 3.5 U3 (on Shuttle SX38P2 boxes), and when I attempt to put the second ESX server into standby, I get the following error message:
“A host in standby mode is for all intents and purposes, powered off. Users may have difficulty powering the host on again through VirtualCenter. The host’s network card must have Wake-On LAN support, and its subnet must include other powered-on ESX hosts that are also managed by VirtualCenter. …”
When I then attempt to force the second ESX server into standby, the following message pops up:
“Failed to find a peer host to wake up this host.”
If I remember correctly, “Magic Packets” are not true TCP packets (hey, the NIC doesn’t have an IP address anymore, since it’s down); they are broadcast packets, and if your VirtualCenter is on the other side of a router, the “Magic Packet” would not reach the ESX servers. That is why you need at least 1 ESX server always on.
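For reference, a magic packet is nothing more than 6 bytes of 0xFF followed by the target NIC’s MAC address repeated 16 times, usually carried in a UDP broadcast. Here is a minimal Python sketch that builds and sends one; the MAC address, broadcast address, and port below are placeholder assumptions, not values from my setup:

import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Build a Wake-on-LAN magic packet (6 bytes of 0xFF followed by
    the target MAC repeated 16 times) and send it as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be exactly 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16

    # The packet is a local broadcast: routers will not forward it,
    # which is exactly why the sender must sit on the same subnet.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC of a dormant host's WoL-capable NIC
send_magic_packet("00:1a:64:aa:bb:cc")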
In my deployment, we always keep a minimum of 2 ESX servers set to Manual DPM per cluster (we decided that a VMware ESX cluster would not cross a router).
Hope this helps a bit. You can add the screenshots I sent you by email to your post…
Regards,
Erik Bussink
Jason Boche says
A word to the wise: before VMworld 2008 I also heard rumors of the magic packets from VirtualCenter not making their way to the correct interface in certain blade chassis and configurations, due to the inherent networking complexities in blade infrastructure (mezzanines, passthrus, Virtual Connect, etc.). The issue was that VirtualCenter thinks it knows the proper MAC address to send the magic packet to (and in most cases it would be right), but the magic packet does not end up at the correct MAC address in order to power on the ESX host. I brought it up in one of the discussions after the DPM presentation at VMworld. The presenters hadn’t heard of it but asked for further information.
Another topic I brought up relates to alerting and monitoring. When DPM shuts down a host, all the red alerts still fire. We need to be able to treat a DPM shutdown as a planned event and suppress alerts that would send the uninformed into a panic, or build up an immunity to VI alerts to the point that valid alerts are ignored going forward.
Bottom line: this technology is still experimental, so for sensitive environments, treat it as such.
Andrew Storrs says
Jason, “Artist and song title please?”
Franz Ferdinand’s “The Fallen”, remixed by JUSTICE
http://www.youtube.com/watch?v=Fw0Lt8zn_UQ
You’ve gotta love Shazam (http://www.shazam.com/iphone) on the iPhone. 🙂
Steve Philp says
Wondering what others are doing for network monitoring when DPM is in effect.
We’re using WhatsUp 12, and when DPM shuts down an unneeded server, we see the expected “down” states on the server and FC switch ports.
We’d like the monitoring software to recognize this as an OK situation and not send alerts.
Has anyone worked out a solution? Does different software give better results?
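One hedged approach, sketched below in Python: before the monitoring system escalates a “host down” alert, ask VirtualCenter whether the host is in standby, and treat the outage as planned if so. This is only an illustration, assuming the pyVmomi library and placeholder hostnames and credentials; WhatsUp itself would need its own alert-hook mechanism to call something like it:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def host_is_in_standby(vc: str, user: str, pwd: str, esx_name: str) -> bool:
    """Return True if VirtualCenter reports the host in standby,
    i.e. the 'down' state is a planned DPM action, not a failure."""
    si = SmartConnect(host=vc, user=user, pwd=pwd)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            if host.name == esx_name:
                # powerState is one of: poweredOn, poweredOff, standBy, unknown
                return host.runtime.powerState == "standBy"
        return False
    finally:
        Disconnect(si)

# Placeholder usage: suppress the page when this returns True
if host_is_in_standby("vcenter.example.com", "monitor", "secret", "esx02.example.com"):
    print("Host is in DPM standby -- planned event, no alert needed")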