
Yellow Bricks

by Duncan Epping



Scale UP!

Duncan Epping · Mar 17, 2010 ·

Lately I have been having a lot of discussions with customers around the sizing of their hosts. Especially Cisco UCS (with the 384GB option) and the upcoming Intel Xeon 5600 series with six cores per CPU take the “Scale Up” discussion to a new level.

I guess we had this discussion in the past as well, when 32GB became a commodity. The question I always ask is: how many eggs do you want in one basket? Basically, do you want to scale up (larger hosts) or scale out (more hosts)?

It’s a common discussion, and a lot of people don’t see the impact of sizing their hosts correctly. Think about this environment: 250 VMs in total, needing roughly 480GB of memory:

  • 10 Hosts, each having 48GB and 8 Cores, 25 VMs each.
  • 5 Hosts, each having 96GB and 16 Cores, 50 VMs each.

If you look at it from an uptime perspective: should a failure occur in scenario 1, you lose 10% of your environment; in scenario 2 that is 20%. Clearly the cost associated with down time for 20% of your estate is higher than for 10% of your estate.
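To make that concrete, here is a quick sketch using the numbers from the example above (VMs and memory assumed to be spread evenly across the hosts):

```python
# Back-of-the-envelope comparison of the two scenarios above.
# Assumes VMs and memory are spread evenly across the hosts.

def failure_impact(hosts, total_vms=250, total_mem_gb=480):
    vms_per_host = total_vms / hosts
    mem_per_host = total_mem_gb / hosts
    pct_of_estate = 100 / hosts          # share of the environment lost with one host
    return vms_per_host, mem_per_host, pct_of_estate

for hosts in (10, 5):
    vms, mem, pct = failure_impact(hosts)
    print(f"{hosts} hosts: {vms:.0f} VMs and {mem:.0f}GB per host, "
          f"one host failure impacts {pct:.0f}% of the estate")

# 10 hosts: 25 VMs and 48GB per host, one host failure impacts 10% of the estate
# 5 hosts:  50 VMs and 96GB per host, one host failure impacts 20% of the estate
```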

Now it’s not only the cost associated with the impact of a host failure; it is also, for instance, the ability of DRS to load balance the environment. The fewer hosts you have, the smaller the chance that DRS will be able to balance the load. Keep in mind that DRS uses a deviation to calculate the imbalance and simulates a move to see if it results in a balanced cluster.
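As a minimal sketch of that deviation-and-simulated-move idea (this is not the actual DRS algorithm, just the general mechanism): compute the deviation of the per-host load, then check whether a single migration would lower it.

```python
from statistics import pstdev

def imbalance(host_loads):
    # Deviation of the per-host load; a balanced cluster keeps this low
    return pstdev(host_loads)

def simulate_move(host_loads, vm_load, src, dst):
    # What would the imbalance be if one VM (vm_load) moved from src to dst?
    candidate = list(host_loads)
    candidate[src] -= vm_load
    candidate[dst] += vm_load
    return imbalance(candidate)

cluster = [70, 50, 50, 50, 50]            # one busy host, load in arbitrary units
print(imbalance(cluster))                 # current imbalance: 8.0
print(simulate_move(cluster, 10, 0, 1))   # after moving a 10-unit VM: ~4.9
```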

Another thing to keep in mind is HA. When you design for N+1 redundancy and need to buy an extra host, the cost associated with that redundancy is high in a scale up scenario. And it is not only the cost that is high; the load on the remaining hosts when a fail-over occurs will also increase immensely. If you only have 4 hosts and 1 host fails, the added load on the remaining 3 hosts will have a far higher impact than it would have on, for instance, 9 remaining hosts in a scale out scenario.
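A quick back-of-the-envelope calculation of that fail-over impact, assuming the load is spread evenly and the failed host’s load is absorbed evenly by the surviving hosts:

```python
# How much extra load does each surviving host pick up when one host fails,
# assuming the failed host's load is spread evenly over the survivors?

def failover_load_increase(hosts):
    survivors = hosts - 1
    return 100 / survivors   # percentage increase per surviving host

for hosts in (4, 10):
    print(f"{hosts} hosts: each survivor takes on ~{failover_load_increase(hosts):.0f}% more load")

# 4 hosts:  each survivor takes on ~33% more load
# 10 hosts: each survivor takes on ~11% more load
```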

Licensing is another argument often used for buying larger hosts, but for VMware licensing it usually will not make a difference. I’m not a “capacity management” or “capacity planning” guru to be honest, but I can recommend VMware Capacity Planner as it helps you easily create several scenarios. (Or Platespin Recon for that matter.) If you have never tried it and are a VMware partner, check it out, run the scenarios based on scale up and scale out principles, and do the math.

Now, don’t get me wrong, I am not saying you should not buy hosts with 96GB, but think before you make this decision. Decide what an acceptable risk is and discuss the impact of that risk with your customer(s). As you can imagine, for any company there’s a cost associated with down time. Down time for 20% of your estate will have a different financial impact than down time for 10% of your estate, and this needs to be weighed against all the pros and cons of scale out vs scale up.

Nehalem and memory config

Duncan Epping · Jan 1, 2010 ·

Just a short article for today, or should I call it a tip: take your memory configuration into account for Nehalem processors. There’s a sweet spot in terms of performance which might just make a difference. Read this article on Scott’s blog or this article on Anandtech, where they measured the difference in performance. Again, it is not a huge difference, but when combining workloads it might just be that little extra you were looking for.
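As a rough illustration of what that sweet spot usually comes down to (my own summary, not taken from the linked articles): Nehalem has three memory channels per socket, and the effective DIMM speed typically steps down as you add DIMMs per channel, so a balanced population across the channels tends to perform best. A toy check, with the commonly quoted 1333/1066/800MHz steps as an assumption:

```python
# Toy check of a Nehalem (Xeon 5500) memory layout: three channels per socket,
# with the commonly quoted DDR3 speed steps of 1333/1066/800MHz for 1/2/3 DIMMs
# per channel. Actual speeds depend on the DIMMs and the server model.
SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}

def check_layout(dimms_per_channel):
    balanced = len(set(dimms_per_channel)) == 1
    speed = SPEED_BY_DIMMS_PER_CHANNEL[max(dimms_per_channel)]
    return balanced, speed

for layout in ([2, 2, 2], [3, 3, 2]):      # DIMMs per channel for one socket
    balanced, speed = check_layout(layout)
    print(layout, "balanced" if balanced else "unbalanced", f"runs at ~{speed}MHz")
```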

Virtualized MMU and Transparent page sharing

Duncan Epping · Mar 6, 2009 ·

I’ve been doing Citrix XenApp performance tests over the last couple of days. Our goal was simple: as many user sessions on a single ESX host as possible, not taking the per-VM cost into account. After reading the Project VRC performance tests we decided to give both 1 vCPU VMs and 2 vCPU VMs a try. Because the customer was using brand new Dell hardware with AMD processors, we also wanted to test with “virtualized MMU” set to forced. For a 32-bit Windows OS this setting needs to be forced, otherwise it will not be utilized. (Alan Renouf was kind enough to write a couple of lines of PowerShell that enable this feature for a specific VM, a cluster, or just every single VM you have. Thanks Alan!)
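Alan’s original was a few lines of PowerCLI; purely as an illustration of the idea, here is a rough pyVmomi sketch that forces hardware MMU virtualization for one VM by setting the monitor.virtual_mmu advanced option. The option key and value, the VM name, and the vCenter details are assumptions, so verify them against your own environment before using anything like this.

```python
# Rough pyVmomi sketch: force hardware MMU virtualization for one VM by setting
# the advanced option 'monitor.virtual_mmu' (key/value assumed, verify for your release).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (hypothetical VM name)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "XenApp01")

spec = vim.vm.ConfigSpec(
    extraConfig=[vim.option.OptionValue(key="monitor.virtual_mmu", value="hardware")]
)
vm.ReconfigVM_Task(spec=spec)   # the VM needs a power cycle for the change to take effect
Disconnect(si)
```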

We wanted to make sure that the user experience wasn’t degraded and that ESX would still be able to schedule tasks within a reasonable %RDY time, < 20% per VM. Combine 1 vCPU and 2 vCPU, with and without virtualized MMU, and you’ve got 4 test situations. Like I said, our goal was to get as many user sessions on a box as possible. Now, we didn’t conduct a real objective, well-prepared performance test, so I’m not going to elaborate on the results in depth; in this situation 1 vCPU with virtualized MMU and scaling out the number of VMs resulted in the most user sessions per ESX host.

Old School: Label your Hardware

Duncan Epping · Jan 5, 2009 ·

So you were used to labelling your hardware with the name of the system running on it. But when running everything virtual, you can label your ESX hosts, yet you never know which VM resides on which server without checking your console and/or vCenter.

Wouldn’t it be cool if you had a magic label that updated itself every once in a while? That way you would be able to see at just a glance which VM runs on which host. As you know, there’s no such thing as a magic “label”, or maybe there is…

Yesterday I received an email from Nick Weaver (@lynxbat) about a very, very cool script he wrote. No, this script isn’t going to update your printed label of course. This script displays the VMs running on your host on the front panel LCD. Most servers these days have front panel LCDs, and they can be updated with a couple of simple ipmi commands.

Nick wrote an extensive article on how to create a self-updating magic label 🙂 In short (a rough sketch of the idea follows the list of steps):

  1. Install Dell OpenManage and run it on the ESX host (needed for the ipmi drivers)
  2. Install ipmitool 1.8.10 (SCP it over, ./configure, make, make install…)
  3. Run the lcd_update.sh script
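For illustration only, here is my own rough sketch of what such a script boils down to, written as a modern Python sketch rather than the shell Nick used, and not based on his lcd_update.sh: list the VMs registered on the host (vmware-cmd -l on the classic ESX service console) and push their names to the front panel LCD via ipmitool. The Dell OEM LCD subcommand and the text-length limit are assumptions; older ipmitool builds may need raw IPMI commands instead.

```python
# Sketch of the lcd_update.sh idea (not Nick's actual script).
# Assumes 'vmware-cmd' (classic ESX service console) and an ipmitool build
# that understands the Dell OEM LCD subcommand -- both are assumptions here.
import subprocess

def registered_vms():
    # 'vmware-cmd -l' prints the .vmx path of every registered VM, one per line
    out = subprocess.run(["vmware-cmd", "-l"], capture_output=True, text=True).stdout
    return [line.rsplit("/", 1)[-1].removesuffix(".vmx")
            for line in out.splitlines() if line.strip()]

def update_lcd(text):
    # Assumed Dell OEM command; the LCD only shows a limited number of
    # characters, so the string is truncated (the 62 character cut-off is a guess)
    subprocess.run(["ipmitool", "delloem", "lcd", "set", "mode", "userdefined", text[:62]])

update_lcd(" ".join(registered_vms()))
```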

Now walk over to your Dell server and check the result in the display; isn’t that amazing? It’s probably one of the most inventive scripts I’ve seen in the last few months: it’s simple and it gets the job done. Great job Nick, I’m really curious what you will come up with next.

If I can find a Dell machine this week I will definitely test it and post a screenshot!
Update: I just received a link to a YouTube video that shows it actually working!

Online compatibility guide

Duncan Epping · Dec 11, 2008 ·

A couple of hours ago, VMware’s John Troyer revealed on Twitter a “searchable hardware compatibility guide” for VMware ESX and VMware View:

This online Hardware Compatibility Guide web application was released on December 10, 2008. To learn more about benefits and usage of this tool, please see “Help on Searching”.

This online Hardware Compatibility Guide replaces the former Hardware Compatibility Guides for systems, I/O devices, and SAN arrays for ESX 3.0 and greater versions, as well as VMware View Client. For compatibility guides of other VMware products or earlier ESX releases and VMware View Client, please use the “Other Documents” tab.

Check it out.

I would love to see some additions, for instance the “supported path policy” for storage (MRU/Fixed). (The MRU/Fixed info is actually there, but you have to click on the model and then on details before you see it.) Supported versions of software agents for the Service Console would also be a real welcome addition! It’s a big step forward again, and according to John they are still working on it and it will evolve over the next months.
