
Yellow Bricks

by Duncan Epping



Home Lab expansion…

Duncan Epping · May 3, 2011 ·

I posted an overview of my home lab a while back, and it has changed a bit over the course of the last couple of months, so I wanted to update the article. Let me disclose first that Drobo was kind enough to provide me with a test unit. Thanks Drobo!

My workstation runs Windows 7 with VMware Workstation on top of it. The most important change is the addition of an SSD drive. I ran two nice Seagate Cheetah 15k SAS drives in RAID-0 for a while, but started to get annoyed by the ticking sound these drives produce. It's not a defect, it's part of the mechanism, but it is very annoying background noise.

  • Asustek P6T WS Pro
  • Intel Core i7-920
  • Kingston SSDNow 256GB (new)
  • 6 x 2GB Kingston 1333MHz

Another substantial change is the lab storage. I used to run on two Iomega IX4s. Although these are very cool devices, they are unfortunately "limited" to four drives, and I was looking for some more capabilities to extend some of the tests I am conducting. I just received a brand new Drobo B800i with 6 x 7.2k SATA drives, which means I have two slots left that I might just fill up with SSDs for the sake of it.

  • Drobo B800i (new)
  • 6 x Western Digital 7.2k drives

If I could give one tip to the Drobo folks, it would be to make the dashboard available over HTTP/HTTPS rather than through a separate utility. Hopefully I can do some performance testing next week or the week after, when I have some more time on my hands.

Which metric to use for monitoring memory?

Duncan Epping · Apr 29, 2011 ·

** PLEASE NOTE: This article was written in 2011 and discusses how to monitor memory usage, which is different from memory/capacity sizing. For more info on “active memory” read this article by Mark A. **

This question has come up several times over the last couple of weeks, so I figured it was time to dedicate an article to it. People have always been used to monitoring memory usage in a specific way, mainly by looking at the “consumed memory” stats. This always worked fine until ESX(i) 3.5 introduced the aggressive use of large pages. In the 3.5 timeframe this only applied to AMD processors that supported RVI; with vSphere 4.0, support for Intel’s EPT was added. Every architectural change has an impact. The impact here is that TPS (transparent page sharing) does not collapse these so-called large pages. (Discussed in-depth here.) This unfortunately resulted in many people feeling that there was no real benefit to these large pages, or even worse, the perception that large pages are the root of all evil.

After having several discussions with customers, fellow consultants and engineers, we managed to figure out why this perception was floating around. The answer was actually fairly simple: metrics. When monitoring memory, most people look at the following section of the host’s Summary tab:

However, in the case of large pages this metric isn’t actually that relevant. I guess that doesn’t only apply to large pages but to memory monitoring in general, although as explained it used to be an indication. The metric to monitor is “active memory”. Active memory is what the VMkernel believes is currently being actively used by the VM. This is an estimate calculated by a form of statistical sampling, and this statistical sampling will most definitely come in handy when doing capacity planning. Active memory is, in our opinion, what should be used to analyze trends. Kit Colbert has also hammered on this during his memory virtualization sessions at VMworld. I guess the following screenshot is an excellent example of the difference between “consumed” and “active”. Do we need to be worried about “consumed”? Well, I don’t think so; monitoring “active” is probably more relevant at this point! However, it should be noted that “active” represents a 5-minute time slot. It could easily be that the first 5-minute value observed is the same as the second, yet they are different blocks of memory that were touched. So it is an indication of how active the VM is. Nothing more than that.
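If you want to compare the two metrics in your own environment, here is a minimal PowerCLI sketch (the vCenter name and VM name are placeholders) that pulls the “consumed” and “active” performance counters for a single VM and averages the most recent real-time samples:

  # Minimal PowerCLI sketch: compare the "consumed" and "active" memory counters
  # for one VM. Server and VM names are placeholders for your own environment.
  Connect-VIServer -Server vcenter.lab.local

  $vm = Get-VM -Name "TestVM01"

  # mem.consumed.average and mem.active.average are the performance counters
  # behind the "Consumed" and "Active" metrics; values are reported in KB.
  Get-Stat -Entity $vm -Stat "mem.consumed.average","mem.active.average" -Realtime |
      Group-Object -Property MetricId |
      Select-Object Name,
          @{Name="AvgMB"; Expression={ [math]::Round((($_.Group | Measure-Object -Property Value -Average).Average) / 1024, 0) }}

In most environments you will see “active” come out well below “consumed”, which is exactly the gap described above.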

What have you been up to – part 2

Duncan Epping · Apr 28, 2011 ·

As I have been posting more regularly on the ESXi Chronicles blog, I figured it made sense to make people aware of the series of articles I produced, like I did last time. These are the articles I recently published; check them out, as I feel they are worth reading. Also note that many of the “Ops Changes” articles will be rolled up into an official whitepaper that will be published in the Tech Resources section of the VMware website.

  • VMTN Podcast about Transitioning to ESXi
  • Scratch partition best practices for USB/SD booted ESXi?
  • Need to install 100s of ESXi hosts?
  • Is your environment secure?
  • The missing link for scripted installs, adding your ESXi host to vCenter
  • Scripted install with ESXi
  • Cool PowerCLI script for backing up the ESXi System Image
  • Ops changes part 8 – Logging in, Auditing and Log files
  • Ops changes part 7 – Upgrading Firmware
  • Ops changes part 6 – Quick troubleshooting tips
  • Ops changes part 5 – Scratch partition
  • Ops changes part 4 – Injecting or installing drivers
  • Ops changes part 3 – Local disk vs USB vs BFS
  • Ops changes part 2 – Scripted installation
  • Ops changes part 1 – Introduction

I hope my efforts with regards to smoothing the transition to ESXi are helpful so far. If there are any specific areas which you feel need to be covered, feel free to leave a comment and I will try to cover them asap.

Fling: PXE Manager for vCenter

Duncan Epping · Apr 22, 2011 ·

It is finally released… PXE Manager for vCenter. My former Cloud colleague Max Daneri of VMTS fame has worked very, very hard on this and actually demoed it at VMworld in 2009. I know Max is already working on the next release, which of course will work with the upcoming vSphere version as well. So if you’ve tested it and have feedback, don’t forget to leave a comment on labs.vmware.com.

PXE Manager for vCenter enables ESXi host state (firmware) management and provisioning. Specifically, it allows:

  • Automated provisioning of new ESXi hosts stateless and stateful (no ESX)
  • ESXi host state (firmware) backup, restore, and archiving with retention
  • ESXi builds repository management (stateless and stateful)
  • ESXi Patch management
  • Multi vCenter support
  • Multi network support with agents (Linux CentOS virtual appliance will be available later)
  • Wake on Lan
  • Hosts memtest
  • vCenter plugin
  • Deploy directly to VMware Cloud Director
  • Deploy to Cisco UCS blades

Distributed vSwitches, go Hybrid or go Distributed?

Duncan Epping · Apr 21, 2011 ·

Yesterday I was answering some questions in the VMTN Forums when I noticed that someone referred to my article about Hybrid vs full Distributed vSwitch architectures. This article is almost two years old and definitely in desperate need of a revision. Back in 2009, when Distributed vSwitches were just introduced, my conclusion in this discussion was:

If vCenter fails there’s no way to manage your vDS. For me personally this is the main reason why I would most likely not recommend running your Service Console/VMkernel portgroups on a dvSwitch. In other words: Hybrid is the way to go…

As with many things, my conclusion/opinion was based on my experience with the Distributed vSwitch, and I guess you could say it was based on how comfortable I was with the Distributed vSwitch and the hardware that we used at that point in time. Since then much has changed, and as such it is time to revise my conclusion.

In many environments today converged networks are a reality, which basically means fewer physical NICs but more bandwidth. Fewer physical NICs results in fewer options with regards to the way you architect your environment. Do you really need your management network on a vSwitch? What is the impact of not having it on a vSwitch? I guess it all comes down to what kind of risks you are willing to take, but also how much risk is actually involved. I started rethinking this strategy and came to the conclusion that the amount of risk you are taking isn’t as big as we all once thought it was.

What is the actual issue when running vCenter virtually, connected to a Distributed Switch? I can hear many of you repeat the quote from above, “there’s no way to manage your vDS”, but think about it for a second… Do you really need to manage your vDS in a scenario where vCenter is down? And if so, wouldn’t you normally want to get your management platform up and running first before you start making changes? I know I would. But what if you really, really need to make changes to your management network and vCenter isn’t available? (That would be a major corner case by itself, but anyway…) Couldn’t you just remove one NIC port from your dvSwitch and temporarily create a vSwitch with your Management Network? Yes you can! So what is the impact of that?
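To make that fallback a bit more concrete, here is a rough PowerCLI sketch of what it could look like when you connect straight to the ESXi host, since vCenter is unavailable. The host name, NIC, VLAN ID and IP details are all placeholders, and the physical NIC would first need to be freed from the dvSwitch (for instance through the host console):

  # Rough sketch, connecting directly to the ESXi host because vCenter is down.
  # All names, the VLAN ID, and the IP details below are placeholders.
  Connect-VIServer -Server esxi01.lab.local -User root

  $esx = Get-VMHost -Name esxi01.lab.local

  # Create a temporary standard vSwitch on the physical NIC that was freed
  # from the dvSwitch, plus a portgroup for the management network.
  $vswitch = New-VirtualSwitch -VMHost $esx -Name "vSwitchTemp" -Nic "vmnic1"
  New-VirtualPortGroup -VirtualSwitch $vswitch -Name "Mgmt Temp" -VLanId 10

  # Add a VMkernel interface on that portgroup and enable management traffic.
  New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vswitch -PortGroup "Mgmt Temp" `
      -IP 192.168.10.11 -SubnetMask 255.255.255.0 -ManagementTrafficEnabled:$true

Once vCenter is back up you can migrate the management interface back to the dvSwitch and remove the temporary vSwitch again.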

I guess it all comes down to what you are comfortable with and a proper operational procedure! But why? Why not just stick to Hybrid? I guess you could, but then again, why not benefit from what dvSwitches have to offer? Especially in a converged network environment, being able to use dvSwitches will make your life a bit easier from an operational perspective. On top of that you will have the great dvSwitch-only Load Based Teaming at your disposal: load balancing without the need to resort to IP-Hash. I guess my conclusion is: Go Distributed… There is no need to be afraid if you understand the impact and risks and mitigate these with solid operational procedures.
