My Homelab

This week's VMTN podcast is about home labs. John Troyer asked on Twitter who had a home lab and whether they had already posted an article about it. Most bloggers already had, but I never got around to it. The weird thing is that the common theme for most virtualization bloggers seems to be physical! Take a look at what some of these guys have in their home labs and try to imagine the associated cost in terms of cooling and power, not to mention the noise that comes with it.

I decided to take a completely different route. Why buy three or four servers when you can run all your ESX hosts virtually on a single desktop? Okay, I must admit, it is a desktop on steroids, but it does save me a lot of (rack) space, noise, heat and of course electricity. Here are the core components of my desktop:

I also have two NAS devices on which I have multiple iSCSI LUNs and NFS shares. I even have replication going on between the two devices! Works like a charm.

There's one crucial part missing. On my laptop I use VMware Player, but on my desktop I like to use VMware Workstation. Although VMware Player might work just fine, I like to have a bit more functionality at my disposal, such as teams for instance.

That's my lab. I installed ESXi 4.0 Update 1 in three VMs and Windows 2008 with vCenter 4.0 Update 1 in a fourth. I attached the ESX hosts to the iSCSI LUNs and NFS shares and off we go. Single-box lab!



    1. says

      So you are running an OS, on that OS you run VMware Workstation 7, and inside Workstation you have 3 ESXi instances up & running?

      I only have a whitebox test lab with vSphere on it. Inside this ESX host, I run my vCenter as a VM. Can't do vMotion, HA, DRS, … but I don't want to get multiple rigs because of the costs…

    2. Arron King says

      Thanks for posting this Duncan! This (and everything else you guys do) is a great help for the rest of us!

      For my needs, a single box with a lot of RAM and HDD space is good enough to learn and practice.

      Keep up the great work!

    3. Rob Mokkink says

      Cool Asus motherboard. I still have the 8GB limit with the Asus P5M2/SAS ones.

      The only downside of this setup is that you can't run 64-bit guests inside your virtualized ESX(i) servers.

    4. says

      IMO the only way to improve on that is to build a white box with ESX 4 on it. Then virtualize the VMs and your NAS storage. You can then run a mini lab on top of that.

      But if you don’t have a full ESX 4 license, this is the next best thing.

      I really need to get off my ass and build out a lab. I guess I've been spoiled because I get to build so many ESX environments for my customers. However, now that I need to get VCDX out of the way, a lab becomes more important.

    5. Jason Ruiz says

      I got a 2.2GHz quad-core Phenom with 6GB of RAM (good job, Best Buy). I ran 2 ESX servers, a domain controller, a vCenter server, and OpenFiler storage on that box in VMware Workstation on a Linux distro. I got mostly everything I wanted running, including HA/DRS, but FT doesn't work in this situation because virtualization isn't enabled in the firmware/BIOS. Other than that it's pretty cool for testing.

    6. says

      Hello Duncan,

      I also have my home lab set up like this. My main desktop is running virtual ESX instances, and I have 12GB of RAM. The only exception is that my storage is a VM as well :) an Openfiler virtual appliance providing iSCSI.

      My other lab is physical with the IOMEGA box. Great little lab box.

      • mazhar memon says


        Though I am an IT person, I am totally new to the ESX/NetApp field. I wanted to set up a home lab for this stuff. Any guidance?

        I already have an HP server with a bunch of SCSI drives and yet another HP server with MS Storage Server (NAS). I have installed ESXi 4.0 on the first and have created a Windows Server VM. I want to learn not only VMware but NetApp as well. How shall I proceed with the rest? Reading this forum I have come across so many references to different ways this could be done: the IOMEGA NAS, OpenFiler software, etc. I am looking at it open-mouthed and in big awe at the moment.

    7. says

      Hi Duncan,

      This is a very similar setup to what I use at home, but I went for the cheaper AMD option. Is this used alongside access to a lab at VMware? Has there been anything you have had to revert to a physical lab for rather than running on the home setup?

      The main issue I have with mine is having to boot it all up when I want to use it; I can't really leave mine running 24×7 easily (or without earache from the other half!). I would love to put a small server in the loft or somewhere for this reason.



    8. says

      I posted my home lab equipment a year back… but I have swapped it over. On my Asus P5B I have now installed ESX 4 Update 1 with local disks, the Openfiler is still there, and a desktop with W7 and Workstation 7 on it.

      But it's still 3 physical boxes. I'm gonna do something about it… :-) Would love to have some NAS with SSDs… :-)

      Next lab will be a Nehalem box…

    9. says

      Well, running ESXi on such a box is nice, but it would mean the box has a single purpose. I would also like to be able to create videos on it and do some basic things like internet/blog writing, etc. If I run ESXi I will still need to power up a second machine.

      @virtualisedreal yes, booting up everything takes some time, but for me it beats having 3 servers creating noise and consuming power while sitting idle.

    10. kekerode says

      I own very similar single box test lab :)

      DFI LP UT X58
      Intel Core i7 920 @ 3.6GHz OCed
      3 x 2GB OCZ Reaper DDR3 1866MHz (going to add 6GB more of it)
      2 x WD Raptor 300GB RAID-0

      The only difference is that for NAS … I use one more Openfiler VM.

    11. Chris says

      I just finished rebuilding my home lab. I have 6 systems in the environment.

      1. Active Directory – old P4 with 2GB RAM
      2. Firewall – pfSense 1.2.3, old P4 with 512MB RAM
      3. vCenter – Windows 2008, SQL 2008, vCenter 4 U1. Hardware is an i7 920 with 12GB RAM, a few 15K SAS drives
      4. ESX node 1 – Intel DG33BU mATX board, 1 onboard Intel NIC, 2 add-on Intel NICs, 1 QLogic 4Gb FC HBA, 8GB RAM, 36GB SATA disk
      5. ESX node 2 – Intel DG33BU mATX board, 1 onboard Intel NIC, 2 add-on Intel NICs, 1 QLogic 4Gb FC HBA, 8GB RAM, 36GB SATA disk
      6. Storage server – OpenSolaris b130. Dual Opteron 252, 8GB RAM, 2×10K SCSI mirror for the OS, 4×10K SCSI in RAID-5 for some VM data (mail), 8×15K.7 in RAID-10 for VMs and some VM data (database). 30GB Intel X25-E for ZIL, 30GB OCZ Vertex for cache. 2 onboard Broadcom NICs, 2 add-on Intel dual-port NICs. Dual-port QLogic 4Gb FC HBA.

      2 HP Procurve 2824
      1 Brocade 200E FC

      I have another machine I haven't configured yet; it has a Xeon W3520, 12GB RAM, 2 onboard Intel NICs and 1 QLogic 4Gb FC HBA. I'm going to build this as ESXi to host VC and the SQL DB so I can free up the physical server.

    12. says


      How about disk I/O with everything virtualized? Does it still do a valuable job, or once you start 5 VMs (2 ESX, 1 DC, 1 vCenter, 1 Openfiler VM, 1 XP client)… do you just wait on disks that never stop spinning?

      Then having a little NAS box to spread the I/O does make sense…


    13. says

      Thank you Duncan, this was very valuable information. I had almost made the decision to buy a server, but considering the noise, electricity and price you mentioned above, I'm going with your suggestion.

    14. says

      That's exactly my home lab, apart from the shared storage, where I'm using QNAP devices…

      Unfortunately you can't test everything in such an environment; that's why I have another 'playground' at the office for serious stuff!

      Good post!

    15. says

      I also have a physical lab in the office which consists of:
      6 x Dell R610 with 24GB
      2 x EMC NS20
      1 x NetApp FAS 2050

      But we use that as a sandpit for the consultants… good stuff though!

    16. says

      I’m looking to build a little physical test lab @ work.
      Could someone point me to a physical brand-name box to buy? Would an HP ML115 work?

      Thanks :)

    17. SA says

      Hi Duncan,

      I have similar components to your setup, but with two disks not in any RAID configuration (one disk with Windows 7 and the other with ESX). I installed the BCD bootloader but have not got this to work successfully. The only way I can dual boot is by changing the boot order in the BIOS, which is not ideal. This way I'm able to have a native ESX install and also a Windows install for personal stuff.

      I was wondering if any of your readers of your blog had any success with dual booting?

    18. John says

      Are you running VMs inside your virtual ESX machines, or do they just exist for testing interaction between the various ESX hosts?

      I just can’t imagine a VM on top of a VM.

    19. kekerode says


      Yes… it takes time only while starting VMs. In fact only the Openfiler VM is on the WD Raptor RAID-0 drive; all others are on the WD Black 1TB. I am going to build a separate NAS rig, maybe based on StarWind iSCSI Target or Openfiler.

    20. says


      I'm just thinking: would it make sense to add one SSD locally (like 128 gigs) to the box and use it as an iSCSI target for the VMs?

    21. CGrossmeier says

      As a field sales engineer, it is critical that I have a solution I can test before I engage a customer.

      A critical part of testing in a Home Lab is shared storage. There are lots of options, and I have tried a few. If power and space are a concern, a virtual storage appliance (VSA) solution is great. Alternatively, the low power and features of the new SOHO NAS Devices out there are great as well if you want to move the storage out of the server.

      Physical Options:
      – QNAP TS-439 Pro II – VMware Ready certified iSCSI NAS (dual Gigabit ports + USB) with my own 1.5TB SATA drives. Thin provisioning! Low power, with web server, email, and more (I love mine!)
      – Iomega iX4-200/200d NAS device – again, a nice low-power iSCSI solution with everything included in the device.
      – Desktop PC running FreeNAS, OpenFiler, StarWind, etc. – software iSCSI/NFS solutions utilizing a cheap desktop PC, NICs, and spare drives.

      Virtual Storage Appliance Solutions:
      – FalconStor Virtual Storage Appliance – leverages local DAS storage
      – HP/LeftHand Networks – leverages local DAS storage

      If I need to take my lab on the road, I ditch the physical servers and move over to VMware Workstation 7 on Windows 7 64-bit on a high-powered laptop (I now use a MacBook Pro with 8GB RAM). I've been able to stand up two ESXi hosts in Workstation 7 and point both to my iSCSI devices for a quick proof of concept. If you are not concerned about the guest OSes, run small instances of TinyLinux. If you have the CPU and require portability, the VSA is the way to go as well, but you'll want to get off those old 5400 RPM drives and step up to an SSD.

    22. says


      Nice post. My question is: did you build this desktop yourself or did you pick it up off the shelf?

      I am also trying to build my own home lab, and I would rather have one desktop than multiple servers.

    23. says

      I built it myself. Having worked with it for a while now, I would probably recommend adding a 40GB SSD to the system and replacing the SAS drives with 7200 RPM SATA.

    24. JD says

      Can someone please give me more direction concerning the configuration of the network infrastructure for 2 ESX hosts, 1 vCenter, and 1 AD DC within Workstation 7, specifically the Virtual Network Editor? By the way, I have an AT&T DSL router with 4 ports as the physical connection to the internet.


    25. Kelly O says

      Duncan, have you figured out a way to get around the 32-bit nested VMs issue? I was successful in virtualizing vSphere but can only run 32-bit guests, so I am not willing to go down to a single box at this time.

    26. says

      I'm preparing a new lab since I got a new laptop, as follows:

      Laptop Model: Dell Latitude E4300
      Memory: 6 GB
      CPU: Intel P9400 (Centrino) @ 2.40GHz
      Host OS: Windows 7 64bit

      Virtual Infrastructure Including DR Site:

      1. vESX4.1-01 768 MB Memory
      2. vESX4.1-02 768 MB Memory
      3. vCenter01 1024 MB Memory
      4. FalconStor NSS 256 MB Memory

      Nested VMs:
      1. Domain Controller 256 MB Memory
      2. XP Clients 128 MB Memory

      DR Site:
      1. vESX4.1-03 768 MB Memory
      2. vESX4.1-04 768 MB Memory
      3. vCenter02 1024 MB Memory
      4. FalconStor NSS 256 MB Memory

      Total Memory consumption will be 5632 MB.
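      As a quick sanity check of that 5632 MB figure, here is a hypothetical Python sketch (VM names and sizes taken from the list above). The nested VMs are not counted separately, since their memory is carved out of the vESX hosts' own allocations:

```python
# Sum the memory of the top-level VMs listed above (all values in MB).
# The nested VMs (DC, XP clients) run *inside* the vESX hosts, so their
# memory comes out of the vESX allocations and is not counted again.
primary_site = {"vESX4.1-01": 768, "vESX4.1-02": 768,
                "vCenter01": 1024, "FalconStor NSS": 256}
dr_site = {"vESX4.1-03": 768, "vESX4.1-04": 768,
           "vCenter02": 1024, "FalconStor NSS (DR)": 256}

total_mb = sum(primary_site.values()) + sum(dr_site.values())
print(total_mb)  # 5632
```

      That leaves only about 500 MB of the laptop's 6 GB for the Windows 7 host itself, so the budget is tight.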

    27. Vuong Pham says

      Wow… I just found out the 2011 MacBook Pro i7 Sandy Bridge can do… 16GB of RAM…

    28. Michael Kruger says

      I recently tried setting up a small virtual lab using lots of VMware Player VMs and ran FreeNAS with NFS (iSCSI too) inside a VM. Unfortunately everything was so slow I simply could not work with it. The problem was not the ESXi VMs or vCenter; it was the pathetically slow shared storage.

      So I began seriously considering using real hardware for the ESXi and NAS boxes. I started by building a NAS box (Openfiler 2.99 configured with dual Realtek 8168B NICs, iSCSI file I/O with write-back cache), and once I got the I/Os and read speeds well optimized, I decided to revisit the idea of lab virtualization rather than plunking down $800 or more for two physical ESXi hosts. Boy, am I glad I did.

      When matched with a well-optimized NAS solution, virtualized labs work surprisingly well. While it is still a bit slower than real hardware, I can live with it. And the best part is I can spend that $800 on something else.

    29. Michael Kruger says

      I probably should mention how my lab is configured. Well, I moved away from using VMware Player on Windows 7 and instead loaded it all into ESXi 5.0 running from a flash drive (on the same physical workstation). I just plug in the flash drive whenever I want to use the lab; otherwise it's my primary Windows desktop.

      My lab is for testing vSphere 4.1. The DNS server VM is running Windows 2008 R2 (1 vCPU, 1GB memory) and vCenter 4.1 is also running on Windows 2008 R2 (1 vCPU, 3GB memory). For the ESXi VMs, I selected "Other 64-bit OS" and loaded 3 instances of ESXi 4.1, each with 1 vCPU and 3GB of memory. And I have 2 small nested WinXP VMs running in an HA cluster.

      All this on a 3.0GHz dual-core desktop CPU with 12GB of memory. I may however increase the memory to 16GB soon.

    30. Kieran says

      Out of curiosity, has anyone tried loading VMware Workstation 8 with nested ESXi 5 hosts?

      I'm currently running Workstation 7 with nested ESX/ESXi 4 hosts on my home machine with 6GB RAM, and obviously there are performance issues (one of the VMs is running Openfiler). I was originally going to purchase a couple of N40L boxes but wondered how well just upgrading my home PC would do.

      So an i7 with 32GB RAM and a couple of SSDs should hopefully perform better than a couple of physical N40L MicroServers and still give me the ability to fully test vSphere 5, including the Nexus 1000v, for roughly the same cash outlay? And given that the ESXi hosts are virtual, I could supply them with multiple vmnics for VLAN testing too?
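      For what it's worth, the 32-bit-guest limitation mentioned in earlier comments went away around the Workstation 8 / vSphere 5 timeframe, once hardware virtualization could be exposed to the nested host. A rough sketch of the relevant .vmx settings; treat the exact keys as an assumption and check the documentation for your version:

```
# In the nested ESXi VM's .vmx file (or tick "Virtualize Intel VT-x/EPT
# or AMD-V/RVI" in the VM's processor settings in Workstation 8):
vhv.enable = "TRUE"        # expose VT-x/EPT (or AMD-V/RVI) to the guest
guestOS = "vmkernel5"      # identify the guest as ESXi 5.x
```

      With that in place, a nested ESXi 5 host can itself run 64-bit guests, assuming the physical CPU supports EPT/RVI.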

    31. Andy says

      Hi. I enjoyed reading Duncan's article. I currently run a home lab with two HP MicroServers and a Netgear ReadyNAS. On each MicroServer I have Windows 7 64-bit and run VMware Workstation. I'm thinking of moving over to one PC to run the whole environment and would appreciate up-to-date advice on this please. Thanks.

    32. cas says

      I lost my job 17 months ago after a nasty divorce involving my 3 young kids, aged 5, 7 and 10.

      I really need to set up the cheapest ESXi 5 lab to get reacquainted with HA/FT/DRS.

      I sacrificed a lot and my 3 kids are heading to school next week. Things are better now but struggling to stay afloat.

      At best I may be able to come up with $650 for 2 boxes. I have an E6500 laptop and a few x86 boxes to use for 32-bit storage. But I feel I need a solution that runs 64-bit guests.

      Most job opportunities for me are 90 minutes away, as my last job was in RTP before the divorce. I dread the commute, but it is essential for my kids.

      Thanks for your help.

      • Michael Kruger says

        Yes, you will want a physical machine capable of supporting 64-bit guests. Pretty much anything AMD satisfies that requirement, but if you go Intel, you'll need to verify that the CPU supports virtualization. You'll also want globs of memory, at least 16GB, and ideally a quad-core processor. If you go AMD, you can easily build such a workstation for less than USD 400; Intel will cost you quite a bit more. You can run Openfiler 2.99 for your NAS/SAN on your existing 32-bit machine and configure iSCSI with multiple (2 or more) Gigabit NICs (I like cheap Realtek NICs, but you'll need to manually install the Linux driver for Openfiler). Configure the target for file I/O for the best performance; I am getting over 200MB/s using just 2 NICs. In theory you can also try virtualizing the NAS, but I found doing so was too slow. Good luck.

    33. DanMan says

      @ Duncan, Your work is always an inspiration!
      Just wanted to share my 2 cents with regard to the home lab I'm in the process of setting up. I've played around with quite a number of whitebox solutions, but even though they are cheap and affordable, I'd personally rather have hardware that is similar to what the majority of companies are using in their production environments… kind of killing two birds with one stone. This way I learn about virtualization using components that I see in real-life work scenarios.
      Given that, I did my research and opted for an HP C3000 chassis with a bunch of BL460c G1 blades. The chassis can house up to 8 blades; for the size of my lab I'm only using 4 blades at the moment.
      Each blade has 2x QC 5400-series processors with 16GB of RAM and dual built-in network adapters, with the ability to add 2 additional Ethernet ports and 2 FC ports using the mezzanine cards. I'm running ESXi 4.1 on a Dell Precision 690 machine on the side to host my virtual NetApp ONTAP 8.0 as well as Windows Storage Server 2008. For switching I wanted to, and did, purchase a Cisco SG300-20, which is a managed Layer 3 Gigabit device… but that was only because I wanted to… you can easily find a cheaper substitute for it…
      The beauty of this setup is that you can run up to 12 VMs on these 4 hosts quite easily, but the cherry on top for me is that I can now play with HA/DRS as well as Fault Tolerance. At the same time one can easily test SRM-like features as well…
      The overall cost of the HP hardware is under $2000, and you can have it for even cheaper if you are able to secure a good deal… The electricity costs are approx. $50 or so more per month… a minor price to pay compared to the amount of learning you can do, and that too on actual hardware that is being used out there as part of the virtual infrastructure of a lot of companies…
      Hope this helps!

    34. DanMan says

      Sorry, the price I mentioned above is incorrect; it's approx. $5000, not $2000, which is still not much to pay for a good hefty home environment… :-)

    35. Michael Kruger says

      While there is plenty of merit to learning SAN technology, the cost is quite high (too high for me anyway). It's far cheaper to create a lab using 2 machines where one is the SAN (I prefer Openfiler as it's much faster than anything else) and the other an ESXi 5.x host, all connected using iSCSI. Then just run everything in nested ESXi VMs. You can learn all about HA, DRS, etc. You don't even need a domain controller for DNS or authentication; just use static IPs and local authentication for vCenter.

    36. BjoernH says

      Good day all.

      I just built myself a PC for the purpose of building a home lab with ESXi 5.x.

      I used an MSI 970A-G46 motherboard and an AMD FX-6200 CPU with six cores and 8GB of memory. I installed 2 identical SATA hard drives in a RAID-1 mirror configuration and installed Windows XP x64.

      I didn't want to install ESXi 5.1 and waste the rest of the server, so I installed VMware Player 5 and created a VM which I booted with the ESXi 5.1 installer. I'm happy to say it installed without any problems.
      Now, I can access the VM from the local machine, but not from any of my other machines on my home network. I let the ESXi installer use the default DHCP setup with the following:
      IP address: 192.168.128, Default Gateway: (why 2?)
      Hostname: localhost (should it be something else?)

      The Windows host machine IP is and the default router is (FiOS router).

      How can I get access to the ESXi VM from my other PCs?

      Thanks in advance,

      • Michael Kruger says

        I believe you need to "edit settings" on your newly created VM and change it from NAT mode to bridged mode, which places the VM directly on your home network instead of on a network local to VMware Player.
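        For reference, the equivalent .vmx fragment looks roughly like this (a sketch; the adapter index depends on your VM, and the same change can be made through the Player UI):

```
# In the VM's .vmx file: switch the first virtual NIC from NAT to bridged,
# which attaches it to the physical LAN so other PCs can reach it.
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"   # was "nat"
```

        After the change, the ESXi VM should pick up an address from the FiOS router's DHCP like any other machine on the LAN.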

    37. Dan says

      Hi Duncan, are you still running the same setup on the desktop (I know your file server is different)? I have an i7 920 with 24GB of RAM and am thinking about using this as opposed to buying new tin.

      Will probably host a CA FSMO on a N54L…


    38. Mike says

      I'm looking to make a home lab, and I'm seeing prices on eBay of $350-400 USD for Dell 2950 IIIs with 16GB, dual quad-core 54xx CPUs, and RAID plus 2 SAS drives. Other than size, is there a reason NOT to use a true server?