I was part of the voting committee for VMworld, and one of the sessions I voted for unfortunately did not make it. However, with over 350 submissions there are always excellent topics that don’t make the cut. I did feel this one was worth sharing, so here’s the outline of the session and some additional info. All credits to Richard Stinton (VMware Cloud Architect, EMEA) and his team for coming up with the concept and allowing me to publish this!
Remember Simon Gallagher’s vTARDIS project? Now for something different: this is the Massive Array of Inexpensive Servers, or MAIS (pronounced MAZE). We’re going to build an array of 32 (or more) $150 servers and show the power of vSphere and vCloud Director. Using the new(ish) HP ProLiant MicroServer, we’re going to build a wall of vBricks!
So it wasn’t just a submission; these guys actually started working on it. They managed to get their hands on 32 HP MicroServers. That gives a total of 64 CPUs, 256GB RAM and 8TB of storage, all for $9,000. They loaded it up with the latest vSphere and vCloud Director versions and it ran great. Unfortunately we will never get to see the results, but I did want to share one more thing:
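For the curious, the headline numbers fall straight out of the per-box specs. A quick back-of-the-envelope sketch in Python; the per-node figures below are my own assumptions, inferred from the totals quoted above:

# Rough aggregate for the MAIS "wall". Per-box figures are assumptions
# inferred from the quoted totals (32 boxes, 64 CPUs, 256GB RAM, 8TB disk).
NODES = 32
CORES_PER_NODE = 2       # dual-core AMD CPU in each MicroServer
RAM_GB_PER_NODE = 8      # maxed-out official RAM per box
DISK_GB_PER_NODE = 250   # assumed single drive per box
TOTAL_COST_USD = 9000    # figure quoted in the post

print(f"cores : {NODES * CORES_PER_NODE}")
print(f"RAM   : {NODES * RAM_GB_PER_NODE} GB")
print(f"disk  : {NODES * DISK_GB_PER_NODE / 1000:.0f} TB")
print(f"cost  : ~${TOTAL_COST_USD / NODES:.0f} per node")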
Is that cool or what?
DAMN IT. That would have been really neat to see some results on.
I’ve only seen those going for around US$320. What currency is the $150 in? Also…. that’s freakin’ awesome!
In the UK they have these with a rebate every once in a while, which makes them dirt cheap.
It’s in pounds, so 150 British pounds = 242.73 U.S. dollars.
Chuck Norris built the same thing, but with 33 servers all in the same vSphere cluster.
oh man – i voted for that :(. Would’ve been killer!
$9000 total for hardware – that is truly commodity pricing…
I’d be interested to know the minimum vSphere + vCloud license costs to run on the vBrick “wall” as spec’ed?
I suppose one of the implications of this move from ESX(i) socket pricing to per-VM pricing is that consolidation ratios become less important (e.g. we had been buying big beefy Dell R910s with 256GB RAM since we could get the best consolidation and bang for our socket-priced hypervisors). Now that VMware is moving to per-VM pricing, that equation changes and puts smaller hardware platforms back on an equal footing.
While VMware moving to per-VM pricing isn’t a surprise, I think I’ve failed to see them actually state that anywhere. Is there a reference please?
For example, the current statement says that they aren’t changing: http://www.vmware.com/support/licensing/per-vm/
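Purely to illustrate the per-socket vs. per-VM argument a couple of comments up, here is a tiny sketch with completely made-up prices (none of these numbers are real VMware list prices), just to show why consolidation ratio matters under one model and not the other:

# Hypothetical prices, purely illustrative, not VMware list prices.
PER_SOCKET_LICENSE = 3000.0   # assumed cost per CPU socket
PER_VM_LICENSE = 150.0        # assumed cost per powered-on VM

def cost_per_vm_socket_model(sockets, vms):
    # Under socket pricing, packing more VMs per socket drives the
    # effective cost per VM down, hence big, beefy hosts.
    return PER_SOCKET_LICENSE * sockets / vms

def cost_per_vm_vm_model(vms):
    # Under per-VM pricing the cost per VM is flat, so small hosts
    # like the MicroServer are no longer penalised.
    return PER_VM_LICENSE

# One beefy 4-socket R910 running 100 VMs vs. a wall of 32 single-socket
# boxes running the same 100 VMs in total:
print(cost_per_vm_socket_model(sockets=4, vms=100))    # 120.0 per VM
print(cost_per_vm_socket_model(sockets=32, vms=100))   # 960.0 per VM
print(cost_per_vm_vm_model(vms=100))                   # 150.0 per VM either way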
“256GB RAM”? How much did you really mean?
These boxes can only carry 8GB unfortunately.
32 servers, each with 8GB….
They can carry up to 1TB of RAM! (64 x 16GB)
Yeah, I’d love to see that $9000 price refactored with licensing fees included.
Yes, these are very cheap: currently £200 with £100 cashback,
or you can get one fully loaded (8GB RAM, DVD-RW, ESXi pre-installed on USB) for £300 less £100 cashback from HP.
https://www.serversplus.com/servers/server_bundles/633724-421%23esx
Because the only thing I want more than a cluster of inexpensive ESXi servers is a cluster of DVD-RWs!
Put the farm’s idle time to use by burning DVDs for the black market? Side income to offset the ‘cluster’ purchase?
What about common storage?
I have one of the MicroServers (which has 4 disk bays) running the Nexenta Community Edition NAS appliance (from a USB key). That is serving up RAID10 with dedup via iSCSI or NFS. And it’s F-R-E-E-E-E!
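For anyone wanting to reproduce that layout outside the appliance GUI, here is a rough sketch of the underlying ZFS steps I would expect it to boil down to for the NFS case; device and dataset names are placeholders, not what Nexenta actually uses:

# Sketch of a "RAID10"-style ZFS pool on the MicroServer's 4 drive bays:
# two mirrored pairs striped together, dedup enabled, shared over NFS.
# (The iSCSI variant would carve out zvols and export them instead.)
import subprocess

DISKS = ["c0t0d0", "c0t1d0", "c0t2d0", "c0t3d0"]  # placeholder device names

COMMANDS = [
    ["zpool", "create", "tank",
     "mirror", DISKS[0], DISKS[1],
     "mirror", DISKS[2], DISKS[3]],                 # striped mirrors
    ["zfs", "set", "dedup=on", "tank"],             # deduplication
    ["zfs", "create", "tank/vmstore"],              # dataset for VM storage
    ["zfs", "set", "sharenfs=on", "tank/vmstore"],  # export via NFS
]

for cmd in COMMANDS:
    print(" ".join(cmd))
    # subprocess.check_call(cmd)  # uncomment to actually run on the NAS head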
I’d second the question by FLECTH. Given the current license model of vSphere, this solution is costly. Then there is power, cabling, cooling, space, management, etc. to consider. Now if this was done with all free and open-source solutions…. Still, probably nice for a compute grid though.
Fully loaded (4 HDDs, 8GB, two PCI-E cards), I cannot get the MicroServer above 40 Watts, with HP 4GB, 165W, ESXi 4.1 U1. As for others, great little lab boxes. Presently using FC SAN storage (a bit OTT), but moving to iSCSI/NFS with Solaris ZFS in another MicroServer.
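Those 40 watts are the interesting number for the whole wall. A rough running-cost sketch, using the per-box figure reported above and the 32-box count from the post; the electricity tariff is purely my own assumption:

# Rough annual electricity cost for the 32-box wall at ~40W per box.
# The price per kWh is an assumption, plug in your own tariff.
NODES = 32
WATTS_PER_NODE = 40
PRICE_PER_KWH_GBP = 0.12   # assumed UK tariff, purely illustrative

kwh_per_year = NODES * WATTS_PER_NODE * 24 * 365 / 1000.0
print(f"{kwh_per_year:.0f} kWh/year, roughly £{kwh_per_year * PRICE_PER_KWH_GBP:.0f}/year")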
Is this something similar to the Vblock from EMC/Cisco/VMware?
Pretty cool, too bad they are AMD CPUs though. I was looking at a similar idea for my lab using whitebox mini-ITX based Sandy Bridge boxes; those can take 16GB of RAM. Interesting read though, thanks!
Which motherboard are you looking at?
I was thinking of using this one (it’s Intel, figured it had the highest chance of working correctly):
http://www.newegg.com/Product/Product.aspx?Item=N82E16813121507
Throw in a dual-port NIC like this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16833106014
and it would make a pretty sweet little whitebox. I started with 2 x 4GB DIMMs since the 8GB DIMMs are just starting to arrive and are pretty pricey. Couple that with a small case and you would be good to go for about $600 with a Sandy Bridge i5 and $750 for an i7. I am planning to build a test one here in a bit (I am trying to wait out the 8GB DIMMs), but I think 2 or 3 of those booting from USB with some SOHO storage (or even an Atom box with some HW RAID) would be a great little lab.
NICE! For some reason I’m thinking of Pink Floyd’s The Wall (of MAIS).
LOL. I was considering ‘Comfortably Numb’ for the backing track on the YouTube video…but then I’d be showing my age!
I suggest using this new mainboard, it’s very cool:
http://www.bcmcom.com/bcm_product_mx67qm.htm
So why that one, Max?
Love this! I own two HP Microservers for my own home lab.
Are you using the AMD ones? How are you finding them?
I am considering purchasing two myself for my home lab. Would be awesome if you could let me know 🙂
Just wondering if you’re using SW or HW RAID and what disks you’re using.
Thanks very much,
G.
The MicroServer uses an AMD “fake RAID” controller, so if you want hardware RAID you’ll have to use an HP Smart Array P410; it has the correct mini-SAS connector, so you can remove the cable from the motherboard and connect it to the controller. The only issue is using non-TLER drives on the controller, where deep recovery drops the drives off the array; nothing unusual. You may want to consider LSI controllers, or using NFS/iSCSI storage from ZFS on Solaris 11 Express on SSD (current project: an SSD SuperSAN with ZFS). Trying to dump expensive lab HP ProLiant servers and FC SANs due to rising electricity costs; these run at 40 watts per box! Our SAN shelves run at 1.4kW per shelf, and then there’s the air con needed in the summer to cool down the room. More electricity! And electricity in the UK is getting more expensive, and will be three times the cost in 15 years!
We’ve been testing Hyper-V, ESXi 4.1/5.0 and XEN 5.5 on these boxes and they run very well, and if you’re in the UK you cannot beat £100 including 8GB RAM, with the £100 cashback (at 40 watts!). Cleverly engineered too, because you can leave a USB flash drive running ESXi 4.1 inside the machine, but the external USB connectors have priority over the internal one, so just plug another hypervisor to test into an external slot, shut down and restart, and bingo, a new hypervisor is running! Just change the NIC if you’re using FreeBSD, because the FreeBSD bge0 Broadcom driver is buggy and keeps resetting the NIC, so it’s no good. So I highly recommend the cheap and cheerful but very good Intel® Gigabit CT Desktop Adapter.