Just a short article for today, or rather a tip: take your memory configuration into account for Nehalem processors. There's a sweet spot in terms of performance that might just make a difference. Read this article on Scott's blog or this article on Anandtech, where they measured the difference in performance. Again, it is not a huge difference, but when combining workloads it might be just that little extra you were looking for.
The only memory configuration that makes sense for most VMware implementations is "how many slots does the box have?" For other situations, just keep it to a multiple of 3 × the number of sockets. I had some servers running x64 Standard OSes, so I figured I'd scale back to 8x 4GB DIMMs instead of 9x. Big mistake: I took a pretty serious performance hit for very little savings. I ended up putting the extra DIMM back in, per the HP configurator.
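The "multiple of 3 × sockets" rule above comes from Nehalem-EP's three DDR3 memory channels per socket: a DIMM count that does not divide evenly across all channels leaves some channels underpopulated and costs bandwidth. A minimal sketch of that arithmetic, assuming the three-channel layout (the function names here are illustrative, not from any vendor tool):

```python
# Nehalem-EP integrated memory controllers expose three DDR3
# channels per socket; balanced performance wants every channel
# populated with the same number of DIMMs.
CHANNELS_PER_SOCKET = 3

def balanced_dimm_totals(sockets, max_dimms_per_channel=3):
    """DIMM totals that populate every channel equally."""
    step = CHANNELS_PER_SOCKET * sockets
    return [step * n for n in range(1, max_dimms_per_channel + 1)]

def is_balanced(sockets, total_dimms):
    """True if the DIMMs divide evenly across all channels."""
    return total_dimms % (CHANNELS_PER_SOCKET * sockets) == 0

# Dual-socket box: balanced totals are 6, 12, 18 DIMMs.
print(balanced_dimm_totals(2))   # [6, 12, 18]
# Per socket, 8 DIMMs is unbalanced while 9 is not, which is
# consistent with the 8x-vs-9x anecdote above.
print(is_balanced(1, 8))         # False
print(is_balanced(1, 9))         # True
```

This is only the channel-balance check; actual sweet spots also depend on DIMM ranks and speed (populating a third DIMM per channel can drop the memory clock), which is what the linked benchmarks measured.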
Do you know if there is any impact to an existing ESX installation if I upgrade from a single-socket quad-core Nehalem CPU (E5520) to dual-socket hex-core Nehalem CPUs (X5680)?