Implementing a Hybrid Cloud Strategy white paper

Last week I posted this on the VMware Office of the CTO blog, and I figured I would share it with my regular readers here as well. A couple of months ago I stumbled across a great diagram developed by Hany Michael (Consulting Architect, VMware PSO), who is part of the VMware CTO Ambassador program. The CTO Ambassadors are a small group of our most experienced and talented customer-facing, individual-contributor technologists. The diagram explained an interesting architecture, namely hybrid cloud. After a brief discussion with Hany I reached out to David Hill (Senior Technical Marketing Architect, vCloud Air) and asked if he was interested in getting this work published. Needless to say, David was very interested. Together we expanded on the great content that Hany had already developed, and today the result is published.

The architecture described in this white paper is based on a successful real-world customer implementation. Besides explaining the steps required, it also explains the use case for this particular customer. We hope you find the paper useful and that it helps you implement or position a hybrid cloud strategy.

Implementing a Hybrid Cloud Strategy

IT has long debated the merits of public and private cloud. Public clouds allow organizations to gain capacity and scale services on-demand, while private clouds allow companies to maintain control and visibility of business-critical applications. But there is one cloud model that stands apart: hybrid cloud. Hybrid clouds provide the best of both worlds: secure, on-demand access to IT resources with the flexibility to move workloads onsite or offsite to meet specific needs. It’s the security you need in your private cloud with the scalability and reach of your public cloud. Hybrid cloud implementations should be versatile, easy to use, and interoperable with your onsite VMware vSphere® environment. Interoperability allows the same people to manage both onsite and offsite resources while leveraging existing processes and tools and lowering the operational expenditure and complexity…

Automating vCloud Director Resiliency white paper released

About a year ago I wrote a white paper about vCloud Director resiliency, or rather, I developed a disaster recovery solution for vCloud Director. This solution allows you to fail over vCloud Director workloads between sites in the case of a failure. Immediately after it was published, various projects started to implement this solution. As part of our internal project, our PowerCLI gurus Aidan Dalgleish and Alan Renouf started looking into automating the solution. Those who have read the initial case study have probably seen the manual steps required for a failover; those who haven't should read that white paper first.

Those manual steps in the vCloud Director Resiliency white paper are exactly what Alan and Aidan addressed. So if you are interested in implementing this solution, it is useful to read the new white paper about Automating vCloud Director Resiliency as well. Nice work, Alan and Aidan!
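To give you an idea of the kind of automation involved, below is a minimal PowerCLI sketch of a recovery-site power-on sequence. The VM names and vCenter address are hypothetical, and it assumes the storage failover steps described in the white paper have already been completed; the actual automation in the paper goes much further than this.

Connect-VIServer -Server "recovery-vcenter.example.com"

# Power on the vCloud Director management stack in dependency order:
# database first, then the cells. Names here are hypothetical.
foreach ($name in @("vcd-db01", "vcd-cell01", "vcd-cell02")) {
    $vm = Get-VM -Name $name
    if ($vm.PowerState -ne "PoweredOn") {
        # Wait-Tools blocks until VMware Tools responds in the guest,
        # so each tier is up before the next one is started
        Start-VM -VM $vm -Confirm:$false | Wait-Tools
    }
}

Disconnect-VIServer -Confirm:$false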

Do I still need to set “HaltingIdleMsecPenalty” with vSphere 5.x?

I received a question last week from a customer. They have a fairly big VDI environment and are researching the migration to vSphere 5.1. One of the changes they made in the 4.1 time frame was setting the advanced option "HaltingIdleMsecPenalty" in order to optimize hyper-threading fairness for their specific desktop environment. I knew that this was no longer needed but didn't have an official reference for them (there is a blog post by Tech Marketing performance guru Mark A. that mentions it though). Today I noticed it was mentioned in a recently released white paper titled "The CPU Scheduler in VMware vSphere 5.1". I recommend everyone read this white paper, as it gives you a better understanding of how the scheduler works and how it has been improved over time.

The following section is an excerpt from that white paper.

Improvement in Hyper-Threading Utilization

In vSphere 4.1, a strict fairness enforcement policy on HT systems might not allow full utilization of all logical processors in the situation described in KB article 1020233 [5]. This KB also provides a work-around based on an advanced ESX host attribute, "HaltingIdleMsecPenalty". While such a situation should be rare, a recent change in the HT fairness policy, described in "Policy on Hyper-Threading", obviates the need for the work-around. Figure 8 illustrates the effectiveness of the new HT fairness policy for VDI workloads. In the experiments, the number of VDI users supported without violating the quality of service (QoS) requirement is measured on vSphere 4.1, vSphere 4.1 with the "HaltingIdleMsecPenalty" tuning applied, and vSphere 5.1. Without the tuning, vSphere 4.1 supports 10% fewer users. vSphere 5.1 with the default setting slightly exceeds the tuned performance of vSphere 4.1.
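In other words, if you applied the tuning in the 4.1 time frame you can simply revert it as part of the migration. A quick way to find out which hosts still carry the old setting is a PowerCLI audit like the minimal sketch below; the advanced option name follows KB 1020233 and the vCenter address is of course a placeholder.

Connect-VIServer -Server "vcenter.example.com"

foreach ($esx in Get-VMHost) {
    # The option lives under the Cpu group per KB 1020233
    $setting = Get-AdvancedSetting -Entity $esx -Name "Cpu.HaltingIdleMsecPenalty" -ErrorAction SilentlyContinue
    if ($setting) {
        # Report the current value; hosts that still carry the old VDI
        # tuning can be reverted with Set-AdvancedSetting per the KB
        "{0}: Cpu.HaltingIdleMsecPenalty = {1}" -f $esx.Name, $setting.Value
    }
}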

VMware Technical Journal, pick it up!

VMware just published the second VMware Technical Journal. This winter edition contains great publications on topics like vProbes, Paravirtual vRDMA Devices, the Cloud Tenant UI design process, Storage DRS and FrobOS. There is also a nice introduction by VMware's CEO Pat Gelsinger. Simply too much to mention, so I suggest you just download it. I think it is a great read and it gives an idea of some of the things VMware engineers work on.

VMware Technical Journal


vSphere 5.0 Hardening Guide public draft available

One of the things my team is responsible for is the security of the Cloud Infrastructure Suite. They have worked really hard over the last couple of months on overhauling the vSphere Hardening Guide, and today the public draft was published. (Thanks Charu, Grant and Kyle!)

One of the major changes is the format of the guide: it has been poured into an Excel spreadsheet, making it easier to filter, sort and edit. Please take a look at the guide, and if you have any feedback don't hesitate to comment on the community forum thread! The final version of the document should be published mid-May.
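As a side note, many of the guidelines lend themselves to scripted checks now that they are in an easy-to-filter format. As a hedged illustration (this is not taken from the guide itself), the minimal PowerCLI sketch below checks a single common guideline, namely whether the ESXi SSH service is stopped on every host; the vCenter address is a placeholder.

Connect-VIServer -Server "vcenter.example.com"

foreach ($esx in Get-VMHost) {
    # "TSM-SSH" is the key of the ESXi SSH service
    $ssh = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "TSM-SSH" }
    if ($ssh -and $ssh.Running) {
        Write-Warning "$($esx.Name): SSH service is running"
        # To remediate: Stop-VMHostService -HostService $ssh -Confirm:$false
    }
}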