
Yellow Bricks

by Duncan Epping


RE: The VCDX candidate's advantage over the panellists

Duncan Epping · Oct 6, 2014 ·

I was reading Josh Odgers' post on the VCDX Defense. Josh's article can be summarised by the following passage:

As a result, the candidate should be an expert in the design being presented and answering questions from the panel about the design should not be intimidating.

Having gone through the process myself, knowing many of the VCDXes and having been on countless panels, I completely disagree with Josh. Sure, you do need to know your design inside out… but it is not about who has an advantage; the panel members are not there to fail or pass the candidate… they are there to assess your skills as an architect!

If you look at the defense day there are three parts:

  1. Defend your design
  2. Design scenario
  3. Troubleshooting scenario

For the design and troubleshooting scenarios you get a random exercise, so you have no prior knowledge of what will be asked. When it comes to defending your design, of course you will (hopefully) know it better than anyone else. However, the questions you get will not necessarily be about the specifics or details of your design. The VCDX panel is there to assess your skills as an architect, not your “fact cramming skills”. A good panel will ask a lot of hypothetical questions like:

  • Your design uses NFS-based storage; how would FC-connected storage have changed your design?
  • Your design is based on capacity requirements for 80 virtual machines; what would you have done differently if the requirement had been 8,000 virtual machines?
  • Your design …

So when you do mock exams, prepare for these types of hypothetical questions. That is when you really start to understand the impact your decisions can have. And if during your defense you get one of these questions and you do not know the answer, make sure you guide the panel through your thought process. That is what differentiates someone who can learn facts (VCP exam) from someone who can digest them, understand them and apply them in different scenarios (VCDX exam).

As I stated, it may sound like knowing your design inside out gives you a big advantage over the panel members, but it probably doesn't… that is not what they are testing you on! Your ability to assess and adapt is put through the wringer, your skills as an architect are tested thoroughly, and that is where you will need to do well.

Good luck!

Scale out building block style, or should I say (yellow) brick style!

Duncan Epping · Mar 2, 2012 ·

I attended VMware PEX a couple of weeks back, and during some of the sessions and the discussions I had afterwards I realized that many customers out there still design using legacy concepts. The funny thing is that this mainly applies to server virtualization projects and, to a certain extent, to cloud environments. It appears that designing in building blocks is something the EUC side of this world embraced a long time ago.

I want to use this post to get feedback about your environments and how you scale up / scale out. I discussed a concept with one of the PEX attendees which I want to share. (This is not rocket science or anything revolutionary, let that be clear.) This attendee worked for one of our partners, a service provider in the US, and was responsible for creating a scalable architecture for an Infrastructure as a Service (IaaS) offering.

The original plan was to build an environment that would allow for 10,000 virtual machines. Storage, networking and compute sizing and scaling were all done with these 10k VMs in mind. However, it was expected that only 1,000 virtual machines would be deployed in the first 12 months. You can imagine that internally there was a lot of debate around the upfront investment; the storage and compute platform in particular was a huge discussion. What if the projections were incorrect? What if 10k virtual machines was not realistic within three years? What if the estimated compute and IOps requirements were wrong? This could lead to substantial underutilization of the environment, and especially in IaaS, where it is difficult to predict how the workload will behave, that could mean a significant loss. On top of that, they were already floor space constrained… which made it impossible to scale / size for 10k virtual machines straight from the start.

During the discussion I threw the building block (pod, stack, block… all the same) method on the table; as mentioned, not unlike what the VDI/EUC folks have been doing for years and not unlike what some of you have been preaching. Kris Boyd mentioned this in his session at Partner Exchange, and let me quote him as I fully agree with his statement: “If you know what works well on a certain scale, why not just repeat that?!” The advantage is that the costs are predictable, but even more important for the customers and the ops team, the result of the implementation is predictable as well. So what was discussed, and what will be the approach for this particular environment, or at least what will be proposed as a possible architecture?

First of all, a management cluster would be created. This is the mothership of the environment. It will host all vCenter virtual machines, vCloud Director, Chargeback, databases etc. This environment does not have high IOps or compute requirements, so it would be implemented on a small, NFS-based storage device. NFS was chosen because the vCloud Director cells require an NFS share to transfer files. Chris Colotti wrote an article about when this NFS share is used, which might be useful to read for those interested. This “management cluster” approach is discussed in depth in the vCloud Architecture Toolkit.

For the vCloud Director resources the following was discussed. The expectation was 1,000 VMs in the first 12 months, and the architecture would need to cater for this. It was decided to use averages to calculate the requirements for this environment, as the workload was unknown and could literally be anything. How did they come up with a formula in this case? What I suggested was looking at their current “hosted environment” and simply averaging things out: do a dump of all the data and try to come up with some common numbers. This is what it resulted in:

  • 1,000 VMs (4:1 VM-to-core consolidation ratio, average of 6GB memory per VM)
    • Required cores = 250 (for example 21 x dual-socket, 6-core hosts)
    • Required memory = 6TB (for example 24 x 256GB hosts)

This did not take any savings due to TPS into account, and the current hardware platform wasn't as powerful as the new one would be. In my opinion it is safe to say that 24 hosts would cater for these 1,000 VMs, and that would include N+2. Even if it did not, they agreed that this would be their starting point and maximum cluster size. They wanted to avoid any risks and did not like to push the boundaries too much with regard to cluster sizes. Although I believe a 32-host cluster is no problem at all, I can understand where they were coming from.
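To make the arithmetic behind these numbers explicit, here is a minimal back-of-the-envelope sketch (my own illustration, not something from the discussion itself). The consolidation ratio, per-VM memory and example host configuration are the assumptions from the bullet list above; the sketch naively adds the N+2 headroom on top, even though, as noted, TPS savings and the newer hardware would likely absorb it within 24 hosts.

```python
import math

# Back-of-the-envelope compute sizing for the 1,000 VM building block described above.
vm_count        = 1000
vms_per_core    = 4      # 4:1 VM-to-core consolidation ratio
mem_per_vm_gb   = 6      # average memory per VM
cores_per_host  = 12     # example host: dual socket, 6 cores per socket
mem_per_host_gb = 256    # example host: 256GB of memory
ha_headroom     = 2      # N+2

required_cores  = vm_count / vms_per_core                     # 250 cores
required_mem_gb = vm_count * mem_per_vm_gb                    # 6,000 GB (~6 TB)

hosts_for_cpu = math.ceil(required_cores / cores_per_host)    # 21 hosts
hosts_for_mem = math.ceil(required_mem_gb / mem_per_host_gb)  # 24 hosts

# Memory is the constraining resource; naively add the HA headroom on top.
cluster_size = max(hosts_for_cpu, hosts_for_mem) + ha_headroom

# Repeating the block: ten of these building blocks cover the 10k VM target.
blocks_for_10k = math.ceil(10000 / vm_count)

print(required_cores, required_mem_gb, hosts_for_cpu, hosts_for_mem, cluster_size, blocks_for_10k)
```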

The storage part is where it got more interesting. They had a huge debate around upfront costs and did not want to invest at this point in a huge enterprise-level storage solution. As I said, they wanted to make sure the environment would scale, but also that the costs made sense. On average, in their current environment the disk size was 60GB. Multiply that by 1,000 and you know you will need at least 60TB of storage. That is a lot of spindles. Datacenter floor space was definitely a constraint, so this would be a huge challenge… unless you use techniques like deduplication / compression and have a proper amount of SSD to maintain a certain service level / guarantee performance.
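The same back-of-the-envelope approach works for the storage side. The sketch below uses the 60GB average disk size and 1,000 VMs from the post; the 2:1 data-reduction ratio is purely an assumed, illustrative value, since actual deduplication / compression results depend entirely on the workload.

```python
# Raw vs. effective capacity for the 1,000 VM building block.
vm_count       = 1000
avg_disk_gb    = 60     # average disk size observed in their current environment
data_reduction = 2.0    # assumed dedup/compression ratio (illustrative only)

raw_tb      = vm_count * avg_disk_gb / 1000   # ~60 TB of logical capacity
physical_tb = raw_tb / data_reduction         # ~30 TB of physical capacity needed

print(f"logical: {raw_tb:.0f} TB, physical at {data_reduction:.0f}:1 reduction: {physical_tb:.0f} TB")
```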

During the discussion it was mentioned several times that they would be looking at up-and-coming storage vendors like Tintri, Nimble and Pure Storage. These were the three specifically mentioned by this partner, but I realize there are many others out there. I have to agree that the solutions offered by these vendors are really compelling, and each of them has something unique. It is difficult to compare them on paper though, as Tintri does NFS, Nimble iSCSI and Pure Storage FC (and iSCSI soon) but is also SSD only. Pure Storage especially intrigued them due to the power/cooling/rackspace savings. The great thing about all of these solutions is again that they are predictable from a cost / performance perspective, which allows for an easily repeatable architecture. They haven't made a decision yet and are planning on doing an eval with each of the solutions to see how each integrates, scales, performs and, most importantly, what the operational impact is.

Something we unfortunately did not discuss was networking. These guys, being a traditional provider, did not have much control over what would be deployed, as their network department was in charge of this. In order to keep things simple they were aiming for a 10Gbit infrastructure: the cost of networking ports was significant, and they wanted to reduce the number of cables coming out of the rack for simplicity reasons.

All in all it was a great discussion which I thought was worth sharing. Although the post is anonymized, I did ask their permission before I wrote this up :-). I realize that this is far from a complete picture, but I hope it gives an idea of the approach; if I can find the time I will expand on this with some more examples. I hope that those working on similar architectures are willing to share their stories.

Nutanix Complete Cluster

Duncan Epping · Aug 18, 2011 ·

I was just reading up and noticed an article about Nutanix. Nutanix is a “new” company which just came out of stealth mode and offers a datacenter-in-a-box type of solution, meaning they provide shared storage and compute resources in a single 2U chassis. This 2U chassis can hold up to 4 compute nodes, and each of these nodes can have 2 CPUs, up to 192GB of memory, 320GB of PCIe SSD, 300GB of SATA SSD and 5TB of SATA HDDs. Now the cool thing about it is that each node's “local” storage can be served up as shared storage to all of the nodes, enabling you to use HA/DRS etc. I guess you could indeed describe Nutanix's solution as the “Complete Cluster” solution, and as Nutanix says it is unique, and many analysts and bloggers have been really enthusiastic about it… but is it really that special?

What Nutanix actually uses for their building block is an HPC form factor case like the one I discussed in May of this year. I wouldn't call that revolutionary, as Dell, Super Micro, HP (and others) sell these as well but market them differently (in my opinion a missed opportunity). What does make Nutanix somewhat unique is that they package it as a complete solution, including a Virtual Storage Appliance they've created. It is not just a VSA; it appears to be a smart device which is capable of taking advantage of the available SSD drives, using them as a shared cache distributed across the hosts, and it uses multiple tiers of storage: SSD and SATA. It kind of reminds me of what Tintri does, only this is a virtual appliance that is capable of leveraging multiple nodes. (I guess HP could offer something similar in a heartbeat if they bundled their VSA with the DL170e.) Still, I strongly believe that this is a promising concept and hope these guys are at VMworld so I can take a peek and discuss the technology behind it a bit more in depth, as I have a few questions from a design perspective…

  • No 10GbE redundancy? (According to the datasheet there is just a single port.)
  • Only 2 NICs for VM traffic, vMotion and management? (Why not just 2 x 10GbE NIC ports?)
  • What about when the VMware cluster boundaries are reached? (Currently 32 nodes.)
  • Out-of-band management ports? (Could be useful to have console access.)
  • How about campus cluster scenarios, any constraints?
  • …..

Let's see if I can get these answered over the next couple of days or at VMworld.

5 Tips for preparing your VCDX Defense

Duncan Epping · Nov 15, 2010 ·

After the VCDX defenses in Boston I had a chat with Craig Risinger, also known as 006 ;-). We discussed some of the things we'd seen on the panels and came to the conclusion that it wouldn't hurt to reiterate some of the tips we've given in the past.

  1. It’s OK to change your actual project documents. See the following points for examples. This isn’t really about what you actually happened to do on a particular project with its own unique set of circumstances. It’s about showing what you can do. This is your portfolio to convince potential customers you can do their design, whatever they might need. It’s about proving you could work with a customer to establish requirements and design an architecture that meets them.
  2. Include everything the Application says is mandatory. Don’t be surprised if you have to write some new documents or sections. For example, maybe a Disaster Recovery plan wasn’t important in your project, but it will be to another customer or in another project, so you should show you know how to create one.
  3. Explain any bad or debatable decisions. Did your customer insist on doing something that’s against best practices? Did you explain what was wrong with it? Say how you would have preferred to do things and why. Even if you just made a mistake back then, that’s OK if you can show that you’ve learned and understand the error you made. If you are using VMware’s best practices make sure you know why it is a best practice and why it met your customer’s requirements.
  4. Show you can design for large scale. It’s OK if your actual project was for a small environment, but show that you can think big too. What would you have done for a bigger customer, or for a customer who wanted to start small but be able to scale up easily? What would you need to do to add more VMs, more hosts, more storage, more networking, more vCenter servers, more roles and division of duties, a stronger BC/DR plan in the future? How would that change your design, if at all?
  5. Architect = Knowledge + Reasoning. The VCDX certification isn’t just about knowing technical facts; it’s about being able to apply that knowledge to meet goals. In the defense session itself, be prepared to discuss hypothetical scenarios and alternative approaches, to decide on a design, and to explain the reasons for your choices. Show you know how to consider the pros and cons of different approaches.

There are also many other useful collections of advice for pursuing a VCDX certification; we highly recommend reading them, as they will give you an idea of the process. Here’s just a sample:

  • John Arrasjid’s VCDX Tips
  • VCDX Workshop Presentation
  • Duncan Epping’s VCDX Defense Experience
  • Jason Boche’s VCDX Defense Experience
  • Maish’s VCDX Defense Experience
  • Frank Denneman’s VCDX Defense Experience
  • Kenneth van Ditmarsch’s VCDX Defense Experience
  • Scott Lowe’s VCDX Defense Experience
  • Rick Scherer’s VCDX Defense Experience
  • Fabio Rapposelli’s VCDX Defense Experience
  • Jason Nash’s VCDX Defense Experience
  • Harley Stagner’s VCDX Defense Experience
  • Andrea Mauro’s VCDX Defense Experience
  • Chris Kranz’s VCDX Defense Experience

Craig Risinger (VCDX006) & Duncan Epping (VCDX007)

VMware Desktop Reference Architecture Workload Simulator (RAWC) 1.1

Duncan Epping · Apr 29, 2010 ·

VMware has just released version 1.1 of the VMware Desktop Reference Architecture Workload Simulator (RAWC). As I know many of my readers are actively working on View projects, I thought it might be of interest to you.

VMware Desktop Technical Marketing & TS Research Labs are jointly announcing the availability of VMware Desktop Reference Architecture Workload Simulator (RAWC) version 1.1.    With RAWC 1.1, Solution Providers can better anticipate and plan for infrastructure requirements to support successful VMware View deployments for Windows 7 Migration.

RAWC 1.1 now simulates user workloads in Windows 7 environments and can be used to validate VMware View designs to support Windows 7 Migrations.  RAWC 1.1 supports the following desktop applications in Windows 7 and Windows XP environments: Microsoft Office 2007, Microsoft Outlook, Microsoft Internet Explorer, Windows Media Player, Java code compilation simulator, Adobe Acrobat, McAfee Virus Scan, and 7-Zip.

RAWC 1.1 also includes bug fixes and several enhancements in test run configurations, usability and user interface.  Please see RAWC 1.1 product documents for more details.

VMware partners can download RAWC 1.1 software and the product documents from VMware Partner Central: Sales Tools > Services IP.

