Top 5 VMworld EMEA recommended deep dive geek sessions

I’ve been watching a bunch of the VMworld US session recordings. A couple of sessions were real gems and contain a lot of deep-dive info; if you are a geek and like deep technical content, make sure to register for the following sessions. These sessions are all given by VMware employees who work on the actual product as developers or technical product managers, so if you have a really deep question, don’t be afraid to ask.

Of course there are many, many more deep-dive sessions, but these are the ones I enjoyed watching, and as my time is limited I may not have watched that one ultra-deep geek tech session you presented. If you feel a deep-dive session is missing, please leave a comment!

Last-minute bonus: I just watched this session and it was brilliant, especially for those interested in stretched clusters:

Virtual SAN / EVO:RAIL use cases versus supported

I have seen this being debated many times on Twitter now, and I’ve seen various Virtual SAN (VSAN) and EVO:RAIL competitors use it in the past to mislead potential customers.

I think we have all seen these slides at VMworld or at a VMUG when it comes to VSAN or EVO:RAIL. The slide contains a couple of primary use cases:

[Slide: EVO:RAIL primary use cases]

So what does that mean? Does this mean that VMware does not support Exchange or MS SQL on top of VSAN or EVO:RAIL? Does that mean that VMware does not support it as a DR target? What about a management cluster? What about running Oracle, or maybe SAP? What about my WordPress instance, or MySQL? And although VDI is mentioned, would that only be VMware View? What about… Yes, by now you get my drift.

Let me try to make it really simple: a list of primary use cases says nothing about support. A primary use case means that this is where the vendor expects the product or solution to fit best. In this case it is where VMware expects VSAN/EVO:RAIL to fit best; this is the target market VMware will be going after with this release.

Why include this in a slide deck? Well, it allows you (the user / consultant / architect) to quickly identify where the majority of opportunities will be with the current version, for your environment or for your customers. It does NOT mean that a use case that is not listed, such as running your Exchange environment on top of VSAN, is unsupported. (Try listing all use cases on a slide; it will get pretty lengthy.)

Running Tier-1 applications on top of VSAN (or EVO:RAIL) is fully supported as it stands today. However, your application requirements and your service level agreement will determine whether EVO:RAIL or VSAN is a good fit. For example, if your agreed SLA requires an RPO (recovery point objective) of zero, then synchronous replication (or stretched clustering) is the only option, and you will need to determine whether that is possible with the platform you want to use (this goes for any solution!). (Yes, before anyone wants to go there: you will be able to make that happen with the platform pretty soon…)

I hope that clears things up a bit.

VMware EVO:RAIL use case: ROBO

Something that came up a couple of days back was a question around how VMware EVO:RAIL fits the ROBO (remote office / branch office) use case. If you watched the demo you will have seen that it is very simple and easy to configure. It takes about 15 minutes to get up and running, and all you need to do is provide details like IP ranges, subnet mask, gateway and a couple of other globals.

This by itself makes EVO:RAIL a perfect solution for ROBO deployments… but there is more. When it comes to ROBO deployments and simplifying the rollout, there are two more options:

  1. Provide configuration details during procurement process
  2. Specify configuration details in a file and inject it into the appliance before shipment to the remote office

[Screenshot: EVO:RAIL configuration UI]

I won’t discuss option 1 in depth, as this will very much depend on how each of the EVO:RAIL Qualified Partners handles it on their website / during the procurement process. Basically what happens is that you provide your preferred server vendor with configuration details, they put those into a file called “default-config-static.json”, and this file is injected into the vCenter Server Appliance, which also runs the EVO:RAIL engine. For the hackers who want to play around with EVO:RAIL: note that the location of the JSON file and its format may change with newer versions, so make sure to always use the latest and greatest. If you have filled out these details during procurement, all that is left on site is a single click to configure the appliance.

Option 2 is also very interesting if you ask me. If you look at EVO:RAIL as it stands today, you have the option to upload a JSON file when you hit the configuration screen (as shown in the screenshot above). This JSON should contain all of your configuration details and will allow you to configure EVO:RAIL with the click of a button. In other words: you ship the appliance to your remote office, you email them the JSON file (in a secure manner, hopefully) and ask them to click “upload configuration file”. They upload the file, then run “Validate”, and probably fill out the password, as you don’t want to send that in clear text. That is it… Nice, right? :) Of course, if you want, you could even go as far as injecting the .json file into the vCenter Server Appliance yourself, but I am not sure if that will be supported.
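To make this a bit more tangible, below is a minimal sketch of how you could prepare such a configuration file before shipping the appliance. To be clear: the field names and structure are purely illustrative assumptions on my part, not the actual EVO:RAIL schema, which (as mentioned above) may change between versions.

```python
import json

# Purely illustrative sketch: these field names are my own assumptions, not the
# actual EVO:RAIL schema, which is not documented here and may change per version.
robo_site_config = {
    "hostname_prefix": "robo-esxi",   # hypothetical naming scheme for the nodes
    "esxi_ip_range": {"start": "192.168.10.11", "end": "192.168.10.14"},
    "vmotion_ip_range": {"start": "192.168.20.11", "end": "192.168.20.14"},
    "vsan_ip_range": {"start": "192.168.30.11", "end": "192.168.30.14"},
    "subnet_mask": "255.255.255.0",
    "gateway": "192.168.10.1",
    "dns_servers": ["192.168.1.53"],
    "ntp_servers": ["192.168.1.123"],
    # Passwords are deliberately left out: per the workflow above, the remote
    # office fills those in during validation rather than receiving them by email.
}

# Write the file you would email to the remote office (or hand to your vendor
# during procurement for option 1).
with open("default-config-static.json", "w") as f:
    json.dump(robo_site_config, f, indent=2)
```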

As you can imagine, this greatly simplifies the deployment of EVO:RAIL as all it takes is just one click to configure, which is ideal for a ROBO scenario. Anyone can do it!

PuppetConf 2014 Ticket Giveaway, win a free ticket!

Last week I received an email saying that I could give away one free ticket for PuppetConf! I wish I could go myself, but unfortunately I have other commitments that week. PuppetConf 2014 is the 4th annual IT automation event, taking place in San Francisco September 20-24.

Join the Puppet Labs community and over 2,000 IT pros for 150 track sessions and special events focused on DevOps, cloud automation and application delivery. The keynote speaker lineup includes tech professionals from DreamWorks Animation, Sony Computer Entertainment America, Getty Images and more.

If you’re interested in going to PuppetConf this year, I will be giving away one free ticket to a lucky winner, who will get the chance to participate in educational sessions and hands-on labs, network with industry experts and explore San Francisco. Note that the ticket only covers the cost of the conference (a $970+ value); you’ll need to cover your own travel and other expenses (discounted rates available). You can learn more about the conference at: 2014.puppetconf.com

Want a free ticket? It is really easy: just leave a comment here with your real(!) email address and I will pick a random winner on Friday the 12th of September. Easy, right?

Re: Re: The Rack Endgame: A New Storage Architecture For the Data Center

I was reading Frank Denneman’s article with regards to new datacenter architectures. This in turn was a response to Stephen Foskett’s article about how the physical architecture of datacenter hardware should change. I recommend reading both articles, as that will give a bit more background, plus they are excellent reads by themselves. (Gotta love these blogging debates.) Let’s start with an excerpt from both articles, which summarizes the posts for those who don’t want to read them in full.

Stephen:
Top-of-rack flash and bottom-of-rack disk makes a ton of sense in a world of virtualized, distributed storage. It fits with enterprise paradigms yet delivers real architectural change that could “move the needle” in a way that no centralized shared storage system ever will. SAN and NAS aren’t going away immediately, but this new storage architecture will be an attractive next-generation direction!

If you look at what Stephen describes, I think it is more or less in line with what Intel is working towards. The Intel Rack Scale Architecture aims to disaggregate traditional server components and then aggregate them by type of resource, backed by a high-performance, optimized rack fabric enabled by the new photonic architecture Intel is currently working on. This is not the long-term future; this is what Intel showcased last year and said would be available in 2015 / 2016.

Frank:
The hypervisor is rich with information, including a collection of tightly knit resource schedulers. It is the perfect place to introduce policy-based management engines. The hypervisor becomes a single control plane that manages both the resource as well as the demand. A single construct to automate instructions in a single language providing a correct Quality of Service model at application granularity levels. You can control resource demand and distribution from one single pane of management. No need to wait on the completion of the development cycles from each vendor.

There’s a bit in Frank’s article as well where he talks about Virtual Volumes and VAAI: how long it took for all storage vendors to adopt VAAI, and how he believes that the same may apply to Virtual Volumes. Frank aims more towards the hypervisor being the aggregator, instead of doing it through changes in the physical space.

So what about Frank’s arguments? Well, Frank has a point with regards to VAAI adoption and the fact that some vendors took a long time to implement it. The reality, though, is that Virtual Volumes is going full steam ahead. With many storage vendors demoing it at VMworld in San Francisco last week, I have the distinct feeling that things will be different this time. Maybe timing is part of it, as it seems that many customers are at a crossroads and want to optimize their datacenter operations / architecture by adopting the SDDC, of which policy-based storage management happens to be a big chunk.

I agree with Frank that the hypervisor is perfectly positioned to be that control plane. However, in order to be that control plane for the future, there needs to be a way to connect “things” to it that allows for far better scale and more flexibility. VMware, if you ask me, has done that for many parts of the datacenter, but one aspect that still needs to be overhauled for sure is storage. VAAI was a great start, but with VMFS there simply are too many constraints and it doesn’t cater to granular controls.

I feel that the datacenter will need to change on both ends in order to take that next step in the evolution towards the SDDC. Intel Rack Scale Architecture will allow for far greater scale and efficiency than ever seen before. But it will only be successful when the layer that sits on top has the ability to take all of these disaggregated resources, turn them into large shared pools, and assign resources from those pools in a policy-driven (and programmable) manner. Not just assign resources, but also allow you to specify what the level of availability (HA, DR, but also QoS) should be for whatever consumes those resources. Granularity is important here, and of course it shouldn’t stop with availability but should apply to any other (data) service that one may require.
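To illustrate what assigning resources in a policy-driven (and programmable) manner could look like, here is a minimal sketch in Python. The capability names and matching logic are my own assumptions, purely to show the concept of matching a consumer’s policy against pooled resources; this is not an actual VMware API.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of policy-driven placement: match a requested policy against
# pools of (disaggregated) resources that advertise their capabilities.
# All names and capability keys below are illustrative assumptions, not a real API.


@dataclass
class ResourcePool:
    name: str
    capabilities: dict = field(default_factory=dict)   # e.g. {"replication": "sync", "qos_iops": 50000}


@dataclass
class Policy:
    name: str
    requirements: dict = field(default_factory=dict)   # e.g. {"replication": "sync", "qos_iops": 20000}


def satisfies(pool: ResourcePool, policy: Policy) -> bool:
    """A pool satisfies a policy when every requirement is met or exceeded."""
    for key, required in policy.requirements.items():
        offered = pool.capabilities.get(key)
        if offered is None:
            return False
        if isinstance(required, (int, float)):
            if offered < required:
                return False
        elif offered != required:
            return False
    return True


def place(policy: Policy, pools: list[ResourcePool]) -> Optional[ResourcePool]:
    """Pick the first pool that satisfies the policy (a real scheduler would rank, not just filter)."""
    return next((p for p in pools if satisfies(p, policy)), None)


pools = [
    ResourcePool("bronze-pool", {"replication": "async", "qos_iops": 10000}),
    ResourcePool("gold-pool", {"replication": "sync", "qos_iops": 50000}),
]
tier1 = Policy("tier1-zero-rpo", {"replication": "sync", "qos_iops": 20000})
print(place(tier1, pools).name)  # -> gold-pool
```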

So where does everything fit in? If you look at some of the initiatives that were revealed at VMworld, like Virtual Volumes, Virtual SAN and vSphere APIs for IO Filters, you can see where the world is moving, and fast. You can see how vSphere is truly becoming that control plane for all resources and how it will be able to provide you with end-to-end policy-driven management. In order to make all of this a reality, the current platform will need to change: changes that allow for more granularity / flexibility and higher scalability, which is where all of these (new) initiatives come into play. Some partners may take longer to adopt than others, especially those that require fundamental changes to the architecture of the underlying platforms (storage systems, for instance), but just like with VAAI I am certain that over time this will happen, as customers will drive this change by making decisions based on availability of functionality.

Exciting times ahead if you ask me.