PuppetConf 2014 Ticket Giveaway, win a free ticket!

Last week I received an email saying I could give away one free ticket to PuppetConf! I wish I could go myself, but unfortunately I have other commitments that week. PuppetConf 2014, the fourth annual IT automation event, takes place in San Francisco September 20-24.

Join the Puppet Labs community and over 2,000 IT pros for 150 track sessions and special events focused on DevOps, cloud automation and application delivery. The keynote speaker lineup includes tech professionals from DreamWorks Animation, Sony Computer Entertainment America, Getty Images and more.

If you’re interested in going to PuppetConf this year, I will be giving away one free ticket to a lucky winner, who will get the chance to participate in educational sessions and hands-on labs, network with industry experts and explore San Francisco. Note that this ticket only covers the cost of the conference (a $970+ value); you’ll need to cover your own travel and other expenses (discounted rates are available). You can learn more about the conference at: 2014.puppetconf.com

Want to win a free ticket? It is really easy: just leave a comment here with your real(!) email address and I will pick a random winner on Friday the 12th of September. Easy, right?

Re: Re: The Rack Endgame: A New Storage Architecture For the Data Center

I was reading Frank Denneman’s article about new datacenter architectures. This in turn was a response to Stephen Foskett’s article about how the physical architecture of datacenter hardware should change. I recommend reading both articles as they provide more background, and they are excellent reads in their own right. (Gotta love these blogging debates.) Let’s start with an excerpt from each article that summarizes the posts for those who don’t want to read them in full.

Stephen:
Top-of-rack flash and bottom-of-rack disk makes a ton of sense in a world of virtualized, distributed storage. It fits with enterprise paradigms yet delivers real architectural change that could “move the needle” in a way that no centralized shared storage system ever will. SAN and NAS aren’t going away immediately, but this new storage architecture will be an attractive next-generation direction!

If you look at what Stephen describes, I think it is more or less in line with what Intel is working towards. The Intel Rack Scale Architecture aims to disaggregate traditional server components and then aggregate them by type of resource, backed by a high-performance, optimized rack fabric enabled by the photonic architecture Intel is currently working on. This is not the distant future either: Intel showcased this last year and said it should be available in 2015 / 2016.

Frank:
The hypervisor is rich with information, including a collection of tightly knit resource schedulers. It is the perfect place to introduce policy-based management engines. The hypervisor becomes a single control plane that manages both the resource as well as the demand. A single construct to automate instructions in a single language providing a correct Quality of Service model at application granularity levels. You can control resource demand and distribution from one single pane of management. No need to wait on the completion of the development cycles from each vendor.

There’s a section in Frank’s article as well where he talks about Virtual Volumes and VAAI, how long it took for all storage vendors to adopt VAAI, and how he believes the same may apply to Virtual Volumes. Frank aims more towards the hypervisor being the aggregator, rather than achieving this through changes in the physical space.

So what about Frank’s arguments? Well, Frank has a point with regards to VAAI adoption and the fact that some vendors took a long time to implement it. The reality, though, is that Virtual Volumes is going full steam ahead. With many storage vendors demoing it at VMworld in San Francisco last week, I have the distinct feeling that things will be different this time. Maybe timing is part of it, as it seems that many customers are at a crossroads and want to optimize their datacenter operations / architecture by adopting the SDDC, of which policy-based storage management happens to be a big chunk.

I agree with Frank that the hypervisor is perfectly positioned to be that control plane. However, in order to be that control plane for the future, there needs to be a way to connect “things” to it that allows for far better scale and more flexibility. VMware, if you ask me, has done that for many parts of the datacenter, but one aspect that still needs to be overhauled for sure is storage. VAAI was a great start, but VMFS simply has too many constraints and doesn’t cater for granular control.

I feel that the datacenter will need to change on both ends in order to take the next step in the evolution towards the SDDC. Intel Rack Scale Architecture will allow for far greater scale and efficiency than ever seen before. But it will only be successful when the layer that sits on top has the ability to take all of these disaggregated resources, turn them into large shared pools, and assign resources in a policy-driven (and programmable) manner. Not just assign resources, but also allow you to specify what the level of availability (HA, DR, but also QoS) should be for whatever consumes those resources. Granularity is important here, and of course it shouldn’t stop with availability but should apply to any other (data) service one may require.
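
To make the “policy-driven (and programmable)” part a bit more tangible, here is a minimal sketch of the idea. To be clear: the class names and the admission logic below are purely illustrative and do not map to any real VMware or Intel API; they simply show how a consumer could express availability and QoS requirements and how a control plane could match those against a shared pool of disaggregated resources.

```python
# Purely illustrative: these classes and the admission check are not a real API.
# They sketch the idea of a policy describing availability / QoS requirements
# and a control plane matching it against a shared pool of resources.

from dataclasses import dataclass


@dataclass
class Policy:
    failures_to_tolerate: int  # availability: number of host failures to survive
    dr_enabled: bool           # needs a DR copy? (not checked in this toy example)
    iops_limit: int            # QoS: IOPS the workload is entitled to
    capacity_gb: int           # requested storage capacity


@dataclass
class ResourcePool:
    hosts: int                 # hosts aggregated into this pool
    free_capacity_gb: int
    iops_headroom: int


def can_place(policy: Policy, pool: ResourcePool) -> bool:
    """Naive admission check: enough hosts for the availability level
    (a 2n+1 rule, similar to Virtual SAN's failures-to-tolerate concept),
    enough capacity and enough IOPS headroom."""
    hosts_needed = 2 * policy.failures_to_tolerate + 1
    return (pool.hosts >= hosts_needed
            and pool.free_capacity_gb >= policy.capacity_gb
            and pool.iops_headroom >= policy.iops_limit)


gold = Policy(failures_to_tolerate=1, dr_enabled=True, iops_limit=5000, capacity_gb=500)
rack = ResourcePool(hosts=4, free_capacity_gb=13100, iops_headroom=80000)
print(can_place(gold, rack))  # True: 3 hosts needed, capacity and IOPS both fit
```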

So where does each of these fit in? If you look at some of the initiatives that were revealed at VMworld, like Virtual Volumes, Virtual SAN and the vSphere APIs for IO Filters, you can see where the world is quickly moving. You can see how vSphere is truly becoming that control plane for all resources and how it will be able to provide you with end-to-end policy-driven management. In order to make all of this a reality, the current platform will need to change: changes that allow for more granularity / flexibility and higher scalability, and that is where all these (new) initiatives come into play. Some partners may take longer to adopt than others, especially those that require fundamental changes to the architecture of underlying platforms (storage systems for instance), but just like with VAAI I am certain that over time this will happen, as customers will drive this change by making decisions based on the availability of functionality.

Exciting times ahead if you ask me.

VMware EVO:RAIL demo

I just wanted to share the VMware EVO:RAIL demo with my readers. I shared it on Twitter / LinkedIn but figured it made sense to have it here as well. The demo shows both the configuration and the management interface. Note that it normally takes less than 15 minutes to complete the configuration, but of course the video has been edited to keep it short and sweet… no point in watching a percentage-completed counter go up.

VMware EVO:RAIL FAQ

Over the last couple of days the same VMware EVO:RAIL questions have kept popping up over and over again. I figured I would do a quick VMware EVO:RAIL Q&A post so that I can point people to it instead of constantly answering them on Twitter.

  • Can you explain what EVO:RAIL is?
    • EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4-node package with an intuitive interface that allows for full configuration within 15 minutes. The appliance bundles hardware + software + support/maintenance to simplify both procurement and support in a true “appliance” fashion. EVO:RAIL provides the density of blades with the flexibility of rack servers. Each appliance comes with 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes); see the back-of-the-envelope breakdown after this FAQ for how those numbers add up. For full details, read my intro post.
  • Where can I find the datasheet?
  • What is the minimum number of EVO:RAIL hosts?
    • Minimum number is 4 hosts. Each appliance comes with 4 independent hosts, which means that 1 appliance is the start. It scales per appliance!
  • What is included with an EVO:RAIL appliance?
    • 4 independent hosts each with the following resources
      • 2 x Intel Xeon E5-2620 6-core CPUs
      • 192GB memory
      • 3 x 1.2TB 10K RPM drives for VSAN
      • 1 x 400GB eMLC SSD for VSAN
      • 1 x ESXi boot device
      • 2 x 10GbE NIC ports (SFP+ or RJ45 can be selected)
      • 1 x IPMI port
    • vSphere Enterprise Plus
    • vCenter Server
    • Virtual SAN
    • Log Insight
    • Support and Maintenance for 3 years
  • What is the total available storage capacity?
    • After the VSAN datastore is formed and vCenter Server is installed / configured, there is about 13.1TB of usable capacity left.
  • How many VMs can I run on one appliance?
    • That very much depends on the size of the virtual machines and the workload. We have been able to comfortably run 250 desktops on one appliance; with server VMs we ended up with around 100. Again, it very much depends on workload, VM size, capacity requirements, etc.
  • How many EVO:RAIL appliances can I scale to?
    • With the current release EVO:RAIL scales to 4 appliances (i.e. 16 hosts)
  • If licensing / maintenance / support is for 3 years, what happens after?
    • After 3 years support/maintenance expires. It can be extended, or the appliance can be replaced when desired.
  • How is support handled?
    • All support is handled through the OEM the EVO:RAIL HCIA (Hyper-Converged Infrastructure Appliance) has been procured through. This ensures that “end-to-end” support is provided through a single channel.
  • Who are the EVO:RAIL qualified partners?
    • The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro, Hitachi Data Systems, HP, NetApp
  • How much does an EVO:RAIL appliance cost?
    • Pricing will be set by qualified partners
  • I was told Support and Maintenance is for 3 years, what happens after 3 years?
    • You can renew your support and maintenance for at most 2 additional years (as far as I know).
    • If not renewed, the EVO:RAIL appliance will keep functioning, but the entitlement to support is gone.
  • What if I buy a new appliance after 3 years, can I re-use the licenses that came with my old EVO:RAIL appliance?
    • No, the licenses are directly tied to the appliance and cannot be transferred to any other appliance or hardware.
  • Will NSX work with EVO:RAIL?
    • EVO:RAIL uses vSphere 5.5 and Virtual SAN. Anything that works with those will work with EVO:RAIL. NSX has not been explicitly tested, but I expect it should be no problem.
  • Does it use VMware Update Manager (VUM) for updating/patching?
    • No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism, built from scratch, that comes as part of the EVO:RAIL engine. This provides a simple updating and patching mechanism while avoiding the need for a Windows VM (VUM requires Windows).
  • What kind of NIC card is included?
    • A dual-port 10GbE NIC per host. The majority of vendors will offer both SFP+ and RJ45. This means 8 x 10GbE switch ports are required per EVO:RAIL appliance!
  • Is there a physical switch included?
    • A physical switch is not part of the “recipe” VMware provided to qualified partners, but some may package one (or multiple) with it to simplify green field deployments.
  • What is MARVIN or Mystic ?
    • MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the codename VMware used internally for EVO:RAIL. Mystic was the codename used by EMC. Both refer to EVO:RAIL.
  • Where does EVO:RAIL run?
    • EVO:RAIL runs on vCenter Server. vCenter Server is powered on automatically when the appliance is started, and the EVO:RAIL engine can then be used to configure the appliance.
  • Which version of vCenter Server do you use, the Windows version or the Appliance?
    • In order to simplify deployment EVO:RAIL uses the vCenter Server Appliance.
  • Can I use the vCenter Web Client to manage my VMs or do I need to use the EVO:RAIL engine?
    • You can use whatever you like to manage your VMs. Web Client is fully supported and configured for you!
  • Are there networking requirements?
    • IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN
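
As referenced in the first answer above, here is a quick back-of-the-envelope check that ties the per-host specs to the appliance-level numbers quoted in the FAQ. The only assumption on my part is the ~2.1GHz clock speed (the spec list just says E5-2620); everything else follows directly from the specs.

```python
# Back-of-the-envelope check of the appliance-level EVO:RAIL numbers.
# Assumption: ~2.1GHz per core; all other values come from the per-host spec list.

HOSTS = 4                              # independent hosts per 2U appliance

ghz_per_host       = 2 * 6 * 2.1       # 2 sockets x 6 cores x ~2.1GHz
memory_gb_per_host = 192
hdd_tb_per_host    = 3 * 1.2           # three 1.2TB 10K RPM drives for VSAN
ssd_tb_per_host    = 0.4               # one 400GB eMLC SSD (IO acceleration, not capacity)
nic_ports_per_host = 2                 # dual-port 10GbE

print(f"compute     : {HOSTS * ghz_per_host:.1f} GHz")    # ~100 GHz
print(f"memory      : {HOSTS * memory_gb_per_host} GB")    # 768 GB
print(f"raw HDD     : {HOSTS * hdd_tb_per_host:.1f} TB")   # 14.4 TB raw
print(f"flash       : {HOSTS * ssd_tb_per_host:.1f} TB")   # 1.6 TB for IO acceleration
print(f"10GbE ports : {HOSTS * nic_ports_per_host}")       # 8 switch ports per appliance

# Of the 14.4TB raw, roughly 13.1TB remains as usable VSAN datastore capacity
# once the datastore is formed and vCenter Server is deployed (see the FAQ).
```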

If you have any questions, feel free to drop them in comments section and I will do my best to answer them.

VMware / ecosystem / industry news flash… part 2

There we go, part two of the VMware / ecosystem / industry news flash. I expected a lot of news around VMworld, as is traditionally the case. I hope the list below is a good summary; these are the articles / announcements I read and found interesting. It is the Monday after VMworld and I figured I would get this out there, as I will be out for most of this week to recover.

  • Maginatics: A Virtual Filer for VMware’s Virtual SAN
    Last week I mentioned the Nexenta solution for VSAN… this week Maginatics is up. They also announced it last week, but somehow it fell through the cracks, so I figured I would list it this week. MSCP offers a distributed file system with global deduplication, multiple caching layers and Content Distribution Network logic built in.
  • VMware EVO:RAIL was of course all over the news, with these being my favourite posts: Chris Wahl, Julian Wood, Dell, Chad Sakac
    Do I really need to comment on this one? I am hoping everyone read my blog… Also, make sure to watch the demo!
  • Infinio announced version 2.0 of their acceleration platform
    A whole bunch of announcements around the 2.0 version of Infinio Accelerator. Support for Fibre Channel, iSCSI and FCoE is probably the biggest piece of functionality added. On top of that, the extended monitoring / reporting section is very handy: those who want to tweak based on latency / IO information will be able to do so. There are some more features announced; make sure to read the announcement for the full details.
  • VMware joins Open Compute Project
    I was surprised by this announcement; I did not know it was coming, but I am very excited. The OCP solution is interesting as it is highly optimized around efficiency / power consumption / rack units, etc. I have looked at some of the OCP configurations for Virtual SAN, but the problem I saw was hardware compatibility / support. Hopefully with this announcement these constraints will be lifted soon! Definitely one I will be following with a lot of interest!
  • Nutanix announced a new round of funding: $140 million
    What more can I say than: congratulations! Hyper-converged infrastructure is hot, and Nutanix has a compelling solution for sure. $140 million (Series E) is significant, and I guess they are on their way to an IPO (rumours have been floating around for months now).

That was it for now.