
Yellow Bricks

by Duncan Epping



Platform9 announcements / funding

Duncan Epping · Aug 18, 2015 ·

Clearly VMworld is around the corner, as many new products, releases and company announcements are being made this week and next. Last week I had the opportunity to catch up with Sirish Raghuram, Platform9's CEO. For those who don't know the company, I recommend reading the two articles I wrote earlier this year. In short, Platform9 is a SaaS-based private cloud management solution which leverages OpenStack; Platform9 itself describes it as "OpenStack-as-a-Service".

Over the last months Platform9 has grown to 27 people and is now actively focusing on scaling marketing and sales. They have already hired some very strong people from companies like Rackspace, EMC, Metacloud and VMware. Their Series A funding was $4.5m from Redpoint Ventures, and they have now announced a $10m Series B round led by Menlo Ventures with participation from Redpoint Ventures. Considering the state of the OpenStack startup community, that is a big achievement if you ask me. The company has seen good revenue momentum in its first two quarters of sales, with QoQ growth of 200%, multiple site-wide license agreements for 400+ servers in each quarter, and customer deployments in 17 countries.

So what is being announced? The GA of support for vSphere, which has been in beta since early this year. Basically this means that as of this release you can manage local KVM and vSphere hosts using Platform9's solution. What I like about their solution is that it is very easy to configure, and it is SaaS based, so there are no worries about installing/configuring/upgrading/updating or maintaining the management solution itself. Install/configure takes less than 5 minutes: you point it at your vCenter Server, a proxy VM is deployed, and then your resources are pulled in. The architecture for vSphere looks like this:

The cool thing is that it integrates with existing vSphere deployments: if people manage vSphere through vCenter and make changes there, Platform9 is smart enough to recognize that and reconcile. On top of that, all vSphere templates are automatically pulled in, so you can use those immediately when provisioning new VMs through Platform9. Managing VMs through Platform9 is very easy, and if you are familiar with the OpenStack APIs then automating any aspect of Platform9 is a breeze, as it is fully API compatible. When it comes to managing resources and workloads, I think the UI speaks for itself. Very straightforward, very easy to use. Adding hosts, deploying new workloads or monitoring capacity is typically all done within a few clicks. On the vSphere side they also support things like the Distributed Switch, and NSX support is around the corner for those who need advanced networking/isolation/security.
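Because the platform is OpenStack API compatible, provisioning can be scripted the same way as against any OpenStack cloud. A minimal sketch of what that looks like, building the request body for a standard Nova "create server" call (the names and IDs below are hypothetical placeholders, not anything Platform9-specific):

```python
import json

def build_boot_request(name, image_ref, flavor_ref, network_id=None):
    """Build the JSON body for an OpenStack Nova 'create server'
    call (POST /servers). IDs here are made-up placeholders."""
    server = {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}
    if network_id:
        # attach the server to a specific network, if one was given
        server["networks"] = [{"uuid": network_id}]
    return {"server": server}

# hypothetical image/flavor/network identifiers for illustration:
body = build_boot_request("web-01", "img-1234", "flv-small", "net-5678")
print(json.dumps(body, indent=2))
```

The same body would be POSTed to the Nova endpoint with a valid Keystone token; any existing OpenStack tooling (CLI, SDKs, Heat templates) should work unchanged against an API-compatible platform.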

Platform9 also introduces auto-scaling capabilities based on resource alarms and application templates. Both scaling up and scaling down of your workloads when needed is supported, which is something that comes up on a regular basis with customers I talk to. Platform9 takes care of the infrastructure side of scaling out; you worry about creating that scale-out application architecture, which is difficult enough as it is.
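Alarm-based auto-scaling of this kind boils down to a simple control loop: an alarm fires on a resource metric, and instances are added or removed within configured bounds. A rough sketch of that decision logic (thresholds and names are illustrative, not Platform9's actual implementation):

```python
def desired_instance_count(current, cpu_util, scale_up_at=0.80,
                           scale_down_at=0.30, min_n=1, max_n=10):
    """Return the new instance count for a scale-out application,
    based on average CPU utilization (0.0 - 1.0)."""
    if cpu_util > scale_up_at and current < max_n:
        return current + 1   # alarm fired: add an instance
    if cpu_util < scale_down_at and current > min_n:
        return current - 1   # sustained idle: remove an instance
    return current           # within the band: no change

# e.g. 3 instances at 90% CPU -> scale up to 4
print(desired_instance_count(3, 0.90))
```

Real implementations add cooldown periods and require the metric to breach the threshold for several consecutive samples, so a brief spike doesn't trigger churn.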

When it comes to their SaaS platform, it is good to know that it is not shared between customers, which means there is no risk of one customer hijacking the environment of another. Also, the platform scales independently and automatically as your local environment grows. No need to worry about any of those aspects any longer, and of course, because it is SaaS based, Platform9 takes care of patching/updating/upgrading etc.

Personally I would love to see a couple of things added. I would find it useful if Platform9 could take care of network isolation, just like Lab Manager was capable of doing in the past. It would also be great if Platform9 could manage "standalone" ESXi hosts instead of having to be pointed at a vCenter Server. I do understand that brings some constraints, but it could be a nice feature… Either way, I like the single pane of glass they offer today, and it can only get better. Nice job Platform9, keep those updates coming!

Virtual SAN going offshore

Duncan Epping · Aug 17, 2015 ·

Over the last couple of months I have been talking to many Virtual SAN customers. After having spoken to so many customers and having heard many special use cases and configurations, I'm not easily impressed. I must say that halfway through the conversation with Steffan Hafnor Røstvig from TeleComputing I was seriously impressed. Before we get to that, let's first look at the background of Steffan Hafnor Røstvig and TeleComputing.

TeleComputing is one of the oldest service providers in Norway. They started out as an ASP with a lot of Citrix expertise. In recent years they've evolved into being a service provider rather than an application provider. TeleComputing's customer base consists of more than 800 companies and in excess of 80,000 IT users. Customers typically have between 200 and 2000 employees, so these are significant companies. In the Stavanger region a significant portion of the customer base is in the oil business or delivers services to it. Besides managed services, TeleComputing also has their own datacenter in which they manage and host services for customers.

Steffan is a solutions architect but started out as a technician. He told me he still does a lot of hands-on work, but besides that also supports sales/pre-sales when needed. The office he is in has about 60 employees, and Steffan's core responsibility is virtualization, mostly VMware based! Note that TeleComputing is much larger than those 60 employees; they have about 700 employees worldwide, with offices in Norway, Sweden and Russia.

Steffan told me he was first introduced to Virtual SAN when it had just launched. Many of their offshore installations used what they call a "datacenter in a box" solution, which was based on IBM BladeCenter. It was a great solution for its time, but there were some challenges with it: cost was a factor, as were rack size and reliability. Swapping parts isn't always easy either, and that is one of the reasons they started exploring Virtual SAN.

For Virtual SAN they are no longer using blades and have instead switched to rack-mounted servers. Considering the low number of VMs typically running in these offshore environments, a fairly "basic" 1U server can be used. With 4 hosts you now only take up 4U, instead of the 8 or 10U a typical blade system requires. Before I forget: the hosts themselves are Lenovo x3550 M4s, each with one 200GB Intel S3700 SSD, six IBM 900GB 10K RPM drives, 64GB of memory, two Intel E5-2630 6-core CPUs and an M5110 SAS controller. The small footprint is very important in the type of environments they support, and on top of that the cost is significantly lower for 4 rack mounts vs a full BladeCenter. What do I mean by type of environments? Well, as I said, offshore, but more specifically oil platforms! Yes, you are reading that right, Virtual SAN is being used on oil platforms.

For these environments 3 hosts are actively used and a 4th host is there just to serve as a spare. If anything fails in one of the hosts the components can easily be swapped, and if needed even the whole host could be swapped out. Even with a spare host the environment is still much cheaper than the original blade architecture. I asked Steffan if these deployments were used by staff on the platform or remotely. Steffan explained that local staff can only access the VMs, while TeleComputing manages the hosts; rent-an-infrastructure, or infrastructure as a service, is the best way to describe it.
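For reference, the capacity of such a cluster is easy to work out from the specs above. A quick back-of-the-envelope calculation, assuming the common Virtual SAN policy of FailuresToTolerate=1 (which mirrors every object, roughly halving usable capacity) and ignoring metadata and slack-space overheads:

```python
# 3 active hosts (the 4th is a cold spare), 6 x 900GB drives each
hosts_active = 3
disks_per_host = 6
disk_gb = 900

raw_gb = hosts_active * disks_per_host * disk_gb
usable_gb = raw_gb / 2   # FTT=1 keeps a mirror copy of each object

print(raw_gb, usable_gb)  # 16200 8100.0 -> ~16.2TB raw, ~8.1TB usable
```

Plenty for the relatively low VM counts these offshore environments run.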

So how does that work? Well, they use a central vCenter Server in their datacenter and add the remote Virtual SAN clusters, connected via a satellite connection. The virtual infrastructure as such is completely managed from a central location. And not just the virtual infrastructure; the hardware is monitored as well. Steffan told me they use the vendor ESXi image and as a result get all of the hardware notifications within vCenter Server. A single pane of glass is key when you are managing many environments like these, plus it eliminates the need for a 3rd-party hardware monitoring platform.

Another thing I was interested in was how the hosts were connected; considering the special location of the deployment I figured there would be constraints here. Steffan mentioned that 10GbE is very rare in these environments and that they have standardized on 1GbE. Even the number of connections is limited: today they have 4 x 1GbE per server, of which 2 are dedicated to Virtual SAN. The use of 1GbE wasn't really a concern; the number of VMs is typically relatively low, so the expectation was (and testing and production have confirmed) that 2 x 1GbE would suffice.
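Whether 2 x 1GbE suffices can be sanity-checked with some quick math. A sketch of that estimate, using made-up illustrative numbers for the VM count and I/O profile:

```python
def vsan_link_utilization(vm_count, iops_per_vm, io_size_kb,
                          links=2, link_gbps=1.0):
    """Fraction of the dedicated VSAN bandwidth consumed by
    steady-state I/O traffic (a rough, illustrative estimate)."""
    throughput_bps = vm_count * iops_per_vm * io_size_kb * 1024 * 8
    capacity_bps = links * link_gbps * 1e9
    return throughput_bps / capacity_bps

# e.g. 30 VMs each doing 100 IOPS of 8KB: under 10% of 2 x 1GbE
print(f"{vsan_link_utilization(30, 100, 8):.1%}")
```

Steady-state numbers like these leave headroom, although bursts such as resync traffic after a failure will consume considerably more, which is one reason dedicating two uplinks to VSAN is sensible.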

As we were wrapping up our conversation I asked Steffan what he learned during the design/implementation, besides all the great benefits already mentioned. Steffan said that they quickly learned how critical the disk controller is, and that you need to pay attention to which driver you are using in combination with a certain version of the firmware. The HCL is leading and should be strictly adhered to. When Steffan started with VSAN the Health Check plugin wasn't released yet, unfortunately, as that could have helped with some of the challenges. Another caveat Steffan mentioned was that when single-device RAID-0 sets are used instead of passthrough, you need to make sure to disable write caching. Lastly, Steffan mentioned the importance of separating traffic streams when 1GbE is used: do not combine VSAN with vMotion and Management, for instance. vMotion by itself can easily saturate a 1GbE link, which could mean it pushes out VSAN or Management traffic.
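That separation advice reduces to a simple rule: on 1GbE, no uplink should carry both VSAN and another heavy traffic type. A trivial sketch of a validation check over a host's traffic-to-uplink mapping (the mapping format is invented for illustration, not pulled from any VMware API):

```python
def check_separation(uplink_map):
    """uplink_map: dict of traffic type -> set of vmnic names.
    Returns the set of uplinks shared between VSAN and any
    other traffic type (empty set means properly separated)."""
    vsan_nics = uplink_map.get("vsan", set())
    shared = set()
    for kind, nics in uplink_map.items():
        if kind != "vsan":
            shared |= vsan_nics & nics
    return shared

# 4 x 1GbE with two dedicated to VSAN, as in the design above:
good = {"vsan": {"vmnic2", "vmnic3"},
        "mgmt": {"vmnic0"}, "vmotion": {"vmnic1"}}
bad = {"vsan": {"vmnic2"}, "vmotion": {"vmnic2"}}
print(check_separation(good))  # empty: properly separated
print(check_separation(bad))   # vmnic2: vMotion can starve VSAN
```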

It is fair to say that this is by far the most exciting and special use case I have heard for Virtual SAN. I know, though, that there are some other really interesting use cases out there, as I have heard about installations on cruise ships and trains as well. Hopefully I will be able to track those down and share those stories with you. Thanks Steffan and TeleComputing for your time and great story, much appreciated!

Awesome fling: ESXi Embedded Host Client

Duncan Epping · Aug 13, 2015 ·

A long, long time ago I stumbled across a project within VMware which allowed you to manage ESXi through a client running on ESXi itself. Basically it presented an HTML interface for ESXi, not unlike the MUI we had in the old days. It was one of those pet projects done in spare time by a couple of engineers, which for various reasons was never completed at the time. Fortunately, the concept/idea did not die. Some very clever engineers felt it was time to have that "embedded host client" for ESXi and started developing something in their spare time, and this is the result.

I am not going to describe it in detail, as William Lam has an excellent post on this great fling already. The installation is fairly straightforward: basically a VIB you need to install, no rocket science. Once installed you can manage various aspects of your hosts and VMs, including:

  • VM operations (power on, off, reset, suspend, etc.)
  • Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
  • Configuring NTP on a host
  • Displaying summaries, events, tasks and notifications/alerts
  • Providing a console to VMs
  • Configuring host networking
  • Configuring host advanced settings
  • Configuring host services

Is that cool or what? Head over to the Fling website and test it. Make sure to provide feedback, as the engineers are very receptive and always looking to improve their fling. Personally I hope that this fling will graduate and be added to ESXi by default, or at a minimum be fully supported! Excellent work, Etienne Le Sueur and George Estebe!

Datrium finally out of stealth… Welcome Datrium DVX!

Duncan Epping · Jul 28, 2015 ·

Before I get started: I have not been briefed by Datrium, so I am still learning as I type this, and it is purely based on the somewhat limited info on their website. Datrium's name has been in the press a couple of times as it was the company often associated with Diane Greene. The rumour back then was that Diane Greene was the founder and was going to take on EMC; that was just a rumour, as Diane Greene is actually an investor in Datrium. Not just her, of course: Datrium is also backed by NEA (a venture capital firm) and various other well-known people like Ed Bugnion, Mendel Rosenblum, Frank Slootman and Kai Li. Yes, big buy-in from some of the original VMware founders. Knowing that two of the Datrium founders (Boris Weissman and Ganesh Venkitachalam) are former VMware Principal Engineers (and old-timers), that makes sense. (Source) This morning a tweet was sent out, and it seems today they are officially out of stealth.

As the sun rises this morning, so does a new dominant #datastorage player #Datrium #stealthmode

— Datrium (@Datrium) July 28, 2015

So what is Datrium about? Well, Datrium delivers a new type of storage system which they call DVX. Datrium DVX is a hybrid solution comprised of host-local data services and a network-accessed capacity shelf called the "NetShelf". I think this quote from their website sums up their intention: move all functionality to the host and let the shelf just take care of storing bits. I included a diagram that I found on their website as it makes things more clear.

On the host, DiESL manages in-use data in massive deduplicated and compressed caches on BYO (bring your own) commodity SSDs locally, so reads don’t need a network hop. Hosts operate locally, not as a pool with other hosts.

[Diagram: Datrium DVX architecture, from datrium.com]

It seems that from a host perspective the data services (caching, compression, RAID, cloning etc.) are implemented through the installation of a VIB, so kernel based rather than VM/appliance based. The NetShelf is accessible via 10GbE, and Datrium uses a proprietary protocol to connect to it. From the host side (ESXi) they connect locally over NFS, which means they have implemented an NFS server within the host. The NFS connection is also terminated within the host, and their own protocol/driver on the host handles the connection to the NetShelf. It is a bit of an awkward architecture, or better said… at first it is difficult to wrap your head around. This is the reason I used the word "hybrid", but maybe I should have said unique: hybrid not because of the mixture of flash and HDD, but because it is a hybrid of hyper-converged/host-local caching and more traditional storage, done in a truly unique way. What does that look like? Something like this, I guess:

[Diagram: Datrium DVX host and NetShelf layout, from datrium.com]

So what does this look like from a storage perspective? Well, each NetShelf will come with 29TB of usable capacity. The expected deduplication and compression ratio for enterprise companies is between 2x and 6x, which means you will have between roughly 58TB and 175TB at your disposal. In order to ensure your data is highly available, the NetShelf is a dual-controller setup with dual-port drives (meaning the drives are connected to both controllers and used in an active/standby fashion). Each controller has NVRAM which is used for write caching, and a write is acknowledged to the VM when it has been written to the NVRAM of both controllers. In other words, if a controller fails there should be no data loss.

Talking about availability, what about a host failure? If I read their website correctly there is no write caching from a host point of view, as it states that each host operates independently from a caching point of view (no mirroring of writes to other hosts). This also means that all the data services need to be inline: dedupe/compress/RAID. When those actions complete, the result is stored on the NetShelf, and then it is accessible by other hosts when needed. It makes me wonder what happens when DRS is enabled and a VM is migrated from one host to another. Will the read cache migrate with it to the other host? And what about very write-intensive workloads, how will those perform when all data services are inline? What kind of overhead can/will it have on the host? How will it scale out? What if I need more than one NetShelf? Those are some of the questions that pop up immediately. Considering the brain power within Datrium (former VMware, Data Domain, NetApp, EMC etc.) I am assuming they have a simple answer to those questions… I will try to ask them at VMworld or during a briefing and write a follow-up.
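Inline deduplication of the kind described works roughly like this: each incoming block is fingerprinted, and only blocks with unseen fingerprints are actually stored. A toy sketch of the textbook technique (not Datrium's implementation, just to illustrate the concept):

```python
import hashlib

class InlineDedupeStore:
    """Toy content-addressed store: identical blocks stored once."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data (stored once)
        self.refs = []     # logical write stream, as fingerprints

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)  # store only if new
        self.refs.append(fp)              # always record the reference
        return fp

store = InlineDedupeStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    store.write(block)
# 3 logical writes, but only 2 unique blocks consume capacity
print(len(store.refs), len(store.blocks))
```

Doing this inline means the hashing and lookup sit on the write path, which is exactly why the host-side CPU overhead question above matters.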

From an operational aspect it is an interesting solution, as it should lower the effort involved with managing storage almost to zero. There is the NFS connection and you have your VMs and VMDKs at the front end; at the back end you have a black box, or better said, a shelf dedicated to storing bits. This should be dead easy to manage and deploy. It shouldn't require a dedicated storage administrator; the VMware admin should be able to manage it. Some of you may ask: well, what if I want to connect anything other than a VMware host to it? For now Datrium appears to be mainly targeting VMware environments (which makes sense considering their DNA), but I guess they could implement this for various platforms in a similar fashion.

Again, I was not briefed by Datrium; I just happened to see their tweet this morning, but their solution is so intriguing that I figured I would share it anyway. Hope it was useful.

Interested? More info here:

  • Datasheet – http://www.datrium.com/datasheet/DVX_DataSheet.pdf
  • Host side implementation info – http://www.datrium.com/dvx-overview/diesl-software/
  • DVX Netshelf – http://www.datrium.com/dvx-overview/datrium-netshelf/
  • Twitter: http://www.twitter.com/datriumstorage

My top 15 VMworld sessions for 2015

Duncan Epping · Jul 22, 2015 ·

Every year I do a top VMworld sessions post. It gets more complicated each year, as there are so many great sessions. In past years I tried to restrict myself to 20, but it always ended up being 22, 23 or even more sessions. This year I am going to be strict: 15 at most, in random order. These are the sessions I would sign up for myself; unfortunately, as a VMware employee you can't register, but I am sure going to try to sneak in when I have time, or watch the recordings!

  • INF4528 – vCenter Server Appliance (VCSA) Best Practices & Tips/Tricks – William Lam
  • INF5211 – Automating Everything VMware with PowerCLI – Deep Dive – Alan Renouf & Luc Dekens
  • STO4949 – Extreme Performance Series: Virtual SAN Performance Deep-Dive – Lenin Singaravelu & Sankaran Sivathanu
  • NET4989 – The Future of Network Virtualization with VMware NSX – Bruce Davie
  • STO6228 – Monitoring and Troubleshooting Virtual SAN, Current and Future – Christian Dickmann & Cormac Hogan
  • CNA6649-S – Build and run Cloud-Native Apps in your Software-Defined Data Center – Kit Colbert & Aaron Sweemer & Jared Rosoff
  • VAPP4639 – Best Practices for Performance Tuning of Virtualized Telco and NFV Applications on vSphere ESXi – Bhavesh Davda & Jin Heo
  • STO4649 – Virtual Volumes Technical Deep Dive – Ken Werneburg & Patrick Dirks
  • NET5612 – NSX for vSphere Logical Load Balancing Deep Dive – Dimitri Desmidt & Uday Masurekar
  • INF5701 – Extreme Performance Series: vSphere Compute & Memory – Fei Guo & Seong Beom Kim
  • CTO6455 – Future Meets Present: Insights from VMware’s Field CTOs – Joe Baguley & Chris Wolf & Paul Strong
  • INF5306 – DRS Advancements in vSphere 6, Advanced Concepts, and Future Directions – Naveen Nagaraj
  • STO5336 – VMware Virtual SAN – Architecture Deep Dive – Christos Karamanolis & Rawlinson Rivera
  • INF4529 – VMware Certificate Management for Mere Mortals – Adam Eckerle & Ryan Johnson
  • STO6287-SPO – Instant Application Recovery and DevOps Infrastructure for VMware Environments – A Technical Deep Dive – Chris Wahl & Arvind Nithrakashyap

I did not include any sessions of my own; if you are interested in my sessions, see the list below:

  • INF4535 – 5 Functions of Software Defined Availability – Frank Denneman & Duncan Epping
  • STO5333 – Building a Stretched Cluster with Virtual SAN – Rawlinson Rivera & Duncan Epping
  • SDDC5027 – VCDX Unwrapped – Everything You Wanted to Know About VCDX – Panel
  • STO4650-QT – Five Common Customer Use Cases for Virtual SAN – Lee Dilworth & Duncan Epping
  • SDDC4593 – Ask the Expert vBloggers – Panel

See you guys there!


