
Yellow Bricks

by Duncan Epping



HCI2164BU – HCI Management, current and futures

Duncan Epping · Sep 5, 2018 ·

This session by Christian Dickmann and Junchi Zhang is usually one of my favorites in the HCI track, mainly because they show a lot of demos, and in many cases what they show ends up being part of the product within 6-12 months. The session revolved entirely around management, or as they called it in the session, “providing a holistic HCI experience”.

After a short intro, Christian showed a demo of the current vCenter Server Appliance installation experience and how it can be deployed to a vSAN datastore, followed by the Quickstart functionality. I posted a demo of Quickstart earlier this week; let me post it here as well so you have an idea of what it is and does.

In the next demo, Christian showed how you can upgrade the firmware of a disk controller using Update Manager. Pretty cool, but as far as I know still limited to a single disk controller; hopefully more will follow soon. More importantly, after that demo ended he started talking about “Guided SDDC Update & Patching”, and this is where it got extremely interesting. We all know that it isn’t easy to upgrade a full stack, and what Christian was describing would do exactly that. Do you have Horizon? Sure, we will upgrade that as well when we do vCenter / ESXi / vSAN etc. Do you have NSX as part of your infra? Sure, that is also something we will take into account and upgrade when required. This would also include firmware upgrades for NICs, disk controllers, etc.
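To make the ordering problem concrete, here is a minimal sketch of what dependency-ordered stack patching could look like. The component names, versions, and upgrade order below are my own illustrative assumptions, not how the actual feature works:

```python
# Toy sketch of dependency-ordered full-stack upgrades. Component names,
# versions, and the ordering are illustrative assumptions only.

# Order matters: management plane first, then hosts, then the layers
# that depend on them.
UPGRADE_ORDER = ["vCenter", "NSX", "ESXi", "vSAN on-disk format",
                 "NIC/controller firmware", "Horizon"]

inventory = {
    "vCenter": "6.5", "ESXi": "6.5", "vSAN on-disk format": "5",
    "NSX": "6.3", "NIC/controller firmware": "1.0", "Horizon": "7.4",
}
targets = {
    "vCenter": "6.7", "ESXi": "6.7", "vSAN on-disk format": "6",
    "NSX": "6.4", "NIC/controller firmware": "1.1", "Horizon": "7.5",
}

def plan_upgrades(inventory, targets):
    """Emit upgrade steps in dependency order, skipping up-to-date parts."""
    for component in UPGRADE_ORDER:
        current, target = inventory[component], targets[component]
        if current != target:
            yield f"Upgrade {component}: {current} -> {target}"

for step in plan_upgrades(inventory, targets):
    print(step)
```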

Next, Christian showed the Support Insight feature, which is enabled through the Customer Experience Improvement Program. His demo showed how to create a support request right from the H5 client. The process makes clear that the solution understands the situation and files the ticket; it then shows what the support team sees. This allows the support team to quickly analyze the environment and, more importantly, inform the customer about the solution. No need to upload log bundles or anything like that; that all happens automatically. And that’s not where it stops: you will be informed about the solution in the H5 client as well. Cool, right?

Next, Junchi was up, and he discussed capacity management first. As he mentioned, it appears to be difficult for people to understand the capacity graphs provided by vSAN. Junchi proposed a new model where it is instantly clear what the usable space is and what the current capacity is being consumed by. Not just at the cluster level, but also at the VM level. This should also include what-if scenarios for usage projection. Junchi then quickly demoed the tools available that help with sizing and scaling.
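To give you an idea of why the current graphs confuse people, here is a rough sketch of the usable-versus-consumed arithmetic such a model would surface. All numbers and overhead factors below are illustrative assumptions only:

```python
# Rough sketch of the "usable vs. consumed" arithmetic. All numbers are
# illustrative; real vSAN sizing also has to account for dedupe/compression,
# on-disk format overhead, etc.

raw_tb = 40.0          # total raw capacity across the cluster
slack_fraction = 0.30  # free-space headroom kept for rebuilds/rebalancing
ftt = 1                # failures to tolerate with RAID-1 mirroring

replication_factor = ftt + 1            # RAID-1: each write stored FTT+1 times
usable_tb = raw_tb * (1 - slack_fraction) / replication_factor

vm_written_tb = 6.0                     # data written by VMs (before replication)
consumed_raw_tb = vm_written_tb * replication_factor

print(f"Usable capacity:  {usable_tb:.1f} TB")        # 14.0 TB
print(f"Consumed (raw):   {consumed_raw_tb:.1f} TB")  # 12.0 TB
print(f"Remaining usable: {usable_tb - vm_written_tb:.1f} TB")
```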

Next, Native File Services, Data Protection, and Cloud Native Storage were briefly discussed. What does the management of these services look like? The file services demo that Junchi showed was really slick: fill out IP details and domain details and have File Services running in a minute or two, natively on vSAN. The only thing you would need to do afterwards is create file shares and give folks access to them. Monitoring will also go through familiar screens like the health check.
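For those wondering how little input that actually is, here is a hypothetical sketch of the handful of values the wizard asked for in the demo. The field names are mine, not the actual API:

```python
# Hypothetical sketch of the File Services wizard inputs; field names
# are my own, not the actual vSAN File Services API.

file_services_config = {
    "domain": "fileservices.example.local",
    "dns_servers": ["192.168.1.10", "192.168.1.11"],
    "netmask": "255.255.255.0",
    "gateway": "192.168.1.1",
    # One IP per file server front end running natively on the vSAN cluster.
    "frontend_ips": ["192.168.1.50", "192.168.1.51", "192.168.1.52"],
}

# After the service is up, the only remaining work is shares and access.
shares = [
    {"name": "engineering", "quota_gb": 500, "access": ["eng-group"]},
    {"name": "finance", "quota_gb": 200, "access": ["finance-group"]},
]

for share in shares:
    print(f"Create share '{share['name']}' ({share['quota_gb']} GB) "
          f"for {', '.join(share['access'])}")
```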

Last but not least, Junchi discussed the integration with vRealize Automation, both on-premises and SaaS-based. A very cool demo showed how Cloud Assembly (but also vRA) will be able to leverage storage policies, with new applications provisioned using blueprints that have these policies associated with them.

That was it. If you would like to know more, watch the session online, or attend it in EMEA!

An Industry Roadmap: From storage to data management #STO7903 by @xtosk

Duncan Epping · Sep 1, 2016 ·

This is the session I have been waiting for; I had it very high on my “must see” list, together with the session presented by Christian Dickmann earlier today. Not because it happened to be presented by our Storage and Availability CTO Christos Karamanolis (@XtosK on Twitter), but because of the insights I expected this session to provide. The title, I think, says it all: An Industry Roadmap: From storage to data management.

** Keep that in mind when reading the rest of the article. Also, this session literally just finished a second ago. I wanted to publish it asap, so if there are any typos, my apologies. **

Christos starts by explaining the current problem: there is huge information growth, 2x every 2 years, and that is on the conservative side. Where does all that data go? According to analysts, it is not expected to go to traditional storage; the growth of traditional storage is slowing down, and negative growth has even been seen. Two new types of storage have emerged and are growing fast: Hyper-scale Server SAN Storage and Enterprise Server SAN Storage, aka hyper-converged systems.

With new types of applications changing the world of IT, data management is more important than ever before. Today’s storage products do not meet the requirements of this rapidly changing IT world and do not provide the agility your business owners demand. Many of the infrastructure problems can be solved by hyper-converged software, enabled by the hardware evolution we’ve witnessed over the last years: flash, RDMA, NVMe, 10GbE, etc. These changes from a hardware point of view allowed us to simplify storage architectures and deliver storage as software. But it is not just about storage; it is also about operational simplicity. How do we enable our customers to manage more applications and VMs with less? Storage Policy Based Management has enabled this for both Virtual SAN (hyper-converged) and Virtual Volumes in more traditional environments.
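For those unfamiliar with Storage Policy Based Management, here is a minimal sketch of the core idea: requirements live in a policy and are evaluated against what the storage offers. The capability names below are illustrative, not the real SPBM schema:

```python
# Minimal sketch of the SPBM idea: capability requirements live in a
# policy, and compliance is evaluated against what the datastore offers.
# Capability names and values are illustrative assumptions.

gold_policy = {
    "failuresToTolerate": 1,
    "stripeWidth": 2,
    "checksumEnabled": True,
}

datastore_capabilities = {
    "failuresToTolerate": 2,   # maximum the datastore can satisfy
    "stripeWidth": 12,
    "checksumEnabled": True,
}

def is_compliant(policy, capabilities):
    """A VM is compliant if the datastore can meet every requirement."""
    for key, required in policy.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if offered != required:
                return False
        elif offered is None or offered < required:
            return False
    return True

print(is_compliant(gold_policy, datastore_capabilities))  # True
```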

Data lifecycle management, however, is still challenging. Snapshots, clones, replication, dedupe, checksums, encryption: how do I enable these on a per-VM level? How do we decouple all of these data services from the underlying infrastructure? VMware has been doing that for years; the best example is vSphere Replication, where VMs and virtual disks can be replicated on a case-by-case basis between different types of storage systems. It is even possible to leverage an orchestration solution like Site Recovery Manager to manage your DR strategy end to end from a single interface, from private cloud to private cloud, but also from private to public. Private to public is enabled by the vCloud Availability suite, where you can pay as you g(r)o(w). All of this is again driven by policy and through the interface you use on a daily basis, the vSphere Web Client.

How can we improve the world of DR? Just imagine there was a portable snapshot: a snapshot that is decoupled from storage, can be moved between environments, and can be stored in public or private clouds, maybe even both at the same time. This is something we as VMware are working on: a portable snapshot that can be used for data protection purposes. Local copies, and archived copies in remote datacenters with a different SLA/retention.

How does this scale, however, when you have 10000s of VMs? Especially when there are 10s of snapshots per VM, or even hundreds. This should all be driven by policy. And if I can move the data to different locations, can I use this data for other purposes as well? How about leveraging it for test&dev or analytics? Portable snapshots providing application mobility.
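Here is a small sketch of what “driven by policy” could mean for snapshots at this scale. The tiers and schedule format are assumptions of mine, purely for illustration:

```python
# Sketch of policy-driven snapshot retention: instead of managing tens of
# snapshots per VM by hand, a policy decides when a new copy is due.
# The tier names and schedule format are illustrative assumptions.

from datetime import datetime, timedelta

policy = {
    "local_copies": {"interval_hours": 4, "retain": 12},     # ~2 days local
    "remote_archive": {"interval_hours": 24, "retain": 30},  # 30 daily copies
}

def snapshot_due(last_snapshot: datetime, now: datetime, tier: str) -> bool:
    """Return True when the policy says this tier needs a new snapshot."""
    interval = timedelta(hours=policy[tier]["interval_hours"])
    return now - last_snapshot >= interval

now = datetime(2016, 9, 1, 12, 0)
print(snapshot_due(datetime(2016, 9, 1, 6, 0), now, "local_copies"))    # True
print(snapshot_due(datetime(2016, 9, 1, 0, 0), now, "remote_archive"))  # False
```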

Christos next demoed what the above may look like in the future. The demo shows a VM being replicated from vSphere to AWS, but vSphere to vSphere and vSphere to Azure were also available as options. The normal settings are configured (destination datastore and network), and literally within seconds the replication starts. The UI looks very crisp and seems similar to what was shown in the keynote on day 1 (Cross-Cloud Services). But how does this work in the new world of IT? What if I have many new-gen applications, containers / microservices?

A distributed file system for cloud native apps is now introduced. It appears to be a solution that sits on top of Virtual SAN and provides a file system that can scale to 1000s of hosts, with functionality like highly scalable and performant snapshots and clones. The snapshots provided by this distributed file system are also portable; this concept being developed is called exoclones. And it is not something that just lives in the heads of the engineering team: Christos actually showed a demo of an exoclone being exported and imported into another environment.

If VMware does provide that level of data portability, how do you track and control all that data? Data governance is key in most environments: how do we enforce compliance, integrity, and availability? This will be the next big challenge for the industry. There are some products which can provide this today, but nothing that can do it cross-cloud and for both current and new application architectures and infrastructures.

For years we seem to have been under the impression that the infrastructure was the center of the universe. The reality is that it serves a clear purpose: host applications and provide users access to data. Your company’s data is what is most important. We as VMware realize that and are working to ensure we can help you move forward on your next big journey. In short, it is our goal that you can focus on data management and no longer need to focus on the infrastructure.

Great talk!

VMworld Session: VSAN – Software Defined Storage Platform of the Future #STO6050

Duncan Epping · Sep 3, 2015 ·

Unfortunately I haven’t been able to attend too many sessions, only 2 so far. This is one I didn’t want to miss, as it was all about what VMware is working on for VSAN and the layers that could sit on top of VSAN. Rawlinson and Christos first spoke about where VSAN is today, mainly discussing the use cases (monolithic apps like Exchange, SQL, etc.) and the simplicity VSAN brings. After that, an explanation of the VSAN object/component model was provided, which was the lead-in to the future.

We are in the middle of an evolution towards cloud native applications, Christos said. Cloud native apps scale in a different way than traditional apps, and their requirements differ. There is usually no need for HA and DRS, as these apps contain that functionality within their own framework. What does this mean for the vSphere layer?

VMware vSphere Integrated Containers and the VMware Photon Platform enable these new types of applications. But how do we enable them from a storage point of view? What kind of scale will we require? Will we need different data services? Will we need different tools? What about performance?

The first project discussed is the Performance Service, which will come as part of the Health Check plugin, providing metrics at the cluster level, host level, disk group level, disk level… The Performance Service architecture is very interesting and is not a “standard vCenter Server service”. Providing deep insights using per-host traces is not possible, as it would not scale. Instead, a distributed model is proposed: each host collects its own data, each cluster rolls this up, and this can be done for many clusters. Data is both processed and stored in a distributed fashion. The cost for a solution like this should be around 10% of 1 core on a server. Just think what a vCenter Server would look like with the same type of scale and cost: a 1000-host solution could easily result in a 100 vCPU requirement, which is not realistic.
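Here is a toy sketch of that decentralized roll-up idea: each host reduces its own samples to a compact summary, and the cluster level only ever aggregates summaries, never raw traces. The structure is my own illustration, not the actual implementation:

```python
# Toy sketch of decentralized metric roll-up: hosts keep raw samples
# local and only ship compact summaries upward. Illustrative only.

def host_summary(samples):
    """Per-host aggregation: reduce raw samples to a compact summary."""
    return {"count": len(samples), "total": sum(samples), "max": max(samples)}

def cluster_rollup(host_summaries):
    """Cluster-level roll-up operates on summaries only, so it stays cheap."""
    total_count = sum(s["count"] for s in host_summaries)
    return {
        "count": total_count,
        "avg": sum(s["total"] for s in host_summaries) / total_count,
        "max": max(s["max"] for s in host_summaries),
    }

# Latency samples (ms) collected locally on each host:
hosts = [[1.2, 0.8, 1.5], [2.1, 1.9], [0.6, 0.7, 0.9, 1.1]]
summaries = [host_summary(samples) for samples in hosts]
print(cluster_rollup(summaries))  # {'count': 9, 'avg': 1.2, 'max': 2.1}
```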

Rawlinson demoes a potential solution for this. In this scenario we are talking about 1000s of hosts from which data is gathered, analyzed, and presented in what appears to be an HTML5 interface. The solution doesn’t just provide details on the environment, it also allows you to mitigate problems. Note that this is a prototype of an interface that may or may not at some point in time be released. If you like what you see, though, make sure to leave a comment, as I am sure that helps make this prototype a reality!

Next being discussed is the potential to leverage VSAN not just for virtual machines but also for containers, with the capability to store files on top of VSAN. A distributed file system for cloud native apps is introduced. Some of the requirements for such a distributed file system would be a scalable data path, clones at massive scale, multi-tenancy, and multi-purpose use.

VMware is also prototyping a distributed file system and has it running in its labs. It sits on top of VSAN, leveraging that scalable data path to store both its data and metadata. Rawlinson demonstrates how he can create 2000 clones of a file in under a second across 1000 hosts and run his application. Note that this application isn’t copied to those 1000 hosts; it is a simple mountpoint on 1000 hosts. A truly distributed file system with extremely scalable clone and snapshot technology.
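Here is a toy model of why 2000 clones can appear in under a second: a clone is just new metadata pointing at shared read-only blocks, and data is only copied when a clone writes. This is purely illustrative, not VMware’s actual design:

```python
# Toy copy-on-write model: cloning copies metadata (block references),
# never the data itself. Illustrative only, not VMware's design.

class File:
    def __init__(self, blocks):
        self.blocks = blocks          # references to shared, read-only blocks

    def clone(self):
        # O(metadata): share the same block references, copy no data.
        return File(list(self.blocks))

    def write(self, index, data):
        # Copy-on-write: only the touched block gets a private copy.
        self.blocks[index] = data

base = File(["blk0", "blk1", "blk2"])
clones = [base.clone() for _ in range(2000)]  # near-instant, metadata only
clones[0].write(1, "blk1'")                   # first write triggers the copy
print(clones[0].blocks)  # ['blk0', "blk1'", 'blk2']
print(clones[1].blocks)  # ['blk0', 'blk1', 'blk2'] -- still shared
```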

Christos wraps up. The key point is that VSAN will be the enabler of future storage solutions, as it provides extreme scale at a low resource overhead. Awesome session, great peek into the future.

