
Yellow Bricks

by Duncan Epping



vSphere 6.5 what’s new – DRS

Duncan Epping · Oct 19, 2016 ·

Most of us have been using DRS for the longest time. To be honest, not much has changed over the past years: sure, there were some tweaks and minor changes, but nothing huge. In 6.5, however, some big features are introduced. Let's just list them all for the sake of completeness:

  • Predictive DRS
  • Network-Aware DRS enhancements
  • DRS profiles

First of all, Predictive DRS. This is a feature that the DRS team has been working on for a while. It integrates DRS with vROps to provide placement and balancing decisions. Note that this feature will be in Tech Preview until vRealize Operations releases a version of vROps that is fully compatible with vSphere 6.5, hopefully sometime in the first half of next year. Brian Graf has some additional details around this feature here, by the way.

Note that DRS will of course continue to use the data provided by vCenter Server; on top of that, however, it will also leverage vROps to predict what resource usage will look like, all based on historic data. Imagine a VM currently using 4GB of memory (demand), but every day around the same time a SQL job runs which makes the memory demand spike to 8GB. This data is now available through vROps, and as such this predicted resource spike can be taken into consideration when making placement/balancing recommendations. If for whatever reason the prediction is that resource consumption will be lower, then DRS will ignore the prediction and simply take current resource usage into account, just to be safe. (Which makes sense if you ask me.) Oh, and before I forget: DRS will look ahead for 60 minutes (3600 seconds).
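In other words, the prediction can only ever raise the demand figure DRS plans with, never lower it. A toy illustration of that rule (my own simplification, not actual DRS code):

```python
def effective_demand_mb(current_mb: int, predicted_mb: int) -> int:
    # A prediction below observed demand is ignored; only a higher
    # prediction changes what DRS plans for.
    return max(current_mb, predicted_mb)

print(effective_demand_mb(4096, 8192))  # 8192: the SQL-job spike is planned for
print(effective_demand_mb(4096, 2048))  # 4096: the lower prediction is ignored
```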

How do you configure this? Well, that is fairly straightforward when you have vROps running: go to your DRS cluster, click edit settings and enable the “Predictive DRS” option. Easy right? (See screenshot below.) You can also change that look-ahead value by the way; I wouldn't recommend it, but if you like you can add an advanced setting called ProactiveDrsLookaheadIntervalSecs.
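If you would rather script that advanced option than click through the Web Client, a minimal pyVmomi sketch could look like the following; the hostname, credentials and cluster name are placeholders, so verify against your own environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")

# Set the look-ahead interval as a DRS advanced option (value in seconds).
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo()
spec.drsConfig.option = [vim.option.OptionValue(
    key="ProactiveDrsLookaheadIntervalSecs", value="3600")]
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```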

One of the other features people have asked about is the consideration of additional metrics during placement/load balancing. This is what Network-Aware DRS brings. Within Network IO Control (v3) it is possible to set a reservation for a VM in terms of network bandwidth and have DRS consider this. That was introduced in vSphere 6.0 and has now been improved in 6.5. With 6.5, DRS also takes physical NIC utilization into consideration: when a host has higher than 80% network utilization, DRS will consider this host to be saturated and will not consider placing new VMs on it.
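Conceptually that saturation check is just a filter over the candidate host list. A toy sketch of the idea (my own simplification, not actual DRS code):

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    nic_utilization: float  # fraction of aggregate pNIC bandwidth in use

def placement_candidates(hosts, saturation_threshold=0.80):
    # Hosts above the threshold are treated as network-saturated and
    # skipped for initial placement, mirroring the 80% rule above.
    return [h for h in hosts if h.nic_utilization < saturation_threshold]

hosts = [Host("esx01", 0.35), Host("esx02", 0.90), Host("esx03", 0.62)]
print([h.name for h in placement_candidates(hosts)])  # ['esx01', 'esx03']
```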

And lastly, DRS Profiles. So what are these? In the past we've seen many new advanced settings introduced which allowed you to tweak the way DRS balances your cluster. In 6.5 several additional options have been added to the UI to make it easier for you to tweak DRS balancing, if and when needed; I would expect that for the majority of DRS users this won't be necessary. Let's look at each of the new options:

So there are 3 options here:

  • VM Distribution
  • Memory Metric for Load Balancing
  • CPU Over-Commitment

If you look at the descriptions then I think they make a lot of sense. Especially the first two are options I get asked about every once in a while. Some people prefer to have a cluster that is more equally balanced in terms of the number of VMs per host, which can be done by enabling “VM Distribution”. And for those who would much rather load balance on “consumed” instead of “active” memory, you can enable that as well. Now, “consumed” vs “active” is almost a religious debate. Personally I don't see too much value in it, especially not in a world where memory pages are zeroed when a VM boots and consumed is always high for all VMs, but nevertheless, if you prefer, you can balance on consumed instead. Last is CPU Over-Commitment; this one could be useful when you want to limit the number of vCPUs per pCPU, apparently something many VDI customers have asked for.
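For completeness: these three checkboxes are commonly reported to correspond to cluster advanced options named TryBalanceVmsPerHost, PercentIdleMBInMemDemand and MaxVcpusPerClusterPct; treat those key names as assumptions on my part and verify them against your build. Reusing the pyVmomi pattern from the Predictive DRS sketch above:

```python
from pyVmomi import vim

# Reported (unverified) advanced-option equivalents of the three UI checkboxes.
drs_profile_options = [
    vim.option.OptionValue(key="TryBalanceVmsPerHost", value="1"),        # VM Distribution
    vim.option.OptionValue(key="PercentIdleMBInMemDemand", value="100"),  # balance on consumed memory
    vim.option.OptionValue(key="MaxVcpusPerClusterPct", value="500"),     # cap over-commitment at 5 vCPUs per pCPU
]

spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(option=drs_profile_options)
# cluster.ReconfigureComputeResource_Task(spec, modify=True)  # cluster obtained as in the earlier sketch
```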

I hope that was useful. We are aiming to update the vSphere Clustering Deepdive at some point as well, to include some of these details…

Hyper-Converged is here, but what is next?

Duncan Epping · Oct 11, 2016 ·

Last week I was talking to a customer and they posed some interesting questions: what excites me in IT (why I work for VMware), and what is next for hyper-converged? I thought these were interesting and very relevant questions. I am guessing many customers have that same question (what is next for hyper-converged, that is). They see this shiny thing out there called hyper-converged, but if they take those steps, where does the journey end? I truly believe that those who went the hyper-converged route simply took the first steps of an SDDC journey.

Hyper-converged, I think, is a term which was hyped and over-used, just like “cloud” a couple of years ago. Let's break down what it truly is: hardware + software. Nothing really groundbreaking. It is different in terms of how it is delivered. Sure, it is a different architectural approach, as you utilize a software-based / server-side scale-out storage solution which sits within the hypervisor (or on top for that matter). Still, that hypervisor is something you were (most likely) already using, and I am sure that the hardware isn't new either. Then the storage aspect must be the big differentiator, right? Wrong. The fundamental difference, in my opinion, is how you manage the environment and the way it is delivered and supported. But does it really need to stop there, or is there more?

There definitely is much more, if you ask me. That is one thing that has always surprised me: many see hyper-converged as a complete solution, while the reality is that in many cases essential parts are missing. Networking, security, automation/orchestration engines, logging/analytics engines, BC/DR (and the orchestration of it), etc. Many different aspects and components seem to be overlooked. Just look at networking: even including a switch is not something you see too often, let alone the configuration of that switch, or overlay networks, firewalls / load balancers. None of it appears to be a part of hyper-converged systems. The funny thing is, though, that if you are going on a software-defined journey, if you want an enterprise-grade private cloud that allows you to scale in a secure but agile manner, these components are a requirement; you cannot go without them. You cannot extend your private cloud to the public cloud without any type of security in place, and one would assume that you would like to orchestrate everything from that same platform and have the same networking / security capabilities at your disposal, both private and public.

That is why I was so excited about the VMworld US keynote. Cross-Cloud Services on top of hyper-converged, leveraging all the tools VMware provides today (vSphere, VSAN, NSX), will allow you to do exactly what I describe above. Whether that is to IBM, vCloud Air or any of the other mega clouds listed in the slide below is beside the point. Extending your datacenter services into public clouds is what we have been talking about for a while, a hybrid approach which could bring (dare I say) elasticity. This is a fundamental aspect of SDDC, of which a hyper-converged architecture is simply a key pillar.

Hyper-converged by itself does not make a private cloud. Hyper-converged does not deliver a full SDDC stack; it is a great step in the right direction, however. But before you take that (necessary) hyper-converged step, ask yourself what is next on the journey to SDDC. Networking? Security? Automation/orchestration? Logging? Monitoring? Analytics? Hybridity? Who can help you reach full potential, who can help you take those next steps? That is what excites me; that is why I work for VMware. I believe we have a great opportunity here, as we are the only company that holds all the pieces of the SDDC puzzle. And with regards to what is next? Delivering all of that in an easy-to-consume manner, that is what is next!


An Industry Roadmap: From storage to data management #STO7903 by @xtosk

Duncan Epping · Sep 1, 2016 ·

This is the session I had been waiting for; I had it very high on my “must see” list, together with the session presented by Christian Dickmann earlier today. Not because it happened to be presented by our Storage and Availability CTO Christos Karamanolis (@XtosK on Twitter), but because of the insights I expected this session to provide. The title, I think, says it all: An Industry Roadmap: From storage to data management.

** Keep that in mind when reading the rest of the article. Also, this session literally just finished a second ago; I wanted to publish it asap, so if there are any typos, my apologies. **

Christos starts by explaining the current problem. There is huge information growth: 2x growth every 2 years, and that is on the conservative side. Where does the data go? According to analysts it is not expected to go to traditional storage; the growth of traditional storage is slowing down, and in fact negative growth is being seen. Two new types of storage have emerged and are growing fast: Hyper-scale Server SAN Storage and Enterprise Server SAN Storage, aka hyper-converged systems.

With new types of applications changing the world of IT, data management is more important than ever before. Today's storage products do not meet the requirements of this rapidly changing IT world and do not provide the agility your business owners demand. Many of the infrastructure problems can be solved by hyper-converged software, all enabled by the hardware evolution we've witnessed over the last years: flash, RDMA, NVMe, 10GbE, etc. These hardware changes allowed us to simplify storage architectures and deliver storage as software. But it is not just about storage; it is also about operational simplicity: how do we enable our customers to manage more applications and VMs with less? Storage Policy Based Management has enabled this, both for Virtual SAN (hyper-converged) and, through Virtual Volumes, in more traditional environments.

Data lifecycle management, however, is still challenging. Snapshots, clones, replication, dedupe, checksums, encryption: how do I enable these on a per-VM level? How do we decouple all of these data services from the underlying infrastructure? VMware has been doing that for years; the best example is vSphere Replication, where VMs and virtual disks can be replicated on a case-by-case basis between different types of storage systems. It is even possible to leverage an orchestration solution like Site Recovery Manager to manage your DR strategy end to end from a single interface, from private cloud to private cloud, but also from private to public. Private to public is enabled by the vCloud Availability suite, where you can pay as you g(r)o(w). All of this is again driven by policy and through the interface you use on a daily basis, the vSphere Web Client.

How can we improve the world of DR? Just imagine there was a portable snapshot: a snapshot decoupled from storage, which can be moved between environments, can be stored in public or private clouds, and maybe even both at the same time. This is something we at VMware are working on: a portable snapshot that can be used for data protection purposes. Local copies, archived copies in remote datacenters with a different SLA/retention.

How does this scale, however, when you have 10000s of VMs? Especially when there are 10s of snapshots per VM, or even hundreds. This should all be driven by policy. And if I can move the data to different locations, can I use it for other purposes as well? How about leveraging it for test & dev or analytics? Portable snapshots providing application mobility.

Christos next demoed what the above may look like in the future. The demo shows a VM being replicated from vSphere to AWS, but vSphere to vSphere or vSphere to Azure were also available as options. The normal settings are configured (destination datastore and network) and literally within seconds the replication starts. The UI looks very crisp and seems similar to what was shown in the keynote on day 1 (Cross-Cloud Services). But how does this work in the new world of IT; what if I have many new-gen applications, containers / microservices?

A distributed file system for cloud-native apps is now introduced. It appears to be a solution which sits on top of Virtual SAN and provides a file system that can scale to 1000s of hosts, with functionality like highly scalable and performant snapshots and clones. The snapshots provided by this distributed file system are also portable; this concept being developed is called exoclones. And it is not something that is just living in the heads of the engineering team: Christos actually showed a demo of an exoclone being exported and imported into another environment.

If VMware does provide that level of data portability, how do you track and control all that data? Data governance is key in most environments: how do we enforce compliance, integrity and availability? This will be the next big challenge for the industry. There are some products which can provide this today, but nothing that can do it cross-cloud and for both current and new application architectures and infrastructures.

For years we seem to have been under the impression that the infrastructure was the center of the universe. The reality is that it serves a clear purpose: host applications and provide users access to data. Your company's data is what is most important. We at VMware realize that and are working to ensure we can help you move forward on your next big journey. In short, it is our goal that you can focus on data management and no longer need to focus on the infrastructure.

Great talk!

Rubrik landed new funding round and announced version 3.0

Duncan Epping · Aug 24, 2016 ·

After having gone through all my holiday email it is now time to go over some of the briefings. The Rubrik briefing caught my eye, as it had some big news in it. First of all, they landed a Series C round; big congrats, especially considering the size: $61m is pretty substantial I must say! Now, I am not a financial analyst, so I am not going to spend too much time talking about it, as the introduction of a new version of their solution is more interesting to most of you. So what did Rubrik announce with version 3, aka Firefly?

First of all, the “Converged Data Management” term seems to be gone and “Cloud Data Management” has been introduced; to be honest, I prefer “Cloud Data Management”, mainly because data management is not just about data in your datacenter, but about data in many different locations, which typically is the case for archival or backup data. So that is the marketing part; what was announced in terms of functionality?

Version 3.0 of Rubrik supports:

  • Physical Linux workloads
  • Physical SQL
  • Edge virtual appliance (for ROBO for instance)
  • Erasure Coding

When it comes to physical SQL and Linux support, it probably goes without saying, but you will be able to back up those systems using the same policy-driven / SLA concepts Rubrik already provides in their UI. For those who didn't read my other articles on Rubrik: policy-based backup/data management (or SLA domains, as they call it) is their big thing. No longer do you create a backup schedule; you create an SLA and assign that SLA to a workload, or even to a group. And now this concept applies to SQL and physical Linux as well, which is great if you still have physical workloads in your datacenter! Connecting to SQL is straightforward: there is a connector service, a simple MSI that needs to be installed.
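To make the SLA-domain concept concrete, here is a rough, purely illustrative Python model; the class and field names are mine, not Rubrik's API:

```python
from dataclasses import dataclass, field

@dataclass
class SlaDomain:
    """Illustrative model of an SLA domain: you declare objectives,
    the platform derives the schedule (names are hypothetical)."""
    name: str
    snapshot_every_hours: int
    local_retention_days: int
    archive_retention_days: int
    workloads: list = field(default_factory=list)

    def assign(self, workload: str) -> None:
        # Assigning the SLA is the only per-workload step; no backup
        # jobs or schedules are created by hand.
        self.workloads.append(workload)

gold = SlaDomain("Gold", snapshot_every_hours=4,
                 local_retention_days=30, archive_retention_days=365)
gold.assign("sql-prod-01")   # physical SQL Server host
gold.assign("linux-web-02")  # physical Linux workload
```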

Now, all that data can be stored in AWS S3, or for instance Microsoft Azure, in the public cloud, or maybe in a privately deployed Scality solution. The great thing about the different tiers of storage is that you qualify the tiers in their solution, and data flows between them as defined in your workload SLA. This also goes for the announced Edge virtual appliance. This is basically a virtualized version of the Rubrik appliance, which allows you to deploy a solution in ROBO offices. Through the SLA you bring data to your main datacenter, but you can also keep “locally cached” copies so that restores are fast.

Finally, Rubrik used mirroring in previous versions to safely store data. Very similar to VMware Virtual SAN, they now introduce erasure coding, which means they will be able to store data more efficiently, and according to Chris Wahl at no performance cost.
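The efficiency argument is easy to quantify with a quick back-of-the-envelope calculation; the briefing didn't state which erasure-coding scheme Rubrik uses, so the 4+2 layout below is just an illustrative assumption:

```python
def capacity_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw-to-usable capacity multiplier for an erasure-coded layout."""
    return (data_fragments + parity_fragments) / data_fragments

# Mirroring keeps two full copies: 2.0x raw capacity per usable byte.
print(capacity_overhead(1, 1))  # 2.0 (equivalent to 2-way mirroring)
# A hypothetical 4+2 stripe still tolerates two failures, at only 1.5x.
print(capacity_overhead(4, 2))  # 1.5
```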

Overall an interesting 3.0 release of their platform. If you are looking for a new backup/data management solution, definitely one to keep your eye on.

AirSembly by AirVM, great interface for vCloud Director and vCenter for SPs

Duncan Epping · Dec 18, 2015 ·

This week I had a conf call with one of my old colleagues, Willem van Engeland. Willem showed me a product called AirSembly by the new company he works for, called AirVM. I had bumped into the AirVM folks at VMworld; their booth kinda stood out, you could say. Recently AirVM also revealed that they had been selected by VMware to be the preferred cloud management platform for vCloud Air Network partners. All of this made me curious, and I figured I would have a quick look at what they have to offer.

Willem showed me what they offer, which is basically a cloud management platform for vCloud Director environments. It does a lot of things vCloud Director doesn't offer out of the box today, which saves a lot of time and resources, as you would normally need to custom-develop this functionality. I am talking about, for instance, a fully customizable HTML5 interface, and a management solution which allows you as a cloud provider to create distributors, the distributors to create partners, and the partners to sell to customers. Yes, you can have multiple layers, which is a very common model for cloud providers. (You now also have the option to cut out the “distributor layer”, as AirVM found out this is less common than expected at first.)

What I like about the layered approach is that as a partner I get to see what is important to me (same for the provider, customer, etc.) and am not overwhelmed with details I don't care about. For instance, as a customer you will want to know what is running in your VDC.

But as a partner I probably care about other things, things like these:

Now, those of course are just two simple examples of what you get with AirSembly. For the vCAN partner it is probably more important to know that there is deep vCloud Director integration. Some of the stats in the AirSembly UI are not even available in the vCloud Director UI itself. On top of leveraging the vCloud Director APIs, you can also connect AirSembly to vCenter Server, and they integrate with RabbitMQ. As a result, any changes in your vCloud Director environment will be noticed by AirSembly, and as such AirSembly will always reflect the proper state of the environment.
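vCloud Director can publish its event notifications to an AMQP broker such as RabbitMQ, which is presumably what makes this state tracking possible without polling. A minimal, hypothetical Python/pika listener sketch; the broker host, exchange name and routing key are assumptions, and AirSembly's actual integration details are not public:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="amqp.lab.local"))
channel = conn.channel()
# Exchange name is configurable in vCD; "vcd-notifications" is a placeholder.
channel.exchange_declare(exchange="vcd-notifications",
                         exchange_type="topic", durable=True)
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="vcd-notifications", queue=queue, routing_key="#")

def on_event(ch, method, properties, body):
    # Each message describes a vCD event (VM created, vApp powered on, ...),
    # which lets a portal mirror vCD state as changes happen.
    print(method.routing_key, body[:120])

channel.basic_consume(queue=queue, on_message_callback=on_event, auto_ack=True)
channel.start_consuming()
```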

Even more important, AirSembly allows you to customize basically your whole front-end. You can simply define colour schemes and change the fonts through drop-downs, you can add custom headers and footers, logos and slogans… or you can go as far as providing custom CSS and go all-out when it comes to branding. Everything is possible through their interface. It allows you to create a portal that looks and feels like your own website, which is important.

It has been a while since I touched vCloud Director, but one thing I still clearly remember is the complexity. That is what I like about AirSembly: it offers a lot of functionality out of the box as a cloud management platform for vCloud Director and vCenter Server, but it doesn't feel complex at any point. The different layers for cloud provider, distributor, partner and customer take a while to get used to, but depending on where you sit in the chain, you as a “consumer” should never really notice that.

I am going to leave it at that for now, mainly because the weekend is coming up and my holiday is about to start. If you want to know more, have a look at this interview that Jeremy van Doorn did for VMworld TV with AirVM, and the demo that was given to Eric Sloof.

