
Yellow Bricks

by Duncan Epping



Startup News Flash part 3

Duncan Epping · Aug 20, 2013 ·

Who knew that so quickly after part 1 and part 2 there would be a part 3? Not that strange, I guess, considering VMworld is coming up soon and there was a Flash Memory Summit last week. It seems there is a battle going on in the land of the AFAs (all-flash arrays), and it isn't about features / data services as one would expect. No, they are battling over capacity density, aka how many TBs can I cram into a single U. I am not sure how relevant this is going to be over time; yes, it is nice to have dense configurations, and yes, it is awesome to have a billion IOPS in 1U, but most of all I am worried about availability and integrity of my data. So instead of going all out on density, how about going all out on data services? Not that I am saying density isn't useful, it is just… Anyway, I digress…

One of the companies which presented at Flash Memory Summit was Skyera. Skyera announced an interesting new product called skyEagle. Another all-flash array, is what I can hear many of you thinking, and yes I thought exactly the same… but skyEagle is special compared to others. This 1U box manages to provide 500TB of flash capacity, and that is 500TB of raw capacity. So just imagine what that could end up being after Skyera's hardware-accelerated data compression and data de-duplication have done their magic. Pricing wise? Skyera has set a list price for the read-optimized half-petabyte (500 TB) skyEagle storage system of $1.99 per GB, or $0.49 per GB with data reduction technologies. More specs can be found here. Also, I enjoyed reading this article on The Register, which broke the news…
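To put those numbers in perspective, here is a quick back-of-the-envelope calculation based purely on the figures quoted above (my own arithmetic, assuming 1 TB = 1,000 GB; Skyera has not published these totals):

```python
# Back-of-the-envelope math on the skyEagle list pricing quoted above.
# Assumes 1 TB = 1,000 GB; these totals are my own illustration, not Skyera's.

raw_capacity_gb = 500 * 1000           # 500 TB of raw flash capacity, in GB
price_per_gb_raw = 1.99                # list price per raw GB
price_per_gb_effective = 0.49          # list price per GB after data reduction

list_price = raw_capacity_gb * price_per_gb_raw
implied_reduction = price_per_gb_raw / price_per_gb_effective

print(f"List price for 500 TB raw: ${list_price:,.0f}")            # ~$995,000
print(f"Implied data reduction ratio: {implied_reduction:.1f}:1")  # ~4:1
```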

David Flynn (former Fusion-io CEO) and Rick White (Fusion-io founder) started a new company called Primary Data. The Wall Street Journal reported on this and more or less revealed what they will be working on: "that essentially connects all those pools of data together, offering what Flynn calls a 'unified file directory namespace' visible to all servers in company computer rooms–as well as those 'in the cloud' that might be operated by external service companies." This kind of reminds me of Aetherstore, or at least the description aligns with what Aetherstore is doing. Definitely a company worth tracking if you ask me.

One of the companies I did an introduction post on is SimpliVity. I liked their approach to converged infrastructure, as it combines not just compute and storage but also backup, replication, snapshots, dedupe and cloud integration. This week they announced an update to their OmniCube CN-3000 platform and introduced two new platforms, the OmniCube CN-2000 and the OmniCube CN-5000. So what are these two new OmniCubes? Basically the CN-5000 is the big brother of the CN-3000, and the CN-2000 is its kid brother. I can understand why they introduced these, as it will help expand the target audience; "one size fits all" doesn't work when the cost for "all" is the same, so the TCO/ROI changes based on your actual requirements, and not in a good way. One of the features that made SimpliVity unique, and that has had a major update, is the OmniStack Accelerator: a custom-designed PCIe card that does inline dedupe and compression. Basically an offload mechanism for dedupe and compression, where others leverage the server CPU. Another nice thing SimpliVity added is support for VAAI. If you are interested in getting to know more, two white papers were released which are interesting to read: a deep dive by Hans de Leenheer and Stephen Foskett, and one with a focus on "data management" by Howard Marks.

A bit older announcement, but as I spoke with these folks this week and they demoed their GA product, I figured I would add them to the list. Ravello Systems developed a cloud hypervisor which abstracts your virtualization layer and allows you to move virtual machines / vApps between clouds (private and public) without the need to rebuild your virtual machines or guest OSes. In other words, they can move your vApps from vSphere to AWS to Rackspace without painful conversions every time. Pretty neat, right? On top of that, Ravello is your single point of contact, meaning that they are also a cloud broker: you pay Ravello and they take care of AWS / Rackspace etc. Of course they allow you to do things like snapshotting, cloning and creating complex network configurations if needed. They managed to impress me during the short call we had, and if you want to know more I recommend reading this excellent article by William Lam or visiting their booth during VMworld!

That is it for part 3. I bet there will be another part next week, during or right after VMworld, as press releases are coming in every hour at this point. Thanks for reading!

Top 3 Skills Your IT Team Needs to Prepare for the Cloud

Duncan Epping · Jun 11, 2013 ·

I just wrote an article for the vCloud blog titled "Top 3 Skills Your IT Team Needs to Prepare for the Cloud". Although it is far less technical than what I normally post here, it might still be worth a read for those who are considering a private or hybrid cloud, or even consuming a public cloud. Below is a short excerpt from the post, but for the full post you will need to head over to the vCloud Blog.

When I am talking about skills, I am not only talking about your team's technical competency. For a successful adoption of cloud, it is of great importance that the silos within the IT organization are broken down, or at a bare minimum bridged. Now more than ever, inter- and intra-team communication is of the utmost importance. Larger organizations have realized this over the years while doing large virtualization projects, leading many to introduce a so-called "Center of Excellence." This Center of Excellence was typically a virtual team formed out of the various teams (network, storage, security, server, application, business owners), and would ensure everyone's requirements were met during the course of the project. With cloud, a similar approach is needed.

What is static overhead memory?

Duncan Epping · May 6, 2013 ·

We had a discussion internally on static overhead memory. Coincidentally, I spoke with Aashish Parikh from the DRS team on this topic a couple of weeks ago when I was in Palo Alto. Aashish is working on improving the overhead memory estimation calculation so that both HA and DRS can be even more efficient when it comes to placing virtual machines. The question was around what determines the static overhead memory, and this is the answer that Aashish provided. I found it very useful, hence I asked Aashish if it was okay to share it with the world. I did add some bits and pieces where I felt additional details were needed.

First of all, what is static overhead and what is dynamic overhead:

  • When a VM is powered-off, the amount of overhead memory required to power it on is called static overhead memory.
  • Once a VM is powered-on, the amount of overhead memory required to keep it running is called dynamic or runtime overhead memory.

Static overhead memory of a VM depends upon various factors:

  1. Various virtual machine configuration parameters, like the number of vCPUs, the amount of vRAM, the number of devices, etc.
  2. The enabling/disabling of various VMware features (FT, CBRC, etc.)
  3. The ESXi build number

Note that the static overhead memory estimation is calculated fairly conservatively: a worst-case scenario is taken into account. This is the reason why engineering is exploring ways of improving it. One of the areas that can be improved is, for instance, including host configuration parameters. These parameters are things like CPU model, family & stepping, various CPUID bits, etc. As a result, two similar VMs residing on different hosts would have different overhead values.
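To make that concrete, the sketch below shows roughly what a conservative, worst-case estimator looks like conceptually. The coefficients are invented purely for illustration; the real formula is internal to ESXi, depends on the build and the enabled features as listed above, and is far more detailed than this.

```python
# Illustrative only: a toy worst-case estimator for static overhead memory.
# All coefficients below are made up for the example; ESXi's real calculation
# is internal, build-specific and far more detailed.

def estimate_static_overhead_mb(num_vcpus: int, vram_mb: int,
                                num_devices: int = 4,
                                ft_enabled: bool = False) -> float:
    base_mb = 20.0                # fixed cost for the VMX process / monitor
    per_vcpu_mb = 3.0             # per-vCPU bookkeeping structures
    per_gb_vram_mb = 1.2          # mapping structures per GB of vRAM
    per_device_mb = 0.5           # per virtual device

    overhead = (base_mb
                + num_vcpus * per_vcpu_mb
                + (vram_mb / 1024) * per_gb_vram_mb
                + num_devices * per_device_mb)

    if ft_enabled:                # features like FT add their own worst-case cost
        overhead *= 1.5
    return overhead

# Example: a 4 vCPU / 8 GB VM with the defaults above.
print(f"{estimate_static_overhead_mb(num_vcpus=4, vram_mb=8192):.1f} MB")
```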

But what about dynamic overhead? Dynamic overhead seems to be more accurate today, right? Well, there is a good reason for that: with dynamic overhead it is "known" on which host the VM is running, so the cost of running the VM on that host can easily be calculated. It is no longer a matter of estimating, but a matter of doing the math. That is the big difference: dynamic = the VM is running and we know where, versus static = the VM is powered off and we don't know where it might be powered on!

The same applies, for instance, to vMotion scenarios. Although the platform knows what the target destination will be, it still doesn't know how the target will treat that virtual machine. As such, the vMotion process aims to be conservative and uses static overhead memory instead of dynamic. One of the things, for instance, that changes the amount of overhead memory needed is the "monitor mode" used (BT, HV or HWMMU).

So what is being explored to improve it? First of all, including the additional host-side parameters as mentioned above. Secondly, and equally important, calculating the overhead memory based on the VM -> "target host" combination. Or, as engineering calls it, calculating the "static overhead of VM v on host h".

Now why is this important, and when is static overhead memory used? Static overhead memory is used by both HA and DRS. HA, for instance, uses it in Admission Control when calculating how many VMs can be powered on before unreserved resources are depleted. When you power on a virtual machine, the host-side "admission control" will validate whether it has sufficient unreserved resources available for the "static memory overhead" to be guaranteed… But DRS and vMotion also use the static memory overhead metric, for instance to ensure a virtual machine can be placed on a target host during a vMotion process, as the static memory overhead needs to be guaranteed.
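As a simplified illustration of that host-side check (my own sketch, not the actual HA admission control algorithm, which also has to deal with slot sizes, percentage-based policies and so on), the test essentially boils down to comparing unreserved memory against the VM's reservation plus its static overhead:

```python
# Simplified sketch of the host-side memory check described above; the real
# HA / host admission control logic is considerably more involved.

def can_power_on(host_unreserved_mb: float,
                 vm_mem_reservation_mb: float,
                 vm_static_overhead_mb: float) -> bool:
    """The reservation plus static overhead must be guaranteed on the host."""
    return host_unreserved_mb >= vm_mem_reservation_mb + vm_static_overhead_mb

def count_power_ons(host_unreserved_mb: float, vms) -> int:
    """Count how many (reservation, static overhead) pairs fit before the
    host's unreserved memory is depleted."""
    powered_on = 0
    for reservation_mb, overhead_mb in vms:
        if can_power_on(host_unreserved_mb, reservation_mb, overhead_mb):
            host_unreserved_mb -= reservation_mb + overhead_mb
            powered_on += 1
    return powered_on

# Example: 16 GB unreserved, ten VMs each with a 2 GB reservation and
# 60 MB static overhead -> 7 of them can be powered on.
print(count_power_ons(16384, [(2048, 60.0)] * 10))
```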

As you can see, a fairly lengthy chunk of info on just a single simple metric in vCenter / ESXTOP… but very nice to know!

Automating vCloud Director Resiliency whitepaper released

Duncan Epping · Mar 28, 2013 ·

About a year ago I wrote a whitepaper about vCloud Director resiliency, or better said, I developed a disaster recovery solution for vCloud Director. This solution allows you to fail over vCloud Director workloads between sites in the case of a failure. Immediately after it was published, various projects started to implement this solution. As part of our internal project, our PowerCLI gurus Aidan Dalgleish and Alan Renouf started looking into automating the solution. Those who read the initial case study have probably seen the manual steps required for a fail-over; those who haven't should read that white paper first…

The manual steps in the vCloud Director Resiliency whitepaper are exactly what Alan and Aidan addressed. So if you are interested in implementing this solution, it is useful to read the new white paper about Automating vCloud Director Resiliency as well. Nice work Alan and Aidan!

vCloud ecosystem announcements at VMware Partner Exchange

Duncan Epping · Feb 27, 2013 ·

It has been a while since I wrote anything about vCloud Director itself… primarily because I have been focused on other things within the vCloud Suite the last couple of months. This week various partners of VMware announced new products at VMware Partner Exchange 2013 (by the way, the 2014 edition is scheduled to be held at the Moscone Center in San Francisco). I wanted to take a couple of minutes to provide a quick overview of what was announced. Personally I think it is great that we are starting to see more and more partners developing products and solutions to enhance the vCloud Director experience; especially in the backup/restore space this was more than welcome. So what was announced this week so far, in no particular order:

  • Zerto announced Virtual Replication 3.0
    In a blog article they explain what is new and improved in version 3.0. What I personally find exciting is the fact that they support vCloud Director 5.1 and provide support for and integration with vCloud Automation Center. On top of that, the 3.0 product offers a self-service portal; I bet this is what a lot of service providers were waiting for. It will make creating a DR-as-a-Service offering a lot simpler. There is a lot more added, so for all the details make sure you hit the links above.
  • Veeam announced version 7 of Veeam Backup and Replication with vCloud Director integration
    I guess the title says it all. With version 7, Veeam will support vCloud Director environments. Veeam will not only allow you to back up the VMs in a vCloud Director vApp, but it will also allow you to back up all vApp metadata and attributes. Of course, restore functionality for vApps and VMs directly into vCloud Director is included. Definitely something I know a lot of people were waiting for.
  • Commvault announced Simpana version 10 with vCloud Director integration
    I have always been impressed with Commvault's backup solution. It is simple and robust. So you can imagine I was happy when I found out they were working on integrating Simpana with vCloud Director. I can't find many details about the level of integration to be honest, but there is a long list of new features in version 10 to be found here. Hopefully we will find out more soon. For more details it is probably easier to read Viktor's post than it is to read the Commvault website.
  • EMC announced VMAX Cloud Edition
    Not so much specifically targeted at vCloud users, but more at service providers who are looking to build a large, fully self-service cloud environment. VMAX Cloud Edition is not a bigger or more scalable version of VMAX; no, it is a fully self-service, multi-tiered and multi-tenancy capable VMAX. No point in me diving too deep, as Chad wrote an excellent article about VMAX Cloud Edition, so make sure to read it.

If I find other announcements, I will add them to this article throughout the week of VMware Partner Exchange.

