
Yellow Bricks

by Duncan Epping


virtual san

Benchmarking an HCI solution with legacy tools

Duncan Epping · Nov 17, 2016 ·

I was driving back home from Germany on the autobahn this week, thinking about the five or six conversations I have had over the past couple of weeks about performance tests for HCI systems. What stood out during these conversations is that many folks repeat the tests they once conducted on their legacy array and then compare the results 1:1 to their HCI system. Fairly often people even use a legacy tool like Atto Disk Benchmark. Atto is a great tool for testing the speed of the drive in your laptop, or maybe even a RAID configuration, but the name already more or less reveals its limitation: “disk benchmark”. It wasn’t designed to show the capabilities and strengths of a distributed / hyper-converged platform.

Now I am not trying to pick on Atto, as similar problems exist with tools like IOMeter for instance. I see people running a single-VM IOMeter test with a single disk. On most hyper-converged offerings that doesn’t result in a spectacular outcome. Why? Simply because that is not what the solution is designed for. Sure, there are ways to demonstrate what your system is capable of with legacy tools: simply create multiple VMs with multiple disks. Even with a single VM you can produce better results by picking the right policy, as vSAN allows you to stripe data across up to 12 devices for instance (which can be across hosts, disk groups, etc.). Without the right policy, or without multiple VMs, you may not be hitting the limits of your system, but simply the limits of your VM’s virtual disk controller, the host’s disk controller, a single device’s capabilities, and so on.
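The bottleneck argument above can be sketched as a toy model. All numbers below are made-up illustrative ceilings, not measured vSAN figures; the point is only that a single-VM, single-disk test measures the narrowest bottleneck rather than the platform:

```python
# Toy model: why a single-VM, single-disk benchmark understates a
# distributed HCI system. The IOPS ceilings are illustrative assumptions.

SINGLE_DEVICE_IOPS = 30_000       # assumed per-device ceiling
VIRTUAL_CONTROLLER_IOPS = 60_000  # assumed per-virtual-disk-controller ceiling

def achievable_iops(stripe_width: int, vm_count: int) -> int:
    """Aggregate IOPS is capped by whichever limit is hit first:
    the per-VM virtual controller, or the devices the data is striped across."""
    per_vm = min(VIRTUAL_CONTROLLER_IOPS, stripe_width * SINGLE_DEVICE_IOPS)
    return per_vm * vm_count

# One VM, one disk, no striping: bottlenecked by a single device.
print(achievable_iops(stripe_width=1, vm_count=1))  # 30000
# Same VM, data striped across 2 devices: the controller becomes the cap.
print(achievable_iops(stripe_width=2, vm_count=1))  # 60000
# Eight such VMs: the aggregate number finally reflects the platform.
print(achievable_iops(stripe_width=2, vm_count=8))  # 480000
```

In other words, raising the stripe width or the VM count moves the measurement from a single component's ceiling toward the cluster's actual capability.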

But there is an even better option: pick the right toolset and select the right workload (surely doing only 4K blocks isn’t representative of your production environment). VMware has developed a benchmarking solution called HCIBench that works with both traditional and hyper-converged offerings. HCIBench can be downloaded, and used, for free through the VMware Flings website. Instead of that single-VM, single-disk test, you will now be able to test many VMs with multiple disks each, to show how a scale-out storage system behaves. It will give you great insight into the capabilities of your storage system, whether that is vSAN or any other HCI solution, or even a legacy storage system for that matter. Just like the world of storage has evolved, so has the world of benchmarking.
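To make the “right workload” point concrete, here is an illustrative workload definition (a fio-style job file, not an HCIBench artifact; HCIBench configures its workload generator for you). It mixes reads and writes and blends block sizes instead of hammering 4K only, and drives two virtual disks in parallel; the device names are examples:

```ini
; Illustrative mixed workload: 70/30 random read/write,
; a blend of block sizes rather than 4k only.
[global]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bssplit=4k/30:8k/40:64k/30
iodepth=16
runtime=300
time_based=1

[disk1]
filename=/dev/sdb

[disk2]
filename=/dev/sdc
```

Whatever tool you use, the profile should resemble what your production VMs actually do, across as many VMs and disks as your real environment will run.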

What is new for Virtual SAN 6.5?

Duncan Epping · Oct 18, 2016 ·

As most of you have seen by the explosion of tweets, Virtual SAN 6.5 was just announced. Some may wonder how 6.5 can be announced so soon after the beta announcement; well, this is not the same release. This release has a couple of key new features, so let’s look at those:

  • Virtual SAN iSCSI Service
  • 2-Node Direct Connect
    • Witness Traffic Separation for ROBO
  • 512e drive support

Let’s start with the biggest feature of this release, at least in my opinion: the Virtual SAN iSCSI Service. It provides what you would expect: the ability to create iSCSI targets and LUNs and expose them to the outside world. These LUNs, by the way, are just VSAN objects, and these objects have a storage policy assigned to them. This means you get iSCSI LUNs whose performance and/or availability can be changed on the fly. All of it through the interface you are familiar with, the Web Client. Let me show you how to enable it:

  • Enable the iSCSI Target Service
  • Create a target, and a LUN in the same interface if desired
  • Done

How easy is that? Note that there are some restrictions in terms of use cases: it primarily targets physical workloads and Oracle RAC. It is not intended for connecting other vSphere clusters, for instance. Also, you can have at most 1024 LUNs and 128 targets per cluster, and each LUN can be up to 62TB.
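From the consumer side (say, one of those physical workloads), connecting looks like any other iSCSI target. A sketch using the standard Linux open-iscsi initiator; the IP address and IQN below are hypothetical placeholders, not values the vSAN cluster will actually use:

```shell
# Discover the targets exposed by the cluster (address is an example):
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to the discovered target (IQN is a hypothetical example):
iscsiadm -m node -T iqn.1998-01.com.vmware:vsan-target \
    -p 192.168.1.10:3260 --login

# The LUN then shows up as a regular block device (e.g. /dev/sdX),
# while its availability/performance remain policy-driven on the vSAN side.
```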

Next up on the list is 2-Node Direct Connect. What does this mean? It basically means you can now cross-connect two VSAN hosts with a simple Ethernet cable. The big benefit, of course, is that you can equip your hosts with 10GbE NICs and get 10GbE performance for your VSAN traffic (and vMotion, for instance) without incurring the cost of a 10GbE switch.

This can make a huge difference in total cost of ownership. To achieve this, though, you will also need to set up a separate VMkernel interface for witness traffic. And that is the next feature I wanted to mention briefly: as of VSAN 6.5, 2-node configurations can separate witness traffic by tagging a VMkernel interface as a designated witness interface. There’s a simple “esxcli” command for it; note that <X> needs to be replaced with the number of the VMkernel interface:

esxcli vsan network ip add -i vmk<X> -T=witness
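To check the result afterwards, you can list the vSAN network configuration on the host; the tagged interface should report witness as its traffic type (vmk1 below is just an example interface):

```shell
# Tag vmk1 as the designated witness interface (example interface number)
esxcli vsan network ip add -i vmk1 -T=witness

# Verify: the witness interface should be listed with traffic type "witness"
esxcli vsan network list
```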

Then there is support for 512e drives; note that 4K native is not supported at this point. Not sure what more to say about it than that… it speaks for itself.

Oh, and one more thing… All-Flash has been moved down from a licensing perspective to Standard. This means anyone can now deploy an all-flash configuration at no additional licensing cost. Considering how fast the world is moving to all-flash, I think this only makes sense. Note, though, that data services like deduplication/compression/RAID-5/RAID-6 are still part of VSAN Advanced. Nevertheless, I am very happy about this positive licensing change!

Running your VSAN witness for a 2 node cluster on a 2 node cluster

Duncan Epping · Sep 20, 2016 ·

A week ago we had a discussion on Twitter about a scenario that was talked about at VMworld: you have two 2-node clusters, and the required Witness VM for each 2-node cluster runs on the other. Let me show you what I mean to make it clear:

The Witness VM on Cluster A is the witness for Cluster B, and the Witness VM on Cluster B is the witness for Cluster A. As it stands today this is not a supported configuration out of the box. For ongoing support, it is required that users go through the RPQ process so VMware can validate the design. Please contact your VMware representative for more details.

A knowledge base article should be published on this topic soon, if and when it is published I will update this post and point to it.

Taking VSAN to the next level, join the VSAN beta!

Duncan Epping · Sep 5, 2016 ·

In 2013 VSAN saw the light of day when the beta was released. Now, 3 years later, you have the opportunity to once again sign up and help improve the product. Over the last couple of years we have seen many new features introduced, so you may wonder what is left. There will be various enhancements in this release, but we are in particular looking for feedback on the following three features:

  • Software data-at-rest encryption
  • Local Protection for Stretched Clusters, for two-layer protection of stretched clusters: across sites and locally
  • New operational management enhancements (Health checks from vCenter, monitoring networking stats, and more)

I think most of these speak for themselves. With regards to Local Protection for Stretched Clusters: it basically allows you to protect your data within a site and across sites at the same time in a stretched cluster. In other words: RAID-1 across sites and then RAID-1, 5 or 6 within a site. Something I am personally very excited about.

I don’t know how many people will be accepted for the beta, but if I find out I will report back. Sign up now: vmware.com/go/virtual-san-beta. PS: remember, these capabilities are still under development, are subject to change without notice and there is no guarantee of future availability.

An Industry Roadmap: From storage to data management #STO7903 by @xtosk

Duncan Epping · Sep 1, 2016 ·

This is the session I have been waiting for; I had it very high on my “must see” list, together with the session presented by Christian Dickmann earlier today. Not because it happened to be presented by our Storage and Availability CTO Christos Karamanolis (@XtosK on Twitter), but because of the insights I expected this session to provide. The title I think says it all: An Industry Roadmap: From storage to data management.

** Keep that in mind when reading the rest of article. Also, this session literally just finished a second ago, I wanted to publish it asap so if there are any typos, my apologies. **

Christos starts by explaining the current problem. There is huge information growth, 2x every 2 years, and that is on the conservative side. Where does the data go? According to analysts, it is not expected to go to traditional storage; in fact, growth of traditional storage is slowing down and even turning negative. Two new types of storage have emerged and are growing fast: Hyper-scale Server SAN Storage and Enterprise Server SAN Storage, aka hyper-converged systems.

With new types of applications changing the world of IT, data management is more important than ever before. Today’s storage products do not meet the requirements of this rapidly changing IT world and do not provide the agility your business owners demand. Many of the infrastructure problems can be solved by hyper-converged software, enabled by the hardware evolution we have witnessed over the last years: flash, RDMA, NVMe, 10GbE, etc. These hardware changes allowed us to simplify storage architectures and deliver storage as software. But it is not just about storage; it is also about operational simplicity. How do we enable our customers to manage more applications and VMs with less? Storage Policy Based Management has enabled this for both Virtual SAN (hyper-converged) and Virtual Volumes in more traditional environments.

Data lifecycle management, however, is still challenging. Snapshots, clones, replication, dedupe, checksums, encryption: how do I enable these on a per-VM level? How do we decouple all of these data services from the underlying infrastructure? VMware has been doing that for years; the best example is vSphere Replication, where VMs and virtual disks can be replicated on a case-by-case basis between different types of storage systems. It is even possible to leverage an orchestration solution like Site Recovery Manager to manage your DR strategy end to end from a single interface, from private cloud to private cloud, but also from private to public. The private-to-public path is enabled by the vCloud Availability suite, where you can pay as you g(r)o(w). All of this is again driven by policy and through the interface you use on a daily basis, the vSphere Web Client.

How can we improve the world of DR? Just imagine there was a portable snapshot: a snapshot decoupled from storage, that can be moved between environments and stored in public or private clouds, and maybe even both at the same time. This is something we at VMware are working on: a portable snapshot that can be used for data protection purposes. Local copies, and archived copies in remote datacenters, each with a different SLA/retention.

How does this scale, however, when you have tens of thousands of VMs? Especially when there are tens, or even hundreds, of snapshots per VM. This should all be driven by policy. And if I can move the data to different locations, can I use this data for other purposes as well? How about leveraging it for test/dev or analytics? Portable snapshots providing application mobility.

Christos next demoed what the above may look like in the future. The demo shows a VM being replicated from vSphere to AWS, but vSphere to vSphere and vSphere to Azure were also available as options. The normal settings are configured (destination datastore and network) and literally within seconds the replication starts. The UI looks very crisp and seems similar to what was shown in the keynote on day 1 (Cross-Cloud Services). But how does this work in the new world of IT? What if I have many new-gen applications, containers/microservices?

Next, a distributed file system for cloud-native apps is introduced. It appears to be a solution that sits on top of Virtual SAN and provides a file system that can scale to thousands of hosts, with functionality like highly scalable and performant snapshots and clones. The snapshots provided by this distributed file system are also portable; this concept under development is called exoclones. And it is not something that just lives in the heads of the engineering team: Christos actually showed a demo of an exoclone being exported and imported into another environment.

If VMware does provide that level of data portability, how do you track and control all that data? Data governance is key in most environments: how do we enforce compliance, integrity and availability? This will be the next big challenge for the industry. There are some products that can provide this today, but nothing that can do it cross-cloud and for both current and new application architectures and infrastructures.

For years we seem to have been under the impression that the infrastructure was the center of the universe. In reality it serves a clear purpose: host applications and provide users access to data. Your company’s data is what is most important. We at VMware realize that and are working to ensure we can help you move forward on your next big journey. In short, it is our goal that you can focus on data management and no longer need to focus on the infrastructure.

Great talk!

