
Yellow Bricks

by Duncan Epping


Playing around with Tintri Global Center and Tintri storage systems

Duncan Epping · Jul 21, 2016 ·

Last week the folks from Tintri reached out and asked if I was interested in playing around with a lab they have running. They gave me a couple of hours of access to their Partner Lab. It had a couple of hosts, 4 different Tintri VMstore systems including their all-flash offering, and of course their management solution, Global Center. I have done a couple of posts on Tintri in the past, so if you want to know more about Tintri make sure to read those as well. (1, 2, 3, 4)

For those who have no clue whatsoever, Tintri is a storage company that sells “VM-aware” storage. This basically means that all of the data services they offer can be enabled on a VM/VMDK level, and they give visibility all the way down to the lowest level. And not just for VMware; they support other hypervisors as well, by the way. I’ve been having discussions with Tintri since 2011 and it is safe to say they have come a long way. Most of my engagements, however, were presentations and the occasional demo, so it was nice to actually go through the experience personally.

First of all, their storage system management interface. If you log in to one of the systems you are presented with all the info you would want to know: IOPS / bandwidth / latency, and for latency you can even see a split between network, host, and storage latency. So if anything is misbehaving you will probably find out what and why relatively fast.

Not just that: if you look at the VMs running on your system from the array side, you can also take a storage snapshot, clone the VM, restore the VM, replicate it, or set QoS for that VM. Very powerful, and all of that is also available in vCenter through a plugin, by the way.

Now when you clone a VM, you can also create many VMs in one go, which is pretty neat. I say give me 10 with the name Duncan and I get 10 of them, called Duncan-01 through Duncan-10.
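The naming scheme is simple enough to sketch in a few lines of Python. This is just my illustration of the pattern shown in the UI; the function name and the two-digit zero padding are my assumptions, not Tintri's implementation:

```python
def clone_names(base: str, count: int) -> list[str]:
    """Generate clone names in the style shown above: Duncan-01 .. Duncan-10.

    The zero-padded two-digit suffix is an assumption based on the names
    the UI produced; Tintri's actual scheme may differ for larger counts.
    """
    return [f"{base}-{i:02d}" for i in range(1, count + 1)]

print(clone_names("Duncan", 10))
# ['Duncan-01', 'Duncan-02', ..., 'Duncan-09', 'Duncan-10']
```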

Their central management solution, Tintri Global Center, is what I was most interested in, as I had only seen it once in a demo. Now one thing I have to say: it has been said by some that Tintri offers a scale-out solution, but the storage system itself is not a scale-out system. When they refer to scale-out, they mean the ability to manage all storage systems through a single interface, and the ability to group storage systems and load balance between them, which is done through Global Center and their “Pools” functionality. Pools felt a bit like SDRS to me, as I said in a previous post, and now that I have played with it, it definitely feels a lot like SDRS. While I was playing with the lab I received the following message.

If you have used SDRS at some point in time and look at the screenshot (click it for a bigger version), you know what I mean. Anyway, it is good functionality to have: pool different arrays and balance between them based on space and performance. Nothing wrong with that.
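To make the SDRS comparison a bit more concrete, here is a toy Python sketch of placing a VM based on space and performance. To be clear, this is purely my illustration of the concept; the store names, the latency ceiling, and the greedy pick are all my assumptions, not Tintri's actual Pools algorithm:

```python
from dataclasses import dataclass

@dataclass
class VMstore:
    name: str
    free_gb: float     # remaining capacity
    latency_ms: float  # current average latency

def pick_store(stores: list[VMstore], vm_size_gb: float,
               max_latency_ms: float = 5.0) -> VMstore:
    """Toy placement: among stores with enough free space that sit
    under the latency ceiling, pick the one with the most headroom."""
    candidates = [s for s in stores
                  if s.free_gb >= vm_size_gb and s.latency_ms <= max_latency_ms]
    if not candidates:
        raise RuntimeError("no store satisfies the space/latency constraints")
    return max(candidates, key=lambda s: s.free_gb)

# Hypothetical pool of two VMstores:
pool = [VMstore("store-a", 1200.0, 1.2), VMstore("store-b", 300.0, 0.8)]
print(pick_store(pool, vm_size_gb=100.0).name)  # store-a
```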

But that is not the best thing about Global Center. Like I said, I like the simplicity of Tintri’s interfaces, and that also applies here: the first thing you see when you log in is a great overview of the state of the whole environment, with the ability to dive deeper when needed. You can look at per-VMstore details and figure out where your capacity is going, for instance (snapshots, live data, etc.), but also see basic trending of what VMs are demanding from a storage performance and capacity point of view.

Having said all of that, there is one thing that bugs me. Yes, Tintri provides a VASA Provider, but it is the “old style” VASA Provider which revolves around the datastore. If you look at VVols, it is all about the VM and which capabilities it needs. I would definitely welcome VVol support from Tintri. I can understand it is no big priority for them as they have “similar” functionality, it is just that from a VM-aware storage system I would expect deep(er) integration from that perspective as well. But that is just me nitpicking, I guess; as a VMware employee working for the BU that brought you VVols, it is safe to say I am biased here. Tintri does offer an alternative that makes it easy to manage groups of VMs, called Service Groups. It allows you to apply data services to a logical grouping, which is defined by a rule. I could for instance say that all VMs whose names start with “Dun” need to be snapshotted every 5 hours, and that this snapshot needs to be replicated, and so on. Pretty powerful stuff, and fairly easy to use as well. Still, for consistency it would be nice to be able to do this through SPBM in vSphere, so that if I have other storage systems I can define services for all of them through the same interface.
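To give an idea of how such a rule could work, here is a minimal Python sketch of pattern-based grouping with an interval-based snapshot policy. None of these names or fields come from Tintri's actual API; they are purely illustrative:

```python
from fnmatch import fnmatch

# Hypothetical service-group definition: VMs matching the pattern get a
# snapshot every `interval_hours`, optionally replicated afterwards.
service_groups = [
    {"pattern": "Dun*", "interval_hours": 5, "replicate": True},
]

def policies_for(vm_name: str) -> list[dict]:
    """Return every service-group policy whose rule matches the VM name."""
    return [g for g in service_groups if fnmatch(vm_name, g["pattern"])]

print(policies_for("Duncan-01"))
# [{'pattern': 'Dun*', 'interval_hours': 5, 'replicate': True}]
```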

** Update: I was just pointed to the fact that there is a VVol-capable VASA Provider, at least according to the VMware HCL. I have not seen the implementation and what is / is not exposed, unfortunately. I also just read the documentation, and VVol is indeed supported, with a caveat for some systems: Tintri OS 4.1 supports VMware vSphere APIs for Storage Awareness (VASA) 3.0 (VVol 1.0). The Tintri vCenter Web Client Plugin is not required to run VVols on Tintri. VVols are not available for Tintri VMstore T540 or T445 systems. Also, the docs I’ve seen don’t show the capabilities exposed through VVols, unfortunately. **

Again, I really liked the simplicity of the solution. The overall user experience was great; taking a snapshot is dead simple. Replicating that snapshot? One click. Clone? One click. QoS? 3 settings. Do I need to say more? Well done, Tintri; I am looking forward to what you will release next. Thanks for providing me the opportunity to play around in your lab, I hope I didn’t break anything.

Enabling Hot-Add by default? /cc @gabvirtualworld

Duncan Epping · Jan 16, 2012 ·

Gabe asked, on one of my recent posts, whether it makes sense to enable Hot-Add by default, and whether there is an impact/overhead.

Let’s answer the impact/overhead portion first: yes, there is an overhead, in the range of a few percent. You might ask yourself where this overhead comes from and whether it is vSphere overhead or something else. When CPU and memory Hot-Add are enabled, the Guest OS, especially Windows, will account for all possible memory and CPU changes. For CPU it will take the maximum number of vCPUs into account, so with vSphere 5 that would be 32. For memory it will take 16 x the power-on memory into account, as that is the max you can provision. Does it have an impact? Again, a matter of a few percent. It could also lead to problems, however, when you don’t have sufficient memory provisioned, as described in this Microsoft KB article: http://support.microsoft.com/kb/913568.
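Those two maximums make the worst case easy to quantify. A quick back-of-the-envelope calculation in Python, using the vSphere 5 numbers from the paragraph above (the example VM size is my assumption):

```python
# Worst case the Guest OS accounts for when Hot-Add is enabled,
# using the vSphere 5 era maximums mentioned above.
poweron_memory_gb = 4    # example power-on memory (my assumption)
configured_vcpus = 2     # example vCPU count (my assumption)

max_hotadd_memory_gb = 16 * poweron_memory_gb  # 16x power-on memory = 64 GB
max_hotadd_vcpus = 32                          # vSphere 5 vCPU maximum

print(f"Guest sizes its structures for {max_hotadd_vcpus} vCPUs and "
      f"{max_hotadd_memory_gb} GB instead of {configured_vcpus} vCPUs "
      f"and {poweron_memory_gb} GB")
```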

Another impact, mentioned by Valentin (VMware), is the fact that on ESXi 5.0 vNUMA is not used when the Hot-Add feature is enabled for a VM.

What is our recommendation? Enable it only when you need it. Yes, the impact might be small, but if you don’t need it, why would you incur it?!

Resizing your IDE virtual hard disk?

Duncan Epping · May 28, 2010 ·

** Revised blog post about Resizing your IDE virtual disk can be found here **

I am working on a “top secret” upcoming product (around cloud, of course) and because of that I am testing various things. I had never noticed this before, but today I wanted to change the size of a disk within vCenter as part of the test procedure. For some weird reason this option was greyed out:

I checked whether there was a snapshot on the disk, but that wasn’t the case. I tried the same thing on a different VM, where the option wasn’t greyed out. Then I noticed the difference between the VMs… The VM on which it was greyed out had an “IDE” disk, while the other VM had a “SCSI” disk. It appears that it is currently not possible to change the size of an IDE virtual hard disk within vCenter.

Limiting your vCPU

Duncan Epping · May 18, 2010 ·

I had a discussion with someone about limiting a VM to a specific amount of MHz, after I found out that limits were set on most VMs in their environment. This was a “cloud” environment, and the limit was set to create an extra level of fairness.

My question of course was: doesn’t this impact performance? The answer was simple: no, as a limit on a vCPU is only applied when there is a resource constraint. It took me a couple of minutes to figure out what he actually tried to tell me, but basically it came down to the following:

When a single VM has a limit of 300MHz and is the only VM running on a host, it will run at full speed, as it will constantly be rescheduled for another 300MHz.

However, that’s not what happens in my opinion. It took me a while to get the wording right but after a discussion with @frankdenneman this is what we came up with:

Look at a vCPU limit as a restriction within a specific time frame. When a time frame consists of 2000 units and a limit of 300 units has been applied, the vCPU takes a full pass, so 300 “active” units + 1700 units of waiting, before it is scheduled again.

In other words, applying a limit on a vCPU will slow your VM down no matter what, even if there are no other VMs running on that 4-socket quad-core host.
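Put in numbers: with a 2000-unit window and a 300-unit limit, the vCPU is runnable at most 15% of the time, contention or not. A small Python illustration of that duty-cycle view (the window model is the simplification from the paragraph above, not the actual scheduler implementation):

```python
def effective_mhz(limit_units: int, window_units: int, core_mhz: float) -> float:
    """Duty-cycle view of a vCPU limit: the vCPU runs for `limit_units`
    out of every `window_units`, even when the host is otherwise idle."""
    return core_mhz * (limit_units / window_units)

# A 300-unit limit in a 2000-unit window on a 2.0 GHz core:
print(effective_mhz(300, 2000, 2000.0))  # 300.0 -> the VM never exceeds 300MHz
```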

Would I ever recommend setting a limit? Only in very few cases. For instance, when you have an old MS-DOS application which polls 10,000 times a second, it might be useful to limit it. I have personally witnessed such applications consuming 100% of the available resources, unnecessarily, as they aren’t actually doing anything.

In most cases, however, I would recommend against it. It will degrade user experience / performance, and there is no need in my opinion. The VMkernel has a great scheduler which takes fairness into account.

Aligning your VMs virtual hard disks

Duncan Epping · Apr 8, 2010 ·

I receive a lot of hits on an old article regarding aligning your VMDKs. That article doesn’t actually explain why it is important, only how to do it. The how isn’t actually that important, in my opinion. I do, however, want to take the opportunity to list some of the options you have today to align your VMs’ VMDKs. Keep in mind that some require a license (*), or a login for that matter:

  • UberAlign by Nick Weaver
  • mbralign by NetApp(*)
  • vOptimizer by Vizioncore(*)
  • GParted (free tool, thanks Ricky El-Qasem)

First let’s explain why alignment is important. Take a look at the following diagram:

In my opinion there is no need to discuss VMFS alignment. Everyone creates their VMFS via vCenter (and if you don’t, you should!), which means it is automatically aligned and you won’t need to worry about it. However, you will need to worry about the Guest OS. Take Windows 2003: by default, when you install the OS, your partition is misaligned. (Both Windows 7 and Windows 2008 create aligned partitions, by the way.) Even when you create a new partition, it will be misaligned. As you can clearly see in the diagram above, every cluster will span multiple chunks. Well, actually, it depends; I guess that’s the next thing to discuss. But first, let’s show what an aligned OS partition looks like:

I would recommend everyone read this document. Although it states at the beginning that it is obsolete, it still contains relevant details! And I guess the following quote from the vSphere Performance Best Practices whitepaper says it all:

The degree of improvement from alignment is highly dependent on workloads and array types. You might want to refer to the alignment recommendations from your array vendor for further information.

Now you might wonder why some vendors are more affected by misalignment than others. The reason for this is the block size used on the back end. For instance, NetApp uses a 4KB block size (correct me if I am wrong). If your filesystem uses a 4KB block size (or cluster size, as Microsoft calls it) as well, this basically means that when your VMDKs are misaligned, every single I/O will require the array to read or write two blocks instead of one, as the diagrams clearly show.

Now when you take, for instance, an EMC Clariion, it’s a different story. As explained in this article, which might be slightly outdated, Clariion arrays use a 64KB chunk size to write their data, which means that not every Guest OS cluster is misaligned, and thus the EMC Clariion is less affected by misalignment. Now this doesn’t mean EMC is superior to NetApp (I don’t want to get Vaughn and Chad going again ;-)), but it does mean that the impact of misalignment is different for every vendor and array/filer. Keep this in mind when migrating and/or creating your design.
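The arithmetic behind those two cases is simple enough to verify. Here is a small Python sketch that counts how many back-end blocks a 4KB guest I/O touches, using the classic misaligned Windows 2003 partition offset of 63 sectors (32,256 bytes); the numbers are illustrative, not vendor specifications:

```python
def blocks_touched(io_offset: int, io_size: int, backend_block: int) -> int:
    """Number of back-end blocks a single I/O spans, given its byte offset."""
    first = io_offset // backend_block
    last = (io_offset + io_size - 1) // backend_block
    return last - first + 1

PARTITION_OFFSET = 63 * 512  # classic misaligned Windows 2003 offset: 32,256 bytes
CLUSTER = 4096               # 4KB NTFS cluster size

for backend in (4096, 65536):  # 4KB (NetApp-style) vs 64KB (Clariion-style) blocks
    spans = [blocks_touched(PARTITION_OFFSET + i * CLUSTER, CLUSTER, backend)
             for i in range(1024)]
    split = sum(1 for s in spans if s > 1)
    print(f"{backend // 1024}KB back-end blocks: "
          f"{split / len(spans):.0%} of 4KB I/Os span two blocks")
# 4KB back-end blocks: 100% of 4KB I/Os span two blocks
# 64KB back-end blocks: 6% of 4KB I/Os span two blocks
```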

