
Yellow Bricks

by Duncan Epping


vstorage

Cool Fling: Reclaiming deadspace from a thin provisioned guest disk

Duncan Epping · Jul 3, 2012 ·

Yesterday a very cool fling was released. This fling allows you to reclaim “dead space” from a thin provisioned guest disk! I have written some articles about dead space reclamation at the VMFS layer, but this is about one layer up: the guest itself! Yes, we used to have a workaround for “in-guest” reclamation of blocks by using sdelete and Storage vMotion, but due to the VAAI offloading that vSphere does, this workaround no longer works. Now there is a cool fling, released by Faraz Shaikh and Prasanna Aithal, that addresses this. To be clear: sdelete by itself on vSphere doesn’t make your disk thin again… it just zeroes out blocks… GuestReclaim actually also issues a SCSI UNMAP command to allow the underlying storage to reclaim the dead space! For now, however, this only works for RDMs.

The fling has been tested with Windows 7, Windows XP, and Windows Server 2003 and 2008. Note that it requires administrator rights to run. GuestReclaim can reclaim dead space from “simple volumes” and also when full volumes / disks are deleted. Although it is called out in the documentation, I want to make sure everyone is clear on this: there needs to be free disk space available in order to be able to reclaim disk space!
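
To make the difference with the old workaround concrete, below is a minimal Python sketch of the zero-fill principle that sdelete applies (POSIX-flavoured for brevity; sdelete is the Windows equivalent, and GuestReclaim goes further by issuing SCSI UNMAP). The mount point and sizes are hypothetical, and the balloon file also illustrates why free space is required: the zeros have to be written somewhere.

```python
import os

def zero_fill_free_space(mount_point, chunk_mb=64, keep_free_mb=256):
    """sdelete-style workaround: fill free space with zeros via a balloon
    file, then delete it. This only zeroes blocks; unlike GuestReclaim it
    issues no SCSI UNMAP, so the array is never told the blocks are free."""
    balloon = os.path.join(mount_point, "zero_balloon.tmp")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    try:
        with open(balloon, "wb") as f:
            while True:
                st = os.statvfs(mount_point)
                free_mb = st.f_bavail * st.f_frsize // (1024 * 1024)
                # Stop before the volume runs completely out of space.
                if free_mb <= keep_free_mb + chunk_mb:
                    break
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())
    finally:
        if os.path.exists(balloon):
            os.remove(balloon)
```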

I suggest you head over to the Fling page, download it, and try it in your test environment. More details about the fling itself and some common Q&A can be found in this Doc.

If I have the time I will definitely give it a spin in my lab in the upcoming week.

Tintri follow up

Duncan Epping · Aug 18, 2011 ·

Back in March I wrote about this new and interesting storage vendor called Tintri, which had just released a new NAS appliance called VMstore. I wrote about their level of integration and the fact that their NAS appliance is virtual machine aware and allows you to define performance policies per virtual machine. I am not going to rehash the complete post, so for more details read it before you continue with this article. During the briefing for that article we discussed some of the caveats with regards to their design and some possible enhancements. Tintri is apparently the type of company that listens to community input and can act quickly. Yesterday I had a briefing on some of the new features Tintri will announce next week. I’ve been told that none of this is under embargo, so I will go ahead and share with you what I feel is very exciting. Before I do, I want to mention that Tintri now also has teams in APAC and EMEA; as some of you know, they started out in North America only but have now expanded to the rest of the world.

First of all, and this addresses probably the most-heard complaint: the upcoming Tintri VMstore devices will be available in a dual controller configuration, which will make them more interesting to many of you. Especially the more uptime-sensitive environments will appreciate this, and who isn’t sensitive about uptime these days? Especially in a virtualized environment, where many workloads share a single device, this improvement is more than welcome! The second thing I really liked is how they enhanced their dashboard. This seems like a minor thing, but I can assure you that it will make your life a lot easier. Let me dump a screenshot first and then discuss what you are looking at.

The screenshot shows the per-VM latency statistics… Now what is exciting about that? Well, if you look at the bottom you will see the different colors, and each of those represents a specific type of latency. Let’s assume your VM experiences 40ms of latency and your customer starts complaining. The main thing to figure out is what causes this slowdown. (Or in many cases: who can I blame?) Is your network saturated? Is the host swamped? Is it your storage device? In order to identify these types of problems you would need a monitoring tool, and most likely multiple tools, to pinpoint the issue. Tintri decided to hook into vCenter, pull down the various metrics and use these to create the nice graph that you see in the screenshot. This allows you to quickly pinpoint the issue from a single pane of glass. And yes, you can also expect this as a new tab within vCenter.
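
Tintri does this by querying vCenter rather than by instrumenting the guest, and you can pull the same kind of per-VM counters yourself. A minimal pyVmomi sketch, assuming hypothetical hostnames and credentials, and using the disk.maxTotalLatency.latest counter as an example:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Build a key -> "group.name.rollup" map, e.g. "disk.maxTotalLatency.latest".
names = {c.key: "%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType)
         for c in perf.perfCounter}
wanted = [k for k, v in names.items() if v == "disk.maxTotalLatency.latest"]

vm = content.searchIndex.FindByDnsName(vmSearch=True, dnsName="myvm.lab.local")
spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=k, instance="")
              for k in wanted],
    intervalId=20,   # the 20-second "real-time" sampling interval
    maxSample=15)    # the last five minutes of samples

for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        print(names[series.id.counterId], series.value)  # latency values in ms

Disconnect(si)
```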

Another great feature Tintri offers is the ability to realign your VMDKs. Tintri does this, unlike most solutions out there, from the “inside”, meaning that the functionality is incorporated into the appliance and is not a separate tool which needs to be run against each and every VM. A smart solution which can and will save you a lot of time.

It’s all great and amazing, isn’t it? Or are there any caveats? One thing I still feel needs to be addressed is replication. It will not be available yet with this next release, but is that a problem now that SRM offers vSphere Replication? I guess that relieves some of the immediate pressure, but I would still like to see a native Tintri solution providing async and sync replication. Yes, it will take time, but I would expect that Tintri is working on this. I tried to persuade them to make a statement yesterday, but unfortunately they couldn’t say anything with regards to a timeline / roadmap.

Definitely a booth I will be checking out at VMworld.

VM with disks in multiple datastore clusters?

Duncan Epping · Aug 9, 2011 ·

This week I received a question about Storage DRS. The question was whether it is possible to have a VM with multiple disks in different datastore clusters. It’s not uncommon to have setups like this, so I figured it would be smart to document it. The answer is yes, that is supported. You can create a virtual machine with a system disk on a RAID-5 backed datastore cluster and a data disk on a RAID-10 backed datastore cluster. If Storage DRS sees the need to migrate either of the disks to a different datastore, it will make the recommendation to do so.
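
For those who like to see it through the API: a minimal pyVmomi sketch, assuming a hypothetical VM object and datastore name, that attaches a data disk on a datastore which may sit in a different datastore cluster than the system disk. (This places the disk explicitly rather than going through an SDRS initial-placement recommendation; it is only meant to make the layout concrete.)

```python
from pyVmomi import vim

def add_data_disk(vm, datastore_name, size_gb):
    """Attach a new thin provisioned data disk to an existing VM on the
    named datastore, e.g. one belonging to a RAID-10 backed datastore
    cluster while the system disk lives in a RAID-5 backed cluster."""
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    disk = vim.vm.device.VirtualDisk(
        controllerKey=controller.key,
        unitNumber=1,  # assumes SCSI slot 1 is free
        capacityInKB=size_gb * 1024 * 1024,
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent",
            thinProvisioned=True,
            # An empty path after the datastore name lets vSphere pick the folder.
            fileName="[%s]" % datastore_name))
    spec = vim.vm.ConfigSpec(deviceChange=[
        vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)])
    return vm.ReconfigVM_Task(spec=spec)
```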

vSphere 5 Coverage

Duncan Epping · Aug 6, 2011 ·

I just read Eric’s article about all the topics he covered around vSphere 5 over the last couple of weeks, and as I just published the last article I had prepared, I figured it would make sense to post something similar. (Great job by the way, Eric, I always enjoy reading your articles and watching your videos!) Although I hit roughly 10,000 unique views per day on average in the first week after the launch, and still see 7,000 a day currently, I have the feeling that many were focused on the licensing changes rather than all the new and exciting features that were coming up. Now that the dust has somewhat settled, it makes sense to re-emphasize them. Over the last 6 months I have been working with vSphere 5 and explored these features; my focus for most of those 6 months was to complete the book, but of course I wrote a large number of articles along the way, many of which ended up in the book in some shape or form. This is the list of articles I published. If you feel there is anything I left out that should have been covered, let me know and I will try to dive into it. I can’t make any promises though, as with VMworld coming up my time is limited.

  1. Live Blog: Raising The Bar, Part V
  2. 5 is the magic number
  3. Hot off the press: vSphere 5.0 Clustering Technical Deepdive
  4. vSphere 5.0: Storage DRS introduction
  5. vSphere 5.0: What has changed for VMFS?
  6. vSphere 5.0: Storage vMotion and the Mirror Driver
  7. Punch Zeros
  8. Storage DRS interoperability
  9. vSphere 5.0: UNMAP (vaai feature)
  10. vSphere 5.0: ESXCLI
  11. ESXi 5: Suppressing the local/remote shell warning
  12. Testing VM Monitoring with vSphere 5.0
  13. What’s new?
  14. vSphere 5.0: vMotion Enhancements
  15. vSphere 5.0: vMotion enhancement, tiny but very welcome!
  16. ESXi 5.0 and Scripted Installs
  17. vSphere 5.0: Storage initiatives
  18. Scale Up/Out and impact of vRAM?!? (part 2)
  19. HA Architecture Series – FDM (1/5)
  20. HA Architecture Series – Primary nodes? (2/5)
  21. HA Architecture Series – Datastore Heartbeating (3/5)
  22. HA Architecture Series – Restarting VMs (4/5)
  23. HA Architecture Series – Advanced Settings (5/5)
  24. VMFS-5 LUN Sizing
  25. vSphere 5.0 HA: Changes in admission control
  26. vSphere 5 – Metro vMotion
  27. SDRS and Auto-Tiering solutions – The Injector

Once again, if there is something you feel I should be covering, let me know and I’ll try to dig into it. Preferably something that none of the other blogs have published, of course.

SDRS and Auto-Tiering solutions – The Injector

Duncan Epping · Aug 5, 2011 ·

A couple of weeks ago I wrote an article about Storage DRS (hereafter SDRS) interoperability, and I mentioned that using SDRS with auto-tiering solutions should work… The truth is slightly different, and as I noticed some people started throwing huge exclamation marks around SDRS, I wanted to make a statement. Many have discussed this and commented on why SDRS would not be supported with auto-tiering solutions, and I noticed the common idea is that SDRS could initiate a migration to a different datastore and as such “reset” the tiered VM back to default. Although this is correct, there is a different reason why VMware recommends following the guidelines provided by the storage vendor. The guideline, by the way, is to use space balancing but not to enable the I/O metric. Those who were part of the beta, or have read the documentation or our book, might recall this guideline: when creating datastore clusters, select datastores which have similar performance characteristics. In other words, do not mix an SSD backed datastore with a SATA backed datastore; mixing SATA with SAS, however, is okay. Before we explain why, let’s repeat the basics around SDRS:

SDRS allows the aggregation of multiple datastores into a single object called a datastore cluster. SDRS will make recommendations to balance virtual machines or disks based on I/O and space utilization, and during virtual machine or virtual disk provisioning it will make recommendations for placement. SDRS can be set to fully automated or manual mode. In manual mode SDRS will only make recommendations; in fully automated mode these recommendations will be applied by SDRS as well. When balancing recommendations are applied, Storage vMotion is used to move the virtual machine.
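
For reference, the vendor guideline mentioned above (space balancing on, I/O metric off) translates into a small configuration change on the datastore cluster. A pyVmomi sketch, assuming a connected ServiceInstance `si` and a StoragePod object `pod`; the threshold value is illustrative:

```python
from pyVmomi import vim

def enable_space_only_sdrs(si, pod):
    """Enable SDRS on a datastore cluster (StoragePod) with only space
    balancing active, i.e. the I/O metric disabled, as per the vendor
    guideline for auto-tiering arrays."""
    spec = vim.storageDrs.ConfigSpec(
        podConfigSpec=vim.storageDrs.PodConfigSpec(
            enabled=True,
            ioLoadBalanceEnabled=False,          # leave I/O balancing to the array
            defaultVmBehavior="automated",       # fully automated space moves
            spaceLoadBalanceConfig=vim.storageDrs.SpaceLoadBalanceConfig(
                spaceUtilizationThreshold=80)))  # move when a datastore hits 80% full
    srm = si.RetrieveContent().storageResourceManager
    return srm.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)
```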

So what about auto-tiering solutions? Auto-tiering solutions move “blocks” around based on hotspots. Yes, again, when Storage vMotion migrates the virtual machine or virtual disk this process is reset. In other words, the full disk will land on the same tier and the array will need to decide at some point what belongs where… but is this an issue? In my opinion it probably isn’t, but it will depend on why SDRS decides to move the virtual machine, as it might lead to a temporary decrease in performance for specific chunks of data within the VM. As auto-tiering solutions help prevent performance issues by moving blocks around, you might not want SDRS making performance recommendations. But why… what is the technical reason for this?

As stated, SDRS uses I/O and space utilization for balancing… Space makes sense, I guess, but what about I/O… what does SDRS use, how does it know where to place a virtual machine or disk? Many people seem to be under the impression that SDRS simply uses average latency, but would that work in a greenfield deployment where no virtual machines have been deployed yet? It wouldn’t, and it would also not say much about the performance capabilities of the datastore. No, in order to ensure the correct datastore is selected, SDRS needs to know what the datastore is capable of; it will need to characterize the datastore, and in order to do so it uses Storage IO Control (hereafter SIOC), more specifically what we call “the injector”. The injector is part of SIOC and is a mechanism used to characterize each datastore by injecting random (read) I/O. Before you get worried: the injector only injects I/O when the datastore is idle. Even when the injector is busy, if it notices other activity on the datastore it will back down and retry later.

In order to characterize the datastore, the injector uses different numbers of outstanding I/Os and measures the latency of these I/Os. For example, it starts with 1 outstanding I/O and gets a response within 3 milliseconds. When 3 outstanding I/Os are used, the average latency for these I/Os is 3.8 milliseconds. With 5 I/Os the average latency is 4.3, and so on and so forth. For each device the outcome can be plotted, as shown in the screenshot below, and the slope of the graph indicates the performance capabilities of the datastore: the steeper the line, the lower the performance capabilities. The graph shows a test in which a multitude of datastores are characterized, each backed by a different number of spindles. As clearly shown, there is a relationship between the steepness and the number of spindles used.
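
The characterization itself is conceptually simple: sample average latency at a handful of outstanding-I/O depths and fit a straight line through the points. A rough Python sketch of that idea (not the actual injector, which runs inside the host; a real measurement would also need to bypass the OS page cache with direct reads against the device):

```python
import os
import random
import threading
import time

def avg_latency(path, outstanding, samples=64, block=4096):
    """Mean latency of `samples` random 4 KB reads issued by
    `outstanding` concurrent workers (the injector's 'outstanding I/Os')."""
    size = os.path.getsize(path)
    latencies, lock = [], threading.Lock()

    def worker(n):
        with open(path, "rb") as f:
            for _ in range(n):
                f.seek(random.randrange(0, max(size - block, 1)))
                t0 = time.perf_counter()
                f.read(block)
                with lock:
                    latencies.append(time.perf_counter() - t0)

    threads = [threading.Thread(target=worker,
                                args=(max(samples // outstanding, 1),))
               for _ in range(outstanding)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(latencies) / len(latencies)

def characterize(path, depths=(1, 3, 5, 7, 9)):
    """Least-squares fit of latency vs. outstanding I/Os; a steeper slope
    means fewer spindles, i.e. lower performance capabilities."""
    pts = [(q, avg_latency(path, q)) for q in depths]
    n = len(pts)
    sx = sum(q for q, _ in pts)
    sy = sum(l for _, l in pts)
    sxx = sum(q * q for q, _ in pts)
    sxy = sum(q * l for q, l in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

On a datastore backed by many spindles the line stays almost flat as the number of outstanding I/Os grows, so the fitted slope is small; on a datastore with only a few spindles, latency climbs steeply.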

So why does SDRS care? Well, in order to ensure the correct recommendations are made, each of the datastores needs to be characterized; in other words, a datastore backed by 16 spindles will be a more logical choice than a datastore with 4 spindles. So what is the problem with auto-tiering solutions? Well, think about it for a second… when a datastore has many hotspots, an auto-tiering solution will move chunks around. Although this is great for the virtual machine, it also means that when the injector characterizes the datastore it could potentially read from the SSD backed chunks or the SATA backed chunks, and this will lead to unexpected results in terms of average latency. As you can imagine, this will be confusing to SDRS and may lead to incorrect recommendations. Now, this is typically one of those scenarios which requires extensive testing, and hence the reason VMware refers you to the storage vendor for their recommendation around using SDRS in combination with auto-tiering solutions. My opinion: use SDRS space balancing, as this will help prevent downtime related to “out of space” scenarios and also help speed up the provisioning process. On top of that, you will get Datastore Maintenance Mode and Affinity Rules.
