
Yellow Bricks

by Duncan Epping


Storage

vSphere and iSCSI storage best practices

Duncan Epping · Nov 1, 2017 ·

And here’s the next paper I updated. This time it is the iSCSI storage best practices for vSphere. It seems that we have now overhauled most of the core storage white papers. You can find them all under core storage on storagehub.vmware.com, but for your convenience I will post the iSCSI links below as well:

  • Best Practices For Running VMware vSphere On iSCSI (web based reading)
  • Best Practices For Running VMware vSphere On iSCSI (pdf download)

One thing I want to point out, as it is a significant change compared to the last version of the paper, is the following: in the past vSphere did not support IPSec, so for the longest time it was not supported for iSCSI either. When reviewing all available material I noticed that although vSphere now supports IPSec (with IPv6 only), there was no statement around iSCSI.

So what does that mean for iSCSI? Well, as of vSphere 6.0 it is supported to have IPSec enabled on the vSphere Software iSCSI implementation, but only for IPv6 and not for IPv4! Note, however, that there’s no data on the potential performance impact, and enabling IPSec could (I should probably say “will” instead of “could”) introduce latency / overhead. In other words, if you want to enable this, make sure to test the impact it has on your workload.
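
If you do test this, it helps to confirm what is actually configured on the host before and after. Below is a minimal sketch using the esxcli IPsec commands documented in the vSphere Security guide; I am assuming the default namespace here, so double check the options against your ESXi build:

# List the IPv6 IPsec security associations configured on this host
esxcli network ip ipsec sa list

# List the security policies that determine which traffic uses those SAs
esxcli network ip ipsec sp list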

vSphere 6.5 what’s new – VMFS 6 / Core Storage

Duncan Epping · Oct 18, 2016 ·

I haven’t spent a lot of time looking at VMFS lately. I was looking into what was new for vSphere 6.5 and then noticed a VMFS section. Good to see that work is still being done on new features and functionality for the core vSphere file system. So what is new with VMFS 6:

  • Support for 4K Native Drives in 512e mode
  • SE Sparse Default
  • Automatic Space Reclamation
  • Support for 512 devices and 2000 paths (versus 256 and 1024 in the previous versions)
  • CBRC aka View Storage Accelerator

Let’s look at them one by one. I think support for 4K native drives in 512e mode speaks for itself. Sizes of spindles keep growing, and these new “advanced format” drives come with a 4K byte sector instead of the usual 512 byte sector, primarily for better handling of media errors. As of vSphere 6.5 this is fully supported, but note that for now it is only supported when running in 512e mode! The same applies to Virtual SAN in the 6.5 release: only supported in 512e mode. This basically means that 512 byte sectors are being emulated on a 4K drive. Hopefully we will have more on full 4Kn support for vSphere/VSAN soon.
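
If you are wondering which sector format your devices actually report, esxcli can show it per device. A quick sketch, assuming the “device capacity” namespace that was introduced around the 6.5 timeframe:

# Show logical/physical block size and format type (512n, 512e, ...) for each device
esxcli storage core device capacity list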

From an SE Sparse perspective: right now SE Sparse is used primarily for View and for virtual disks larger than 2TB. On VMFS 6, SE Sparse will be the default. Not much more to it than that. If you want to know more about SE Sparse, read this great post by Cormac.
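
If you are curious which format a particular virtual disk or snapshot delta is using, the disk descriptor shows it. A hedged example; the datastore path and file name below are made up for illustration:

# The createType field in the descriptor shows the format,
# e.g. "seSparse" for SE Sparse or "vmfsSparse" for the older redo-log format
grep createType /vmfs/volumes/sharedVmfs-0/testvm/testvm-000001.vmdk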

Automatic Space Reclamation is something that I know many of my customers have been waiting for. Note that this is based on VAAI Unmap, which has been around for a while and allows you to unmap previously used blocks. In other words, storage capacity is reclaimed and released to the array so that other volumes can use these blocks when needed. In the past you needed to run a command to reclaim the blocks; now this has been integrated in the UI and can simply be turned on or off. You can find this in the UI when you go to your datastore object and click Configure: you can set it to “none”, which means you disable it, or set it to “low”, as shown in the screenshot below.

If you prefer “esxcli” then you can do the following to get the info for a particular datastore (sharedVmfs-0 in my case):

esxcli storage vmfs reclaim config get -l sharedVmfs-0
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: low

Or set the datastore to a particular level. Note that using esxcli you can also set the priority to medium or high if desired:

esxcli storage vmfs reclaim config set -l sharedVmfs-0 -p high
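
And for reference, the manual reclaim command I referred to above still exists if you want to trigger an unmap yourself, for instance on a VMFS 5 volume:

# Manually reclaim (unmap) unused blocks on a volume, the pre-VMFS 6 way
esxcli storage vmfs unmap -l sharedVmfs-0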

Next up, support for 512 devices and 2000 paths. In previous versions the limit was 256 devices and 1024 paths, and some customers were hitting this limit in their cluster. Especially when RDMs are used, when people have a limited number of VMs per datastore, or when 8 paths to each device are used, it becomes easy to hit those limits. Hopefully with 6.5 that will not happen anytime soon. On the other hand, personally I would hope more and more people are considering moving towards either VSAN or Virtual Volumes.
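
If you want to know how close you are to those limits today, a quick and dirty way is to count devices and paths per host with esxcli. A small sketch; the grep patterns simply match the field names in the default output:

# Count the devices presented to this host
esxcli storage core device list | grep -c "Display Name:"

# Count the paths on this host
esxcli storage core path list | grep -c "Runtime Name:"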

This is one I accidentally ran into. It is not really directly related to VMFS, but I figured I would add it here anyway, otherwise I would forget about it. In the past CBRC, aka View Storage Accelerator, was limited to 2GB of memory cache per host. I noticed in the advanced settings that it can now be set to 32GB, which is a big difference compared to the 2GB in previous releases. I haven’t done any testing, but I assume our EUC team has, and hopefully we will see some good performance data on this big increase soon.
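
If you want to check this on your own hosts, the cache size is a host advanced setting. A hedged example; I am assuming the CBRC.DCacheMemReserved option (value in MB) here, so verify the exact option name in your release:

# Show the configured CBRC cache reservation for this host (value in MB)
esxcli system settings advanced list -o /CBRC/DCacheMemReserved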

And that was it… some great enhancements in the core storage space if you ask me. I am sure there is even more, and if I find out more details I will share those with you as well.

Hyper-Converged is here, but what is next?

Duncan Epping · Oct 11, 2016 ·

Last week I was talking to a customer and they posed some interesting questions: what excites me in IT (why I work for VMware), and what is next for hyper-converged? I thought these were very relevant questions, and I am guessing many customers have that same question (what is next for hyper-converged, that is). They see this shiny thing out there called hyper-converged, but if they take those steps, where does the journey end? I truly believe that those who went the hyper-converged route simply took the first steps on an SDDC journey.

Hyper-converged, I think, is a term which was hyped and over-used, just like “cloud” a couple of years ago. Let’s break down what it truly is: hardware + software. Nothing really groundbreaking. It is different in terms of how it is delivered. Sure, it is a different architectural approach, as you utilize a software-based / server-side scale-out storage solution which sits within the hypervisor (or on top, for that matter). Still, that hypervisor is something you were already using (most likely), and I am sure that “hardware” isn’t new either. Then the storage aspect must be the big differentiator, right? Wrong. The fundamental difference, in my opinion, is how you manage the environment and the way it is delivered and supported. But does it really need to stop there, or is there more?

There definitely is much more if you ask me. That is one thing that has always surprised me. Many see hyper-converged as a complete solution; the reality is though that in many cases essential parts are missing. Networking, security, automation/orchestration engines, logging/analytics engines, BC/DR (and orchestration of it), etc. Many different aspects and components which seem to be overlooked. Just look at networking: even including a switch is not something you see too often, and what about the configuration of a switch, or overlay networks, firewalls / load balancers? It all appears not to be a part of hyper-converged systems. The funny thing is though, if you are going on a software-defined journey, if you want an enterprise-grade private cloud that allows you to scale in a secure but agile manner, these components are a requirement; you cannot go without them. You cannot extend your private cloud to the public cloud without any type of security in place, and one would assume that you would like to orchestrate everything from that same platform and have the same networking / security capabilities at your disposal, both private and public.

That is why I was so excited about the VMworld US keynote. Cross Cloud Services on top of hyper-converged, leveraging all the tools VMware provides today (vSphere, VSAN, NSX), will allow you to do exactly what I describe above. Whether that is to IBM, vCloud Air or any other of the mega clouds listed in the slide below is beside the point. Extending your datacenter services into public clouds is what we have been talking about for a while, this hybrid approach which could bring (dare I say) elasticity. This is a fundamental aspect of SDDC, of which a hyper-converged architecture is simply a key pillar.

Hyper-converged by itself does not make a private cloud. Hyper-converged does not deliver a full SDDC stack; it is a great step in the right direction however. But before you take that (necessary) hyper-converged step, ask yourself what is next on the journey to SDDC. Networking? Security? Automation/Orchestration? Logging? Monitoring? Analytics? Hybridity? Who can help you reach full potential, who can help you take those next steps? That is what excites me, that is why I work for VMware. I believe we have a great opportunity here, as we are the only company who holds all the pieces of the SDDC puzzle. And with regards to what is next? Delivering all of that in an easy-to-consume manner, that is what is next!


Playing around with Tintri Global Center and Tintri storage systems

Duncan Epping · Jul 21, 2016 ·

Last week the folks from Tintri reached out and asked me if I was interested in playing around with a lab they have running. They gave me a couple of hours of access to their Partner Lab. It had a couple of hosts, 4 different Tintri VMstore systems, including their all-flash offering, and of course their management solution, Global Center. I have done a couple of posts on Tintri in the past, so if you want to know more about Tintri make sure to read those as well. (1, 2, 3, 4)

For those who have no clue whatsoever, Tintri is a storage company which sells “VM-Aware” storage. This basically means that all of the data services they offer can be enabled on a VM/VMDK level, and they give visibility all the way down to the lowest level. And not just for VMware; they support other hypervisors as well, by the way. I’ve been having discussions with Tintri since 2011 and it is safe to say they have come a long way. Most of my engagements however were presentations and the occasional demo, so it was nice to actually go through the experience personally.

First of all, their storage system management interface. If you log in to one of the systems you are presented with all the info you would want to know: IOPS / bandwidth / latency, and even latency is split into network, host and storage latency. So if anything is misbehaving you will probably find out what and why relatively fast.

Not just that, if you look at the VMs running on your system from the array side you can also do things like take a storage snapshot, clone the VM, restore the VM, replicate it or set QoS for that VM. Very powerful, and all of that is also available in vCenter through a plugin, by the way.

Now when you clone a VM, you can also create many VMs at once, which is pretty neat. I say “give me 10 with the name Duncan” and I get 10 of those, called Duncan-01 through Duncan-10.

Their central management solution, Tintri Global Center, is what I was most interested in, as I had only seen it once in a demo and that was it. Now, one thing I have to say: it has been said by some that Tintri offers a scale-out solution, but the storage system itself is not a scale-out system. When they refer to scale-out, they refer to the ability to manage all storage systems through a single interface and the ability to group storage systems and load balance between those, which is done through Global Center and their “Pools” functionality. Pools felt a bit like SDRS to me, as I said in a previous post, and now that I have played with it a bit it definitely feels a lot like SDRS. When I was playing with the lab I received the following message.

If you have used SDRS at some point in time and look at the screenshot (click it for a bigger version), you know what I mean. Anyway, good functionality to have: pool different arrays and balance between them based on space and performance. Nothing wrong with that. But that is not the best thing about Global Center. Like I said, I like the simplicity of Tintri’s interfaces, and that also applies to Global Center. For instance, when you log in, this is the first thing you see:

I really like the simplicity. It gives a great overview of the state of the total environment, and at the same time it gives you the ability to dive deeper when needed. You can look at per-VMstore details and figure out where your capacity is going, for instance (snapshots, live data, etc.), but also see basic trending in terms of what VMs are demanding from a storage performance and capacity point of view.

Having said all of that, there is one thing that bugs me. Yes, Tintri provides a VASA Provider, but this is the “old style” VASA Provider which revolves around the datastore. Now if you look at VVols, it is all about the VM and which capabilities it needs. I would definitely welcome VVol support from Tintri. I can understand this is no big priority for them, as they have “similar” functionality; it is just that for a VM-aware storage system I would expect deep(er) integration from that perspective as well. But that is just me nitpicking I guess; as a VMware employee working for the BU that brought you VVols, it is safe to say I am biased when it comes to this. Tintri does offer an alternative which makes it easy to manage groups of VMs, called Service Groups. It allows you to apply data services to a logical grouping, which is defined by a rule. I could for instance say that all VMs that start with “Dun” need to be snapshotted every 5 hours, and that this snapshot needs to be replicated, etc. Pretty powerful stuff, and fairly easy to use as well. Still, for consistency it would be nice to be able to do this through SPBM in vSphere, so that if I have other storage systems I can use the same mechanism to define services, all through the same interface.

** Update: I was just pointed to the fact that there is a VVol-capable VASA Provider, at least according to the VMware HCL. I have not seen the implementation and what is / what is not exposed, unfortunately. I also just read the documentation, and VVol is indeed supported, with a caveat for some systems: Tintri OS 4.1 supports VMware vSphere APIs for Storage Awareness (VASA) 3.0 (VVol 1.0). The Tintri vCenter Web Client Plugin is not required to run VVols on Tintri. VVols are not available for Tintri VMstore T540 or T445 systems. Also, the docs I’ve seen don’t show the capabilities exposed through VVols, unfortunately. **

Again, I really liked the simplicity of the solution. The overall user experience was great; I mean, taking a snapshot is dead simple. Replicating that snapshot? One click. Clone? One click. QoS? 3 settings. Do I need to say more? Well done, Tintri, and I’m looking forward to what you guys will release next. Thanks for providing me the opportunity to play around in your lab, I hope I didn’t break anything.

VSAN made storage management a non issue for the 1st time

Duncan Epping · Sep 28, 2015 ·

Whenever I talk to customers about Virtual SAN, the question that usually comes up is: why Virtual SAN? Some of you may expect it to be performance, or the scale-out aspect, or the resiliency… None of that is the biggest differentiator in my opinion; management truly is. Or should I say: the fact that you can literally forget about it after you have configured it? Yes, of course that is something you expect every vendor to say about their own product. I think the reply of one of the users during the VSAN Chat that was held last week is the biggest testimony I can provide: “VSAN made storage management a non-issue for the first time for the vSphere cluster admin”. (see tweet below)

@vmwarevsan VSAN made storage management a non-issue for this first time vSphere cluster admin! #vsanchat http://t.co/5arKbzCdjz

— Aaron Kay (@num1k) September 22, 2015

When we released the first version of Virtual SAN I strongly believed we had a winner on our hands. It was so simple to configure: you don’t need to be a VCP to enable VSAN, it is two clicks. Of course, VSAN is a bit more than just that tick box on a cluster level that says “enable”. You want to make sure it performs well, all drivers/firmware combinations are certified, the network is correctly configured, etc. Fortunately we also have a solution for that, and it isn’t a manual process.

No, you simply go to the VSAN health check section on your VSAN cluster object and validate that everything is green. Besides simply looking at those green checks, you can also run certain proactive tests that allow you to test, for instance, multicast performance, VM creation, VSAN performance, etc. It all comes as part of vCenter Server as of the 6.0 U1 release. On top of that there is more planned. At VMworld we already hinted at it: advanced performance management inside vCenter based on a distributed and decentralized model. You can expect that at some point in the near future, and of course we have the vROps pack for Virtual SAN if you prefer that!
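
For those who prefer the command line, part of this can also be checked per host with esxcli. A minimal sketch using the vsan namespace:

# Show cluster membership and state for this host
esxcli vsan cluster get

# Verify which VMkernel interfaces are tagged for VSAN traffic
esxcli vsan network list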

No, if you ask me, the biggest differentiator definitely is management… simplicity is the key theme, and I guarantee that things will only improve with each release.

