
Yellow Bricks

by Duncan Epping


Virtual SAN news flash pt 1

Duncan Epping · Oct 3, 2013 ·

I had a couple of things I wanted to write about regarding Virtual SAN that I felt weren’t beefy enough to dedicate a full article to, so I figured I would combine a couple of newsworthy items into a Virtual SAN news flash article / series.

  • I was playing with Virtual SAN last week and noticed something I hadn’t spotted before. I was running vSphere with an Enterprise license, and after adding the Virtual SAN license to my cluster I suddenly had the Distributed Switch capability on that cluster. I am not sure what this will look like when VSAN goes GA, but for now those who want to test VSAN with the Distributed Switch can do so. Use the Distributed Switch to guarantee bandwidth to Virtual SAN (leveraging Network IO Control) when combining different types of traffic, such as vMotion / Management / VM traffic, on a 10GbE pair. I would highly recommend starting to play around with this and getting experienced with it, especially because vSphere HA traffic and VSAN traffic share a single NIC pair and you do not want HA traffic to be impacted by replication traffic.
  • The Samsung SM1625 SSD series (eMLC) has been certified for Virtual SAN. It comes in sizes ranging from 100GB to 800GB and can do up to 120k IOPS random read. Nice to see the list of supported SSDs expanding; I will try to get my hands on one of these at some point to do some testing.
  • Most people by now are aware of the challenges with the AHCI controller. I was just talking with one of the VSAN engineers, who mentioned that they have managed to do a full root cause analysis and pinpoint the root of this problem. A team is currently working on solving it, things are looking good, and hopefully a new driver will be released soon. When it is, I will let you know, as I realize that many of you use these controllers in your home lab.
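To make the Network IO Control point above a bit more concrete, here is a minimal sketch of how share-based allocation divides a 10GbE uplink under contention. The share values are purely illustrative assumptions, not VMware defaults, and the function names are made up for this example.

```python
# Hypothetical NIOC-style share allocation on a 10 GbE uplink.
# The share values below are example assumptions, not VMware defaults.
LINK_GBPS = 10

shares = {
    "management": 20,
    "vmotion": 50,
    "virtual_machine": 50,
    "vsan": 100,  # give VSAN replication traffic the largest slice
}

def worst_case_gbps(traffic_type, shares, link_gbps=LINK_GBPS):
    """Bandwidth a traffic type is still guaranteed when every type is
    active at once; shares only kick in under contention."""
    total = sum(shares.values())
    return link_gbps * shares[traffic_type] / total

for t in shares:
    print(f"{t}: {worst_case_gbps(t, shares):.2f} Gbps guaranteed")
```

The point of shares over hard limits is that any traffic type can still burst to the full 10GbE when the link is idle; the guarantee above only applies when everything contends at once.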

Storage Migrations?

Duncan Epping · Jul 28, 2010 ·

On an internal mailing list we had a very useful discussion about storage migrations, for when a SAN is replaced or a migration needs to take place to a different set of disks. Many customers face this at some point. The question usually is: what is the best approach, SAN replication or Storage vMotion? Both have their pros and cons.

SAN Replication:

  • Can utilize Array based copy mechanisms for fast replication (+)
  • Per LUN migration, high level of concurrency (+)
  • Old volumes still available (+)
  • Need to resignature or mount the volume again (-)
    • A resignature also means you will need to reregister the VM! (-)
  • Downtime for the VM during the cut over (-)

Storage vMotion:

  • No downtime for your VMs (+)
  • Fast Storage vMotion when your Array supports VAAI (+)
    • If your Array doesn’t support VAAI migrations can be slow (-)
    • Induced cost if VAAI isn’t supported (-)
    • Only intra-array, not across arrays (-)
  • No resignaturing or re-registering needed (+)
  • Per VM migration (-)
    • Limited concurrency (2 per host, 8 per VMFS volume) (-)

As you can see, both have their pros and cons, and it boils down to the following questions:

How much down time can you afford?
How much time do you have for the migration?
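To get a feel for the second question, here is a rough back-of-the-envelope sketch of how long a pure Storage vMotion approach could take, given the concurrency limits mentioned in the list above (2 per host, 8 per VMFS volume). The VM size and per-migration throughput figures are illustrative assumptions, and the function is hypothetical, not a VMware tool.

```python
import math

# Rough planning sketch: wall-clock time to migrate everything with
# Storage vMotion, respecting the concurrency limits mentioned above.
MAX_PER_HOST = 2       # concurrent Storage vMotions per host
MAX_PER_DATASTORE = 8  # concurrent Storage vMotions per VMFS volume

def estimate_hours(vm_count, hosts, datastores,
                   avg_vm_gb=100, gbps_per_migration=1.0):
    # Effective concurrency is capped by whichever limit is hit first.
    concurrency = min(hosts * MAX_PER_HOST, datastores * MAX_PER_DATASTORE)
    seconds_per_vm = avg_vm_gb * 8 / gbps_per_migration  # GB -> gigabits
    waves = math.ceil(vm_count / concurrency)
    return waves * seconds_per_vm / 3600

print(f"{estimate_hours(200, hosts=8, datastores=4):.1f} hours")
```

Even a coarse estimate like this helps decide whether the maintenance window fits, or whether array-based replication (with its per-LUN concurrency) is the faster path despite the cut-over downtime.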

Single Initiator Zoning, recommended or not?

Duncan Epping · Mar 4, 2010 ·

A question we receive a lot is: what kind of zoning should be implemented for your storage solution? The answer is usually really short and simple: at least single initiator zoning.

Single initiator zoning is something we (VMware PSO Consultants/Architects) have always recommended in the field, and something that is clearly mentioned in our documentation… at least that’s what I thought.

On page 31 of the SAN Design and Deploy guide we clearly state the following:

When a SAN is configured using zoning, the devices outside a zone are not visible to the devices inside the zone. When there is one HBA or initiator to a single storage processor port or target zone, it is commonly referred to as single zone. This type of single zoning protects devices within a zone from fabric notifications, such as Registered State Change Notification (RSCN) changes from other zones. In addition, SAN traffic within each zone is isolated from the other zones. Thus, using single zone is a common industry practice.

That’s crystal clear, isn’t it? Unfortunately there’s another document floating around, called the “Fibre Channel SAN Configuration Guide”, which states the following on page 36:

  • ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in one zone.
  • If you have a very large deployment, you might need to create separate zones for different areas of functionality. For example, you can separate accounting from human resources.

So which one is correct and which one isn’t? I don’t want any confusion around this: the first document, the SAN Design and Deploy guide, is correct. VMware recommends single initiator zoning. Of course single initiator / single target zoning would be even better, but single initiator is the bare minimum. Now let’s hope the VMware Tech Writers can get the other document fixed…
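The difference between the two recommended schemes is easy to see as a set-generation exercise. A minimal sketch, where all HBA and target-port alias names are made up for illustration:

```python
# Sketch of single-initiator vs single-initiator/single-target zoning.
# All alias names below are hypothetical examples.
initiators = ["esx01_hba0", "esx01_hba1", "esx02_hba0"]
targets = ["array_spa_p0", "array_spb_p0"]

def single_initiator_zones(initiators, targets):
    # One zone per HBA: that HBA plus every target port it needs.
    return {f"z_{i}": [i, *targets] for i in initiators}

def single_initiator_single_target_zones(initiators, targets):
    # One zone per HBA/target pair: the smallest possible fault domains,
    # at the cost of more zones to manage.
    return {f"z_{i}_{t}": [i, t] for i in initiators for t in targets}

print(len(single_initiator_zones(initiators, targets)))
print(len(single_initiator_single_target_zones(initiators, targets)))
```

Single initiator gives you one zone per HBA; single initiator / single target multiplies that by the number of target ports, which is why it isolates fabric events (such as RSCNs) even further but is more work to maintain.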

About the Author

Duncan Epping is a Chief Technologist in the Office of the CTO in the Cloud Infrastructure Business Group (CIBG) at VMware. Besides writing on Yellow-Bricks, Duncan co-authors the vSAN Deep Dive book series and the vSphere Clustering Deep Dive book series. Duncan also co-hosts the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2023