Yellow Bricks

by Duncan Epping

scripts

Speed up your PowerShell scripts

Duncan Epping · Mar 24, 2009 ·

On the VI Toolkit blog there’s a great article for people like me. They explain how to speed up your scripts. I’m no PowerShell guru, and these kinds of articles are more than welcome to boost my scripting skills.

In short, it comes down to these three tips:

  1. Try to load as many objects as possible into arrays beforehand. Once you’ve got them loaded you can use them as arguments to multiple calls without having to resort to potentially expensive lookups every time.
  2. Just like in sample 1 above, when you’ve loaded objects, use the objects directly rather than using their names. This is usually not hard as our cmdlets are designed to take object first-and-foremost, and names are supported just as a convenience.
  3. If you absolutely need to load a single VM object by name, load it using the Get-VMFast function from their post. While this approach can certainly help, it’s not nearly as good as using the other two techniques mentioned above (see the sketch after this list).
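To make the tips concrete, here is a minimal PowerCLI sketch of the first two points, assuming PowerCLI is loaded and you’re already connected to vCenter with Connect-VIServer. Get-VMByNameFast is a hypothetical approximation of the idea behind Get-VMFast, not the original function from their post.

```powershell
# Tip 1: load the objects once, up front, instead of looking them up inside a loop.
$allVMs = Get-VM

# Tip 2: pass the objects themselves to other cmdlets rather than their names,
# which avoids an extra name-based lookup on every call.
foreach ($vm in $allVMs) {
    Get-HardDisk -VM $vm
}

# Slow anti-pattern for comparison: a fresh server-side lookup per iteration.
# foreach ($vm in $allVMs) { Get-HardDisk -VM (Get-VM -Name $vm.Name) }

# Tip 3 (hypothetical approximation, not the original Get-VMFast):
# filter server-side with Get-View instead of retrieving every VM object.
function Get-VMByNameFast {
    param([string]$Name)
    Get-View -ViewType VirtualMachine -Filter @{ 'Name' = "^$Name$" } |
        Get-VIObjectByVIView
}
```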

Head over to the VI Toolkit blog and start reading.

mbrscan, mbralign and RCU

Duncan Epping · Mar 23, 2009 ·

A while back I wrote an article on checking your disk alignment and even changing the disk alignment from the service console. Since then a lot of people have asked me for the exact link, but because I don’t have a now.NetApp.com account I wasn’t able to provide it. Today I received an email from the developer, Eric Forgette, with a link to a community article which contains links to both tools, mbralign and mbrscan.

Eric is also the one who developed RCU (Rapid Cloning Utility). I just watched the demo video on YouTube. In short: it’s a vCenter plugin that enables you to deploy hundreds of VDI desktops by utilizing the capabilities of the array. Keith Aasen wrote a blog article on this plugin which has some more details. I guess with the vStorage API coming up we can expect more vendors to add storage capabilities to the vCenter GUI, think snapshots / clones and more…

VIMA and the UPS initiated shutdown, the “lamw” version

Duncan Epping · Feb 19, 2009 ·

I already predicted that this was bound to happen sooner or later. It only took William Lam, aka lamw, a couple of days to enhance the work that Joseph Holland did. Joseph wrote a procedure that lets APC’s software initiate a shutdown of the VMs and the ESXi host when a power failure occurs. Joseph’s solution included a modification of ESXi, which means no VMware support.

I gave William a hint via Twitter and he came up with a Perl script that uses the API to initiate the shutdown of the VMs and the ESXi host. This script runs on the VIMA VM. There’s no need to change the ESXi host anymore!

ghettoShutdown.pl – This script initiates the shutdown of all VMs on an ESX/ESXi host, excluding the virtual machine that’s monitoring the UPS device, and then shuts down the host. It accepts two command-line parameters: --sleep, the duration in seconds to wait after a VM has initiated its shutdown before moving on to the next VM (shutdownVM() is a non-blocking function), and --ups_vm, the displayName of the VM that is monitoring the UPS device [more details to come later].

upsVIShutdown.pl – This script is a wrapper that holds the configuration for the order in which hosts are shut down. It may be used in conjunction with other UPS monitoring utilities, though in our example it’s placed in the apccontrol script to execute upon a power interruption.
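For those who prefer PowerShell over Perl, the same idea can be sketched with PowerCLI. This is a rough, hypothetical equivalent of what the scripts do, not William’s code; the host name and UPS VM display name are placeholders, and it assumes an existing Connect-VIServer session.

```powershell
# Hypothetical PowerCLI sketch of the same workflow, not William's Perl scripts:
# shut down every powered-on VM on the host except the UPS-monitoring VM, then the host.
$hostName     = 'esx01.local'   # placeholder host name
$upsVmName    = 'ups-monitor'   # placeholder displayName of the UPS-monitoring VM
$sleepSeconds = 60              # grace period per VM; guest shutdown is non-blocking

$vmhost = Get-VMHost -Name $hostName
$vms = Get-VM -Location $vmhost |
    Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Name -ne $upsVmName }

foreach ($vm in $vms) {
    Shutdown-VMGuest -VM $vm -Confirm:$false   # triggers a guest OS shutdown
    Start-Sleep -Seconds $sleepSeconds         # give the guest time before moving on
}

# Finally shut down the host itself.
Stop-VMHost -VMHost $vmhost -Force -Confirm:$false
```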

Now head over to the VMware Communities, download the script and test-drive it! Awesome work, William!

Load Balancing your LUNs on Active/Active SANs?

Duncan Epping · Feb 10, 2009 ·

I really love the discussions going on in some of the blog postings, and some posts even trigger other bloggers to respond. Frank Denneman commented on my “Balancing LUN paths with PowerShell” post and explained in short why load balancing your LUNs on some Active/Active SANs might not always lead to a performance increase; it can even lead to a performance decrease.

Frank was kind enough to elaborate on this some more on his own blog:

The arrays from the EVA family are AAA (Asymmetric Active-Active) arrays. In an Asymmetric Active-Active array both controllers are online and both can accept IO, but one controller is assigned as the preferred (owning) controller of the LUN. The owning controller can issue IO commands directly to the LUN. The non-owning controller (the proxy controller, to keep this text legible) can accept IO commands, but cannot communicate with the LUN directly. For example, if a read request reaches the array through the proxy controller, it is forwarded to the owning controller of the LUN.

If the array detects within a 60-minute window that at least 2/3 of the total read requests to a LUN are proxy reads, ownership of the LUN is transitioned to the non-owning proxy controller, making it the owning controller. Justin’s PowerShell script assigns the same path to a LUN on every server, so the EVA should switch the managing controller within the hour (provided, of course, that you have multiple ESX hosts running multiple VMs on the LUN).

Now, you will probably say that this is just what should happen… But for LUNs replicated via HP’s Continuous Access this might be a problem. Go to Frank’s blog and read why…
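For reference, here is a small PowerCLI sketch (not Justin’s script) that lists, per host and LUN, which paths are currently active, assuming an existing vCenter connection. It shows the kind of per-host path distribution the balancing script manipulates, and it’s what you’d want to check before letting every host prefer a different controller on an EVA.

```powershell
# Hedged PowerCLI sketch, not Justin's script: report which path(s) are active
# for every disk LUN on every host, so you can see whether all hosts send IO
# for a given LUN through the same controller.
foreach ($vmhost in Get-VMHost) {
    foreach ($lun in Get-ScsiLun -VmHost $vmhost -LunType disk) {
        Get-ScsiLunPath -ScsiLun $lun |
            Where-Object { $_.State -eq 'Active' } |
            Select-Object @{N='Host'; E={$vmhost.Name}},
                          @{N='LUN';  E={$lun.CanonicalName}},
                          Name, SanId
    }
}
```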

I was just about to publish this article and noticed that Chad also wrote an article on this subject yesterday! Chad seems to be reading my mind.

An “Active/Passive” array using VMware’s definition would be an EMC CLARiiON, an HP EVA, NetApp FAS or LSI Engenio (rebranded as several IBM midrange platforms). These are usually called “mid-range” arrays by the array vendors. It’s notable that all the array vendors (including EMC) call these “Active/Active” – so we have a naming conflict (hey… “SRM” to storage people means “Storage Resource Management” – not “Site Recovery Manager” 🙂 ). They are “Active/Active” in the sense that historically each head can carry an active workload on both “brains” (storage processors), but not for a single LUN. I say historically, because they can also support something called “ALUA”, or “Asymmetric Logical Unit Access” – where LUNs that are “owned” by a storage processor can be accessed via ports from the other using an internal interconnect – each vendor’s implementation and internal interconnect varies. This is moot for the topic of load balancing a given LUN with ESX 3.5, though, because until the next major release, ALUA is not supported. I prefer to call this an “Active/Passive LUN ownership” array. The other big standout is that these “midrange” Active/Passive arrays lose half their “brains” (each vendor calls these something different) if one fails – so either you accept that and oversubscribe, accepting some performance degradation if you lose a brain (acceptable in many use cases), or use it to only 50% of its performance envelope.

Read Chad’s full article here because there’s a lot of useful information in it! Thanks, Chad, for clearing this up.
