Re: SFD5 event and negativity / respect

Storage Field Day was held last week, and I typically like these events, mainly because they have start-ups presenting their new technology and I generally enjoy the flow of the sessions. I also like the interaction between the “delegates” and the vendors, well at times I do. There were several blog posts on the topic from people who are part of what I would at this point call the old boys club (yes, there were women attending as well, but you get the point), as that is what it felt like during the event. I wanted to comment on Bob’s article, but it looks like he is not looking for a healthy debate, so I figured a blog post would be the best way to reply.

For those who don’t know: the sessions usually start with some background on the company and a problem description, followed by a product session with demos and deep-dives where and when needed. Delegates fire off questions during these sessions; sometimes this leads to a great discussion and sometimes it doesn’t.

This week, as some of you may have noticed on Twitter, the event was held, but personally I didn’t enjoy it very much. I think this tweet from my friend Jason Boche captures the feeling I had well:

What stood out to me, and judging by Twitter to others as well, was the negativity from some of the delegates towards some of the vendors. When the initial problem statement/marketing fluff took too long, the “boring” comments from the delegates started to pass by on Twitter; during the start of the EMC session this was particularly bad. (Not the first time I have seen it… and I am definitely not trying to defend a vendor here, as they could have known what they were up against and should know the first rule of presenting: know your audience.) Maybe even more annoying for the person watching the feed were the “inside jokes”, the “anecdotes” and the in-crowd discussions. It really disrupted the flow of some of the sessions, and I think the PernixData session was the best example: it derailed so often that the presenter ran out of time, or as Frank put it:

When several people commented on the tweets/atmosphere, some heated debates kicked off. What stood out to me during these debates was that the “delegates” felt they were doing the vendors a service and that the vendors should respect their time and effort. (I agree with them to a certain extent.) It was also mentioned various times that they were all experts and there was no need for basics or problem descriptions, as all had done their due diligence and come well prepared. Personally I don’t believe that, based on the questions asked, and I think everyone can learn something even from the basics. Besides that, I would argue that the Tech Field Day website is really clear on this:

Don’t assume all of the attendees are experts in your area. True to the spirit of Gestalt IT, we intentionally mix many IT disciplines at Tech Field Day to spark creativity and out-of-the-box thinking.

And on the topic of respect: it goes both ways, and it seems the Tech Field Day charter agrees with me, as this is what it states in the section on what it is like to be a delegate:

… just treat them with the thoughtfulness, professionalism and mutual respect they deserve.

But what is the underlying problem? What the delegates seem to have forgotten is the vendor’s perspective… Why are these vendors there? What is their reason to participate? Are they looking for feedback from a handful of people on their product(s), aiming to make road map changes when needed? Or are they looking to introduce their product (or new version) to the world through the reach the event has? (Note I said event and not delegates on purpose.) I would expect it to be the latter, as the majority of companies presenting are introducing a new product or version and not a road map. On top of that, I would argue that if they were looking for direct product feedback they would do this in a closed setting with a limited group of people under a strict NDA. Even if that were not the case: just as you are asking the vendor to be respectful of your time, you should also be respectful towards them for what they are investing. Which is probably a lot more than just time, as without their sponsorship there would not be an event. (Assuming Mr Stephen Foskett is not a secret billionaire… but who knows :-)) Either way, think about what allows these events to exist. Without these companies investing, it would be difficult for Stephen to organize them. Also, think about the people watching the event online, and even about the person sitting next to you. What is glaringly obvious to you may not be so for the person sitting next to you, simply because they come from a different background.

So why am I writing this? Well, hopefully so things will change for the better. As I stated, I like these events, as in my opinion they are valuable to the community and they provide a nice podium for start-ups to present themselves to the world, but that positive aspect should not get lost in unneeded debates and negativity. That is what these events are about in my opinion: providing a service to the community, and I hope it will stay that way.

PS: I have a lot of respect for the endless effort Stephen puts into organizing these sessions / events…

PernixData feature announcements during Storage Field Day

During Storage Field Day today PernixData announced a whole bunch of features they are working on that will be released in the near future. In my opinion there were four major features announced:

  • Support for NFS
  • Network Compression
  • Distributed Fault Tolerant Memory
  • Topology Awareness

Let’s go over these one by one:

Support for NFS is something I can be brief about, I guess, as it is what it says it is. It is something that has come up multiple times in conversations on Twitter around Pernix, and it looks like they have managed to solve the problem and will support NFS in the near future. One thing I want to point out: PernixData does not introduce a virtual appliance in order to support NFS, nor does it create an NFS server and proxy the IOs. Sounds like magic, right… Nice work guys!

It gets way more interesting with Network Compression. What is it, what does it do? Network Compression is an adaptive mechanism that looks at the size of the IO and analyzes whether it makes sense to compress the data before replicating it to a remote host. As you can imagine, especially with larger block sizes (64K and up) this could significantly reduce the amount of data transferred over the network. When talking to PernixData, one of the questions I had was: what about the performance and overhead… give me some details. This is the example they came back with:

  • Write back with local copy only = 2700 IOps
  • Write back + 1 replica = 1770 IOps
  • Write back + 1 replica + network compression = 2700 IOps

As you can see, the number of IOps went down when a remote replica was added. However, it went up again to “normal” values when network compression was enabled; of course, this test was conducted using large block sizes. When it came to CPU overhead, it was mentioned that the overhead has so far been demonstrated to be negligible. You may ask yourself why; it is fairly simple: the CPU cost of compression is offset by the lower network transfer requirements, resulting in roughly equal performance. What also helps here is that it is an adaptive mechanism that does a cost/benefit analysis before compressing. So if you are doing 512 byte or 4KB IOs, network compression will not kick in, keeping the overhead low and the benefits high!
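
To make the cost/benefit idea a bit more tangible, here is a minimal sketch of what such an adaptive decision could look like. This is purely my own illustration, not PernixData’s implementation, and the threshold values are made up:

import zlib

# Illustrative thresholds only -- PernixData has not published the internals
# of its adaptive mechanism, so treat this as a conceptual sketch.
MIN_BLOCK_SIZE = 32 * 1024   # skip small IOs (512 byte / 4KB writes)
MIN_SAVINGS = 0.2            # require at least 20% reduction to be worthwhile

def maybe_compress(block):
    """Return (payload, compressed?) for replication to the remote host."""
    if len(block) < MIN_BLOCK_SIZE:
        return block, False                  # small IO: don't spend the CPU
    candidate = zlib.compress(block, 1)      # cheap/fast compression level
    if 1 - len(candidate) / len(block) < MIN_SAVINGS:
        return block, False                  # barely compressible: send as-is
    return candidate, True                   # large + compressible: send compressed

payload, compressed = maybe_compress(b"A" * 65536)   # 64KB block -> compressed
print(compressed, len(payload))
payload, compressed = maybe_compress(b"B" * 4096)    # 4KB block -> passed through
print(compressed, len(payload))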

I personally got really excited about this feature: DFTM, Distributed Fault Tolerant Memory. Say what? Yes, distributed fault tolerant memory! Besides virtualizing flash, FVP can now also virtualize memory and create an aggregated pool of resources out of it for caching purposes. Or, put more simply: it allows you to reserve a chunk of host memory as virtual machine cache. Once again this happens at the hypervisor level, so there is no requirement to run a virtual appliance; just enable and go! I do want to point out that there is no “cache tiering” at the moment, but I guess Satyam can consider that as a feature request. Also, when you create an FVP cluster, hosts within that cluster will provide either “flash caching” capabilities or “memory caching” capabilities. This means that technically virtual machines can use “local flash” resources while the remote resources are “memory” based (or the other way around). Personally I would avoid this at all cost though, as it will give some strange, unpredictable performance results.

So what does this add? Well, crazy performance for instance… We are talking 80k IOps easily, with a nice low latency of 50-200 microseconds. Unlike other solutions, FVP doesn’t restrict the size of your cache either. By default it will recommend using 50% of the unreserved memory capacity per host. Personally I think this is a bit high; as most people do not reserve memory, this will typically result in 50% of your memory being recommended… but fortunately FVP allows you to customize this as required. So if you have 128GB of memory and feel 16GB is sufficient for memory caching, then that is what you assign to FVP.
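
Just to illustrate the math behind that default, here is a back-of-the-envelope sketch of my own (not FVP code):

def recommended_dftm_cache_gb(total_memory_gb, reserved_memory_gb, pct=0.50):
    """Default suggestion: a percentage (50%) of the *unreserved* host memory."""
    return (total_memory_gb - reserved_memory_gb) * pct

# 128GB host without memory reservations: the default suggestion would be 64GB...
print(recommended_dftm_cache_gb(128, 0))    # 64.0
# ...which is why I consider 50% on the high side; overriding it to e.g. 16GB is trivial.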

Another feature that will be added is Topology Awareness. Basically, this allows you to group hosts in a cluster and create failure domains. An example may make this a bit easier to grasp: let’s assume you have 2 blade chassis, each with 8 hosts. When you enable “write back caching” you probably want to ensure that your replica is stored on a blade in the other chassis… and that is exactly what this feature allows you to do. Specify replica groups, add hosts to the replica groups, easy as that!

You can then specify per virtual machine where the replica needs to reside. Yes, you can even specify that the replica should stay within its own failure domain if there are requirements to do so, but in the example below the other “failure domain” is chosen.
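
To make the replica group concept a bit more concrete, here is a small sketch of my own for the two-chassis example above (hypothetical host names and logic, not the FVP interface):

# Hypothetical failure domains: two chassis with eight blades each.
failure_domains = {
    "chassis-A": [f"esxi{i:02d}" for i in range(1, 9)],    # esxi01..esxi08
    "chassis-B": [f"esxi{i:02d}" for i in range(9, 17)],   # esxi09..esxi16
}

def pick_replica_host(local_host, same_domain=False):
    """Pick a write-back replica target; by default it lives in another failure domain."""
    local_domain = next(d for d, hosts in failure_domains.items() if local_host in hosts)
    for domain, hosts in failure_domains.items():
        if same_domain and domain == local_domain:
            return next(h for h in hosts if h != local_host)
        if not same_domain and domain != local_domain:
            return hosts[0]

# A VM running on a blade in chassis-A gets its replica stored in chassis-B:
print(pick_replica_host("esxi03"))                      # esxi09
# ...unless you explicitly require the replica to stay within its own domain:
print(pick_replica_host("esxi03", same_domain=True))    # esxi01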

Is that awesome or what? I think it is, and I am very impressed by what PernixData has announced. For those interested, the SFD video should be online soon, and those visiting the Milan VMUG are lucky, as Frank mentioned he will be presenting on these new features at the event. All in all, an impressive presentation again by PernixData if you ask me… an awesome set of features to be added soon!

Heartbleed Security Bug fixes for VMware

It seems to be patch Saturday, as today a whole bunch of product updates were released. All of these updates relate to the Heartbleed security bug fix. There is no point in listing every single product, as I assume you all know the VMware download page by now, but I do want to link the most commonly used ones for your convenience:

Time to update, but before you do… if you are using NFS based storage make sure to read this first before jumping straight to vSphere 5.5 U1a!

Alert: vSphere 5.5 U1 and NFS issue!

Some had already reported on this on Twitter and in various blog posts, but I had to wait until I received the green light from our KB/GSS team. An issue has been discovered with vSphere 5.5 Update 1 that relates to loss of connectivity to NFS based datastores. (NFS volumes include VSA datastores.)

*** Patch released, read more about it here ***

This is a serious issue, as it results in an APD (All Paths Down) of the datastore, meaning the virtual machines will not be able to do any IO to the datastore for the duration of the APD. This by itself can result in BSODs for Windows guests and filesystems becoming read-only for Linux guests.

Witnessed log entries can include:

2014-04-01T14:35:08.074Z: [APDCorrelator] 9413898746us: [vob.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
2014-04-01T14:35:08.075Z: [APDCorrelator] 9414268686us: [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
2014-04-01T14:36:55.274Z: No correlator for vob.vmfs.nfs.server.disconnect
2014-04-01T14:36:55.274Z: [vmfsCorrelator] 9521467867us: [esx.problem.vmfs.nfs.server.disconnect] 192.168.1.1/NFS-DS1 12345678-abcdefg0-0000-000000000000 NFS-DS1
2014-04-01T14:37:28.081Z: [APDCorrelator] 9553899639us: [vob.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
2014-04-01T14:37:28.081Z: [APDCorrelator] 9554275221us: [esx.problem.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.

If you are hitting these issues, then VMware recommends reverting back to vSphere 5.5. Please monitor the following KB closely for more details and hopefully a fix in the near future: http://kb.vmware.com/kb/2076392


Disk Controller features and Queue Depth?

I have been working on various VSAN configurations, and a question that always comes up is: what are the features and queue depth of disk controller X? (Local disks, not FC based…) Note that this is not only useful to know when using VSAN, but also when you are planning on doing host-local caching with solutions like PernixData FVP or SanDisk FlashSoft, for instance. The controller used can impact performance, and a really low queue depth will result in lower performance; it is as simple as that.

** NOTE: This post is not about VSAN disk controllers, but rather about disk controllers and their queue depth. Always check the HCL before buying! **

I found myself digging through documentation and doing searches on the internet until I stumbled across the following website. I figured I would share the link with you, as it will help you (especially consultants) when you need to go through this exercise multiple times:

http://forums.servethehome.com/index.php?threads/lsi-raid-controller-and-hba-complete-listing-plus-oem-models.599/

Just as an example, the Dell H200 Integrated disk controller is on the VSAN HCL. According to the website above, it is based on the LSI 2008 and provides the following feature set: 2×4 port internal SAS, no cache, no BBU, RAID 0, 1 and 10. According to the VSAN HCL it also provides “Virtual SAN Pass-Through”. I guess the only info missing is the queue depth of the controller. I have not been able to find a good source for this, so I figured I would make this post a source for that info.

Before we dive into that, I want to point out something else that is important to realize. Some controllers take SAS / NL-SAS as well as SATA. Although the price difference between SATA and NL-SAS is typically negligible, the queue depth difference is not. Erik Bussink was kind enough to provide me with the details of one of the controllers he is using as an example; the first in the list is the “RAID” device, the second is SATA and the third is SAS… As you can see, SAS is the clear winner here, and that includes NL-SAS drives.

  mpt2sas_raid_queue_depth: int
     Max RAID Device Queue Depth (default=128)
  mpt2sas_sata_queue_depth: int
     Max SATA Device Queue Depth (default=32)
  mpt2sas_sas_queue_depth: int
     Max SAS Device Queue Depth (default=254)

If you want to contribute, please take the following steps and report the vendor, controller type and AQLEN value in a comment (if you prefer to script this, see the sketch after the steps below).

  1. Run the esxtop command on the ESXi shell / SSH session
  2. Press d
  3. Press f and select Queue Stats (d)
  4. The value listed under AQLEN is the queue depth of the storage adapter
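
If you would rather script this than read the interactive screen, a rough alternative is to capture an esxtop batch snapshot (esxtop -b -a -n 1 > snapshot.csv) and pull the adapter queue depth columns out of the CSV. The sketch below assumes the counter label contains “Adapter Q Depth” under “Physical Disk Adapter”; I have not verified that label on every build, so check the header row of your own snapshot if nothing matches:

import csv

def adapter_queue_depths(path):
    """Map vmhba name -> AQLEN from a single-iteration esxtop batch snapshot."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        values = next(reader)
    depths = {}
    for name, value in zip(header, values):
        # Counter names are assumed to look like:
        #   \\hostname\Physical Disk Adapter(vmhba0)\Adapter Q Depth
        if "Physical Disk Adapter" in name and "Adapter Q Depth" in name:
            adapter = name.split("(")[1].split(")")[0]
            depths[adapter] = value
    return depths

print(adapter_queue_depths("snapshot.csv"))   # e.g. {'vmhba0': '600', ...}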

The following table shows the vendor, controller and queue depth. Note that this is based on what we (my readers and I) have witnessed in our labs, and results may vary depending on the firmware and driver used. Make sure to check the VSAN HCL for the supported driver / firmware version. Also note that not all controllers below are on the VSAN HCL; this is a “generic” list, as I want it to serve multiple use cases.

Generally speaking, it is recommended to use a disk controller with a queue depth greater than 256 when used for VSAN or “host local caching” solutions.

Vendor           Disk Controller           Queue Depth
Adaptec          RAID 2405                 504
Dell (R610)      SAS 6/iR                  127
Dell             PERC 6/i                  925
Dell             PERC H200 Integrated      600
Dell             PERC H310                 25
Dell (M710HD)    PERC H200 Embedded        499
Dell (M910)      PERC H700 Modular         975
Dell             PERC H700 Integrated      975
Dell (M620)      PERC H710 Mini            975
Dell (T620)      PERC H710 Adapter         975
Dell (T620)      PERC H710p                975
Dell             PERC H810                 975
HP               Smart Array P220i         1020
HP               Smart Array P400i         128
HP               Smart Array P410i         1020
HP               Smart Array P420i         1020
HP               Smart Array P700m         1200
IBM              ServeRAID-M5015           965
IBM              ServeRAID-M5016           975
IBM              ServeRAID-M5110           975
Intel            C602 AHCI (Patsburg)      31 (per port)
Intel            C602 SCU (Patsburg)       256
Intel            RMS25KB040                600
LSI              2004                      25
LSI              2008                      25 / 600 (firmware dependent!)
LSI              2108                      600
LSI              2208                      600
LSI              2308                      600
LSI              3008                      600
LSI              9300-8i                   600