FUD it!

In the last couple of weeks something stood out to me in the world of storage and virtualisation: animosity. What struck me personally is how aggressively some storage vendors have responded to Virtual SAN, and to server-side storage in general. I can understand it in a way, as Virtual SAN plays in the same field; they probably feel threatened, and it makes them anxious. In some cases I even see vendors responding to VSAN who do not play in the same space at all; I guess they are in need of attention. I am not sure this is the way to go about it, to be honest. If I were considering a hyper(visor)-converged solution, I wouldn’t like being called lazy because of it. Then again, I was always taught that lazy administrators are the best administrators in the world, as they plan accordingly and proactively take action. This allows them to lean back while everyone else is running around chasing problems, so maybe it was a compliment.

Personally I am perfectly fine with competition, and I don’t mind being challenged. Whether that includes FUD or just cold hard facts is beside the point, although I prefer to play it fair. It is a free world, and if you feel you need to say something about someone else’s product you are free to do so. However, you may want to think about the impression you leave behind. In a way it is insulting to our customers. And our customers include your customers.

For the majority of my professional career I have been a customer, and personally I can’t think of anything more insulting than a vendor spoon-feeding you reasons why their competitor is not what you are looking for. It is insulting, as it insinuates that you are not smart enough to do your own research and tear the product down as you desire, not smart enough to know what you really need, not smart enough to make the decision by yourself.

Personally, when this happened in the past, I would simply ask them to skip the mud slinging and go straight to the part where they explain their value add. And in many cases I would end up ignoring the whole pitch anyway… because if you feel it is more important to “educate” me on what someone else does than on what you do… then that someone probably does something very well and I should be looking at them instead.

So let’s respect our customers… let them be the lazy admin when they want to be, let them decide what is best for them… and not what is best for you.

PS: I love the products that our competitors are working on, and I have a lot of respect for how they have paved the way for the future.

Updating LSI firmware through the ESXi commandline

I received an email this week from one of my readers / followers on twitter who had gone through the effort of upgrading his LSI controller firmware. He shared the procedure with me, as unfortunately it wasn’t well documented. I hope this will help others in the future; I know it will help me, as I was about to look into the exact same thing for my VSAN environment. Thanks for sharing this, Tom!

– copy / paste from Tom’s document –

We do quite a bit of virtualization and storage validation and performance testing in the Taneja Group Labs (http://tanejagroup.com/). Recently, we were performing some tests with VMware’s VSAN, and due to some performance issues we were having with the AHCI controllers on our servers we needed to revise our environment to add some LSI SAS 2308 controllers and attach our SSDs and HDDs to the LSI card. However, our new LSI SAS controllers didn’t come with the firmware mandated by the VSAN HCL (they had v14 and the HCL specifies v18) and didn’t recognize the attached drives. So we set about updating the LSI 2308 firmware. Updating the LSI firmware is a simple process and can be accomplished from an ESXi 5.5 U1 server, but it isn’t very well documented. After updating the firmware and rebooting the system the drives were recognized and could be used by VSAN. Below are the steps I took to update my LSI controllers from v14 to v18.
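
The gist of the procedure looks something like the below. Note that this is a sketch based on LSI’s sas2flash utility for ESXi; the VIB name, firmware image name and install path are placeholders for whatever LSI provides for your specific controller, so double check their documentation before flashing anything.

  # Allow CommunitySupported VIBs so the LSI flash utility can be installed
  esxcli software acceptance set --level=CommunitySupported

  # Install the sas2flash utility for ESXi (downloaded from the LSI support site)
  esxcli software vib install -v /tmp/vmware-esx-sas2flash.vib

  # List the LSI SAS controllers the utility can see
  /opt/lsi/bin/sas2flash -listall

  # Flash the v18 firmware image (and optionally the BIOS ROM) to controller 0,
  # then reboot the host
  /opt/lsi/bin/sas2flash -c 0 -o -f 2308_v18.bin -b mptsas2.rom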

VSAN for ROBO?

I noticed this new SuperMicro VSAN Ready Node being published last week. The configuration is potentially a nice solution for ROBO (remote office / branch office) deployments, primarily due to the cost of the system.

When I did the math it came in at around $3,800. This is the configuration:

  • SuperMicro SuperServer 1018D-73MTF
  • 1 x Intel E3-1270 V3 3.5GHz- Quadcore
  • 32GB Memory
  • 5 x 1TB 7200 RPM NL-SAS HDD
  • 1 x 200GB Intel S3700 SSD
  • LSI 2308 Disk controller
  • 4 x 1GbE NIC port

It is a nice configuration that will allow for roughly fifteen 1 vCPU Virtual Machines with 3GB of memory and 60GB disk capacity per host (some rough math on that below). Personally I would probably use a different CPU and some more memory, as that gives you a bit more headroom, especially during maintenance. The licensing cost is socket based, so you can increase memory and change the type of CPU with relatively low cost impact. The SuperMicro server listed, however, is limited to the E3 CPU family and to 32GB of memory, but there are alternatives out there (for instance the Dell R320, or maybe even the R210).
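
To put some rough numbers behind that claim, assuming the default VSAN storage policy (number of failures to tolerate = 1, so every object is stored twice) across the 3 host cluster; note the SSD does not count towards capacity as it is used as a caching tier:

  Raw capacity:     3 hosts x 5 x 1TB         = 15TB raw
  Usable at FTT=1:  15TB / 2 (mirrored)       = ~7.5TB
  VM footprint:     45 VMs x 60GB x 2 copies  = ~5.4TB, which fits with room to spare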

From a software point of view the cost of this configuration is limited to 3 x VSAN licenses and 3 x vSphere licenses. As VSAN even works with Essentials Plus and Standard, you could leverage those editions to keep the cost down, but keep in mind that you won’t have DRS if you drop down to Standard or lower. It still sounds like a nice ROBO package to me; especially when you have many sites, this could be a great way to create a standardized packaged solution.

Startup News Flash part 16

Number 16 of the Startup News Flash, here we go:

Nakivo just announced the beta program for version 4.0 of their backup/replication solution. It adds some new features such as recovery of Exchange objects directly from compressed and deduplicated VM backups, Exchange log truncation, and automated backup verification. If you are interested in testing it, make sure to sign up here. I haven’t tried it myself, but they seem to be a strong up-and-coming player in the backup and DR space for SMB.

SanDisk announced a new range of SATA SSDs called “CloudSpeed”. They released 4 different models with various endurance levels and workload targets, ranging in size from 100GB up to 960GB depending on the endurance level selected. Endurance levels range from 1 up to 10 full drive writes per day. (Just as an FYI, for VSAN we recommend 5 full drive writes per day as a minimum.) Performance numbers range between 15K and 20K write IOps and 75K and 88K read IOps. More details can be found in the spec sheet here. What interests me most is the FlashGuard technology that is included; it is interesting how SanDisk is capable of understanding wear patterns and workloads to a certain extent and placing data in a specific way to prolong the life of your flash device.

CloudPhysics announced the availability of their Storage Analytics card. I gave it a try last week and was impressed. I was planning on doing a write-up on their new offering, but as various bloggers have already covered it I felt there was no point in repeating what they said. I think it makes a lot more sense to just try it out; I am sure you will like it, as it will show you valuable info like performance and the impact of “thin disks” vs “thick disks”. Sign up here for a 30-day free trial!

30K for a VSAN host @theregister? I can configure one for 2250 USD!

I’ve been following the posts from The Register on VSAN and was surprised when they posted the cost of the hosts they configured: 30K each. With 3 hosts as a minimum, they concluded that for 90K you could buy yourself a nice legacy storage system. I don’t disagree with that, to be honest… for 90K you can buy a nice legacy storage system. I guess you need to ask yourself first, though, what you would do with that 90K storage system by itself? Not much indeed, as you would need compute resources sitting next to it in order to do anything. So if you want to make a comparison, do not compare a full VSAN environment (or any other hyper-converged solution out there) to just a storage system, as it just doesn’t make sense.

Now, that still doesn’t make these hosts cheap, I can hear you think… and again, I agree. I have absolutely no clue where the 30K came from, and judging by the tweets this morning most people don’t know either; the general feeling is that it probably was overkill. Call me crazy, but I can configure a fully supported VSAN configuration for about 2250 USD (just hardware) on the Dell website:

  • Dell T320
  • Intel Xeon E5-2420 1.90GHz 6 Core
  • Perc H310 Disk Controller
  • 32GB Memory
  • 1 x 7200RPM 1TB NL-SAS
  • 1 x 100GB Intel S3700 SSD (or the Dell equivalent drive)
  • 5 x 1GbE NIC Port

I would like to conclude that VSAN would be a lot cheaper than those legacy solutions; less than 7500 USD for 3 hosts is peanuts, right?!? Yes, I know, the above configuration wouldn’t fit many use cases (except maybe a ROBO deployment where only a couple of VMs are needed), and that was the whole point of the exercise: showing how pointless these exercises can be. You can twist these numbers any way you like, and you can configure your VSAN hosts any way you like, as long as the components (HDD/SSD/controller) are on the VSAN HCL and the system is on the vSphere HCL.

PS: Dear Register, next time you run through the exercise, you may want to post the configuration you selected… it makes things a bit clearer.

Selecting a disk controller for VSAN using the HCL

As this was completely unclear to me as well, and I started a thread on it on our internal social platform, I figured I would share this with you. When you go through the exercise of selecting a disk controller for VSAN using the VMware Compatibility Guide (vmwa.re/vsanhcl) you will see that there are 4 “features” listed. These four features describe how you can use your disk controller to manage the disks in your host. This is important, as selecting the wrong disk controller could lead to unwanted side effects.

Let me list the four features and explain what they actually mean:

  • Virtual SAN – SAS
  • Virtual SAN – SATA
  • Virtual SAN Pass-Through
  • Virtual SAN RAID 0

Virtual SAN – SAS / SATA and Pass-Through are essentially the same thing. Well, not entirely, as they are implemented in different ways, but the result is the same: the disks are served straight up to the hypervisor. This functionality literally passes the disks through to ESXi and avoids the need to create a RAID set or volume for your disks. This is by far the easiest way to pull your disks into a VSAN datastore, if you ask me.
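
If you want to double check what your host actually sees once the disks are passed through, a quick way (assuming ESXi 5.5, where the vdq utility ships as part of VSAN) is the following; vdq will tell you per disk whether it is eligible for VSAN use, and if not, why.

  # List all storage devices visible to the hypervisor
  esxcli storage core device list

  # Ask VSAN which disks are eligible for use, and why (not)
  vdq -q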

Virtual SAN RAID 0 means that in order to use the disks you will need to create a single-disk RAID 0 set for each disk in your system. The downside of this is that things like hot-swap become impossible, as your disk (ID) is bound to the RAID 0 set. However, there is also a positive side: many of these disk controllers support things like encryption of data at rest, and if your disks support this you could potentially use it. It should be noted, however, that as far as I know today this functionality has not been tested (extensively) and support could be an issue. Still, I can see why one would want to buy a controller that offers this functionality in order to be future proof.
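
To illustrate what that looks like in practice, this is roughly it for an LSI MegaRAID based controller managed with the MegaCli utility (an assumption on my side as an example; other controllers will have their own tooling):

  # Create a single-disk RAID 0 set for every unconfigured disk, on all adapters
  MegaCli -CfgEachDskRaid0 -aALL

  # Verify the resulting logical drives
  MegaCli -LDInfo -Lall -aALL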

Then there is another aspect, which I have been asked about a couple of times already: the performance capability of the controller. As far as I have seen, the HCL today consists of 3Gbps and 6Gbps controllers. In most cases there is little to no cost difference, so if supported I would always recommend going with the faster controller. But there is another thing that is often overlooked, and that is the queue depth. Before you pull the trigger and decide to buy controller A over controller B, you may want to verify what the queue depth of each of them is. In some cases, especially with the cheaper disk controllers, the queue depth is low (32), where others offer 256 or higher. Especially when you are building an environment where a lot of IO is expected, these are things to take into consideration. Plus, you wouldn’t want to buy a screaming fast SSD and then find out that your bottleneck is the queue depth of your disk controller, right?
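
If you already have the hardware in a host, you can check what you actually got from the ESXi shell (the exact output format differs a bit per release):

  # Run esxtop and press 'd' for the disk adapter view;
  # the AQLEN column shows the queue depth per adapter (vmhba)
  esxtop

  # Per-device maximum queue depth
  esxcli storage core device list | grep -i "queue depth"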

<update>A very good point made by Tom Fenton: if you select a controller and are at the point of rolling out VSAN, make sure you validate the firmware and the driver used. If you click on the “Model” in the HCL you will be able to see those details. This also applies to SSDs and HDDs!</update>
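
To see what is actually installed in your host and compare it against the HCL, the following does the trick (using mpt2sas, the driver typically used by LSI 2308 based controllers, as an example):

  # Which driver claims each storage adapter?
  esxcli storage core adapter list

  # Which version of the driver VIB is installed?
  esxcli software vib list | grep mpt2sas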

I hope that helps,

Startup News Flash part 15

Number 15 of the Startup News Flash… What happened in the world of (storage / flash related) startups in the last couple of weeks? Not too much news, but I felt it was worth releasing anyway, as otherwise the below would be really old news.

One of the most interesting BC/DR startups of the last couple of years, if you ask me, just announced a new round of funding: 100 million dollars. Investors include North Bridge, Greylock, Advanced Technology Ventures, Andreessen Horowitz, and Technology Crossover Ventures. For those who don’t know Actifio: they offer what is commonly referred to as a “copy data management” solution. It could be described as a solution which sits in between your storage solution and your hypervisor and can do things like backup, cloning, replication, archiving, etc. A really neat solution, with a brilliant, super simple UI. Worth checking out if you are looking to improve your business continuity story!

A while back I wrote an introduction to SoftNAS. When doing that review there was one thing that stood out to me and that was that SoftNAS didn’t have a great availability story. I spoke with Rick Brady about that and he said that it would be one of the first things they would try to tackle in an upcoming release. In the just announced release SoftNAS introduces Snap HA. Snap HA provides an “active / passive” solution where when an issue arises ownership is transferred to the “passive” node which then of course becomes “active”. More details can be found in this blog post by Rick Brady. Awesome work guys!