
Software Defined Storage, which phase are you in?!

Working within R&D at VMware means you typically work with technology that is 1-2 years out, and discuss product futures that are 2-3 years away. Especially in the storage space a lot has changed. Not just innovations within the hypervisor by VMware, like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache, Virtual SAN etc., but also by partners who build software-based solutions like PernixData (FVP), Atlantis (ILIO) and SanDisk FlashSoft. Of course there is the whole Server SAN / hyper-converged movement with Nutanix, ScaleIO, Pivot3, SimpliVity and others. Then there is the whole slew of new storage systems, some of which are scale-out and all-flash, while others focus more on simplicity; here we are talking about Nimble, Tintri, Pure Storage, XtremIO, Coho Data, SolidFire and many, many more.

Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:

  • Phase 0 – Legacy storage with NFS / VMFS
  • Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
  • Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
  • Phase 3 – Object granular policy driven (scale out) storage

<edit>

Maybe I should have abstracted a bit more:

  • Phase 0 – Legacy storage
  • Phase 1 – Legacy storage + basic hypervisor extensions
  • Phase 2 – Hybrid solutions with hypervisor extensions
  • Phase 3 – Fully hypervisor / OS integrated storage stack

</edit>

I have written about Software Defined Storage multiple times in the last couple of years, and have worked with various solutions that are considered to be “Software Defined Storage”. As a result I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different: some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realise that Phase 1, 2 and 3 may be far away for many. I would like to invite all of you to share:

  1. Which phase are you in, and where would you like to go?
  2. What are you struggling with most today that is driving you to look at new solutions?

Disk Controller features and Queue Depth?

I have been working on various VSAN configurations and a question that always comes up is: what are the features and the queue depth of disk controller X? (Local disks, not FC-based…) Note that this is not only useful to know when using VSAN, but also when you are planning on doing host-local caching with solutions like PernixData FVP or SanDisk FlashSoft. The controller used can impact performance, and a really low queue depth will result in lower performance; it is as simple as that.

** NOTE: This post is not about VSAN disk controllers, but rather about disk controllers and their queue depth. Always check the HCL before buying! **

I have found myself digging through documentation and doing searches on the internet until I stumbled across the following website. I figured I would share the link with you, as it will help you (especially consultants) when you need to go through this exercise multiple times:

http://forums.servethehome.com/index.php?threads/lsi-raid-controller-and-hba-complete-listing-plus-oem-models.599/

Just as an example, the Dell H200 Integrated disk controller is on the VSAN HCL. According to the website above it is based on the LSI 2008 and provides the following feature set: 2×4-port internal SAS, no cache, no BBU, RAID 0, 1 and 10. According to the VSAN HCL it also provides “Virtual SAN Pass-Through”. I guess the only info missing is the queue depth of the controller. I have not been able to find a good source for this, so I figured I would make this post a source for that info.

Before we dive into that, I want to point out something else that is important to realize. Some controllers accept SAS / NL-SAS as well as SATA drives. Although the price difference between SATA and NL-SAS is typically negligible, the queue depth difference is not. Erik Bussink was kind enough to provide me with these details for one of the controllers he is using as an example; the first entry in the list is the “RAID” device, the second is SATA and the third is SAS. As you can see SAS is the clear winner here, and that includes NL-SAS drives.

mpt2sas_raid_queue_depth: int
    Max RAID Device Queue Depth (default=128)
mpt2sas_sata_queue_depth: int
    Max SATA Device Queue Depth (default=32)
mpt2sas_sas_queue_depth: int
    Max SAS Device Queue Depth (default=254)
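To make the gap concrete, here is a small Python sketch that parses the mpt2sas parameter listing above and compares the per-device-type defaults. The parameter text is copied verbatim from the listing; the parsing helper itself is just illustrative, not part of any driver tooling:

```python
import re

# mpt2sas driver parameter listing, copied from the output shown above
PARAMS = """
mpt2sas_raid_queue_depth: int
    Max RAID Device Queue Depth (default=128)
mpt2sas_sata_queue_depth: int
    Max SATA Device Queue Depth (default=32)
mpt2sas_sas_queue_depth: int
    Max SAS Device Queue Depth (default=254)
"""

def parse_defaults(text):
    """Return {device_type: default_queue_depth} parsed from the listing."""
    defaults = {}
    for m in re.finditer(r"mpt2sas_(\w+)_queue_depth:.*?\(default=(\d+)\)",
                         text, re.S):
        defaults[m.group(1)] = int(m.group(2))
    return defaults

depths = parse_defaults(PARAMS)
print(depths)  # {'raid': 128, 'sata': 32, 'sas': 254}
# SAS gets roughly 8x the queue depth SATA gets on this controller
print("SAS/SATA ratio: %.1f" % (depths["sas"] / depths["sata"]))
```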

If you want to contribute, please take the following steps and report the vendor, controller type and AQLEN in a comment.

  1. Run the esxtop command on the ESXi shell / SSH session
  2. Press d
  3. Press f and select Queue Stats (d)
  4. The value listed under AQLEN is the queue depth of the storage adapter
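The manual steps above can also be scripted if you capture the esxtop screen to a file. The Python sketch below pulls the AQLEN column out of such a capture; note that the sample text is a made-up approximation of what the adapter screen roughly looks like, and the real column layout may differ per ESXi build, so treat this as a sketch rather than a finished tool:

```python
# Hypothetical sample of the esxtop disk-adapter view after pressing 'd'
# and enabling Queue Stats; real output may have different columns.
SAMPLE = """\
ADAPTR PATH                 NPTH AQLEN CMDS/s READS/s WRITES/s
vmhba0 -                       1   600  12.05    8.11     3.94
vmhba1 -                       2    31   0.45    0.45     0.00
"""

def adapter_queue_depths(screen_text):
    """Map each adapter (vmhba) name to the value in its AQLEN column."""
    lines = screen_text.strip().splitlines()
    header = lines[0].split()
    aqlen_idx = header.index("AQLEN")  # locate the column by its header
    depths = {}
    for line in lines[1:]:
        fields = line.split()
        depths[fields[0]] = int(fields[aqlen_idx])
    return depths

print(adapter_queue_depths(SAMPLE))  # {'vmhba0': 600, 'vmhba1': 31}
```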

The following table shows the vendor, controller and queue depth. Note that this is based on what we (my readers and I) have witnessed in our labs, and results may vary depending on the firmware and driver used. Make sure to check the VSAN HCL for the supported driver / firmware version. Also note that not all controllers below are on the VSAN HCL; this is a “generic” list, as I want it to serve multiple use cases.

Generally speaking, it is recommended to use a disk controller with a queue depth greater than 256 when it is used for VSAN or “host-local caching” solutions.

Vendor Disk Controller Queue Depth
Adaptec RAID 2405 504
Dell (R610) SAS 6/iR 127
Dell PERC 6/i 925
Dell PERC H200 Integrated 600
Dell PERC H310 25
Dell (M710HD) PERC H200 Embedded 499
Dell (M910) PERC H700 Modular 975
Dell PERC H700 Integrated 975
Dell (M620) PERC H710 Mini 975
Dell (T620) PERC H710 Adapter 975
Dell (T620) PERC H710p 975
Dell PERC H810 975
HP Smart Array P220i 1020
HP Smart Array P400i 128
HP Smart Array P410i 1020
HP Smart Array P420i 1020
HP Smart Array P700m 1200
IBM ServeRAID-M5015 965
IBM ServeRAID-M5016 975
IBM ServeRAID-M5110 975
Intel C602 AHCI (Patsburg) 31 (per port)
Intel C602 SCU (Patsburg) 256
Intel RMS25KB040 600
LSI 2004 25
LSI 2008 25 / 600 (firmware dependent!)
LSI 2108 600
LSI 2208 600
LSI 2308 600
LSI 3008 600
LSI 9300-8i 600
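As a quick sanity check against the table, here is a small Python sketch that flags which controllers clear the bar. The queue depths are transcribed from the table above (a subset, for brevity), and the 256 cutoff follows the rough guideline mentioned earlier:

```python
# Queue depths transcribed from the table above (subset for brevity)
CONTROLLERS = {
    "Dell PERC H310": 25,
    "Dell (R610) SAS 6/iR": 127,
    "HP Smart Array P400i": 128,
    "Dell PERC H200 Integrated": 600,
    "HP Smart Array P420i": 1020,
    "LSI 2004": 25,
    "LSI 2108": 600,
}

RECOMMENDED_MIN = 256  # rough guideline for VSAN / host-local caching

def suitable(controllers, minimum=RECOMMENDED_MIN):
    """Return the controllers whose queue depth exceeds the minimum."""
    return sorted(name for name, depth in controllers.items()
                  if depth > minimum)

print(suitable(CONTROLLERS))
# ['Dell PERC H200 Integrated', 'HP Smart Array P420i', 'LSI 2108']
```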

Startup News Flash part 17

Number 17 already… A short one; I expect more news next week when we have “Storage Field Day”, hence I figured I would release this one already. Make sure to watch the live feed if you are interested in getting the details on new releases from companies like Diablo, SanDisk, PernixData and others.

Last week Tintri announced support for the Red Hat Enterprise Virtualization platform. It is kind of surprising to see them selecting a specific Linux vendor, to be honest, but then again it is probably the more popular option for people who want full support. What is nice in my opinion is that Tintri offers the exact same “VM Aware” experience on both platforms. Although I don’t see too many customers using both VMware and RHEV in production, it is nice to have the option.

CloudVolumes (no, not a storage company) announced support for View 6.0. CloudVolumes developed a solution that helps you manage applications. They provide a central management solution and the option to distribute applications while eliminating the need for streaming / packaging. I have looked at it briefly and it is an interesting approach they take. I like how they solved the “layering” problem by isolating the app in its own disk container. It does make me wonder how this scales when you have dozens of apps per desktop; nevertheless, it is an interesting approach worth looking into.

Startup News Flash part 16

Number 16 of the Startup News Flash, here we go:

Nakivo just announced the beta program for version 4.0 of their backup / replication solution. It adds some new features such as recovery of Exchange objects directly from compressed and deduplicated VM backups, Exchange log truncation, and automated backup verification. If you are interested in testing it, make sure to sign up here. I haven’t tried it myself, but they seem to be a strong up-and-coming player in the backup and DR space for SMB.

SanDisk announced a new range of SATA SSDs called “CloudSpeed”. They released 4 different models with various endurance levels and workload targets, ranging in size from 100GB up to 960GB depending on the endurance level selected. Endurance levels range from 1 up to 10 full drive writes per day. (Just as an FYI, for VSAN we recommend 5 full drive writes per day as a minimum.) Performance numbers range between 15K and 20K write IOps and 75K and 88K read IOps. More details can be found in the spec sheet here. What interests me most is the FlashGuard Technology that is included; it is interesting how SanDisk is capable of understanding wear patterns and workloads to a certain extent and placing data in a specific way to prolong the life of your flash device.

CloudPhysics announced the availability of their Storage Analytics card. I gave it a try last week and was impressed. I was planning on doing a write-up on their new offering, but as various bloggers have already covered it I felt there was no point in repeating what they said. I think it makes a lot more sense to just try it out; I am sure you will like it, as it will show you valuable info like “performance” and the impact of “thin disks” vs “thick disks”. Sign up here for a 30-day free trial!

Startup News Flash part 12

The first edition of the Startup News Flash for 2014. I expect this year to be full of announcements, new rounds of funding, new products, new features and new companies. There are various startups planning to come out of stealth this year that all play in the storage / flash space, so make sure to follow this series!

On Tuesday the 14th of January Nutanix announced a new round of funding. The Series D financing is co-led by Riverwood Capital and SAP Ventures, and the total amount is $101 million. The company has now raised a total of $172.2 million in four rounds of funding and has been valued at close to $1 billion. Yes, that is huge. Probably one of the most successful startups of the last couple of years. Congrats to everyone involved!

Tintri announced a rather aggressive program. The Register reported it here, and it is all about replacing NetApp systems with Tintri systems. In short: “The “Virtualize More with 50% Less” Program offers 50% storage capacity and rack space savings versus currently installed NetApp FAS storage to support deployed virtualization workloads”. I guess it is clear what kind of customers they are going after and who their primary competition is. Of course there is a list of requirements and constraints, which the Register has already outlined nicely. If you are looking to replace your current NetApp storage infrastructure this could be a nice offer, or a nice way to get a bigger discount. Either way, you win.

SSD and PCIe flash devices are king these days, but SanDisk is looking to change that with the announcement of the availability of the ULLtraDIMM. The ULLtraDIMM combines Diablo’s DDR3 translation protocol with SanDisk’s flash and controllers on top of a DIMM. Indeed, it doesn’t get closer to your CPU than straight on your memory bus. By the looks of it IBM is one of the first vendors to offer it, as they recently announced that the eXFlash DIMM is an option for its System x3850 and x3950 X6 servers, providing up to 12.8TB of flash capacity. Early benchmarks showed write latency of around 5-10 microseconds! I bet half the blogosphere just raised their hands to give this a go in their labs!