
Yellow Bricks

by Duncan Epping


Storage

Startup intro: SolidFire

Duncan Epping · Jun 27, 2013 ·

This seems to be becoming a true series, introducing startups… Now, in the case of SolidFire I am not really sure if I should use the word startup, as they have been around since 2010. But then again, it is not a consumer solution that they’ve created, and enterprise storage platforms typically take a lot longer to develop and mature. SolidFire was founded in 2010 by Dave Wright, who discovered a gap in the storage market while working for Rackspace. The opportunity Dave saw was in the Quality of Service area. Not many storage solutions out there could provide predictable performance in almost every scenario, were designed for multi-tenancy, and offered a rich API. Back then the term Software Defined Storage hadn’t been coined yet, but I guess it is fair to say that is how we would describe it today. This is actually how I got in touch with SolidFire. I wrote various articles on the topic of Software Defined Storage, tweeted about the topic many times, and SolidFire was one of the companies that consistently joined the conversation. So what is SolidFire about?

SolidFire is a storage company; they sell storage systems and today offer two models, namely the SF3010 and the SF6010. What is the difference between the two? Cache and capacity! With the SF3010 you get 72GB of cache per node and 300GB SSDs, while the SF6010 gives you 144GB of cache per node and 600GB SSDs. Interesting? Well, only up to a point I would say; SolidFire isn’t really about the hardware if you ask me. It is about what is inside the box, or boxes I should say, as the starting point is always 5 nodes. So what is inside?

Architecture

SolidFire’s architecture is based on a scale-out model and of course flash, in the form of SSDs. You start out with 5 nodes and can go up to 100 nodes, all connected to your hosts via iSCSI. Those 100 nodes would be able to provide you 5 million IOPS and about 2.1 petabytes of capacity. Each node that is added scales performance linearly and of course adds capacity. SolidFire also offers deduplication, compression and thin provisioning. Considering it is a scale-out model it is probably not needed to point this out, but dedupe and compression are cluster wide. Now, the nice thing about the SolidFire architecture is that they don’t use traditional RAID, which means that the long rebuild times when a disk or a node fails do not apply to SolidFire. Rather, SolidFire evenly distributes data across all disks and nodes, so when a single disk or even a whole node fails, the rebuild is not constrained by a limited set of resources; many components can help in parallel to get back to a normal state. What I liked most about their architecture is that it already closely aligns with VMware’s Virtual Volume (VVOL) concept; SolidFire is prepared for VVOLs when they are released.
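
To make that distribution idea a bit more concrete, below is a minimal sketch of hash-based block placement, using rendezvous (highest-random-weight) hashing as a stand-in. This is a toy illustration of the general technique, not SolidFire’s actual algorithm: every block gets two replicas on pseudo-randomly chosen nodes, so when a node fails, the surviving copies, and thus the rebuild work, are spread across the entire cluster rather than across a single RAID set.

```python
import hashlib

NODES = [f"node-{i}" for i in range(1, 6)]  # the starting point is always 5 nodes
REPLICAS = 2  # replica count is an assumption, purely for illustration

def placement(block_id: str, nodes=NODES, replicas=REPLICAS):
    """Rendezvous hashing: rank nodes by hash(block, node), take the top ones."""
    ranked = sorted(nodes, key=lambda n: hashlib.md5(f"{block_id}:{n}".encode()).hexdigest())
    return ranked[:replicas]

# Simulate a node failure and count, per surviving node, how many blocks
# it holds the remaining copy of (i.e. how much rebuild work it can source).
failed = "node-3"
rebuild_sources = {n: 0 for n in NODES if n != failed}
for i in range(100_000):
    placed = placement(f"block-{i}")
    if failed in placed:
        for n in placed:
            if n != failed:
                rebuild_sources[n] += 1

print(rebuild_sources)  # roughly equal counts: every node helps rebuild in parallel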

Quality of Service

I already briefly mentioned this, but Quality of Service (QoS) is one of the key drivers of the SolidFire solution. It revolves around the ability to provide an X amount of capacity with a Y amount of performance (IOPS). What does this mean? SolidFire allows you to specify a minimum and maximum number of IOPS for a volume, and also a burst space. Let’s quote the SolidFire website, as I think they explain it in a clear way:

  • Min IOPS – The minimum number of I/O operations per-second that are always available to the volume, ensuring a guaranteed level of performance even in failure conditions.
  • Max IOPS – The maximum number of sustained I/O operations per-second that a volume can process over an extended period of time.
  • Burst IOPS – The maximum number of I/O operations per-second that a volume will be allowed to process during a spike in demand, particularly effective for data migration, large file transfers, database checkpoints, and other uneven latency sensitive workloads.
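
The difference between Max and Burst is easiest to see with a credit model. The sketch below is my own simplified interpretation of such a scheme, not SolidFire’s implementation; the credit cap of roughly ten seconds of bursting is an assumption purely for illustration.

```python
class VolumeQoS:
    """Toy burst-credit model: a volume running below max_iops banks credits,
    which it can spend later to burst above max_iops, up to burst_iops."""

    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # the floor; enforced by the scheduler elsewhere
        self.max_iops = max_iops
        self.burst_iops = burst_iops
        self.credits = 0
        self.max_credits = (burst_iops - max_iops) * 10  # assumed cap: ~10s of bursting

    def allowed_this_second(self, demand):
        ceiling = self.max_iops + min(self.credits, self.burst_iops - self.max_iops)
        granted = min(demand, ceiling)
        if granted > self.max_iops:
            self.credits -= granted - self.max_iops          # spend banked credits
        else:
            self.credits = min(self.credits + self.max_iops - granted, self.max_credits)
        return granted

qos = VolumeQoS(min_iops=500, max_iops=1000, burst_iops=2000)
for demand in [200] * 5 + [5000] * 5:   # idle for 5 seconds, then a spike
    print(qos.allowed_this_second(demand))
# Prints 200 five times, then 2000 four times, then 1000: the spike is
# absorbed at burst rate until the banked credits run out.
```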

Now, I do want to point out here that SolidFire storage systems have no form of admission control when it comes to QoS. Although a guaranteed level of performance is mentioned, that guarantee is up to the administrator: you as the admin will need to do the math and avoid overprovisioning from a performance point of view if you truly want to guarantee a specific performance level. And when doing that math, you will need to take failure scenarios into account!
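
To show what that math looks like, here is a back-of-the-envelope sketch; the per-node IOPS figure is made up purely for illustration. The point: the sum of all volume minimums has to fit within what the cluster can deliver with a node failed, otherwise those minimums are not actually guaranteed.

```python
# Hypothetical numbers, for illustration only.
nodes = 5
iops_per_node = 50_000                              # assumed per-node capability
volume_min_iops = [10_000, 25_000, 40_000, 60_000, 75_000]

degraded_capacity = (nodes - 1) * iops_per_node     # 200,000 IOPS with one node down
committed = sum(volume_min_iops)                    # 210,000 IOPS promised as minimums

print(f"committed {committed:,} vs degraded capacity {degraded_capacity:,}")
if committed > degraded_capacity:
    print("Min IOPS are not guaranteed during a node failure: "
          "lower the minimums or add a node before provisioning more.")
```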

One thing that my automation friends William Lam and Alan Renouf will like is that you can manage all these settings using their REST-based API.
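
As a sketch of what that automation could look like: the endpoint, payload and credentials below are hypothetical placeholders, so check SolidFire’s API documentation for the real method names; the point is simply that per-volume QoS becomes a small scripted call rather than a GUI exercise.

```python
import requests

SF_API = "https://sf-cluster.example.com/api"  # placeholder URL

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Set per-volume QoS limits via a (hypothetical) HTTP endpoint."""
    resp = requests.post(
        f"{SF_API}/volumes/{volume_id}/qos",
        json={"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
        auth=("admin", "secret"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()

# Guarantee 500 IOPS to volume 42, cap it at 1,000, and allow bursts to 2,000.
set_volume_qos(42, min_iops=500, max_iops=1000, burst_iops=2000)
```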

(VMware) Integration

Of course, during the conversation integration came up. SolidFire is all about enabling their customers to automate as much as they possibly can, and they have implemented a REST-based API for this. They are heavily investing in integration with, for instance, OpenStack, but also with VMware. They offer full support for the vSphere Storage APIs – Storage Awareness (VASA) and are also working towards full support for the vSphere Storage APIs – Array Integration (VAAI). Currently not all VAAI primitives are supported, but they promised me that this is just a matter of time. (They support Block Zeroing, Space Reclamation and Thin Provisioning; see the HCL for more details.) On top of that they are also looking at the future and going full steam ahead when it comes to Virtual Volumes. The obvious question from my side: what about replication / SRM? This is being worked on; hopefully more news about this soon!

Now, with all this integration, did they forget about what sits in between their storage system and the compute resources? In other words, what are they doing with the network?

Software Defined Networking?

I can be short: no, they did not forget about the network. SolidFire is partnering with Plexxi and Arista to provide a great end-to-end experience when it comes to building a storage environment. Where with Arista the focus is currently more on monitoring the different layers, Plexxi seems to focus more on the configuration and performance optimization aspect. No end-to-end QoS yet, but a great step forward if you ask me! I can see this being expanded in the future.

Wrapping up

I had already briefly looked at SolidFire after the various tweets we exchanged, but this proper introduction has really opened my eyes. I am impressed by what SolidFire has achieved in a relatively short amount of time. Their solution is all about the customer experience, whether that is performance or the ability to automate the full storage provisioning process… their architecture / concept caters for this. I have definitely added them to my list of storage vendors to visit at VMworld, and I am hoping that those who are looking into Software Defined Storage solutions will do the same, as SolidFire belongs on that list.

How to register a Storage Provider using the vSphere Web Client

Duncan Epping · Jun 18, 2013 ·

I needed to register a Storage Provider for the vSphere Storage APIs for Storage Awareness (VASA) today. I am forcing myself to use the vSphere Web Client, and it had me looking for this option for a couple of minutes. It was actually the second time this week that I had to do this, so I figured that if I need to search for it, there will probably be more people hitting the same issue. So where can you register those VASA Storage Providers in the Web Client?

  • In your vSphere Web Client “home screen” click “vCenter”
  • Now in the “Inventory Lists” click “vCenter Servers”
  • Select your “vCenter Server” in the left pane
  • Click the “Manage” tab in the right pane
  • Click “Storage Provider” in the right pane
  • Click on the “green plus”
  • Fill out your details and hit “OK” just like the example below (VNX, block storage)
    [Screenshot: registering a Storage Provider]

I personally find this not very intuitive and would prefer to have it in the Rules and Profiles section of the Web Client. And when I do configure it, I should be able to configure it for all vCenter Server instances just by selecting all or individual vCenter Servers. Do you agree? I am going to push for this within VMware, so if you don’t agree, please speak up and let me know why :-).

Software Defined Storage articles to read…

Duncan Epping · Jun 4, 2013 ·

Last week there was a floodstorm of articles published around Software Defined Storage, and of course there was the SDS Tweetstorm caused by the NetApp chat (find a summary here and here). One thing that is clear from these articles, and from the chat, is that everyone has their own spin on what Software Defined Storage is or should be. Every vendor takes the fairly high-level definition and then molds it in such a way that it seems they offer a true Software Defined Storage solution today. Personally I will leave it up to you, the consumer, to decide whether you agree with them or not… I know that we at VMware are certainly not claiming to have those capabilities today, but we are working very hard to get there! Then again, some of my colleagues would argue that Storage DRS, Storage IO Control, Swap to SSD and vSphere Replication are part of that solution.

Anyway, I enjoyed reading these articles, especially back to back, as some state the exact opposite about what SDS is. Now, as a customer this isn’t necessarily a bad thing, as it offers you choice and various different strategic directions to achieve the same goal: reduce operational complexity and increase agility / time-to-market. I have linked all three articles below with a quote from each; just take the time to read them. Let me know which of the three concepts you liked most… or, if you do not agree with any of them, what SDS means to you.

  1. Nutanix – Software-Defined Storage, Our take!
    “A software-defined controller must not use any proprietary hardware. That means no dependence on special-purpose FPGA, ASIC, NVRAM, battery-backup, UPS, modem etc. Use dynamic HTTP-based tunnels instead of modems. Use inexpensive flash instead of ASIC or NVRAM. Use industry standard PCIe passthru if you must bypass the hypervisor.”
  2. NetApp – OK, Sure, We’ll Call it ‘Software-Defined Storage’
    “NetApp has been leading the way with storage virtualization for years. If you go back and look at some of our slide decks, as recently as 2011, we were calling Data ONTAP the “Storage Hypervisor,” but we stopped because, at the end of the day, it’s bigger than that. It’s the Control Plane AND the Data Plane. SVM’s (Vservers) are the virtualized part (i.e. Control Plane) and Data ONTAP’s inner workings, APIs, and inter-node communications, and ability to move data around within itself between nodes across a 10GbE highly-redundant cluster network, with little-to-no loss in performance”
  3. HDS – Software-Defined Storage is not about commodity storage
    “This has led some analysts to predict that storage functions and intelligence will shift to the server and purchasing will shift to commoditized hardware. On the contrary, storage hardware must become more intelligent to include the storage logic, which can be initiated or automated by events or polices that are defined by application or server software. This has already happened with VMware, as evidenced by the company’s motivation to off load server functions to storage systems through APIs like VAAI and VASA in order for VMware to be more efficient in supporting virtual machines. This requires more intelligence in the storage to support these APIs and provide visibility through vCenter.”


** The NetApp article is by Nick Howell on his personal blog and doesn’t necessarily 100% align with NetApp’s vision. Considering Nick works at NetApp in the Tech Marketing team, it probably represents their view. **

Evaluating SSDs in Virtualized Datacenters by Irfan Ahmad

Duncan Epping · Jun 3, 2013 ·

Flash-based solid-state disks (SSDs) offer impressive performance capabilities and are all the rage these days. Rightly so? Let’s find out how you can assess the performance benefit of SSDs in your own datacenter before purchasing anything and without expensive, time-consuming and usually inaccurate proofs-of-concept.

** Please note that this article was written by Irfan Ahmad; follow him on Twitter, make sure to attend his webinar on the 5th of June on this topic, and vote for CloudPhysics in the big data startup top 10. **

I was fortunate enough to have started the very first project at VMware that optimized ESX to take advantage of Flash and SSDs. Swap to Host Cache (aka Swap-to-SSD) shipped in vSphere 5. For those customers wanting to manage their DRAM spend, this feature can be a huge cost saving. It also continues to serve as a differentiator for vSphere against competitors.

Swap-to-SSD has the distinction of being the first VMware project to fully utilize the capabilities of Flash but it is certainly not the only one. Since then, every established storage vendor has entered this area, not to mention a dozen awesome startups. Some have solutions that apply broadly to all compute infrastructures, yet others have products that are specifically designed to address the hypervisor platform.

The performance capabilities of flash are indeed impressive, but they can cost a pretty penny. Marketing machines are in full force trying to convince you that you need a shiny hardware or software solution. An important question remains: can the actual benefit keep up with the hype? The results are mixed and worth reading through.


Is flash the saviour of Software Defined Storage?

Duncan Epping · May 22, 2013 ·

I have this search column open on Twitter with the term “software defined storage”. One thing that kept popping up in the last couple of days was a tweet from various IBM people about how SDS will change flash. Or let me quote the tweet:

“What does software-defined storage mean for the future of #flash?”

It is part of a Twitter chat scheduled for today, initiated by IBM. It might be just me misreading the tweets, or the IBM folks look at SDS and flash in a completely different way than I do. Yes, SDS is a nice buzzword these days. I guess with the billion dollar investment in flash IBM has announced, they are going all-in with regards to marketing. If you ask me they should have flipped it, and the tweet should have stated: “What does flash mean for the future of Software Defined Storage?” Or, to make it sound even more like marketing: is flash the saviour of Software Defined Storage?

Flash is a disruptive technology, and it is changing the way we architect our datacenters. Not only has it already allowed many storage vendors to introduce additional tiers of storage, it has also allowed them to add an additional layer of caching in their storage devices. Some vendors even created all-flash storage systems offering thousands of IOPS (some will claim millions); performance issues are a thing of the past with those devices. On top of that, host-local flash is the enabler of scale-out virtual storage appliances. Without flash those types of solutions would not be possible, or at least not with decent performance.

Over the last couple of years host-side flash has also become more common, especially since several companies jumped into the huge gap that existed and started offering caching solutions for virtualized infrastructures. These solutions allow companies that cannot move to hybrid or all-flash solutions to increase the performance of their virtual infrastructure without changing their storage platform. Basically, what these solutions do is make a distinction between “data at rest” and “data in motion”. Data in motion should reside in cache, if configured properly, and data at rest should reside on your array. These solutions will once again change the way we architect our datacenters. They provide a significant performance increase, removing many of the performance constraints linked to traditional storage systems; your storage system can once again focus on what it is good at… storing data / capacity / resiliency.
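
As a conceptual sketch of what such a host-side caching layer does, assume a simple write-through design with LRU eviction (real products differ in eviction policy and write handling): hot blocks, the “data in motion”, are served from local flash, while the array always keeps the authoritative copy, the “data at rest”.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Toy host-side flash cache: write-through with LRU eviction."""

    def __init__(self, capacity_blocks, array):
        self.capacity = capacity_blocks
        self.array = array            # stands in for the backing storage array
        self.flash = OrderedDict()    # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.flash:    # cache hit: served from local flash
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.array[block_id]   # cache miss: fetch from the array
        self._admit(block_id, data)
        return data

    def write(self, block_id, data):
        self.array[block_id] = data   # write-through: the array is always current
        self._admit(block_id, data)

    def _admit(self, block_id, data):
        self.flash[block_id] = data
        self.flash.move_to_end(block_id)
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict the least recently used block

array = {i: b"cold" for i in range(1_000)}
cache = WriteThroughCache(capacity_blocks=100, array=array)
for _ in range(3):
    for block in range(50):  # a hot working set that fits in flash
        cache.read(block)    # first pass misses; later passes hit in cache
```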

I think I have answered the question, but for those who have difficulties reading between the lines: how does flash change the future of Software Defined Storage? Flash is the enabler of many new storage devices and solutions, be it a virtual storage appliance in a converged stack, an all-flash array, or a host-side IO accelerator. Through flash new opportunities arise, new options for virtualizing existing (I/O intensive) workloads. With it, many new storage solutions were developed from the ground up: storage solutions that run on standard x86 hardware, storage solutions with tight integration with the various platforms, solutions which offer things like end-to-end QoS capabilities and a multitude of data services. These solutions can change your datacenter strategy and be a part of your Software Defined Storage strategy, helping you take that next step forward in optimizing your operational efficiency.

Although flash is not a must for a software defined storage strategy, I would say that it is here to stay and that it is a driving force behind many software defined storage solutions!
