
Yellow Bricks

by Duncan Epping



Startup News Flash part 5

Duncan Epping · Sep 10, 2013 ·

Now that the VMworld storm has slowly died down, it is back to business again… This also means less frequent updates, although we are slowly moving towards VMworld Barcelona and I suspect there will be some new announcements around that time. So what happened in the world of flash/startups over the last two weeks? This is Startup News Flash part 5, and it seems to be primarily about either funding rounds or acquisitions.

Probably one of the biggest rounds of funding I have seen for a “new world” storage company… $150 million. Yes, that is a lot of money. Congrats Pure Storage! I would expect an IPO at some point in the near future, and hopefully they will be expanding their EMEA-based team. Pure Storage is one of those companies which has always intrigued me. As GigaOm suggests, this boost will probably be used to lower prices, but personally I would prefer a heavy investment in things like disaster recovery and availability. It is an awesome platform, but in my opinion it needs dedupe-aware synchronous and asynchronous replication! That should include VMware SRM integration from day 1, of course!

Flash is hot… Virident just got acquired by Western Digital for $685 million. It makes sense if you consider that WD is known as the “hard disk” company: they need to keep growing their business, and the hard disk business is going to be challenging in the upcoming years with SSDs becoming cheaper and cheaper. Considering this is WD's second flash-related acquisition (sTec being the other), you can say that they mean business.

I just noticed that Cisco announced they intend to acquire Whiptail for $415 million in cash. It is interesting to see Cisco moving into the storage space, and definitely a smart move if you ask me. With UCS for compute and Whiptail for storage they will be able to deliver the full stack, considering they more or less already own the world of networking. It will be interesting to see how they integrate it into their UCS offerings. For those who don’t know, Whiptail is an all-flash array (AFA) which leverages a “scale out” approach: start small and increase capacity by adding new boxes. Of course they offer most of the functionality other AFA vendors do; for more details I recommend reading Cormac’s excellent article.

To be honest, no one knew what to expect from this public VSAN Beta announcement. Would we get a couple of hundred registrations or thousands? Well, I can tell you that they are going through the roof. Make sure to register though if you want to be a part of this! Run it at home nested, run it on your test cluster at the office, do whatever you want with it… but make sure to provide feedback!

 

Startup News Flash part 4

Duncan Epping · Aug 27, 2013 ·

This is already the fourth part of the Startup News Flash. We are in the middle of VMworld and of course there were many, many announcements. I tried to filter out those which are interesting; as mentioned in one of the other posts, if you feel one is missing, leave a comment.

Nutanix announced version 3.5 of their OS last week. The 3.5 release contains a bunch of new features, one of them being what they call the “Nutanix Elastic Deduplication Engine”. I think it is great they added this feature, as ultimately it will allow you to utilize your flash and RAM tiers more efficiently. The more you can cache the better, right?! I am sure this will result in a performance improvement in many environments; you can imagine that especially for VDI, or environments where most VMs are based on the same template, this will be the case. What might be worth knowing is that Nutanix dedupe is inline for the RAM and flash tiers and happens in the background for the magnetic disks. Nutanix also announced that besides supporting vSphere and KVM they now also support Hyper-V, which is great for customers as it offers you choice. On top of all that, they managed to develop a new simplified UI and a REST-based API, allowing customers to build a software defined datacenter! Also worth noting is that they’ve been working on their DR story. They’ve developed a Storage Replication Adapter, which is one of the components needed to implement Site Recovery Manager with array-based replication. They also optimized their replication technology by extending their compression technology to that layer. (Disclaimer: the SRA is not listed on the VMware website, as such it is not supported by VMware. Please validate the SRM section of the VMware website before implementing.)
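
For those wondering what deduplication at the caching tier boils down to conceptually, here is a minimal sketch in plain Python. It is purely illustrative and not Nutanix code: blocks are fingerprinted, identical blocks are stored once, and clones from the same template mostly end up referencing the same data.

```python
import hashlib

class DedupeStore:
    """Toy content-addressed block store: identical blocks are fingerprinted
    and stored only once, and logical objects are lists of fingerprints."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}      # fingerprint -> the single stored copy of the block
        self.refcount = {}    # fingerprint -> number of logical references

    def write(self, data):
        """Split data into fixed-size blocks, store each unique block once,
        and return the fingerprints that make up the logical object."""
        fingerprints = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha1(block).hexdigest()
            if fp not in self.blocks:              # new content: store it once
                self.blocks[fp] = block
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints

    def read(self, fingerprints):
        return b"".join(self.blocks[fp] for fp in fingerprints)

store = DedupeStore()
# Two "VMs" cloned from the same template share most of their blocks...
vm1 = store.write(b"A" * 16384 + b"unique data of vm1")
vm2 = store.write(b"A" * 16384 + b"unique data of vm2")
print(len(store.blocks), "unique blocks stored for", len(vm1) + len(vm2), "logical blocks")
```

The fewer unique blocks you have to keep, the more of your working set fits in RAM and flash, which is exactly where the efficiency gain comes from.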

Of course there is an update from a flash caching vendor; this time it is Proximal Data, who announced the 2.0 version of their software. AutoCache 2.0 includes role-based administration features and multi-hypervisor support to meet the specific needs of cloud service providers. Good to see that multi-hypervisor and cloud are part of the Proximal story. I like Proximal's aggressive price point: it starts at $999 per host for flash caches smaller than 500GB, which is unique for a solution that does both block and file caching. I am not sure I agree with Proximal’s stance with regards to write-back caching and “down-playing” 1.0 solutions, especially not when you don’t offer that functionality yourself and were a 1.0 version yesterday.

I just noticed this article published by Silicon Angle which mentions the announcement of the SMB Edition of FVP. Priced at a flat $9,999, it supports up to 100 VMs across a maximum of four hosts with two processors and one flash drive each. More details can be found in this press release by PernixData.

Also something which might interest people: Violin Memory has filed for an IPO. It had been rumored numerous times, but this time it seems to be happening for real. The Register has an interesting view on it, by the way. I hope it will be a huge success for everyone involved!

I also want to point people again to some of the cool announcements VMware made in the storage space. Although VMware is far from being a startup, I do feel these are worth listing here again: Introduction to vSphere Flash Read Cache and Introduction to Virtual SAN.

Introduction to vSphere Flash Read Cache aka vFlash

Duncan Epping · Aug 26, 2013 ·

vSphere 5.5 was just announced and of course there are a bunch of new features in there. One of the features which I think people will appreciate is vSphere Flash Read Cache (vFRC), formerly known as vFlash. vFlash was tech previewed last year at VMworld and I recall it being a very popular session. In the last 6-12 months host local caching solutions have definitely become more popular as SSD prices keep dropping, and thus investing in local SSD drives to offload IO gets more and more interesting. Before anyone asks, I am not going to do a comparison with any of the other host local caching solutions out there. I don’t think I am the right person for that as I am obviously biased.

As stated, vSphere Flash Read Cache is a brand new feature which is part of vSphere 5.5. It allows you to leverage host local SSDs and turn them into a caching layer for your virtual machines. The biggest benefit of using host local SSDs is of course the offload of IO from the SAN to the local SSD. Every read IO that doesn’t need to go to your storage system means resources can be used for other things, like for instance write IO. That is probably the one caveat I will need to call out: it is “write through” caching only at this point, so essentially a read caching system. Now, by offloading reads it could potentially help improve write performance… This is not a given, but could be a nice side effect.
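
To make the “write through” point concrete, here is a minimal sketch in plain Python (illustrative only, not VMware code): reads are served from the cache when possible, while writes always go to the backing storage first and merely keep the cache coherent, so losing the cache can never lose data.

```python
class WriteThroughReadCache:
    """Toy write-through cache: reads are accelerated, writes are not,
    and the backing store always holds the authoritative copy."""

    def __init__(self, backend, capacity):
        self.backend = backend        # stands in for the SAN/array
        self.capacity = capacity
        self.cache = {}               # stands in for the local SSD

    def read(self, key):
        if key in self.cache:         # cache hit: no IO sent to the array
            return self.cache[key]
        value = self.backend[key]     # cache miss: fetch from the array...
        self._insert(key, value)      # ...and populate the cache
        return value

    def write(self, key, value):
        self.backend[key] = value     # write-through: the array is updated first
        self._insert(key, value)      # cache kept coherent, no write IO is absorbed

    def _insert(self, key, value):
        if len(self.cache) >= self.capacity and key not in self.cache:
            self.cache.pop(next(iter(self.cache)))   # naive eviction
        self.cache[key] = value

backend = {"blockA": b"data"}           # stands in for the array
cache = WriteThroughReadCache(backend, capacity=2)
cache.read("blockA")                    # miss: fetched from the array, now cached
cache.read("blockA")                    # hit: served locally, no array IO
cache.write("blockB", b"new")           # goes straight to the array (write-through)
```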

Just a couple of things before we get into configuring it. vFlash aggregates local flash devices into a pool; this pool is referred to as a “virtual flash resource” in our documentation. In other words, if you have 4 x 200 GB SSDs you end up with an 800 GB virtual flash resource. This virtual flash resource has a filesystem sitting on top of it called “VFFS”, aka the “Virtual Flash File System”. As far as I know it is a heavily flash-optimized version of VMFS, but don’t pin me down on this one as I haven’t broken it down yet.

So now that we know what it is and what it does: how do I install it, and what are the requirements and limitations? Well, let's start with the requirements and limitations first.

Requirements and limitations:

  • vSphere 5.5 (both ESXi and vCenter)
  • SSD Drive / Flash PCIe card
  • Maximum of 8 SSDs per VFFS
  • Maximum of 4TB physical Flash-based device size
  • Maximum of 32TB virtual Flash resource total size (8x4TB)
  • Cumulative 2TB VMDK read cache limit
  • Maximum of 400GB of virtual Flash Read Cache per Virtual Machine Disk (VMDK) file
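
For those who like to sanity check a design, here is a quick sketch in plain Python that validates a proposed host layout against the limits listed above. It is purely illustrative; the constants simply encode the maximums from that list and nothing else.

```python
# Illustrative check of a proposed vFRC layout against the vSphere 5.5 limits above.
MAX_DEVICES_PER_VFFS = 8
MAX_DEVICE_SIZE_GB = 4 * 1024            # 4 TB per flash device
MAX_RESOURCE_SIZE_GB = 32 * 1024         # 32 TB virtual flash resource
MAX_CACHE_PER_VMDK_GB = 400
MAX_CUMULATIVE_VMDK_CACHE_GB = 2 * 1024  # 2 TB cumulative read cache

def validate_layout(ssd_sizes_gb, vmdk_cache_gb):
    """Return a list of violations for a proposed set of SSDs and per-VMDK caches."""
    issues = []
    if len(ssd_sizes_gb) > MAX_DEVICES_PER_VFFS:
        issues.append("more than 8 flash devices in the VFFS")
    if any(s > MAX_DEVICE_SIZE_GB for s in ssd_sizes_gb):
        issues.append("a flash device exceeds 4 TB")
    if sum(ssd_sizes_gb) > MAX_RESOURCE_SIZE_GB:
        issues.append("virtual flash resource exceeds 32 TB")
    if any(c > MAX_CACHE_PER_VMDK_GB for c in vmdk_cache_gb):
        issues.append("a single VMDK cache exceeds 400 GB")
    if sum(vmdk_cache_gb) > MAX_CUMULATIVE_VMDK_CACHE_GB:
        issues.append("cumulative VMDK read cache exceeds 2 TB")
    return issues

# Four 200 GB SSDs give an 800 GB virtual flash resource, as in the example above.
print(validate_layout([200, 200, 200, 200], [50, 100, 400]))   # -> []
```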

So now that we know the requirements, how do you enable / configure it? Well, as with most vSphere features these days the setup is fairly straightforward and simple. Here we go:

  • Open the vSphere Web Client
  • Go to your Host object
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Flash Read Cache Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Now you have a virtual flash resource created; repeat this for the other hosts in your cluster.

Now you will see another option below “Flash Read Cache Resource Management” called “Cache Configuration”; this is for the “Swap to host cache” / “Swap to SSD” functionality that was introduced with vSphere 5.0.

Now that you have enabled vFlash on your host, what is next? Well, you enable it on your virtual machines. Yes, I agree it would have been nice to be able to enable it for a full cluster or for a datastore as well, but unfortunately this is not part of the 5.5 release. It is something that will be added at some point in the future though. Anyway, here is how you enable it on a virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the amount of cache (in GB) you want to use
    • Note there is an advanced option; in this section you can also select the block size
    • The block size can be important when you want to optimize for a particular application (see the sketch below)
Not too complex, right? You enable it on your host and then on a per-virtual-machine level, and that is it… It is included with Enterprise Plus from a licensing perspective, so those who are at the right licensing level get it “for free”.

Startup News Flash part 3

Duncan Epping · Aug 20, 2013 ·

Who knew that so quickly after part 1 and part 2 there would be a part 3? I guess it is not strange considering VMworld is coming up soon and there was a Flash Memory Summit last week. It seems that there is a battle going on in the land of the AFAs (all-flash arrays), and it isn’t about features / data services as one would expect. No, they are battling over capacity density, aka how many TBs can I cram into a single U. I am not sure how relevant this is going to be over time; yes, it is nice to have dense configurations, yes, it is awesome to have a billion IOps in 1U, but most of all I am worried about availability and integrity of my data. So instead of going all out on density, how about going all out on data services? Not that I am saying density isn’t useful, it is just… Anyway, I digress…

One of the companies which presented at Flash Memory Summit was Skyera. Skyera announced an interesting new product called skyEagle. Another all-flash array is what I can hear many of you thinking, and yes, I thought exactly the same… but skyEagle is special compared to others. This 1U box manages to provide 500TB of flash capacity, and that is 500TB of raw capacity. So just imagine what that could end up being after Skyera’s hardware-accelerated data compression and data de-duplication has done its magic. Pricing wise? Skyera has set a list price for the read-optimized half petabyte (500 TB) skyEagle storage system of $1.99 per GB, or $0.49 per GB with data reduction technologies. More specs can be found here. Also, I enjoyed reading this article on The Register, which broke the news…
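
A quick back-of-the-envelope calculation (my own arithmetic; only the $/GB figures come from the announcement) shows what those numbers imply for the list price of the box and for the data reduction ratio Skyera is assuming.

```python
# Back-of-the-envelope math on the quoted skyEagle pricing (illustrative only).
raw_capacity_gb = 500 * 1000            # 500 TB raw, using decimal TB
price_per_gb_raw = 1.99                 # quoted list price per raw GB
price_per_gb_effective = 0.49           # quoted price per GB with data reduction

list_price = raw_capacity_gb * price_per_gb_raw
implied_reduction = price_per_gb_raw / price_per_gb_effective

print(f"Implied list price: ${list_price:,.0f}")                     # roughly $1M for the 1U box
print(f"Implied data reduction ratio: {implied_reduction:.1f}:1")    # roughly 4:1 assumed
```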

David Flynn (former Fusion-io CEO) and Rick White (Fusion-io founder) started a new company called Primary Data. The Wall Street Journal reported on this and more or less revealed what they will be working on: “that essentially connects all those pools of data together, offering what Flynn calls a ‘unified file directory namespace’ visible to all servers in company computer rooms–as well as those ‘in the cloud’ that might be operated by external service companies.” This kind of reminds me of Aetherstore, or at least the description aligns with what Aetherstore is doing. Definitely a company worth tracking if you ask me.

One of the companies I did an introduction post on is SimpliVity. I liked their approach to converged infrastructure as it not only combines compute and storage, but also includes backup, replication, snapshots, dedupe and cloud integration. This week they announced an update to their OmniCube CN-3000 platform and introduced two new platforms: the OmniCube CN-2000 and the OmniCube CN-5000. So what are these two new OmniCubes? Basically the CN-5000 is the big brother of the CN-3000 and the CN-2000 is its kid brother. I can understand why they introduced these, as it will help expand the target audience; “one size fits all” doesn’t work when the cost for “all” is the same, as the TCO/ROI then changes based on your actual requirements, but in a negative way. One of the features that makes SimpliVity unique, and that has had a major update, is the OmniStack Accelerator: a custom-designed PCIe card that does inline dedupe and compression. Basically an offload mechanism for dedupe and compression, where others leverage the server CPU. Another nice thing SimpliVity added is support for VAAI. If you are interested in getting to know more, two interesting white papers were released: a deep dive by Hans de Leenheer and Stephen Foskett, and one with a focus on “data management” by Howard Marks.

A bit older announcement, but as I spoke with these folks this week and they demoed their GA product I figured I would add them to the list. Ravello Systems developed a cloud hypervisor which abstracts your virtualization layer and allows you to move virtual machines / vApps between clouds (private and public) without the need to rebuild your virtual machines or guest OSes. In other words, they can move your vApps from vSphere to AWS to Rackspace without painful conversions every time. Pretty neat, right? On top of that, Ravello is your single point of contact, meaning that they are also a cloud broker: you pay Ravello and they take care of AWS / Rackspace etc. Of course they allow you to do stuff like snapshotting, cloning and creating complex network configurations if needed. They managed to impress me during the short call we had, and if you want to know more I recommend reading this excellent article by William Lam or visiting their booth during VMworld!

That is it for part 3. I bet I will have another part next week, during or right after VMworld, as press releases are coming in every hour at this point. Thanks for reading,

Startup News Flash part 2

Duncan Epping · Aug 13, 2013 ·

The first part of the Startup News Flash was published a couple of weeks ago, and as many things have happened I figured I would publish another. At times I will probably miss a news fact or a new company; if that happens, don’t hesitate to leave a comment with your findings/opinion or just a link to what you feel is newsworthy! As mentioned in part 1, the primary focus of this article is startup news and flash-related news. As you can see, most items are flash related except for one.

Nimbus Data launched two brand new arrays: the Gemini F400 / F600. These are all-flash arrays, and they bring something unique to the table for sure… and that is cost: the price per usable gigabyte is $0.78. Yes, that is low indeed. How do they bring it down? Well, of course by very efficient deduplication and compression, and on top of that by leveraging standard hardware and getting all the smarts from software. According to the press release these new arrays will provide between 3TB and 48TB of capacity (I almost said disk space there…) and will be shipping at the end of this year! Although Nimbus declared hybrid storage officially dead, mainly because of the cost of the Nimbus all-flash solution (the F400 starts under US$60,000, the F600 starts under US$80,000), I still think there is a lot of room for growth in that space and many customers will be interested in those solutions. My question to Nimbus on Twitter yesterday was which configuration they did the math with to declare hybrid dead, because cost per gigabyte is one thing, the upfront investment to reach that price point is another. It will be interesting to see how they do over the upcoming 12-18 months, but needless to say they will be going after their competition aggressively. Talking about competition…
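
To illustrate why the configuration behind a $/GB claim matters, here is a quick back-of-the-envelope sketch in plain Python. Only the $0.78 per usable GB figure and the entry prices come from the announcement; the assumption that the entry configurations would have to hit that $/GB is mine, purely to show the arithmetic.

```python
# Illustrative arithmetic: how much usable capacity would an entry-level system
# need in order to actually hit the quoted $/GB? Entry prices are from the press
# release; the rest is my own assumption to make the point about configurations.
PRICE_PER_USABLE_GB = 0.78

def implied_usable_tb(list_price_usd):
    """Usable capacity (decimal TB) needed for the list price to equal $0.78/GB."""
    return list_price_usd / PRICE_PER_USABLE_GB / 1000

for model, entry_price in [("Gemini F400", 60_000), ("Gemini F600", 80_000)]:
    print(f"{model}: ~{implied_usable_tb(entry_price):.0f} TB usable implied")

# With 3-48 TB of raw flash per array, those usable numbers would require a
# substantial dedupe/compression ratio, which is exactly why the configuration
# used for the $/GB math matters.
```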

Last year at VMworld I briefly stopped at the Tegile booth; besides the occasional tweet I kind of lost track of them until recently, as Tegile just announced Series C funding… Not pocket money I would say but a serious round: $35 million, led by Meritech Capital Partners with original stakeholder August Capital and strategic partners Western Digital and SanDisk. For those who don’t know, Tegile is a storage company which sells both a hybrid and an “all-flash” solution, and they have done this in an interesting modular fashion (all-flash placed in front of spinning disks = modular hybrid). Of course they also offer functionality like dedupe/compression and replication. Although I haven’t heard too much from them lately, it is a booth I will surely stop by at VMworld. Again, there is a lot of competition in this space and it would be interesting to see an “all-flash / hybrid storage bake-off”. Tegile vs Nimbus, Nimble vs Tintri, Pure Storage vs Violin…

Violin Memory just announced the 6264 flash Memory Array. This new all-flash storage system can provide a capacity of 64 TiB / 70.3 TB in a footprint of just 3U, and that is impressive if you ask me. On top of that, it can provide up to 1 million IOps at ultra-low latency! Who doesn’t want 1 million IOps at their disposal, right? (More specs can be found here.) To me though, what was more exciting in this press release was the announcement of a management tool called Symphony. Symphony provides a single pane of glass for all your Violin devices (read more details here). It provides a smart management interface that allows you to create custom dashboards, comprehensive reporting, tagging and filtering, and of course they provide a RESTful API for those admins out there who love to automate things. A nice announcement from Violin Memory, and for those already running Violin hardware I would definitely recommend evaluating Symphony, as the video looks promising.

CloudPhysics just announced that the Card Store is GA as of today (13th August 2013), along with a new round of funding ($10 million) led by Kleiner Perkins Caufield & Byers. Previous investors the Mayfield Fund, Mark Leslie, Peter Wagner, Carl Waldspurger, Nigel Stokes, Matt Ocko and VMware co-founders also participated in this round. I would say an exciting day for CloudPhysics. Many have asked over the last year why I have always been enthusiastic about what they do. I think John Blumenthal (CEO) explains it best:

Our servers receive a daily stream of 80+ billion samples of configuration, performance, failure and event data from our global user base with a total of 20+ trillion data points to date. This ‘collective intelligence,’ combined with CloudPhysics’ patent-pending datacenter simulation and unique resource management techniques, empowers enterprise IT to drive Google-like operations excellence using actionable analytics from a large, relevant, continually refreshed data set.

If you are interested in testing their solution, sign up for a free trial at cloudphysics.com. Pricing starts at $49/month per physical server; more details here. For those wondering what CloudPhysics has to do with flash, well, they’ve got a card for that!

That was it for part 2. I hope you found it a useful round-up, and I expect to be able to publish another Startup News Flash within two weeks!

 
