Yellow Bricks

by Duncan Epping


PernixData announcements at #VFD5

Duncan Epping · Jun 25, 2015 ·

Today PernixData presented at Virtualization Field Day 5. It was an excellent presentation by Satyam once again. This week I was fortunate to catch up with Frank Denneman to discuss what was going to be announced and what can be expected in the near future. I want to make it clear that no release dates were given, so don't expect this to drop next week.

There were 4 key announcements:

  1. PernixData Architect – A better way to design, operate, and optimize data centers
  2. PernixData Cloud – Making enterprise IT more transparent
  3. PernixData FVP
    1. Freedom – Yes, “free” is the key word here!
    2. New features / functionality…

I am going to take this in a different order than the deck, as I want to cover some of the changes to FVP first. Satyam spoke about something new called "FVP Freedom". FVP Freedom is a free version of FVP which can be used by anyone in any environment. Of course there are some constraints / limitations, and these are:

  • Up to 128GB DFTM cluster for write through acceleration
  • Community support

However, you can use FVP Freedom for an unlimited number of hosts and an unlimited number of VMs. Note that "DFTM" stands for Distributed Fault Tolerant Memory. This means that FVP Freedom gives you memory (read) caching only, no SSD caching, with a 128GB limit per cluster. I think this is huge, and it is a very smart way of getting people to test your solution and run it in production. So what can you do with 128GB? Well, of course they tested this, and they were capable of increasing the VSImax users from 181 to 328 with that 128GB of memory on 2 hosts. You may wonder why they took this approach, because what does giving it away for free bring them? Well, that will become obvious when you read the other announcements.
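
To make the "write through acceleration" part concrete: in a write-through setup, reads can be served from the memory cache, while every write is acknowledged only after it has reached the backing datastore, so an acknowledged write is never lost if a host fails. The Python sketch below is purely my own illustration of that concept; it is not FVP code and all names in it are made up.

    from collections import OrderedDict

    class WriteThroughCache:
        """Toy write-through read cache; a hypothetical illustration, not FVP code."""

        def __init__(self, backing_store, capacity_blocks):
            self.backing = backing_store        # dict-like: block address -> data
            self.capacity = capacity_blocks
            self.cache = OrderedDict()          # LRU ordering, newest at the end

        def read(self, addr):
            if addr in self.cache:              # hit: served straight from memory
                self.cache.move_to_end(addr)
                return self.cache[addr]
            data = self.backing[addr]           # miss: fetch from the datastore
            self._insert(addr, data)
            return data

        def write(self, addr, data):
            self.backing[addr] = data           # write-through: datastore is updated first
            self._insert(addr, data)            # keep the cache coherent for later reads

        def _insert(self, addr, data):
            self.cache[addr] = data
            self.cache.move_to_end(addr)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used block

Nothing in that sketch accelerates writes, which is exactly why a host failure in write-through mode cannot cost you acknowledged data.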

Besides a free version of FVP, some enhancements to the current version were also announced. For me, support for vSphere 6.0 and VVols were the two major items. On top of that, new "phone home" functionality is built in, which allows for better and proactive support. What also stood out was the new standalone UI. This means that you will be taken out of the Web Client to a standalone HTML5+JS based interface. You may wonder why they did this; that is where the two new product announcements come into play. FVP is still a Windows installable, by the way. I had hoped they would announce an appliance, which would lower complexity in terms of installation and management, but maybe next time, who knows.

PernixData Architect was the first announcement. It is a piece of software that enables you to monitor your infrastructure (storage focused, of course) and make educated decisions based on the information, and even recommendations, it provides. So what are we talking about in terms of metrics? PernixData Architect is (for now?) focused on storage, not just from a cluster point of view, but also at the host and virtual machine level. What is the latency a virtual machine is experiencing? How many IOPS does this VM do on average? What is the throughput? What is the read/write ratio? What are the common block sizes? All the things you would like to know when designing, scaling, and sizing your storage infrastructure, and of course when using FVP.

Besides the details above, you can for instance also see what the active working set is for any of your VMs. You can even get recommendations on how to configure FVP: you may have it set to write back, but if you are mainly serving reads from cache you may want to change that. It will also give you other recommendations, for instance around networking.
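
As a rough illustration of the kind of per-VM numbers Architect surfaces, the sketch below derives IOPS, average latency, throughput, read/write ratio and common block sizes from a list of I/O samples, and ends with a naive policy suggestion. All names and thresholds are mine, not PernixData's; treat it as a back-of-the-envelope example only.

    from collections import Counter

    def summarize_vm_io(samples, window_seconds):
        """samples: list of dicts like {"op": "read", "size": 4096, "latency_ms": 1.2}.
        Hypothetical Architect-style per-VM summary, not PernixData's code."""
        if not samples:
            return {}
        reads = sum(1 for s in samples if s["op"] == "read")
        summary = {
            "iops": len(samples) / window_seconds,
            "avg_latency_ms": sum(s["latency_ms"] for s in samples) / len(samples),
            "throughput_mb_s": sum(s["size"] for s in samples) / window_seconds / 1e6,
            "read_pct": 100.0 * reads / len(samples),
            "common_block_sizes": Counter(s["size"] for s in samples).most_common(3),
        }
        # Naive suggestion: a read-dominated VM gains little from write-back caching.
        summary["suggested_policy"] = "write through" if summary["read_pct"] > 80 else "write back"
        return summary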

You can imagine that with all the metrics and info they are gathering, they will be able to provide many more recommendations in the future. I can see those dashboards expanding fast, and I think it is valuable for everyone to understand how their workloads are behaving. On Twitter some comments were made comparing this to vR Ops and CloudPhysics. That is not a fair comparison, as Pernix is focused on "just" storage for now. Personally, I hope they will start tying in other aspects like memory, CPU and networking, as I don't think customers want to be stuck with 2 or 3 monitoring solutions.

Now that you have all that data, what can you do with it? That is where PernixData Cloud comes in. PernixData Cloud can give you insight into how you are doing compared to others in the industry with similar environments, or even with different environments. Those running PernixData Architect can feed their data into the cloud analytics platform and run an analysis on it. But what if they don't? How useful is this cloud analytics platform going to be? Well, here is the catch: when you use FVP Freedom, one of the requirements is to upload your statistics and environmental details into PernixData Cloud. So, what kind of data can you get out of it? Let me give you two visual examples, as that shows immediately why this is valuable:

Both of the above examples demonstrate what PernixData Cloud Insights can give you: data that is going to help with purchasing decisions, and I can see how it could also be useful in the future for making design decisions. (Here is what others did to achieve X.) The best example is the top screenshot: not sure which flash device to buy? What are others buying? What can you expect out of it in terms of latency/throughput/IOPS? Cloud Insights will enable you to make educated decisions based on real-life environments instead of fact sheets, which always appear to be misleading.

All in all, exciting news and announcements from PernixData at Virtualization Field Day 5. Nice work guys, and thanks Mr Denneman for taking the time to have a chat with me, and thanks Mr Foskett for streaming the event live!

ScaleIO in the ESXi Kernel, what about the rest of the ecosystem?

Duncan Epping · Jan 6, 2015 ·

Before reading my take on this, please read this great article by Vijay Ramachandran, as he explains the difference between ScaleIO and VSAN in the kernel. And before I say anything, let me reinforce that this is my opinion and not necessarily VMware's. I've seen some negative comments around ScaleIO / VMware / EMC, most of them about the availability of a second storage solution in the ESXi kernel next to VMware's own Virtual SAN. The big complaint typically is: why is EMC allowed and the rest of the ecosystem isn't? The question, though, is whether VMware is really not allowing other partners to do the same. While flying to Palo Alto I read an article by Itzik which stated the following:

ScaleIO 1.31 introduces several changes in the VMware environment. First, it provides the option to install the SDC natively in the ESX kernel instead of using the SVM to host the SDC component. The V1.31 SDC driver for ESX is VMware PVSP certified, and requires a host acceptance level of “PartnerSupported” or lower in the ESX hosts.

Let me point out here that the solution EMC developed falls under PVSP support. What strikes me is that many seem to think what ScaleIO achieved is unique, despite the "partner supported" statement. Although I admit that there aren't many storage solutions that sit within the hypervisor, and this is great innovation, it is not unique for a solution to sit within the hypervisor.

If you look at flash caching solutions, for instance, you will see that some sit in the hypervisor (PernixData, SanDisk's FlashSoft) and some sit on top (Atlantis, Infinio). It is not like VMware favours one over the other in the case of these partners. It was their design, it was their way to get around a problem they had… Some managed to develop a solution that sits in the hypervisor, others did not focus on that. Some probably felt that optimizing the data path first was most important, and, maybe even more important, they had the expertise to do so.

Believe me when I say that it isn't easy to create these types of solutions. There is no standard framework for this today, hence they end up being partner supported, as they leverage existing APIs and frameworks in an innovative way. Until there is one, you will see some partners sitting on top and others within the hypervisor, depending on what they want to invest in and what skill set they have… (Yes, a framework is being explored, as talked about in this video by one of our partners; I don't know when or if this will be released, however!)

What ScaleIO did is innovative for sure, but there are others who have done something similar and I expect more will follow in the near future. It is just a matter of time.

VMware / ecosystem / industry news flash… part 3

Duncan Epping · Oct 10, 2014 ·

It has been a couple of weeks since the last VMware / ecosystem / industry news flash, but we have a couple of items which I felt were worth sharing. As with the previous two parts, I will share the link and my thoughts around it, and I hope that you will leave a comment with your thoughts on a specific announcement. If you work for a vendor, I would like to ask you to add a disclaimer mentioning this, so that all the cards are on the table.

  • PernixData FVP 2.0 available! New features and also new pricing / packaging!
    Frank Denneman has a whole slew of articles describing the new functionality of FVP 2.0 in depth. If you ask me, the resilient memory caching in particular is a cool feature, but the failure domains are also something I can very much appreciate, as they will allow you to build smarter clusters! The change in pricing/packaging kind of surprised me: an "Enterprise" edition was announced and the current version was renamed "Standard". The SMB package was renamed "Essentials Plus", which from a naming point of view now aligns more with the VMware naming scheme, and that makes life easier for customers I guess. I have not seen details around the pricing itself yet, so I don't know what the impact actually is. PernixData has upped the game again, and it keeps amazing me how fast they keep growing and at which pace they release new functionality. It makes you wonder what is next for these guys?!
  • Nutanix Unveils Industry’s First All-Flash Hyper-Converged Platform and Only Stretch Clustering Capability!
    I guess the "all-flash" part was just a matter of time considering the price point flash devices have reached. I have looked at these configurations many times, and if you consider that SAS drives are now as expensive as decent SSDs, it only makes sense. It should be noted that "all-flash" also means a new model, the NX-9000, which comes in a 2U / 2-node form factor. List price is $110,000 per node… As that is 220K per block, and 330K with the 3-node minimum, it feels like a steep price, but then again we all know that the street price will be very different. The NX-9000 comes with either 6x 800GB or 1.6TB flash devices for capacity, and I am guessing that the other models will get "all-flash" options as well in the future… it only makes sense. What about that stretched clustering? Well, this is what excited me most about yesterday's announcement. In version 4.1 Nutanix will allow for up to 400 km of distance between sites for a stretched cluster. Considering their platform is "VM aware", it should be very easy to select which VMs you want to protect (and which you do not). On top of that, they provide the ability to have two different hardware platforms in each of the sites. In other words, you can run a top-of-the-line block in your primary site, while having a lower-end block in your recovery site. From a TCO/ROI point of view this can be very beneficial if you have no requirement for a uniform environment. Judging by the answers on Twitter, the platform has not gone through VMware vSphere Metro Storage Cluster certification yet, but this is likely to happen soon. SRM integration is also being looked at. All in all, nice announcements if you ask me!
  • SolidFire announces two new models and new round of funding (82 million!)
    What is there to say about the funding that hasn't been said yet? 82 million in series D says enough if you ask me. SolidFire is one of those startups that has impressed me from the very beginning. They have a strong scale-out storage system which offers excellent quality-of-service functionality, a system which is primarily aimed at the service provider market. Although that seems to be slowly changing with the introduction of these new models, as their smallest model now brings a 100K entry point. Note that the smallest configuration with SolidFire is 4 nodes; spec details can be found here. As stated, what excites me most about SolidFire are the services that the system brings: QoS, data reduction and replication / SRM integration.

Thanks, and again feel free to drop a comment / leave your thoughts!

Looking back: Software Defined Storage…

Duncan Epping · May 30, 2014 ·

Over a year ago I wrote an article (multiple actually) about Software Defined Storage, VSAs and different types of solutions and how flash impacts the world. One of the articles contained a diagram and I would like to pull that up for this article. The diagram below is what I used to explain how I see a potential software defined storage solution. Of course I am severely biased as a VMware employee, and I fully understand there are various scenarios here.

As I explained, the type of storage connected to this layer could be anything: DAS/NFS/iSCSI/block, who cares… The key thing here is that there is a platform sitting in between your storage devices and your workloads. All your storage resources would be aggregated into a large pool, and the layer should sort things out for you based on the policies defined for the workloads running there. Now, I drew this layer coupled with the "hypervisor", but that's just because that is the world I live in.
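
As a thought experiment only (none of the names below belong to a real product API), such a layer boils down to a policy engine that matches workload requirements against the capabilities of whatever storage sits underneath:

    def place_workload(policy, devices):
        """policy: e.g. {"min_iops": 5000, "capacity_gb": 200, "replicas": 2}.
        devices: dicts describing DAS/NFS/iSCSI/block backends in the pool.
        Hypothetical sketch of a policy engine, not any product's API."""
        replicas = policy.get("replicas", 1)
        candidates = [d for d in devices
                      if d["iops"] >= policy["min_iops"]
                      and d["free_gb"] >= policy["capacity_gb"]]
        if len(candidates) < replicas:
            raise RuntimeError("policy cannot be satisfied by the available pool")
        # Spread the replicas across the fastest devices that satisfy the policy.
        candidates.sort(key=lambda d: d["iops"], reverse=True)
        return [d["name"] for d in candidates[:replicas]]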

Looking back at that article and at the state of the industry today, a couple of things stood out. First and foremost, the term "Software Defined Storage" has been abused by everyone and doesn't mean much to me personally anymore. If someone says during a blogger briefing "we have a software defined storage solution", I will typically ask them to define it, or explain what it means to them. Anyway, why did I show that diagram? Well, mainly because I realised over the last couple of weeks that a couple of companies/products are heading down this path.

If you look at the diagram and, for instance, think about VMware's own Virtual SAN product, then you can see what would be possible. I would even argue that technically a lot of it would be possible today; however, the product is still lacking in some of these areas (data services), but I expect that to be a matter of time. Virtual SAN sits right in the middle of the hypervisor, the API and policy engine are provided by the vSphere layer, and it has its own caching service… For now it isn't supported to connect SAN storage, but if I wanted to I could do even that today, simply by tagging "LUNs" as local disks.

Another product which comes to mind when looking at the diagram is PernixData's FVP. Pernix managed to build a framework that sits in the hypervisor, in the data path of the VMs. They provide a highly resilient caching layer, and will be able to do both flash and memory caching in the near future. They will support different types of storage with the upcoming release… If you ask me, they are in the right position to slap additional data services like deduplication / compression / encryption / replication on top of it. I am just speculating here, and I don't know the PernixData roadmap, so who knows…

Something completely different is EMC's ViPR (read Chad's excellent post on ViPR). Although it may not entirely fit the picture I drew, they are aiming to be that layer in between you and your storage devices: abstract it all for you, allow for a single API to ease automation, and do this "end to end", including the storage networks in between. If they were to extend this to allow certain data services to sit in a different layer, then they would pretty much be there.

Last but not least, Atlantis USX. Although Atlantis is a virtual appliance, and as such a different implementation than Virtual SAN and FVP, they did manage to build a platform that basically does everything I mentioned in my original article. One thing it doesn't directly solve is the management of the physical storage devices, but today neither does FVP or Virtual SAN (well, to a certain extent VSAN does…). I am confident that this will change when Virtual Volumes is introduced, as Atlantis should be able to leverage Virtual Volumes for those purposes.

Some may say, well, what about VMware's Virsto? Indeed, Virsto would also fit the picture, but its end of availability was announced not too long ago. However, it has been hinted at multiple times that Virsto technology will be integrated into other products over time.

Although by now "Software Defined Storage" is seen as a marketing bingo buzzword, the world of storage is definitely changing. The question now is, I guess: are you ready to change as well?

PernixData feature announcements during Storage Field Day

Duncan Epping · Apr 23, 2014 ·

During Storage Field Day today PernixData announced a whole bunch of features that they are working on and will be released in the near future. In my opinion there were four major features announced:

  • Support for NFS
  • Network Compression
  • Distributed Fault Tolerant Memory
  • Topology Awareness

Let's go over these one by one:

Support for NFS is something I can be brief about, I guess, as it is what it says it is. It is something that has come up multiple times in conversations on Twitter around Pernix, and it looks like they have managed to solve the problem and will support NFS in the near future. One thing I want to point out: PernixData does not introduce a virtual appliance in order to support NFS, nor does it create an NFS server and proxy the IOs. Sounds like magic, right… Nice work guys!

It gets way more interesting with Network Compression. What is it, what does it do? Network Compression is an adaptive mechanism that looks at the size of the IO and analyzes whether it makes sense to compress the data before replicating it to a remote host. As you can imagine, especially with larger block sizes (64K and up), this can significantly reduce the data that is transferred over the network. When talking to PernixData, one of the questions I had was: what about the performance and overhead? Give me some details. This is the example they came back with:

  • Write back with local copy only = 2700 IOPS
  • Write back + 1 replica = 1770 IOPS
  • Write back + 1 replica + network compression = 2700 IOPS

As you can see, the number of IOPS went down when a remote replica was added. However, it went back up to "normal" values when network compression was enabled; of course, this test was conducted using large block sizes. When it came to CPU overhead, it was mentioned that the overhead has so far been demonstrated to be negligible. You may ask yourself why; it is fairly simple: the gain from lower network transfer requirements weighs up against the CPU cost of compression, resulting in equal performance. What also helps here is that it is an adaptive mechanism that does a cost/benefit analysis before compressing. So if you are doing 512-byte or 4KB IOs, then network compression will not kick in, keeping the overhead low and the benefits high!
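
To make the cost/benefit idea concrete, here is a minimal Python sketch of the adaptive principle: small blocks are replicated as-is, and large blocks are compressed only if compression actually shrinks them enough to be worth the CPU spent. The threshold and savings ratio are assumptions of mine; FVP's real heuristics have not been published.

    import zlib

    COMPRESSION_THRESHOLD = 64 * 1024   # assumed cut-off, the real heuristic is not public
    MIN_SAVINGS_RATIO = 0.8             # only ship compressed data if it shrinks by 20% or more

    def prepare_replica_payload(block):
        """Return (payload, is_compressed) for replication to the remote host."""
        if len(block) < COMPRESSION_THRESHOLD:
            return block, False                    # 512-byte / 4KB IOs: not worth the CPU
        compressed = zlib.compress(block, 1)       # cheap, fast compression level
        if len(compressed) <= MIN_SAVINGS_RATIO * len(block):
            return compressed, True                # network savings outweigh the CPU cost
        return block, False                        # incompressible data goes as-is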

I personally got really excited about this feature: DFTM = Distributed Fault Tolerant Memory. Say what? Yes, distributed fault tolerant memory! FVP, besides virtualizing flash, can now also virtualize memory and create an aggregated pool of resources out of it for caching purposes. Or, put more simply: what they allow you to do is reserve a chunk of host memory as virtual machine cache. Once again this happens at the hypervisor level, so there is no requirement to run a virtual appliance; just enable it and go! I do want to point out that there is no "cache tiering" at the moment, but I guess Satyam can consider that a feature request. Also, when you create an FVP cluster, hosts within that cluster will either provide "flash caching" capabilities or "memory caching" capabilities. This means that technically virtual machines can use "local flash" resources while the remote resources are "memory" based (or the other way around). Personally I would avoid this at all cost, though, as it will give some strange, unpredictable performance results.

So what does this add? Well, crazy performance for instance… We are talking 80K IOPS easily, with a nice low latency of 50-200 microseconds. Unlike other solutions, FVP doesn't restrict the size of your cache either. By default it will recommend using 50% of unreserved capacity per host. Personally I think this is a bit high: as most people do not reserve memory, this will typically result in 50% of your memory being recommended… but fortunately FVP allows you to customize this as required. So if you have 128GB of memory and feel 16GB is sufficient for memory caching, then that is what you assign to FVP.
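
The default sizing rule is simple enough to express in a few lines. The helper below is just my own illustration of the 50%-of-unreserved-memory recommendation, including an override like the 16GB example above; the function name and parameters are made up for illustration.

    def dftm_recommendation_gb(host_memory_gb, reserved_gb=0.0, override_gb=None):
        """Recommend how much host RAM to hand to the memory cache.
        Default: 50% of unreserved capacity; an explicit override wins."""
        unreserved = host_memory_gb - reserved_gb
        if override_gb is not None:
            return min(override_gb, unreserved)
        return 0.5 * unreserved

    # A 128GB host with nothing reserved gets a 64GB recommendation,
    # but an administrator can cap it at 16GB:
    print(dftm_recommendation_gb(128))                  # 64.0
    print(dftm_recommendation_gb(128, override_gb=16))  # 16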

Another feature that will be added is Topology Awareness. Basically, what this allows you to do is group hosts in a cluster and create failure domains. An example may make this a bit easier to grasp: let's assume you have 2 blade chassis, each with 8 hosts. When you enable "write back caching", you probably want to ensure that your replica is stored on a blade in the other chassis… and that is exactly what this feature allows you to do. Specify replica groups, add hosts to the replica groups, easy as that!

You then specify for your virtual machine where the replica needs to reside. Yes, you can even specify that the replica needs to reside within its own failure domain if there are requirements to do so, but in the example below the other "failure domain" is chosen.
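
Conceptually the placement rule is straightforward: pick a replica host whose failure domain differs from that of the source host, unless you explicitly ask for the same domain. The sketch below uses hypothetical host and chassis names and is not FVP's API, just my own illustration of the blade-chassis scenario above.

    def pick_replica_host(source_host, host_domains, same_domain=False):
        """host_domains: dict of host -> failure domain (e.g. blade chassis).
        By default the replica lands in a different failure domain."""
        source_domain = host_domains[source_host]
        candidates = [h for h, d in host_domains.items()
                      if h != source_host and (d == source_domain) == same_domain]
        if not candidates:
            raise RuntimeError("no host satisfies the replica placement policy")
        return candidates[0]

    hosts = {"esx01": "chassis-A", "esx02": "chassis-A",
             "esx09": "chassis-B", "esx10": "chassis-B"}
    print(pick_replica_host("esx01", hosts))  # picks a host in chassis-B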

Is that awesome or what? I think it is, and I am very impressed by what PernixData has announced. For those interested, the SFD video should be online soon, and those who are visiting the Milan VMUG are lucky, as Frank mentioned that he will be presenting on these new features at the event. All in all, an impressive presentation again by PernixData if you ask me… an awesome set of features to be added soon!
