CloudPhysics Storage Analytics and new round of funding

When I woke up this morning I saw the news was out… a new round of funding for CloudPhysics! CloudPhysics raised $15 million in a Series C investment round, bringing the company’s total funding to $27.5 million. Congratulations folks, I can’t wait to see what this new injection will result in. One of the things CloudPhysics has heavily invested in over the past 12 months is the storage side of the house. In their SaaS-based solution, Storage Analytics is one of the major pillars today, alongside General Health Checks and Simulations.

The Storage Analytics section is available to everyone as of today! It allows you to monitor things like “datastore contention” and “unused VMs”, and gives you everything there is to know about capacity savings, ranging from inside the guest down to datastore-level details. If you ever wondered how “big data” could be of use to you, I am sure you will understand once you start using CloudPhysics. Not only are their monitoring and simulation cards brilliant, the Card Builder is definitely one of their hidden gems. If you need to convince your management, then all you need to do is show them the above screenshot: savings opportunity!

Of course there is a lot more to it than I will be able to write about in this short post. In my opinion, if you truly want to understand what they bring to the table, just try it out for free for 30 days here!

PS: How about this brilliant Infographic… from the people who taught you how to fight the noisy neighbour: they now show you how to defeat that bully!

**disclaimer: I am an advisor to CloudPhysics**

FW: Dear Clouderati, Enterprise IT is different…

I hardly ever do this, pointing people to a blog post… I was going through my backlog of articles to read when I spotted this article by my colleague Chuck Hollis. I had an article on the subject of web scale in my draft folder myself. Funny enough, it is so close to Chuck’s that there is no point in publishing it; rather, I would like to point you to Chuck’s article instead.

To me personally, the below quote captures the essence of the article really well.

If you’re a web-scale company, IT doesn’t just support the business, IT is the business.

It is a discussion I have had on Twitter a couple of times. I think web scale is a great concept, and I understand the value for companies like Google, Facebook or any other large organization in need of a highly scalable application landscape. But the emphasis here is on the application and its requirements, and it makes a big difference if you are providing support for hundreds if not thousands of applications which are not built in-house. If anyone tells you that because it is good for Google/Facebook/Twitter it must be good for you, ask yourself what the requirements of your application are. What does your application landscape look like today? What will it look like tomorrow? And what will your IT needs be for the upcoming years? Read more in this excellent post by Chuck, and make sure to leave a comment: Dear Clouderati, Enterprise IT is different…

 

Tour through VMware vCloud Hybrid Service part 1

Last week I received an account for the VMware vCloud Hybrid Service through one of our internal teams. I wanted to play around with it just to see what it can do and how things work, but also to see what the user experience was like; basically a tour through VMware vCloud Hybrid Service. I received my username and a link to set a password via email, and it literally took 3 seconds to get started after setting that password. The first thing I was presented with was a screen that showed the regions I had at my disposal, as shown below: four regions.

You may wonder why that matters; well, it is all about availability… Of course each region individually will have done everything there is to be done when it comes to resiliency, but what if a whole site blows up? That is where multiple regions come into play. I just want to deploy a small virtual machine for now, so I am going to select a random site… I will use Virginia. [Read more...]
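As a side note for those who prefer scripting over clicking: vCHS is built on vCloud Director under the covers, so once your account is set up you can also talk to it through the vCloud REST API. Below is a minimal Python sketch of the login-and-list-organizations handshake. The endpoint URL and credentials are placeholders, and the version header assumes a vCloud API 5.1 style endpoint; treat it as an illustration, not an official vCHS recipe.

```python
# Minimal sketch: authenticate against a vCloud Director-compatible
# endpoint (such as the one behind vCHS) and list the organizations
# you have access to. Endpoint and credentials below are placeholders.
import requests

ENDPOINT = "https://vchs.example.com/api"  # hypothetical API endpoint
USER = "someuser@MyOrg"                    # vCloud logins are user@org
PASSWORD = "secret"
ACCEPT = "application/*+xml;version=5.1"   # assumed API version

# Log in: the session token comes back in the
# x-vcloud-authorization response header.
resp = requests.post(f"{ENDPOINT}/sessions",
                     auth=(USER, PASSWORD),
                     headers={"Accept": ACCEPT})
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]

# Use the token to retrieve the organization list; each <Org>
# entry in the XML links on to its virtual datacenters.
orgs = requests.get(f"{ENDPOINT}/org",
                    headers={"Accept": ACCEPT,
                             "x-vcloud-authorization": token})
orgs.raise_for_status()
print(orgs.text)
```

From there it is a matter of following the links in the XML down to a virtual datacenter and instantiating a vApp, which is exactly what the UI wizard does for you.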

If only the world of applications was as simple as that…

In the last 6-12 months I have heard many people making statements about how the application landscape should change. Sometimes it is a VC writing on GigaOm about how server CPU utilization is still low and how Linux containers and new application architectures can solve this (there are various reasons CPU utilization is low, by the way, ranging from memory constraints to storage performance constraints, or simply by design). Sometimes it is network administrators telling the business that their application will need to cope with a new IP address after a disaster recovery fail-over. Although I can see where they are coming from, I don’t think the world of applications is as simple as that.

Sometimes it seems that we forget something which is fundamental to what we do. IT as it exists today provides a service to its customers. It provides an infrastructure which “our / your customers” can consume. This infrastructure, to a certain point, should be architected based on the requirements of your customers. Why? What is the point of creating a foundation which doesn’t meet the requirements of what will be put on top of it? That is like building a skyscraper on top of the foundation of a house: bad things are bound to happen! Although I can understand why we feel our customers should change, I do not think it is realistic to expect this to happen overnight, or even realistic to ask them to change at all.

Just to give an example, and I am not trying to pick on anyone here, let’s take this quote from the GigaOm article:

Server virtualization was supposed to make utilization rates go up. But utilization is still low and solutions to solve that will change the way the data center operates.

I agree that server virtualization promised to make utilization rates go up, and it did indeed. Overall utilization may still be low, although I guess that depends on who you talk to, or what you include in your numbers. Many of the customers I talk to are around 40-50% utilized from a CPU perspective, do not want to go higher than that, and have their reasons for it. Was utilization the only reason for them to start virtualizing? I would strongly argue that it was not; there are many others! Reducing the number of physical servers to manage, availability (HA) of workloads, transportability of workloads, automation of deployments, disaster recovery, maintenance… the reasons are almost countless.

Ask yourself what all of these reasons have in common: they are non-disruptive to the application architecture! Yes, there is the odd application that cannot be virtualized, but the majority of all x86 workloads can be, without the need to make any changes to the application. Clearly you would have to talk to the app owner as their app is being migrated to a different platform, but there will be hardly any work for them associated with this migration.

Oh, I agree, everything would be a lot better if the application landscape were completely overhauled and all applications magically used a highly available and scalable distributed application framework. Everything would be a lot better if all applications were magically optimized for the infrastructure they are consuming: applications that can handle instant IP address changes, applications that can deal with random physical servers disappearing. Reality, unfortunately, is that this is not the case today, and will not be for many of our customers in the upcoming years. Re-architecting an application, which for most app owners often comes from a 3rd party, is not something that happens overnight. Projects like those take years, if they even successfully finish. To see what is actually being asked of app owners, have a look at the sketch below.
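Here is a minimal Python sketch of what even the “simple” ask of coping with a new IP address after a fail-over means inside an application: re-resolve the service name and retry, instead of holding on to a cached address. The hostname and port are hypothetical; the point is that this logic has to be added, tested and supported in every single application.

```python
# Hypothetical example: a client that survives a DR fail-over to a
# new IP address by re-resolving the service hostname on each attempt.
import socket
import time

HOST = "app-db.example.internal"  # placeholder service name
PORT = 5432                       # placeholder port

def connect_with_retry(retries=5, delay=2.0):
    """Re-resolve DNS on every attempt so a post-failover IP change is picked up."""
    for attempt in range(1, retries + 1):
        try:
            # create_connection() resolves HOST on every call, so an
            # updated DNS record pointing at the recovery site is
            # picked up automatically.
            return socket.create_connection((HOST, PORT), timeout=5)
        except OSError:
            time.sleep(delay * attempt)  # simple linear backoff
    raise ConnectionError(f"could not reach {HOST}:{PORT} after {retries} attempts")
```

Trivial as it looks, multiply this by hundreds of applications, most of them owned by a 3rd party, and it becomes clear why “just fix the app” is rarely a realistic answer.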

Although I agree with the conclusion drawn by the author of the article, I think there is a twist to it:

It’s a dynamic time in the data center. New hardware and infrastructure software options are coming to market in the next few years which are poised to shake up the existing technology stack. Should be an exciting ride.

Reality is that we deliver a service, a service that caters to the needs of our customers. If our customers are not ready, or even willing, to adapt, this will not just be a hurdle but a showstopper for many of these new technologies. Being a disruptive technology (I’m not a fan of the word) is one thing; causing disruption is another.

Startup News Flash part 5

Now that the VMworld storm has slowly died down, it is back to business again… This also means less frequent updates, although we are slowly moving towards VMworld Barcelona and I suspect there will be some new announcements at that time. So what happened in the world of flash/startups in the last two weeks? This is Startup News Flash part 5, and it seems to be primarily about either funding rounds or acquisitions.

Probably one of the biggest rounds of funding I have seen for a “new world” storage company… $150 million. Yes, that is a lot of money. Congrats Pure Storage! I would expect an IPO at some point in the near future, and hopefully they will be expanding their EMEA-based team. Pure Storage is one of those companies which has always intrigued me. As GigaOm suggests, this boost will probably be used to lower prices, but personally I would prefer a heavy investment in things like disaster recovery and availability. It is an awesome platform, but in my opinion it needs dedupe-aware sync and async replication! That should include VMware SRM integration from day 1, of course!

Flash is hot… Virident just got acquired by Western Digital for $685 million. It makes sense if you consider that WD is known as the “hard disk” company: they need to keep growing their business, and the business of hard disks is going to be challenging in the upcoming years with SSDs becoming cheaper and cheaper. Considering this is WD’s second flash-related acquisition (sTec being the other), you can say that they mean business.

I just noticed that Cisco announced they intend to acquire Whiptail for $415 million in cash. It is interesting to see Cisco moving into the storage space, and definitely a smart move if you ask me. With UCS for compute and Whiptail for storage they will be able to deliver the full stack, considering they more or less already own the world of networking. It will be interesting to see how they integrate it into their UCS offerings. For those who don’t know, Whiptail is an all-flash array (AFA) which leverages a “scale-out” approach: start small and increase capacity by adding new boxes. Of course they offer most of the functionality other AFA vendors do; for more details I recommend reading Cormac’s excellent article.

To be honest, no one knew what to expect from the public VSAN beta announcement. Would we get a couple of hundred registrations, or thousands? Well, I can tell you that they are going through the roof; make sure to register if you want to be a part of this! Run it nested at home, run it on your test cluster at the office, do whatever you want with it… but make sure to provide feedback!