Last week on Twitter Maish asked a question that got me thinking. Actually, I have been thinking about this for a while now. The question deals with how you design your infrastructure for the various types of workloads, whether they fall in the “pet” category or the “cattle” category. (If you are not familiar with the pets/cattle terminology, read this article by Massimo.)
Pets vs. cattle – I think that people are mistaken as to what that exactly means for the underlying infrastructure.
— Maish Saidel-Keesing (@maishsk) October 5, 2014
I asked Maish what it actually means for your infrastructure, and I gave it some more thought over the last week. Cattle is the type of application architecture that handles failures by providing a distributed solution; it typically scales out instead of up, and the VMs are disposable as they usually won’t hold state. Pets are different: they typically scale up, resiliency is often provided by either a 3rd party clustering mechanism or the infrastructure underneath, they often contain state, and recoverability is key. As you can imagine, both types of workloads place different requirements on the infrastructure. Going back to Maish’s question, I guess the real question is whether you can afford the “what it means for the underlying infrastructure”. What do I mean by that?
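To make the contrast concrete, here is a minimal sketch (all names are hypothetical, not from any real platform API) of the two failure-handling styles described above: a failed cattle instance is simply discarded and replaced, while a failed pet has to be brought back with its state intact.

```python
# Hypothetical sketch: how "cattle" vs "pet" workloads recover from failure.

def handle_cattle_failure(pool, failed_id):
    """Cattle: the instance is disposable, so discard it and
    provision an identical replacement; there is no state to restore."""
    pool.remove(failed_id)
    pool.append(f"replacement-of-{failed_id}")
    return pool

def handle_pet_failure(pet):
    """Pet: the instance holds state, so the infrastructure (or a
    clustering mechanism) must recover this same instance in place."""
    pet["status"] = "restarting"
    pet["state_restored_from"] = "persistent-storage"
    pet["status"] = "running"
    return pet

web_tier = ["web-1", "web-2", "web-3"]          # stateless scale-out tier
print(handle_cattle_failure(web_tier, "web-2"))  # web-2 is simply replaced
print(handle_pet_failure({"name": "db-1", "status": "failed"}))
```

The point of the sketch is only the shape of the operations: for cattle the identity of the instance is irrelevant, while for a pet the identity and the state are the whole point, which is exactly why the two place different demands on the infrastructure underneath.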
If you look at the requirements of both architectures, you could say that “pets” will typically demand more from the underlying infrastructure when it comes to resiliency and recoverability. Cattle will have fewer demands from that perspective, but flexibility and agility are more important. You could implement two different infrastructure architectures for these specific workloads, but does that make sense? If you are Netflix, Google, YouTube, etc., then it may, due to the scale they operate at and the fact that IT is their core business. In those cases “cattle” is what drives the business, and the “pets” are limited to back-end systems. The reality, though, is that for the majority this is not the case. Your environment will be a hybrid, and more than likely “pets” will have the upper hand, as that is simply the state of the world today.
That does not mean they cannot co-exist. That is what I believe is the true strength of virtualization: it allows you to run many different types of workloads on the same infrastructure. Whether that is your Exchange environment or your in-house developed scale-out web application serving hundreds of thousands of customers makes no difference to your virtualization platform. From an operational perspective the big benefit is that you will not have to maintain different run books to manage your workloads. From an ops perspective they will look the same on the outside, although they may differ on the inside. What may change is the set of services required for those systems, but with the rich ecosystem available for virtualization platforms these days that should not be a problem. Need extra security / micro-segmentation? VMware NSX can provide the security isolation needed to run these applications smoothly. Sub-millisecond latency requirements? Plenty of storage / caching solutions out there can deliver this!
Will the application architecture shift that is happening right now impact your underlying infrastructure? We have made huge steps in operational efficiency in the last 5 years, and with the SDDC we are about to take the next big step. Although I do believe that the application architecture shift will result in infrastructure changes, let’s not make the same mistakes we made in the past by creating infrastructure silos per workload. I strongly believe that repeatability, consistency, reliability, and predictability are key, and this starts with a solid, scalable, and trusted foundation (infrastructure).
Felipe Carballo says
I believe the same! Different types of workloads can coexist in the same infrastructure.
Joseph griffiths says
This is a constant issue for me. Everything is a pet, making operational efficiency hard to achieve. It seems that everyone, including me, tries to drive customers to cattle, but what if they want or have to be pets? How can we architect for pets and still achieve greater efficiency? Perhaps I have to look at how to architect around pets, just like VMware did with HA. Great read.
techblowhard says
The term is derived from the idea that you care about pets, but you really don’t care about cattle.
What most data center conversations center on today is the application. The physical plumbing is the cattle. We shouldn’t care if a server or a disk or a switch fails. Having a data center built on cattle is how public cloud providers deliver IT today and how the rest of the world will tomorrow. SDDC and webscale have assured that.
Duncan Epping says
Good point, I guess cattle is moving upwards, towards the application stack… the question then remains whether your infrastructure foundation needs to change. If you ask me, it should not.
Brandon says
I imagine the call-out of Maish isn’t well received; he probably cringed reading your post. I did, and it wasn’t my tweet. One who tweets best back it up, I guess! I found it distracting from the overall content of the post… Otherwise, very interesting topic.
I work for a cloud provider / managed infrastructure shop, and it is all cattle… more like feral cats. We have no idea what the servers do, and they were likely NOT designed to be treated as cattle; in fact they’re remotely someone else’s pets. In this scenario, we’ve chosen to compromise between scaling out and up, and to provide services to help others redesign applications so they can be treated as cattle. Perhaps they don’t take you up on your offer; then it is their fault during a failure scenario (like a host failing), compared to you having to be responsible both ways. The technologies we’re using, like vCNS and NSX, provide the secure multi-tenancy to allow both types of customer. In fact you might have one tenant using hacking tools to pen-test application design and another tenant doing something that needs to be very secure. They can run on the same infrastructure together just fine if it is designed correctly.
Duncan Epping says
Not sure why you feel I am calling him out… It certainly was not my intention to give anyone that idea. I very much respect Maish’s opinion, and have enjoyed his work on anything devops related. It is just that his tweet started a train of thought and I felt it was worth including to show what sparked it. If anything, I liked how Maish posed a question that made people (or me at least) think about a topic.
So to be clear, I wasn’t taking a shot at Maish. It is just a topic which has kept me busy for a couple of weeks, and this tweet sparked a train of thought.
maishsk says
Brandon I appreciate you defending my honor, but I do not see this as a “Call out” or a “shot”. It was a thought that apparently both Duncan and I have on our minds, and I just voiced that on Twitter. Yes there is a blog post in the works on this subject with some additional insight from me on the subject.
Thanks Duncan for picking up the tweet and starting this discussion.
Justin Jones says
While I believe both pets and cattle can run on the same infrastructure, there are some infrastructure types that can support both well (vSphere) and some that really only do well with cattle (Amazon EC2).
Prime example of this: console support. If somebody fat-fingers a network config or makes a dumb setting on an EC2 instance, you are done; you cannot SSH/RDP/VNC into the instance. Without network connectivity you are dead in the water. Terminate the instance and start a new one.
For Pets, Console access is an absolute must IMO.
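As a concrete illustration of the lockout Justin describes, consider a fat-fingered firewall change (a hypothetical config fragment using standard `iptables` syntax; obviously not something to run on a box you can only reach over the network):

```shell
# Intended: block a single noisy host.
#   iptables -A INPUT -s 203.0.113.7 -j DROP
# Fat-fingered: the -s filter is omitted, so ALL inbound traffic is
# dropped, including your own SSH session:
iptables -A INPUT -j DROP

# With out-of-band console access (a pet-friendly platform such as
# vSphere), you can log in at the console and flush the bad rule:
iptables -F INPUT

# Without a console (classic EC2), the only recovery is cattle-style:
# terminate the instance and launch a replacement.
```

One mistyped rule is all it takes, which is why console access matters so much for workloads you intend to recover rather than replace.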