
The Tenth Rule of Cloud-based Infrastructure

I’d like to share some thoughts about the genesis of Fairwinds (f.k.a. ReactiveOps) – about what we do and why we do it. I’ll start with a little history.

Back in college, I launched a two-person tech startup, where I wore many hats – programmer, systems administrator, database administrator. Early on, I learned to appreciate the business value of great infrastructure.

After I graduated, I worked as a programmer, got my MBA and subsequently became the CTO of a number of startups that used Heroku, a hands-off cloud Platform-as-a-Service – infrastructure in a box. Like many companies that had migrated to the cloud, these startups used Heroku because it offered a great web interface, no servers to manage and good DUX (developer user experience). Infrastructure is challenging stuff, and Heroku provides many services that would be tough to replace with human effort. Eventually, though, the costs of Heroku outweighed the benefits, and we would run up against the platform’s one-size-fits-all limitations.

Companies often want to run a Heroku-like environment on their own platform, and the startups I founded were no exception. In each case, we’d replace Heroku with in-house DevOps engineers. It always turned into an expensive mess.

Maybe you’re familiar with Internet pioneer Philip Greenspun’s tenth rule of programming: “Any sufficiently complicated C or Fortran program contains an ad-hoc, informally specified, bug-ridden, slow implementation of half of Common Lisp.” In other words, the flexibility and extensibility of Common Lisp make it the bar against which other programming languages are measured.

I have my own tenth rule (I don’t have any others) – Any sufficiently complicated cloud-based infrastructure contains an ad-hoc, informally specified, bug-ridden, slow (and more expensive) implementation of Heroku.

The 80/20 Sweet Spot

Before open source frameworks like Ruby on Rails existed, companies wrote a lot of code unrelated to their core business. It’s not unlike choosing a templating language or database library – while important, these decisions aren’t usually related to core business challenges.

Ruby on Rails adopts many conventions and techniques (MVC, ActiveRecord) that its creator, David Heinemeier Hansson (DHH), didn’t invent himself. His strategic, integrated approach paid off, and today it would be unthinkable to build a web or SaaS app from scratch without an established framework like Ruby on Rails – all the more so since Ruby on Rails hits an 80/20 sweet spot, meaning it works well for most apps. Now that Ruby on Rails exists, why build your own Ruby on Rails?

In 2015 ReactiveOps Was Born

Good infrastructure is a prerequisite for great applications and great DUX, but, just as most companies don’t need to write their own web framework, most don’t need to own their own infrastructure architecture and code.

When I started ReactiveOps almost two years ago, we set out to create a Ruby on Rails-like framework for cloud infrastructure on AWS – to stitch together the best open source components and write as little of our own code as possible, providing something greater than the sum of its parts. Our goal was to offer that infrastructure to small and mid-sized companies, enabling them to leverage the platform rather than hiring an in-house DevOps team and building their own.

And so we built Omnia, an immutable-infrastructure approach that combined technologies like Ansible, Terraform, and Packer to create push-button environments. Our clients got the infrastructure they needed without the limitations of Heroku or the headache and cost of hiring in-house DevOps staff.

While our clients had success using Omnia, it wasn’t scalable, and it didn’t have great DUX. For example, if our customers’ engineers wanted to, say, upgrade Ruby, they had to learn all of the tools or ask us to take care of it. There was no way those developers could deploy an application without our help.

That’s when we learned about a tool that changed everything …

Enter Kubernetes

About a year after we started ReactiveOps, we evaluated a tool called Kubernetes, an open-source platform for automating the deployment, scaling, and management of application containers across clusters of hosts. Kubernetes promised to let companies deploy applications rapidly at scale and roll out new features easily, while using only the resources they actually needed.
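To make that promise concrete: in Kubernetes, you declare the desired state of an application in a manifest, and the cluster continuously works to maintain it. The sketch below is a minimal Deployment in today’s API terms (the names and container image are hypothetical, not from any real project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 3                  # Kubernetes keeps three copies running across the cluster
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example/web:1.0 # hypothetical container image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, the cluster schedules the three containers across its hosts, restarts them if they fail, and rolls out a new version when the image tag changes – the deployment, scaling, and management described above.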

At first we were skeptical. While our team had decades of combined experience implementing tried-and-true Amazon Machine Image (AMI)-based approaches, we lacked a conceptual understanding of containers. To further complicate matters, best practices for running containers in production were still largely undefined. Would dockerization and containerization really pay off for most companies?

Kubernetes 1.2, released in April 2016, included features geared more toward general-purpose usage. It was being touted as the next big thing, and we decided to put it to the test. We quickly discovered that Kubernetes is an elegant, structured, real-world solution to containerization at scale, one that solves key challenges other technologies didn’t address. With over 1.6 million lines of code from more than 1,000 contributors, it offers far more functionality than we could ever build ourselves. There was no realistic way we could compete with it.

That’s when we made the difficult yet strategic decision to replace Omnia with Kubernetes.

Kubernetes Is the Future

Kubernetes is a phenomenal next-generation framework that can run any workload. It provides great DUX, its rate of innovation is accelerating, and it has the commitment of Google behind it, so it’s not going anywhere. With Kubernetes, our clients can have their own Heroku running in their own Google Cloud, AWS or on-prem environment. They can even deploy their own applications into it without our help.

Like Ruby on Rails, Kubernetes embodies a lot of smart architectural decisions that make it easier to structure containerized applications. And many things remain in your control. For example, you decide how to set up, maintain and monitor your Kubernetes clusters, as well as how to integrate those clusters into the rest of your cloud-based infrastructure.

What We Do Best

Some of our competitors have built their own infrastructure to compete with Kubernetes, much as we used to do. Others have built proprietary distributions of Kubernetes and sell enterprise support licenses. Still others are custom trial-and-error Ops shops that tackle each client’s challenges the hard way – without starting from a set of tools and libraries designed to work together.

We use Kubernetes. While the decision to abandon Omnia wasn’t easy, we weren’t wedded to our own implementation. And because we didn’t answer to external investors, we had the freedom to make the right business decision at the right time.

How We Work

We start with a one- to two-week deep-dive infrastructure audit, looking at our clients’ systems and applications to determine whether their app(s) will work in Kubernetes. If so, we launch a scalable, outcome-based implementation, typically getting clients up and running on Kubernetes within two to four months.

Our focused approach is to build a Platform-as-a-Service solution with Kubernetes at its core. Because our solution is based on standard technologies, you get off-the-shelf tools integrated in a thoughtful way. The business value we provide is our exceptional DevOps service and expertise.

It’s our job to give our clients great infrastructure. Kubernetes enables us to do what we do best, which frees up our clients to focus on doing what they do best.