What Problems Does Kubernetes Solve?

Written by Danielle Cook | May 11, 2020 9:31:17 PM

This series is intended for engineers new to Kubernetes and GKE. It provides a basic overview of Kubernetes, key definitions, a quick start for building a Kubernetes cluster in GKE, and a workshop for building your first multi-tier web app. If you are looking for more in-depth Kubernetes best practices and help, get in touch.

Before we get into building your first GKE cluster, it’s important to understand a few things about Kubernetes. 

What is Kubernetes?

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. Source: Kubernetes.io 

How did Kubernetes come about? 

In 2005, Google introduced the Borg System. It started as a small project with only two or three people working on it, and grew into a large-scale cluster management and resource scheduling system that introduced:

  • Admission control - deciding what work is allowed to be scheduled in a cluster
  • Bin packing with over-commitment - running many systems and processes on a single node without interference
  • Process-level resource isolation - ensuring that the resource requirements of a container scheduled on a node don't interfere with those of the other processes running there

Kubernetes made all of this declarative. That means you take the workloads you need to run, define them in a YAML file, submit that file to an API, and the API tells you whether it was able to schedule the work.
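
To make that concrete, here is a minimal sketch of what such a YAML definition can look like. It is illustrative only: the hello-web name and the nginx image are placeholders, not something from this series.

  # Illustrative sketch: a minimal Deployment manifest submitted to the Kubernetes API
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello-web            # placeholder name
  spec:
    replicas: 2                # ask Kubernetes to keep two copies running
    selector:
      matchLabels:
        app: hello-web
    template:
      metadata:
        labels:
          app: hello-web
      spec:
        containers:
        - name: hello-web
          image: nginx:1.25    # placeholder image
          ports:
          - containerPort: 80

Submitting it with kubectl apply -f deployment.yaml (the filename is arbitrary) hands this desired state to the API server, which tells you whether it was accepted and then works to keep two replicas running.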

Kubernetes as we know it today

In 2014, Google introduced Kubernetes as an open source version of the Borg system. In 2015, Google joined with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). That same year the first KubeCon event was held and the CNCF started quarterly release cycles for Kubernetes. 

What Problems Does Kubernetes Solve?

Users expect applications and services to be available 24/7

When you work with a container orchestrator like Kubernetes, you can schedule many copies of a process across many machines. This allows you to lose a node or an individual process without any disruption to the uptime of your service.
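
As a rough sketch (the names and image are placeholders), a Deployment can ask the scheduler to spread its replicas across different nodes, so losing one node leaves the other replicas serving traffic:

  # Illustrative sketch: spread three replicas across different nodes
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: resilient-web        # placeholder name
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: resilient-web
    template:
      metadata:
        labels:
          app: resilient-web
      spec:
        topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # roughly one replica per node
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: resilient-web
        containers:
        - name: web
          image: nginx:1.25    # placeholder image

The topologyKey of kubernetes.io/hostname asks for at most a one-replica imbalance between nodes, and ScheduleAnyway keeps the workload schedulable even when a perfect spread isn't possible.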

Developers expect to deploy code multiple times a day with no downtime

If you are a systems operator, developers expect you to give them the ability to deploy code multiple times a day. Kubernetes lets you implement rolling updates that roll out new versions gradually, without downtime.
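
For example, a Deployment's update strategy controls how a rollout proceeds. The sketch below is illustrative (the name, image tag, and numbers are placeholders): at most one pod is taken down and at most one extra pod is created at a time, so capacity never drops to zero.

  # Illustrative sketch: a RollingUpdate strategy that replaces pods gradually
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api                  # placeholder name
  spec:
    replicas: 4
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1      # at most one pod down during the rollout
        maxSurge: 1            # at most one extra pod created during the rollout
    selector:
      matchLabels:
        app: api
    template:
      metadata:
        labels:
          app: api
      spec:
        containers:
        - name: api
          image: example/api:v2   # placeholder image; changing this tag triggers a rollout

Pushing a new image tag and re-applying the manifest then replaces pods a few at a time while the service stays available.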

Companies desire more efficient use of cloud resources

Rather than paying 24/7 for a cloud node that runs a single process, you can schedule many processes onto a single node. The cluster can also recognize when new processes cannot be scheduled and more capacity is needed, or when nodes are sitting idle and can be spun down. Kubernetes offers straightforward ways to manage this elasticity.
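
Bin packing works because each container declares how much CPU and memory it needs. A rough sketch, with placeholder names, image, and numbers:

  # Illustrative sketch: resource requests and limits let the scheduler
  # bin-pack many containers onto a node without them starving each other
  apiVersion: v1
  kind: Pod
  metadata:
    name: worker               # placeholder name
  spec:
    containers:
    - name: worker
      image: example/worker:latest   # placeholder image
      resources:
        requests:
          cpu: 250m            # the scheduler reserves a quarter of a CPU core
          memory: 256Mi
        limits:
          cpu: 500m            # hard ceiling enforced on the node
          memory: 512Mi

Requests tell the scheduler how much capacity to reserve on a node; limits cap what the container may actually use. Node-level elasticity is usually handled by cluster autoscaling (built into GKE), which adds nodes when pods can't be scheduled and removes nodes that sit idle.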

Fault-tolerant and self-healing infrastructure improves reliability

Kubernetes provides reliability out of the box. If a container or an entire node goes down, Kubernetes reschedules the affected workloads onto a healthy node.
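
One common way this shows up is a liveness probe: if the probe fails, Kubernetes restarts the container in place; if a whole node dies, the controller that owns the pod (a Deployment, for example) recreates it on a healthy node. The probe below is an illustrative sketch with placeholder values:

  # Illustrative sketch: a liveness probe tells Kubernetes when to restart an unhealthy container
  apiVersion: v1
  kind: Pod
  metadata:
    name: self-healing-demo    # placeholder name
  spec:
    containers:
    - name: web
      image: nginx:1.25        # placeholder image
      livenessProbe:
        httpGet:
          path: /              # placeholder health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10      # probe every 10 seconds; repeated failures trigger a restart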

Automated horizontal scaling at both the node and container (pod) level

Kubernetes allows you to bring up new nodes and automatically add them to your cluster. If a single service becomes resource-constrained, Kubernetes can detect this and bring up additional instances to handle the extra load.
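
Pod-level scaling is typically driven by a HorizontalPodAutoscaler. The sketch below is illustrative (the names and thresholds are placeholders, and it assumes a metrics source such as metrics-server is available); depending on your cluster version, the API group may be autoscaling/v2 or an earlier beta:

  # Illustrative sketch: scale a Deployment based on average CPU utilization
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa              # placeholder name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: hello-web          # placeholder target; must match an existing Deployment
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%

When the extra pods no longer fit on the existing nodes, node autoscaling brings up new nodes and adds them to the cluster automatically.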

Next we'll go into Kubernetes architecture basics and definitions.