
Why Fixing Kubernetes Configuration Inconsistencies is Critical for Multi-tenant and Multi-cluster Environments


In most cases, organizations pilot Kubernetes with a single application. Once that pilot succeeds, they commit to Kubernetes across multiple applications, development teams, and ops teams. Often adopting a self-service model, DevOps and infrastructure leaders end up with many users building and deploying across many different clusters.

While the self-service model removes barriers between application teams and infrastructure, managing cluster configuration quickly becomes unwieldy as workloads are inconsistently configured or manually deployed. Without guardrails, discrepancies in configuration across containers and clusters are likely, and they are challenging to identify, correct, and keep consistent. Misconfiguration happens when users copy and paste YAML from online examples such as Stack Overflow or from other dev teams, when workloads are over-provisioned to “just get things to work,” or when no process exists to verify configurations.
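As an illustration, a manifest copied from an online example often ships with none of these guardrails. The deployment below is hypothetical (the names and image are placeholders), but the gaps it shows are the common ones: a mutable image tag, no security context, and no resource requests or limits.

```yaml
# Hypothetical example of a copy-pasted manifest with common gaps.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest   # mutable tag pulled from an example snippet
        # No securityContext: the container may run as root
        # No resources block: requests and limits are unbounded
```

Each missing field is something a human reviewer can easily overlook but a policy check can flag automatically.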

63% of deployments allow their workloads to run as a root user
Source: Fairwinds Insights findings across hundreds of production grade clusters

The negative consequences of configuration inconsistencies are too serious to ignore:

  • Security vulnerabilities: Misconfigurations can lead to privilege escalation, vulnerable images, images pulled from untrusted repositories, or containers running as root. 69% of companies have had a security incident due to a Kubernetes misconfiguration. Source: The Enterprisers Project.

  • Efficiency: Costs creep up as too many resources are requested or stale workloads are never cleaned up. 45% of containers use less than 30% of their requested memory. Source: Datadog Container Report.

  • Reliability: Missing liveness or readiness probes, or scalability issues (failing to scale, or scaling too frequently, due to a lack of PodDisruptionBudgets or HorizontalPodAutoscalers) can cause downtime in your app or service.

29% of deployments lack readiness probes
Source: Fairwinds Insights findings across hundreds of production grade clusters
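The reliability gaps above are straightforward to close in a manifest. As a sketch (the app name, paths, and port are hypothetical), a container with both probes plus a PodDisruptionBudget looks like this:

```yaml
# Hypothetical container spec fragment with liveness/readiness probes.
        livenessProbe:
          httpGet:
            path: /healthz    # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready      # assumed readiness endpoint
            port: 8080
          periodSeconds: 5
---
# A PodDisruptionBudget keeps a minimum number of replicas available
# during voluntary disruptions such as node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```

A policy engine can then require these fields on every workload rather than relying on each team to remember them.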

Manually identifying these misconfigurations is highly error-prone and can quickly overwhelm ops teams with code review. 

Technical impact

  • Unresolved container vulnerabilities

  • Over-permissioned deployments

  • Missing health probes

  • Inappropriate resource requests and limits

Business impact

  • Increased security risk 

  • Application downtime 

  • Cloud cost overruns

  • Non-compliance

Why Enforcing Consistent Configuration Patterns is Essential to Kubernetes 

As teams deploy Kubernetes across the organization, it becomes impossible for DevOps teams to manually write or review every Dockerfile and Kubernetes manifest going into their clusters. This is where enforcing configuration patterns becomes essential to Kubernetes success. Without a consistent set of standards, engineering teams will inevitably create security vulnerabilities, overconsume compute resources, and introduce noisy workloads. DevOps teams then quickly burn out responding to pages and putting out fires, with little time left over to make material improvements to the infrastructure. Cluster upgrades also become a major problem once configurations have drifted. Organizations waste money on avoidable misconfigurations and unplanned interruptions.

How to Enforce Consistency with Kubernetes Policies

When managing multi-cluster environments with a team of engineers, creating consistency requires establishing Kubernetes policies that enforce security, efficiency, and reliability. Policies and policy enforcement tooling let you define operational guardrails for deployments, as well as specific compliance and security requirements. Expressing these rules as code, an approach generally referred to as policy-as-code, enables platform engineering and operations teams to collaborate with other stakeholders around a common set of rules, patterns, and best practices for IT and engineering teams to follow.
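To make policy-as-code concrete, here is a sketch of one such guardrail written for Kyverno, one of several policy engines in the ecosystem (the policy name and message are our own; the field layout follows Kyverno's ClusterPolicy schema):

```yaml
# Illustrative policy-as-code sketch: reject Pods whose containers
# may run as root. Assumes the Kyverno admission controller is installed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root   # hypothetical policy name
spec:
  validationFailureAction: Enforce   # block non-compliant resources
  rules:
  - name: check-run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Containers must not run as root."
      pattern:
        spec:
          securityContext:
            runAsNonRoot: true
```

Once a rule like this is checked into version control and applied cluster-wide, every deployment is held to the same standard without a human in the review loop.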

We created a paper that provides an overview of the challenges with policy enforcement in Kubernetes and the negative consequences of not enforcing policy. It offers details on what policies are essential and how to create them, and information on how to enforce policies. 

This paper is ideal for engineering and DevOps leaders managing multi-user, multi-cluster, and multi-tenant environments.

Best Practices for Kubernetes Policy Enforcement Read More