
Poor Kubernetes Configuration Can Impact the Security of Your Clusters in Surprising Ways

Misconfigurations in the world of Kubernetes happen all the time. And they are a big deal, because improper configuration directly affects container security, efficiency, and reliability. These three areas are inextricably linked to successful Kubernetes deployments, and they are also the three hit hardest by configuration errors. When problems around security, efficiency, and reliability are not consciously addressed with best practices, key elements of Kubernetes ownership, like cost efficiency, user experience, and overall performance, suffer.

The good news is, all three of these important areas can be continually improved and managed simply by focusing on proper configuration. This is the key to running happy and secure Kubernetes clusters. 

Read our newest white paper: The Good, The Bad & The Misconfigured

Eliminate Kubernetes misconfigurations to reduce risk and improve reliability and cloud efficiency.

The Case for Configuration

In truth, the majority of organizations have not figured out how to maintain effective Kubernetes configurations. The numbers from our recent Kubernetes Benchmark Report tell us only 35% of businesses have managed to correctly configure their workloads using best practices like liveness and readiness probes. These health probes are critical for reliable workloads: without them, Kubernetes cannot effectively self-heal or perform zero-downtime deployments. Running without these probes in place invites all sorts of reliability problems, all stemming from poor configuration practices.
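For illustration, here is a minimal sketch of what these probes can look like on a Deployment. The image name, port, and endpoint paths are placeholders, and the timing values are examples rather than recommendations.

```yaml
# Illustrative sketch only: image, port, paths, and timings are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example.com/example-app:1.0.0
          ports:
            - containerPort: 8080
          # Liveness probe: restart the container if it stops responding.
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          # Readiness probe: only route traffic once the app is ready to serve.
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```

With both probes in place, Kubernetes can restart unhealthy containers on its own and hold traffic back from pods that are not yet ready, which is what makes zero-downtime rollouts possible.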

One of the core features of Kubernetes is its ability to automatically scale resources up or down in response to demand. However, without hints as to how much CPU and memory an application is expected to use, Kubernetes is forced to make uneducated guesses about how to scale applications. Setting requests and limits for compute resources is technically optional (Kubernetes will happily accept workloads that lack these settings) but neglecting them can lead to issues with reliability or to painful amounts of overspending.
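As a sketch, requests and limits live under each container's resources block. The values below are placeholders, not recommendations; the right numbers depend on how the application actually behaves under load.

```yaml
# Illustrative sketch only: CPU and memory values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: example-app
      image: example.com/example-app:1.0.0
      resources:
        # Requests tell the scheduler how much the container normally needs.
        requests:
          cpu: 250m
          memory: 256Mi
        # Limits cap what the container is allowed to consume.
        limits:
          cpu: 500m
          memory: 512Mi
```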

Further, even small configuration errors can create big security holes if not addressed quickly and effectively. As an example, containers running with more security privileges than needed, such as root-level access, have become a common vulnerability. Under certain configurations, containers may be able to access the host node, allowing them to read or even modify other applications in the cluster. And because secure settings like these are not applied by default, they must be established as policy by security teams.
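One common way to tighten this is a restrictive securityContext on each container. The sketch below shows a few widely used settings; the exact combination a workload can tolerate depends on the application and the organization's risk tolerance.

```yaml
# Illustrative sketch only: adjust these settings to what your workload can tolerate.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: example-app
      image: example.com/example-app:1.0.0
      securityContext:
        # Refuse to run the container as root.
        runAsNonRoot: true
        # Block processes from gaining more privileges than they started with.
        allowPrivilegeEscalation: false
        # Keep the container's root filesystem read-only.
        readOnlyRootFilesystem: true
        # Drop all Linux capabilities the workload does not need.
        capabilities:
          drop:
            - ALL
```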

The worst part is, configuration problems grow increasingly painful over time, consuming tons of time and compounding security risk. At first, what seems like a few small issues quickly becomes full-on Kubernetes chaos in the shape of security vulnerabilities, reliability issues and wasted dollars. 

Finding Solutions

Configuration validation, also known as Infrastructure as Code (IaC) scanning, can be handled through manual code review when a small team is running one or two Kubernetes clusters. But problems emerge as organizations grow, with numerous development teams deploying to multiple clusters. Platform and security leaders, along with DevOps teams, can quickly lose visibility into, and control over, what is happening in their Kubernetes clusters. This reality demonstrates the need for automation and policies to enforce consistency and provide the appropriate guardrails across the enterprise.

How misconfigurations are minimized depends somewhat on the size of the organization. Large organizations may find it difficult to manually check each security configuration to assess risk, while smaller ones may not. Because Kubernetes defaults tend to be open and permissive rather than secure, it is important to avoid relying on them until their security implications (and how those affect the organization's overall risk tolerance) are well understood.

Helpful guidance and a useful framework for hardening an environment can be found in various objective, consensus-driven security guidelines for Kubernetes software, such as the CIS Benchmark. When these best practices are paired with risk-based policies integrated into the CI/CD pipeline, container security gets a lot better. Commits or builds that do not meet minimum security requirements can be halted. 
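To make that concrete, the sketch below shows a minimal CI gate written as a GitHub Actions-style job. The `scan-manifests` command and the `manifests/` path are hypothetical stand-ins for whichever IaC scanner and repository layout a team actually uses; the point is simply that a failing scan fails the build.

```yaml
# Illustrative sketch only: "scan-manifests" is a hypothetical stand-in for a real
# IaC scanner, and "manifests/" is a placeholder path for your Kubernetes YAML.
name: kubernetes-config-check
on: [pull_request]

jobs:
  iac-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Scan the manifests in the repository. A non-zero exit code fails the job,
      # which blocks the pull request until the configuration meets the minimum
      # security policy.
      - name: Scan Kubernetes manifests
        run: scan-manifests --input manifests/ --fail-on high
```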

Rely on Insights

Protecting Kubernetes clusters and workloads at runtime, to ensure security, efficiency, and reliability, demands a multi-pronged, defense-in-depth approach. Part of the solution is a SaaS Kubernetes governance and service ownership platform, like Fairwinds Insights, with the ability to establish effective guardrails and governance, streamline development and operations, and provide a better (and safer) user experience.

Get Kubernetes security, cost allocation and avoidance, compliance and guardrails in one platform for free with Fairwinds Insights.

Because misconfigurations happen so often, building a stable, reliable, and secure cluster is only possible when industry best practices are followed. And this level of governance comes through a trusted partner, well-versed in unifying teams, simplifying complexity, and building on Kubernetes expertise to save time, reduce risk, and configure confidently.

See how Fairwinds Insights reduces your Kubernetes risk!