To truly understand and appreciate the potential risks in your Kubernetes clusters, you need a strong set of best practices for securing sensitive workloads. Beyond simply following actionable recommendations, businesses must evaluate things like access controls, security monitoring and auditing considerations. The truth is, there is no right or wrong path to Kubernetes success. Yes, ineffective and potentially dangerous paths exist, but so too do several good ones. The path you decide to follow should be the one that addresses the needs and priorities of your business.
Ask yourself a few questions:
Once you have answers (and likely several more questions), you will have the information you need to decide the best way to implement Kubernetes, creating a process and clarifying tasks and priorities. As a result, you will be better equipped to explore the inventory of choices and best practices at your disposal.
Today, organizations are still facing the top three most common (and dangerous) threats:
Although the world of cloud native technologies and Kubernetes is still relatively new, the essential core business challenge remains the same. Organizations have to figure out how to accelerate development speed while also maintaining robust security practices. These two business objectives are still vying for equal attention in the container space.
Obviously, the best way to protect a cluster is to keep people (and their risky behavior) out of the Kubernetes cluster altogether. But considering engineers need to interact with the cluster itself—and customers need to interact with the application the cluster is running—optimizing Kubernetes security ain’t easy.
Kubernetes will not secure your application code. And it won’t prevent your developers from introducing bugs and flaws that result in security problems. What Kubernetes can do is limit the blast radius of an attack. This means proper security controls can restrict how far a bad actor can get once inside your cluster. If you have the right security policy, this attacker will be stuck and unable to access containers, applications or the larger cluster. But if the container is running as root, has access to the host’s filesystem or has some other security flaw, the cluster is well… you know. That’s why a well-configured Kubernetes deployment offers an extra layer of much-needed security.
Let’s take a look at the very best security practices out there right now for Kubernetes users:
The easiest way to take down a site is to overload it with traffic in a DoS attack. Sometimes the flood is intentional and sometimes it isn't; from the inside, it's hard to tell the difference. One major benefit of Kubernetes is that it lets applications scale up or down in response to these traffic fluctuations. A well-designed ingress policy, configured to set limits on how much traffic a user can consume, adds an extra layer of security to mitigate these risks.
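As a rough sketch of what such an ingress policy can look like, the NGINX Ingress Controller supports per-client rate-limit annotations. The host, service and resource names below are hypothetical, and other ingress controllers use different annotation keys:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress          # hypothetical name
  annotations:
    # NGINX Ingress Controller rate-limit annotations
    nginx.ingress.kubernetes.io/limit-rps: "10"          # max requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "5"   # max concurrent connections per client IP
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web    # hypothetical backend service
                port:
                  number: 80
```

Clients that exceed the limits receive an error response from the ingress layer, so the flood never reaches your application pods.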
Keeping your Kubernetes version up to date is critical, as old versions can become stale and riddled with security holes. Add-ons installed in your cluster can enhance functionality, but they also increase the size of your attack surface. Staying up to date on bug fixes and new releases is key. Every time a new release comes out, it needs to be tested on both internal and staging clusters. Roll updates out slowly while monitoring for problems, making course corrections along the way. And don't forget to keep the underlying Docker images for all your applications current as well. Base images also go stale and need to be updated periodically.
Providing admin permissions is a big deal, as it allows a person or application to do basically anything, including deleting an entire Kubernetes deployment. Applications that don’t need control over the cluster don’t need admin privileges. Kubernetes provides RBAC to manage access to different cluster resources. Reduce the potential for risk by managing your permissions carefully and relying on the principle of least privilege.
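As an illustration of least privilege with RBAC, the sketch below grants a workload read-only access to pods in a single namespace and nothing more. The namespace and service account names are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app             # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create, update or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app
subjects:
  - kind: ServiceAccount
    name: reporting-app      # hypothetical service account
    namespace: app
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this is a namespaced Role rather than a ClusterRole, even a compromised `reporting-app` pod can't touch resources outside its own namespace.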
Instead of managing access to cluster resources, like RBAC, network policy focuses on communication inside your cluster. While a given workload might need to talk to a database and a handful of microservices, the workload itself won't require access to every other application in the cluster. Be sure to write a strict network policy that blocks communication with unnecessary parts of the cluster and manages cluster ingress and egress. With these rules in place, potential risk is significantly reduced.
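A minimal sketch of such a policy, assuming hypothetical `api`, `frontend` and `database` labels: the API pods may only receive traffic from the frontend and may only send traffic to the database.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-db-only   # hypothetical name
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api              # this policy applies to pods labeled app=api
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only the frontend may call the API
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database # the API may only reach the database
      ports:
        - protocol: TCP
          port: 5432        # e.g., PostgreSQL
```

Note that network policies are enforced by your cluster's CNI plugin, so they only take effect if the plugin supports them.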
Through workload identity, you can tie RBAC to the cloud provider's authentication mechanism, letting workloads use Kubernetes' built-in identities to access resources outside the cluster (like databases). Workload identity is key to security because it handles all the permissioning under the hood using short-lived credentials, so there are no access keys to manage and potentially expose.
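On GKE, for example, workload identity is configured by annotating a Kubernetes service account with the Google Cloud IAM service account it should act as. A sketch, with hypothetical account and project names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: app
  annotations:
    # GKE Workload Identity: binds this Kubernetes service account to a
    # Google Cloud IAM service account (both names here are hypothetical)
    iam.gke.io/gcp-service-account: app-sa@my-project.iam.gserviceaccount.com
```

Pods that run under this service account automatically receive short-lived Google Cloud credentials, so no key file ever needs to be mounted or committed anywhere. EKS and AKS offer equivalent mechanisms with their own configuration details.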
Kubernetes empowers IaC workflows by encoding all infrastructure choices in YAML, Terraform and other configuration formats. This ensures your infrastructure is fully reproducible, even if a cluster disappeared overnight. Just remember, applications need access to secrets, such as credentials, API keys, admin passwords and other bits of sensitive information, to function properly. Don't be tempted to check these credentials into your IaC repository, because this exposes them to anyone with access to the Git repository. Instead, encrypt your secrets before checking them into your repository; then a single key unlocks your IaC repository and you keep a perfectly reproducible infrastructure.
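One common pattern for this (an assumption, not the only option) is Bitnami Sealed Secrets: you encrypt a regular Secret with the `kubeseal` CLI, commit the resulting SealedSecret manifest, and only the controller's private key inside the cluster can decrypt it. A sketch with hypothetical names and truncated example ciphertext:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials      # hypothetical name
  namespace: app
spec:
  encryptedData:
    # ciphertext produced by `kubeseal`; safe to commit because only the
    # sealed-secrets controller running in the cluster can decrypt it
    password: AgBy3i4OJSWK...   # truncated, illustrative ciphertext
  template:
    metadata:
      name: db-credentials
      namespace: app
```

Tools like Mozilla SOPS achieve the same goal by encrypting secret values in place within your YAML files.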
The bottom line is, applications are constantly changing, which means security is too. Kubernetes mitigates the severity of attacks by giving you the ability to optimize security settings in accordance with best practices. This means the biggest security mistake you can make is not following them!
Fairwinds Insights, our security and governance platform, can help. It automates security at scale, integrates shift left policies and reduces risk to empower innovation. It’s available for free! Get it here. To learn more about these best practices, read our comprehensive white paper on the subject.