K8s Clinic: How to Run Kubernetes Securely and Efficiently

Why Organizations Choose Kubernetes

With the adoption of containers, software packaging is increasingly shifting left, which means (depending on your organization) that developers are taking on responsibility for the containerization of applications. Developers may also be responsible for some parts of Kubernetes configuration. As that process shifts left, developers need support to make the right decisions for the organization in order to run Kubernetes securely and efficiently.

Many companies are adopting cloud native technologies to deliver speed to market. For businesses seeking to compete in today's marketplace, it’s important to ship new features and meet customer needs where they are — and increasingly those needs are being met through software.

Key Challenges

For all the benefits gained from cloud native technologies, moving to containers and Kubernetes doesn’t come without potential challenges. According to a recent Cloud Native Computing Foundation (CNCF) survey, there are three key challenges that typically emerge during this type of transformation.

[Chart] CNCF survey: What are your challenges in using/deploying containers? (Source: CNCF Survey 2020)

Tied for first place are complexity and the cultural change involved in moving to cloud native technologies. These changes often mean altering the development process and shifting some responsibility to different teams, forcing developers to learn new concepts and Ops engineers to adapt to an “everything as code” mindset.

The third challenge relates to security considerations with cloud native technologies. We're dealing with new concepts and technical considerations that change how you think about security, especially when you run containers and Kubernetes in the cloud, or in a multi-cloud or hybrid cloud scenario. This complexity causes security teams to take a step back to really understand the new threat landscape of cloud native technology.

Security needs to partner with dev and DevOps teams, so they not only have to come up to speed on the new changes, they also need visibility into where those risks may be. New questions emerge around the container technology itself, such as which known vulnerabilities (Common Vulnerabilities and Exposures, or CVEs) are present in those containers, and which Kubernetes configurations can make workloads insecure, unreliable, or inefficient.

New Decision Points & Complexities

Moving to Kubernetes and containers introduces a lot of new decision points; last year, an article highlighted that 69% of reported Kubernetes incidents were related to misconfigurations. To successfully deliver products to market, you need a collaborative environment where misconfiguration issues can be resolved quickly. Remember: everything in Kubernetes is configuration-driven, and security is not built in by default.

Organizational complexity is another important factor that comes into play. There are different personas involved along the way, and they each have different questions that have to be answered, so let's put ourselves in their shoes:

  • Developers: paid to write code, build new features, and ship applications to production. They need to know enough about Kubernetes and containers to continue to do their job well and get applications out to customers.
  • Site reliability engineers (SREs): need to make sure that the applications are reliable and stable. SREs also need to make sure that the applications are configured using best practices and have health probes and health checks enabled so that the applications can run reliably in production.
  • Security teams: need to know whether the organization is running vulnerable versions of containers and whether applications are configured to be secure.
  • Vice Presidents of Engineering: need secure, reliable infrastructure to support the next wave of growth.

In these environments you need to build processes and put guardrails in place in order to meet the needs of these different personas.

Technical Implications for Security & Efficiency

For all these teams, configurations are a consideration as they seek to build and deliver applications and services to market. What kind of technical implications impact security and efficiency for organizations moving to containers and Kubernetes? There are a few different layers in the stack where you need to look out for misconfigurations.

  • Containers: where your application is packaged together with an operating system layer (the base image). Be on the lookout for known vulnerabilities being packaged in, whether at the operating system level or inside the applications that go into the container.
  • Deployment configurations: these may be Kubernetes YAML manifests or Helm charts. Watch out for misconfigurations at this level. Make sure CPU and memory settings are set, liveness and readiness probes are defined for your application, and no unnecessary security permissions are added to those deployments.
  • Kubernetes cluster: the cluster itself can be misconfigured to be publicly accessible on the internet. Make sure you have role-based access control (RBAC) in place. There are also many add-ons that need to be kept up to date, such as ingress controllers and certificate management.
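To make the deployment-level checks concrete, here is a sketch of a Deployment manifest that addresses them: resource requests and limits, liveness and readiness probes, and a restrictive security context. The names, registry, ports, and values are illustrative, not prescriptive.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app           # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0  # pin a version; avoid :latest
          resources:
            requests:          # what the scheduler reserves for this container
              cpu: 100m
              memory: 128Mi
            limits:            # hard cap, protects neighbors on the node
              cpu: 500m
              memory: 256Mi
          livenessProbe:       # restart the container if it hangs
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:      # only route traffic once the app is ready
            httpGet:
              path: /ready
              port: 8080
          securityContext:     # drop unnecessary privileges
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```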

How Policy & Governance Can Help

You can help prevent common misconfigurations from being deployed by using policy and governance. Implement policies that check for security misconfigurations, and scan for vulnerabilities in the underlying Kubernetes clusters and add-ons. It’s important to scan and monitor the infrastructure constantly to find and patch new vulnerabilities as necessary. Policies and governance can also help with cost optimization by ensuring efficient resource usage — for example, checking CPU and memory settings to make sure your applications have enough compute resources, but aren’t reserving more than necessary.
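As a minimal sketch of what such a policy check looks like in practice, the function below inspects a Deployment manifest (parsed into a plain Python dict, e.g. with `yaml.safe_load`) for missing resource settings, missing health probes, and untrusted image registries. The function name, registry URL, and message formats are illustrative, not part of any real policy engine.

```python
def check_deployment(manifest):
    """Return a list of policy violations for one Deployment manifest (a dict)."""
    violations = []
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        name = c.get("name", "<unnamed>")
        resources = c.get("resources", {})
        # Efficiency: require CPU/memory requests and limits so the
        # scheduler can place workloads and cap their usage.
        for kind in ("requests", "limits"):
            for res in ("cpu", "memory"):
                if res not in resources.get(kind, {}):
                    violations.append(f"{name}: missing {res} {kind[:-1]}")
        # Reliability: require liveness and readiness probes.
        for probe in ("livenessProbe", "readinessProbe"):
            if probe not in c:
                violations.append(f"{name}: missing {probe}")
        # Security: only allow images from a trusted registry (hypothetical URL).
        image = c.get("image", "")
        if not image.startswith("registry.example.com/"):
            violations.append(f"{name}: image {image!r} not from trusted registry")
    return violations


# A deliberately misconfigured Deployment: no resources, no probes,
# and an image pulled from an untrusted registry.
deployment = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:latest"}
    ]}}}
}
for v in check_deployment(deployment):
    print(v)
```

Real-world policy engines (admission controllers, CI checks) apply the same idea at scale; this sketch only shows the shape of the checks themselves.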

When you create guardrails that prevent mistakes from being pushed to production, you can also give feedback at the right times to the developers and service owners who are making these decisions about configuration. A few examples of ways you can use policies to create guardrails include only allowing images from trusted repositories, ensuring CPU and memory requests are set, and requiring health probes. There are different ways to implement policy and governance and make your policies stick, and your choice may depend on the size of your organization, the maturity level of your Kubernetes environment, and other considerations. Regardless of how you proceed, you’ll need visibility across teams and clusters and a way to effectively and consistently manage policies in order to run Kubernetes securely and efficiently.
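One way to enforce a guardrail like “only allow images from trusted repositories” is with an admission-time policy engine. The sketch below assumes Kyverno as the engine; the policy name and registry URL are illustrative, and other engines (e.g. OPA Gatekeeper) express the same rule differently.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-trusted-registry   # illustrative policy name
spec:
  validationFailureAction: Enforce  # reject non-compliant workloads
  rules:
    - name: trusted-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from the trusted registry."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"  # hypothetical trusted registry
```

With `validationFailureAction: Enforce`, the cluster rejects Pods whose images don’t match the pattern, giving developers feedback at deploy time rather than after an incident.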

Learn more, watch the webinar on demand: How To Run Kubernetes Securely And Efficiently

See how Fairwinds Insights reduces your Kubernetes risk!