As I work with organizations of all sizes, one theme I keep running into is that visibility does not equal action. Kubernetes offers a lot of power, but it also requires a lot of configuration. Just because you have visibility into that configuration doesn’t mean you can act on it. Today I’m going to talk about Kubernetes remediation at scale and the difference between visibility and action.
The entire cloud native community is built on open source technologies. We are using Kubernetes, Prometheus, etcd, Helm, Linkerd, and Open Policy Agent, among many other graduated and incubating projects.
These open source technologies are great, and we are happy to have developed a number of open source tools at Fairwinds to help the community. The challenge comes when you are running these tools across multiple clusters. Are you running them consistently? Are you configuring them correctly? Open source at scale can be a challenge, so while you may use these tools to gain visibility, you will most likely waste time linking datasets across multiple clusters and workloads. And even then, that’s only the visibility step.
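To illustrate why linking those datasets by hand is tedious, here is a minimal Python sketch. The cluster names and finding fields are hypothetical; real audit tools each emit their own JSON schema, so in practice you would first normalize each tool's output into a shape like this.

```python
# Minimal sketch of merging audit findings from multiple clusters.
# Cluster names and finding fields are hypothetical placeholders.
from collections import defaultdict

def merge_findings(per_cluster_findings):
    """Group findings by (check, workload) so one issue that appears
    in several clusters shows up as a single row."""
    merged = defaultdict(list)
    for cluster, findings in per_cluster_findings.items():
        for f in findings:
            key = (f["check"], f["workload"])
            merged[key].append(cluster)
    return {key: sorted(clusters) for key, clusters in merged.items()}

audits = {
    "prod-us": [{"check": "runAsRoot", "workload": "api"}],
    "prod-eu": [{"check": "runAsRoot", "workload": "api"},
                {"check": "missingLimits", "workload": "worker"}],
}

merged = merge_findings(audits)
# The same misconfiguration surfaces once, tagged with every affected cluster.
print(merged[("runAsRoot", "api")])
```

Even this toy version glosses over the hard parts: reconciling different severity scales, deduplicating across tool overlap, and keeping the merged view current as clusters change.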
When you are running a multi-cluster environment, potentially across multiple clouds, how do you know what’s happening? Many platform engineers and DevOps leaders we speak with spend time auditing clusters to identify Kubernetes misconfigurations or security concerns. They check and fix issues manually, or worse, become the “Kubernetes help desk.” In multi-cluster environments, they often run multiple overlapping tools or build different dashboards just to figure out what is happening. It’s time-consuming, and not everything gets done well or correctly.
The visibility you need when running Kubernetes at scale is important, but even more important is the ability to make remediations at scale.
Engineering time is stretched thin. There is no time for another tool that doesn’t integrate into an engineer’s daily workflow. What’s needed is a Kubernetes governance solution that provides visibility and remediation advice, and integrates into how engineers and developers already work.
Fairwinds Insights is a Kubernetes governance platform that does exactly that. It scans in CI/CD and continuously scans any Kubernetes cluster to identify misconfigurations, security risks, wasted cloud spend, and performance challenges. The power of Fairwinds Insights is its integrations with the third-party tools your team already uses.
For example, when the log4j CVE was announced, the Fairwinds Insights platform scanned containers and identified CVEs, including log4j. If a container was at risk, Insights created an Action Item and raised its severity to critical, reflecting the risk associated with log4j. But what if the security engineer or lead developer didn’t log in that day? How would they know about the vulnerability?
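Conceptually, that kind of escalation is a simple rule over scanner output. The sketch below is an illustration of the idea only, not the Fairwinds Insights API; the function and field names are hypothetical. The CVE IDs are real: CVE-2021-44228 and CVE-2021-45046 are the log4j (Log4Shell) vulnerabilities.

```python
# Hypothetical sketch of turning a vulnerability finding into an
# action item, escalating severity for actively exploited CVEs.
# This is NOT the Fairwinds Insights API, just the concept.
HIGH_PROFILE_CVES = {"CVE-2021-44228", "CVE-2021-45046"}  # log4j / Log4Shell

def to_action_item(finding):
    """Map a raw scanner finding to an action item, bumping the
    severity to critical for known high-profile CVEs."""
    severity = finding["severity"]
    if finding["cve"] in HIGH_PROFILE_CVES:
        severity = "critical"
    return {"title": f"{finding['cve']} in {finding['image']}",
            "severity": severity}

item = to_action_item({"cve": "CVE-2021-44228",
                       "image": "acme/api:1.2",
                       "severity": "high"})
print(item["severity"])  # escalated to critical
```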
That’s why Insights integrates natively with Jira, Slack, and a number of other tools. When a high-severity or critical issue is identified, users can set rules so that Action Items, by cluster or namespace, are turned into Jira tickets or Slack notifications. Engineers and developers can then remediate the issue without changing the way they work; Insights provides remediation advice that appears in the Jira ticket or Slack message.
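The routing described above can be sketched as a small matching function. This is a simplified illustration under assumed names; the rule fields and destination strings are hypothetical, not the actual Insights rule engine or its integration configuration.

```python
# Simplified sketch of severity- and cluster-based routing of action
# items to Jira or Slack. Field names and destinations are hypothetical.
def route(action_item, rules):
    """Return every destination whose rule matches this action item."""
    destinations = []
    for rule in rules:
        if (action_item["severity"] in rule["severities"]
                and action_item["cluster"] in rule["clusters"]):
            destinations.append(rule["destination"])
    return destinations

rules = [
    {"severities": {"critical"}, "clusters": {"prod-us", "prod-eu"},
     "destination": "jira:SEC"},
    {"severities": {"critical", "high"}, "clusters": {"prod-us"},
     "destination": "slack:#platform-alerts"},
]

item = {"severity": "critical", "cluster": "prod-us",
        "title": "CVE-2021-44228 in acme/api"}
print(route(item, rules))  # a critical prod-us issue matches both rules
```

The point of the pattern is that the engineer never has to watch a dashboard: the rule decides where the issue lands, and the remediation advice travels with it.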
Interested in using Fairwinds Insights? It’s available for free! Learn more here.
This step turns an issue into action, which is essential for any organization that wants to run Kubernetes at scale. The goal is to convert issues into manageable, bite-sized steps, whatever the issue might be: reducing security risk, optimizing cost, or providing guardrails.
Employing Kubernetes governance software like Fairwinds Insights not only turns visibility into action, it also unites DevOps and development teams. No longer does the DevOps team need to act like a help desk. Instead, they are arming developers with the information they need to remediate issues within their applications and clusters. It truly embraces the Kubernetes service ownership model, where developers can “code it, ship it, and own it.”