For the past eight years, I’ve split my time between helping enterprises build application security programs and developing security products that make it easier for developers and security teams to work toward a common mission.
I generally refer to my past experience as “everything to the left of test,” meaning it’s largely focused on developer tooling and continuous integration (CI). Maybe you call that the “DevSec” part of DevSecOps. Now that I’ve joined Fairwinds, however, I’ve taken a dip into all things continuous deployment (CD) and operations. My new bias is shaped by Fairwinds’ core mission: helping businesses succeed in the transition to cloud native technologies like containers and Kubernetes.
The core mission continues
My biggest realization, having come full circle with DevSecOps, is that the core business challenge persists: accelerating development velocity while maintaining reliability, scalability, and security.
Even in the world of Kubernetes and containers, these two business objectives are still in tension. Kubernetes is making its way through the technology adoption lifecycle and starting to become a mainstream solution. Going mainstream means striking a balance: abstracting just enough of the infrastructure layer so developers can deploy freely, without losing important governance and risk controls.
Application security is further along the curve than Kubernetes. We know this because it has become an essential part of any enterprise development process. SaaS-based platforms have become the de facto way to integrate testing into CI/CD pipelines, while also providing the centralized governance and reporting that AppSec programs need. This cloud-based, self-service, fully integrated approach has enabled security teams to meet developers where they work without losing visibility.
History repeats itself
Kubernetes deployments, which manage how stateless microservices run in a cluster, can still cause headaches for development, security, and ops. While it has become easier for development teams to create a cluster and integrate monitoring tools, configuring and managing deployments remains complicated.
For a development team that is new to Kubernetes, or even to infrastructure as code, it’s easy to neglect critical pieces of a deployment configuration. For example, deployments may seem to work just fine without readiness and liveness probes in place, or without resource requests and limits. And from a security perspective, it’s not obvious when a deployment is over-permissioned with root access.
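To make those pieces concrete, here is a sketch of what they look like in a deployment manifest. All names, images, ports, and values below are illustrative, not a recommendation for any particular workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: example/web:1.0.0    # placeholder image
          ports:
            - containerPort: 8080
          # Probes let Kubernetes send traffic only to healthy pods
          # and restart containers that stop responding
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
          # Requests inform the scheduler; limits cap runaway usage
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          # Avoid root access unless the workload truly requires it
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```

None of these fields are required for a deployment to run, which is exactly why they’re easy to forget until something fails in production.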
Fortunately, there may be some lessons that can be learned from the application security space.
Fairwinds recently launched two open source projects, Fairwinds Polaris and Fairwinds Goldilocks, that tackle this challenge. Fairwinds Polaris helps engineers keep their deployment manifests aligned with best practices, detecting issues related to security, networking, container images, and more. Fairwinds Goldilocks saves engineering time by recommending resource requests and limits (essentially, CPU and memory settings) for Kubernetes deployments. Both tools fit nicely at the hand-off from development to production, giving developers a feedback loop before they release.
Our SaaS platform, Fairwinds Insights, aims to meet the enterprise’s “single pane of glass” requirement. We see Fairwinds Insights as an essential way to provide visibility into Kubernetes cluster and deployment configurations, so improvements can be prioritized and planned across multiple teams with ease.
This new era of Kubernetes deployment validation enables development teams to move fast while avoiding costly security and reliability problems in production. Ensuring applications are not only built securely but also deployed securely is why DevSecOps is so important for cloud native application development. Let me know your experience and what you think!