
Now Available: Fairwinds Polaris 4.0 — Policy for Kubernetes Resources

Fairwinds Polaris has reached version 4.0, with some awesome new features! (For those keeping score, we jumped past 2.0 and 3.0 pretty quickly due to some breaking changes.)

We originally wrote Polaris as a way to help Kubernetes users avoid common pitfalls when deploying their workloads. Over the course of managing hundreds of clusters for dozens of organizations, the Fairwinds SRE team kept seeing the same mistakes over and over: resource requests and limits going unset, liveness and readiness probes being ignored, and containers requesting completely unnecessary security permissions. These are sure-fire ways to create headaches down the line — from outages to cost overruns and even security breaches. We saw Polaris as a way to encode all our battle scars into a single configuration validator that could benefit the entire Kubernetes community.

As Polaris has grown from a Dashboard to an Admission Controller (to prevent these resources from making it into the cluster), and now a CI tool (to prevent them from even making it into the infrastructure-as-code repo), we’ve gotten more and more requests to implement new checks, such as whether an Ingress is using TLS or whether a Deployment has a PodDisruptionBudget attached to it.


To better satisfy these needs, we’ve implemented three major new features in our Custom Check functionality:

  • The ability to check non-workload kinds, such as Ingresses, Services, and ClusterRoles
  • The ability to reference other fields inside the schema
  • The ability to cross-check resources, for example, ensuring a Deployment has a corresponding PodDisruptionBudget

Supporting Non-workload Kinds

Polaris was initially designed to check the workloads running in a cluster: Pods, and anything that creates Pods, such as Deployments, CronJobs, and StatefulSets. This is where we saw the most painful mistakes being made, and it was a natural place to start.

However, as teams started to deploy Polaris and see the value of having controls around workload configuration, they saw natural opportunities to check other Kubernetes resources as well. For example, some companies have internal or regulatory requirements regarding the use of TLS for Ingress, and want to check that every Ingress object has TLS enabled.

Adding support for new resource types took a little refactoring. Initially we only had to retrieve a fixed set of resource types, so we were able to use the nicely typed client-go functions, such as Deployments("").List(). But supporting arbitrary kinds requires the dynamic client, which demands a lot more care due to the lack of type safety.

To get you started, we’ve implemented a check to ensure Ingress objects are using TLS. If there are other checks you have in mind, you can add them to your own Polaris configuration, or even better, open a PR to contribute them back to the core repo!
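
For instance, here’s a minimal sketch of what such a check can look like, flagging any Ingress that doesn’t set spec.tls (the built-in check that ships with Polaris may differ in its details):

successMessage: Ingress is using TLS
failureMessage: Ingress should be configured to use TLS
kinds:
- Ingress
schema:
  '$schema': http://json-schema.org/draft-07/schema
  type: object
  required: ["spec"]
  properties:
    spec:
      type: object
      required: ["tls"]
      properties:
        tls:
          type: array
          minItems: 1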

Supporting Self-references

JSON Schema is a really intuitive and powerful way to validate resources, but when compared to more programmatic frameworks, such as OPA, there’s a tradeoff: JSON Schema is simpler, but it can’t do some of the more complex things OPA can do.

In particular, Polaris 2.0 had no way of self-referencing. For example, you might want to check that the app.kubernetes.io/name label matches the metadata.name field. OPA can do this pretty easily:

package fairwinds

labelMustMatchName[result] {
  input.metadata.labels["app.kubernetes.io/name"] != input.metadata.name
  result := {
    "description": "label app.kubernetes.io/name must match metadata.name"
  }
}

To support this in Polaris, we’ve added some basic templating to our JSON Schema support:

successMessage: Label app.kubernetes.io/name matches metadata.name
failureMessage: Label app.kubernetes.io/name must match metadata.name
kinds:
- Deployment
schema:
  '$schema': http://json-schema.org/draft-07/schema
  type: object
  properties:
    metadata:
      type: object
      properties:
        labels:
          type: object
          required: ["app.kubernetes.io/name"]
          properties:
            app.kubernetes.io/name:"{{ metadata.name }}"

While this still doesn’t give all the flexibility that OPA does, it allows us to tackle the majority of the use cases that Polaris 2.0 couldn’t solve.

Supporting Cross-Resource References

One of the first and most common requests we’ve gotten is the ability to check that Deployments have an associated PodDisruptionBudget or HorizontalPodAutoscaler. These resources are critical for keeping Deployments available during disruptions and scaling them properly, and they’re an important part of most organizations’ deployment strategies, so it’s a natural thing to want to check for.

The challenge here is that Polaris checks are defined using JSON Schema. This is great for single resources — we just validate the JSON returned by the Kubernetes API against the check’s schema. But in order to support cross-referencing, we had to do a few things:

  • Provide a way to access non-controller resources (✅ above)
  • Template out certain fields, so that, e.g., the name of the Deployment can be asserted in the PodDisruptionBudget (✅ above)
  • Provide syntax for defining multiple schemas within the same check

Without further ado, here’s the check we created to ensure a PDB is attached to all deployments, which uses all three of the new features:

successMessage: A PodDisruptionBudget is attached
failureMessage: Should have a PodDisruptionBudget
kinds:
- Deployment
schema:
  '$schema': http://json-schema.org/draft-07/schema
  type: object
  properties:
    metadata:
      type: object
      properties:
        labels:
          type: object
          required: ["app.kubernetes.io/name"]
          properties:
            app.kubernetes.io/name:
              type: string
              const: "{{ metadata.name }}"
additionalSchemas:
  PodDisruptionBudget:
    '$schema': http://json-schema.org/draft-07/schema
    type: object
    properties:
      metadata:
        type: object
        properties:
          name:
            type: string
            const: "{{ metadata.name }}-pdb"
      spec:
        type: object
        properties:
          selector:
            type: object
            properties:
              matchLabels:
                type: object
                properties:
                  app.kubernetes.io/name:
                    type: string
                    pattern: "{{ metadata.name }}"

A few things to note here:

First, the kinds field tells Polaris which resources to associate this check with. That is, if the check above fails, you’ll see an ❌ next to the relevant Deployment (not next to a PDB).

Next, the schema field works like it always did, checking against the main resource.

Finally, the additionalSchemas field is a map from Kind to a JSON Schema. In the check above, Polaris will look through all the PDBs in the same namespace, and try to find one that matches the schema. If it doesn’t find anything, the check will fail.
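
To make that concrete, here’s a hypothetical Deployment and PodDisruptionBudget pair that would satisfy the check above (all names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app.kubernetes.io/name: api  # must equal metadata.name per the main schema
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: api
  template:
    metadata:
      labels:
        app.kubernetes.io/name: api
    spec:
      containers:
      - name: api
        image: nginx:1.21  # placeholder image
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb  # matches the templated const "{{ metadata.name }}-pdb"
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: api  # matches the templated pattern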

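If you want to experiment with these features, custom checks are defined under the customChecks section of the Polaris configuration and enabled by name alongside the built-in checks. Here’s a rough sketch reusing the self-reference check from earlier (the check name and severity level are our own choices):

checks:
  labelMatchesName: warning  # enable the custom check at "warning" severity
customChecks:
  labelMatchesName:
    successMessage: Label app.kubernetes.io/name matches metadata.name
    failureMessage: Label app.kubernetes.io/name must match metadata.name
    kinds:
    - Deployment
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        metadata:
          type: object
          properties:
            labels:
              type: object
              required: ["app.kubernetes.io/name"]
              properties:
                app.kubernetes.io/name:
                  type: string
                  const: "{{ metadata.name }}"

You can then point Polaris at it with polaris audit --config your-config.yaml.
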
Conclusion

We’re excited about the latest Polaris release and the new features and capabilities we’ve added. Today, Polaris has more than 10,000 users spanning all industries. If you’re interested in managing Polaris across a fleet of clusters, collaborating across teams, or tracking findings over time, take a look at Fairwinds Insights, our platform for continuous Kubernetes security monitoring and governance.

We also hope Polaris users will join our new Fairwinds Open Source Software User Group. We’re excited about your contributions to Polaris and working together to validate and enforce Kubernetes deployments.
