
Check Out the Specifics of Our New Goldilocks Upgrade

The original goal of our open source project, Goldilocks, was to provide a dashboard utility for identifying a baseline for Kubernetes resource requests and limits. To provide recommendations, we use the Vertical Pod Autoscaler (VPA), a controller stack that includes a recommendation engine, which assesses the current resource usage of your pods and generates guidelines.

The Goldilocks dashboard visualizes the VPA recommendations, so you can visit a service in your cluster and see two types of recommendations, depending on which Quality of Service (QoS) class you need for your deployments. QoS class is a Kubernetes concept that determines the order in which pods are scheduled and evicted; Kubernetes itself assigns a QoS class to each pod based on the pod's resource requests and limits.
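As a rough illustration of how Kubernetes derives a QoS class from requests and limits, here is a simplified Go sketch. This is not the actual Kubernetes implementation (the real rules live in the kubelet and cover more cases), just the core idea:

```go
package main

import "fmt"

// resources holds a container's CPU/memory requests and limits as strings,
// mirroring in simplified form the fields Kubernetes inspects when it
// assigns a QoS class.
type resources struct {
	Requests map[string]string
	Limits   map[string]string
}

// qosClass returns the QoS class Kubernetes would assign, simplified:
//   - Guaranteed: every container sets CPU and memory limits, and any
//     explicit requests equal those limits
//   - Burstable:  at least one request or limit is set, but not Guaranteed
//   - BestEffort: no requests or limits anywhere
func qosClass(containers []resources) string {
	anySet := false
	guaranteed := true
	for _, c := range containers {
		if len(c.Requests) > 0 || len(c.Limits) > 0 {
			anySet = true
		}
		for _, res := range []string{"cpu", "memory"} {
			limit, hasLimit := c.Limits[res]
			request, hasRequest := c.Requests[res]
			// A missing request defaults to the limit, so only an
			// explicit mismatch (or a missing limit) breaks Guaranteed.
			if !hasLimit || (hasRequest && request != limit) {
				guaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case guaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	fmt.Println(qosClass(nil)) // BestEffort
	fmt.Println(qosClass([]resources{{
		Requests: map[string]string{"cpu": "100m", "memory": "128Mi"},
		Limits:   map[string]string{"cpu": "100m", "memory": "128Mi"},
	}})) // Guaranteed
	fmt.Println(qosClass([]resources{{
		Requests: map[string]string{"cpu": "100m"},
	}})) // Burstable
}
```

Because the class follows directly from how requests and limits are set, the recommendations Goldilocks surfaces map naturally onto the QoS class you are targeting.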

Because getting resource requests and limits just right is an ongoing challenge for most organizations, we continue to regularly refine Goldilocks, offering periodic updates of our changes. We recently performed some significant upgrades to Goldilocks and are excited to share these improvements with our open source community. 

Try Fairwinds Insights to get the benefits of Goldilocks at enterprise scale.

See how Insights and Goldilocks compare. 

What’s new with Goldilocks?

Pull requests #373 and #376 bring multi-controller support to Goldilocks. Before this update, Goldilocks could only create VPA objects for Deployments. With these pull requests in place, Goldilocks can now create VPA objects for any higher-level workload controller that uses the standard pod template specification: spec.template.spec.containers. This change greatly expands the number of workloads that Goldilocks can report on, which means more recommendations for workload resources in your cluster.
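The reason one code path can serve many controller kinds is that Deployments, DaemonSets, StatefulSets, and Jobs all nest their containers at the same spec.template.spec.containers path. A hypothetical sketch of extracting that path from a generic (unstructured) workload object; Goldilocks' real implementation uses the Kubernetes client libraries:

```go
package main

import "fmt"

// podTemplateContainers walks a generic workload object (decoded JSON/YAML)
// and returns the container list at spec.template.spec.containers, the
// standard pod template path shared by higher-level workload controllers.
func podTemplateContainers(obj map[string]interface{}) ([]interface{}, bool) {
	cur := obj
	for _, key := range []string{"spec", "template", "spec"} {
		next, ok := cur[key].(map[string]interface{})
		if !ok {
			return nil, false
		}
		cur = next
	}
	containers, ok := cur["containers"].([]interface{})
	return containers, ok
}

func main() {
	// A minimal DaemonSet-shaped object; any controller with the same
	// pod template layout works identically.
	daemonSet := map[string]interface{}{
		"kind": "DaemonSet",
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []interface{}{
						map[string]interface{}{"name": "node-exporter"},
					},
				},
			},
		},
	}
	if containers, ok := podTemplateContainers(daemonSet); ok {
		fmt.Printf("found %d container(s)\n", len(containers))
	}
}
```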

Very rarely does a Kubernetes cluster have pods created only by Deployments. DaemonSets, and to a lesser extent StatefulSets, make up a significant portion of workloads, and Goldilocks did not make recommendations for containers created by these workload types until now.

[Image: Goldilocks namespace details]

What are the specifics of the pull requests?

The first pull request (#373) is the backend change: it updates the controller to watch for pod creation and to determine the parent workload of each pod. If that parent object has the proper annotation (or lives in a namespace carrying the label Goldilocks looks for), the VPA is created.
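That opt-in check can be sketched as follows. The label key below is taken from the Goldilocks project docs, but treat it as an assumption and confirm it against your Goldilocks version; the logic is a simplification, not the project's actual code:

```go
package main

import "fmt"

// enabledLabel is the opt-in key Goldilocks looks for on namespaces (and,
// per the project docs, as a workload annotation). Assumed here; check
// your version's documentation for the exact key.
const enabledLabel = "goldilocks.fairwinds.com/enabled"

// vpaEnabled reports whether a VPA should be created, given the parent
// workload's annotations and its namespace's labels. Simplified sketch of
// the opt-in check described above.
func vpaEnabled(workloadAnnotations, namespaceLabels map[string]string) bool {
	if workloadAnnotations[enabledLabel] == "true" {
		return true
	}
	return namespaceLabels[enabledLabel] == "true"
}

func main() {
	nsLabels := map[string]string{enabledLabel: "true"}
	fmt.Println(vpaEnabled(nil, nsLabels)) // true: the namespace opted in
}
```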

This capability flips the previous method on its head: before, we only watched for created Deployments. By watching for pods and then inferring the parent controller, we can cover many more controller types, including types no one has dreamed up yet, as long as they follow the pod template specification mentioned above.
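The inference step works because every pod created by a controller carries an owner reference pointing back at it, with exactly one owner flagged as the managing controller. A minimal sketch of that lookup, assuming a trimmed-down struct in place of the real metav1.OwnerReference; the real code would then resolve the owner (for example, a ReplicaSet up to its Deployment) via the Kubernetes API:

```go
package main

import "fmt"

// ownerRef mirrors the fields of metav1.OwnerReference relevant here.
type ownerRef struct {
	Kind       string
	Name       string
	Controller bool
}

// controllerOwner returns the owner reference flagged as the managing
// controller, i.e. the parent workload that created the pod.
func controllerOwner(refs []ownerRef) (ownerRef, bool) {
	for _, ref := range refs {
		if ref.Controller {
			return ref, true
		}
	}
	return ownerRef{}, false
}

func main() {
	// A pod created by a DaemonSet carries an owner reference back to it.
	refs := []ownerRef{{Kind: "DaemonSet", Name: "fluentd", Controller: true}}
	if owner, ok := controllerOwner(refs); ok {
		fmt.Printf("parent workload: %s/%s\n", owner.Kind, owner.Name)
	}
}
```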

The second pull request (#376) ensures that the dashboard portion of Goldilocks actually shows the recommendations, since some of its code previously looked only for Deployments.

How can you contribute to Goldilocks?

Goldilocks is open source and available on GitHub. We are committed to improving its ability to handle large clusters with hundreds of namespaces and VPA objects. In the summer of 2021, we also changed how Goldilocks is deployed, adding a VPA sub-chart you can use to install both the VPA controller and its resources. We plan to continue improving all of our open source projects and welcome your contributions!

Goldilocks is also part of our Fairwinds Insights platform, which provides multi-cluster visibility into your Kubernetes clusters so you can configure your applications for scale, reliability, resource efficiency and container security. Interested in using Fairwinds Insights? It’s available for free! Learn more here.

Join our open source community and check out our next meetup on Dec 14, 2021. Join us for a chance to win some Fairwinds treats!

See how Fairwinds Insights reduces your Kubernetes risk!