
2023 Kubernetes Benchmark Report: The State of Kubernetes Workload Costs

Organizations continue to move to the cloud. In fact, according to Flexera’s 2022 Tech Spend Pulse, 65% of respondents rank cloud and cloud migrations as a top priority for the coming year. Digital transformation is more important than ever for most (74%), and that initiative is driving an increased willingness to move to the cloud. And yet managing cloud and Kubernetes costs remains an ongoing challenge for most organizations, according to the CNCF’s FinOps for Kubernetes report, “Insufficient – or Nonexistent – Kubernetes Cost Monitoring is Causing Overspend.” So what does that look like when analyzing the efficiency of Kubernetes workloads?

Using data from more than 150,000 workloads collected in 2022, Fairwinds analyzed current trends and compared them to the previous year to create the 2023 Kubernetes Benchmark Report. While Kubernetes adoption continues to grow, following best practices remains challenging for many organizations. Falling short of those best practices results in cloud cost overruns, increased security risk, and reduced reliability for cloud apps and services.

To make sure your Kubernetes cluster is as efficient as possible, you need to set resource requests and limits correctly. If you set your memory limits too low on an application, for example, Kubernetes will kill its containers whenever they exceed those limits. On the other hand, if you set your requests and limits too high, you are over-allocating resources, which results in a higher cloud bill.
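As a concrete point of reference, here is a minimal, hypothetical container spec showing where requests and limits live; the names and values are placeholders you would replace with numbers based on your workload’s observed usage:

```yaml
# Illustrative sketch only: the name, image, and resource values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0   # placeholder image
          resources:
            requests:
              cpu: 250m       # reserved for scheduling; set near typical usage
              memory: 256Mi
            limits:
              cpu: 500m       # CPU is throttled above this
              memory: 512Mi   # the container is OOMKilled if it exceeds this
```

Requests set close to real usage keep the scheduler from over-reserving nodes, while limits act as a ceiling that protects neighboring workloads.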

So, when it comes to cost efficiency, are organizations trending in the right direction? Let’s dive into the data to find out. 

CPU Requests and Limits 

There’s good news when it comes to setting CPU requests and limits. The benchmark data shows that 72% of organizations are setting only up to 10% of their workload limits too high. Only one percent of organizations had 91-100% of their workloads impacted by CPU limits set too high.

Graph showing 72% of organizations are setting up to 10% of their workload limits too high and 94% of organizations are setting 0-10% of workload limits too low

As for CPU limits set too low, 94% of organizations are setting just 0-10% of their workload limits too low.

Memory Limits too High 

Similar to the benchmark report released in 2022, organizations have memory limits set too high for nearly 50% of workloads. This year, though, the percentage of workloads impacted has increased. In the data collected in 2021, only 3% of organizations had 51-60% of workloads impacted. That number has grown significantly: now, 30% of organizations have at least 50% of workloads impacted by memory limits set too high. That equates to a lot of wasted cloud resources. Adjusting these memory limits so they align with actual workload needs will help you take control of, and reduce, an inflated cloud bill.

Memory Limits too Low 

Interestingly, many organizations (67%, down slightly from 70% last year) are still setting memory limits too low on at least 10% of their workloads. While the number of workloads impacted is relatively low, it is important to remember that setting memory limits too low reduces the reliability of clusters. Once again, you can rightsize these limits for your applications to make sure they don’t fail under pressure. As an additional benefit, adjusting them properly also helps ensure that you are not wasting cloud resources.

Graph showing 67% of organizations are setting memory limits too low on at least 10% of their workloads

Out-of-memory errors (OOMKills) happen. To reduce the potential impact of an OOMKill, Fairwinds Insights includes a tool that detects and reports OOMKills, so you know when one happens and can respond quickly. If you choose to enable it, the tool can also increase memory automatically to help avoid downtime.
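Even without a tool watching for them, you can spot an OOMKill in a Pod’s status (for example, via kubectl describe pod or kubectl get pod -o yaml). A container that exceeded its memory limit will show a terminated state like this excerpt (the container name is hypothetical):

```yaml
# Excerpt of a Pod's status after a container exceeded its memory limit
containerStatuses:
  - name: example-app
    restartCount: 3
    lastState:
      terminated:
        reason: OOMKilled   # the kernel killed the process for exceeding its memory limit
        exitCode: 137       # 128 + SIGKILL (9)
```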

Memory Requests too Low 

Unsurprisingly, application reliability gets really challenging, really fast when memory requests are set too low. The good news is that for 59% of organizations, this issue impacts only up to about 10% of their workloads, which is similar to what the 2021 data analysis showed.

If you want to avoid efficiency (and reliability) issues related to memory requests that are set too high or too low, look for tools that analyze usage and make suggestions to help you adjust your memory requests appropriately. Goldilocks is an open source tool that helps you determine what changes you need to make. If you are running multiple clusters across multiple teams, try the Fairwinds Insights platform. Our free tier is perfect for environments up to 20 nodes, two clusters, and one repo.
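For reference, Goldilocks leans on the Vertical Pod Autoscaler in recommendation mode and is enabled per namespace with a label. A minimal sketch (the namespace name is hypothetical, and the VPA components must already be installed in your cluster) looks like this:

```yaml
# Goldilocks watches for this label and creates VPA objects in recommendation
# mode for the workloads in the namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  labels:
    goldilocks.fairwinds.com/enabled: "true"
```

Once the namespace is labeled, the Goldilocks dashboard surfaces suggested requests and limits that you can compare against what is currently deployed.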

Memory Requests too High 

In this area, 34% of organizations had set memory requests too high on at least 10% of their workloads. Unfortunately, 82% of organizations are now setting memory requests too high on at least some workloads, a significant increase. The data analysis showed that more workloads are impacted by memory requests set too high than in the previous year, which is an unwelcome trend.

Takeaways on Kubernetes Workload Costs 

While most of the efficiency-related Kubernetes workload settings didn’t vary significantly from the previous year, they aren’t trending in as positive a direction as you might hope. 2023 is already shaping up to be an interesting year, focused on more careful analysis of cost and attention to where and how to adjust cloud spend to achieve maximum reliability for the lowest spend. As organizations shift more apps and services to the cloud, it’s important to keep a close eye on how many resources workloads are consuming. Read the full report to learn more about these results and gain insight into trends in security and reliability.

Read the complete Kubernetes Benchmark Report today.