
Are You Having These 5 Disagreements About Kubernetes? You Should Be!

As we head into 2023, it’s a great time to think about how you are using your tech stack, particularly Kubernetes. Many organizations jumped into Kubernetes in 2020, spurred to greater cloud adoption by the global pandemic. If you’re one of them, you probably started with a trial project before moving more of your applications onto the cloud, and you may not have looked back since. Now that things have settled down a bit, it’s worth stepping back to consider how you’re using Kubernetes and to give some thought to these five disagreements (or discussions) you may have missed in the rush.

1. Giving your CEO access to your Kubernetes environment

If you run a one-person organization, or you are a self-employed Kubernetes consultant, then sure, your CEO (that is, you) should have access to your Kubernetes environment. In almost every other situation, it's probably a bad idea.

Especially in a small startup, permissions can be a very touchy subject. You may have started out with everyone in the company building applications or services and deploying to Kubernetes. But that’s not sustainable long term. Maybe your CEO is a former CTO and loves to have their hand in the game, but it really isn’t ideal as your company grows. As the CEO, you don’t need to have your hands in every bit of the infrastructure, including Kubernetes. Many CTOs don’t need that level of access either. 

Your developers, on the other hand, really need to understand the environment they are deploying into and how it works. The better your development team understands Kubernetes, the more stable the applications they write and deploy into that environment will be. That understanding also helps your teams incorporate Kubernetes security into the development process and shift it further left, making your applications more secure, and it supports a Kubernetes service ownership approach, because developers know the environment where their apps actually run. Having a thoughtful discussion about who should (and who should not) have access to your Kubernetes environment, and why, is an important step in increasing Kubernetes maturity in your organization.
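As one illustrative sketch of what "thoughtful access" can look like, a namespaced RBAC Role and RoleBinding can scope a development team to the resources they actually deploy, instead of handing out cluster-wide access. All names, namespaces, and groups below are hypothetical:

```yaml
# Hypothetical example: scope the "app-dev-team" group to its own namespace
# rather than granting cluster-admin. Names are illustrative, not prescriptive.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps
  name: app-developer
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: app-developer-binding
subjects:
  - kind: Group
    name: app-dev-team    # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io
```

Because Kubernetes RBAC is deny-by-default, anyone not bound to a role like this (a CEO, for example) simply has no access, with no explicit deny rule required.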

2. Deploying all services into your Kubernetes environment

Some organizations take the approach of only using services and tools that were built internally, and then they may want to put all of those things into Kubernetes. That may work in some large organizations, like Netflix or Google, and a lot of open source tools have come out of their work, which is excellent for the open source community. As a general rule, however, it makes the most sense to use the tools that are best for the job. If the service or tool is not your core competency, and not something that makes your company money, you probably shouldn't spend a lot of time building, maintaining, and running it yourself. That becomes a distraction from the work you should be doing to make your organization more successful.

There are already a lot of solutions out there to run your databases, your queues, and your email service, and they are backed by large, experienced teams. You should probably be using those purpose-built services because they work. For example, there's a substantial difference between running MySQL yourself in your Kubernetes cluster and the way Amazon runs a relational database with RDS, because that is what the team there is focused on doing well: setting up, operating, and scaling a relational database in the cloud quickly and easily. If you are solving genuinely new problems, and building the service yourself would make it work better for the thing you're trying to build, then it might be appropriate to deploy that service into your Kubernetes environment. For the vast majority of companies, especially small- to medium-sized ones, that rarely makes sense.

3. Putting your developers in charge of *everything* — all the way through to production

Hopefully, the first thing you do when you onboard a new developer isn’t to give them access to everything. There are certainly advantages to having developers take ownership of their applications, workload services, et cetera. And really, your developers are in the best position to know how things need to be configured, how the code works, and how it will behave once deployed. But to make developers effective, you also need additional tools to help them through that process.

As we all know, Kubernetes can be a complex environment. Using it efficiently requires either a lot of training to bring developers up to speed or tooling that enables them. Tooling can put the right guardrails in place so that developers follow best practices when deploying services into Kubernetes.
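One concrete guardrail Kubernetes provides out of the box is a LimitRange, which applies default resource requests and limits to any container that forgets to set its own. A minimal sketch, with illustrative namespace and values:

```yaml
# Hypothetical guardrail: give every container in the namespace sane default
# CPU/memory requests and limits, so a missing "resources" block in a
# developer's manifest can't destabilize a shared cluster.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
  namespace: team-apps
spec:
  limits:
    - type: Container
      default:            # applied when a container sets no limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:     # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```

Policy tools can layer richer checks on top of this, but defaults like these catch one of the most common deployment mistakes automatically.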

It would be fairly unusual to hire an engineer who can do everything: they know Kubernetes, observability, CI/CD, front-end and back-end development, and databases, and they own it all the way through to production. If you do have one such person, and everyone goes to them for everything because they are the only one who knows how to solve all of the problems, they are going to have a really hard time doing everything well. If, instead, you enable your developers to deploy and own their own services all the way to production, they can be responsible for the parts they really understand, while others can be responsible for helping them do that in a reasonable and responsible way.

4. Letting your CTO retain access to SSH into individual containers

If you are letting your CTO SSH or exec into individual containers in production, please stop. No one should be able to exec into a container in a production environment. The goal of Kubernetes is immutable infrastructure, where components are replaced rather than changed in place. If you are:

  • Writing secure containers that are distroless or from scratch
  • Deploying containers that are not bloated 
  • Using good tooling around your cluster for observability
  • Shipping your logs somewhere

Then you shouldn't need to SSH into production containers at all.
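One way to enforce the no-exec rule rather than relying on habit is RBAC that grants read and log access but deliberately omits the `pods/exec` subresource. A sketch, with a hypothetical namespace and role name:

```yaml
# Hypothetical read-only role for a production namespace: pod status and
# logs are viewable, but there is no rule for "pods/exec", so
# `kubectl exec` is denied for anyone bound only to this Role
# (Kubernetes RBAC is deny-by-default).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: prod-read-only
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
```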

Remember, Kubernetes isn’t about keeping one machine running and healthy forever and ever. In Kubernetes, you can simply kill a pod and spin up another one using kubectl. There’s no reason to worry about each individual container. That’s a very important part of the paradigm shift that Kubernetes enables. In the past, far too much of the troubleshooting process involved turning a machine off and on again; Kubernetes does that for you automatically to keep things running smoothly. In some organizations, you may need shell access due to the size of the organization and your own unique deployment and environment, which is why this is an important discussion (or disagreement) to have. For most organizations, however, the CTO does not need to retain SSH access to individual containers.

5. Forgetting about your cloud cost

Conventional wisdom says that you don’t need to worry about your cloud cost, claiming that it is always cheaper than owning your own infrastructure. And you may be in a position where your sales are going great, your apps and services are scaling as needed, and it’s OK for your cloud costs to rise right along with your revenue. However, it’s still important to understand your cloud spend.

Getting to understand the unit costs of your footprint — whether that is by cluster, by customer, by revenue, or another measure — is important. While it can be hard to get to that level of granularity, particularly in a shared, multi-tenant environment, you need to know when your unit cost is increasing and whether the rate of increase seems appropriate. 

The beauty of the cloud is that it is easy to spin up new workloads; getting visibility into them is not. Can you tell if you have rogue workloads? Did someone break in and start up a cryptominer? A tool that can show you what your Kubernetes clusters, workloads, namespaces, and so on are costing you is critical to understanding what you are spending and how you are spending it in Kubernetes (and it can help you right-size clusters).
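Whatever tool you choose, most Kubernetes cost allocation starts from two things you control in your manifests: accurate resource requests (what the scheduler reserves, and effectively what you pay for) and consistent labels to slice spend by team, product, or customer. A sketch with hypothetical names, labels, and values:

```yaml
# Hypothetical workload snippet: resource requests plus consistent labels
# are what cost-visibility tools typically use to attribute spend.
# Label keys, values, and the image name are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
  labels:
    team: payments
    cost-center: cc-1234
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout-api
  template:
    metadata:
      labels:
        app: checkout-api
        team: payments
        cost-center: cc-1234
    spec:
      containers:
        - name: api
          image: registry.example.com/checkout-api:1.0.0
          resources:
            requests:        # reserved capacity: the basis of unit cost
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

With labels like these applied consistently, "cost per team" or "cost per customer" becomes a query instead of a guessing game.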

Solving Kubernetes Challenges

Whether you are new to Kubernetes or have already been working with it for quite some time, you know that it is complex software that offers a lot of flexibility and scalability. Still, it can be easy to forget to pause and think about how you have things set up or who has access to what. You should be having the discussions I outlined in this post, because very little of Kubernetes usage and deployment is black and white; there is rarely a single right or wrong answer. No matter how hot this technology is, how widely it is adopted, and how many teams are picking it up, remember that not everything you can do in Kubernetes is a great idea all of the time.

Watch the webinar, *5 Disagreements You Should be Having About Kubernetes and How to Solve Them*, to enjoy the entertaining discussion between Bill Ledingham (CEO), Kendall Miller (Technology Evangelist), Elisa Hebert (VP, Engineering Operations) and Andy Suderman (CTO).