Recently I answered Kubernetes questions submitted by the community. Here is a quick summary of the questions and answers, covering tooling, Kubernetes basics, and managed container services.
Helm is an incubator project in the CNCF and is maintained by the Helm community. Helm charts let you define, install, and upgrade Kubernetes applications. The latest version, Helm 3.0.0, released in November, brought improvements in many areas. While Helm isn't strictly necessary, it's popular because it makes managing applications on Kubernetes much easier.
Without Helm, you'll need to template your manifests yourself and keep track of changes from one deployment to another. With Helm, you create a chart and then release that chart, and Helm keeps track of every release change. When you deploy exclusively with Helm, a failed deployment is easy to recover from: Helm provides a rollback command so you can quickly get back to a working version.
One of the benefits of Helm is the ability to use community charts like nginx-ingress, cert-manager, and external-dns. And if you create a chart, you can share it with the world.
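The release-and-rollback workflow described above can be sketched with a few Helm 3 commands (the chart path, release name, and value override here are hypothetical examples, not from the Q&A):

```shell
# Install a release from a local chart (release name and chart path are examples)
helm install my-app ./my-app-chart

# Upgrade the release after changing the chart or its values
helm upgrade my-app ./my-app-chart --set image.tag=v2

# Inspect the revision history Helm keeps for each release
helm history my-app

# Roll back to a known-good revision if the new deployment misbehaves
helm rollback my-app 1
```

Because Helm records every revision, the rollback is a single command rather than a scramble to reconstruct the previous manifests.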
CI/CD follows a similar process regardless of the tool you use. You'll need to set up deployment pipelines and decide between tools such as Jenkins, GitLab, and CircleCI. We use rok8s-scripts, an open source framework for building GitOps workflows with Docker and Kubernetes. By adding rok8s-scripts to your CI/CD pipeline, you can build, push, and deploy your applications following a set of Kubernetes best practices.
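As a rough sketch, a CircleCI-style job using rok8s-scripts might look like the following. The config file paths, image tag, and job name are illustrative assumptions; check the rok8s-scripts README for the current command names and configuration format:

```yaml
# Illustrative CI job using rok8s-scripts (paths, tag, and job name are examples)
deploy:
  docker:
    - image: quay.io/reactiveops/ci-images:v11-alpine  # example tag
  steps:
    - checkout
    - run: docker-build -f deploy/build.config   # build the application image
    - run: docker-push -f deploy/build.config    # push it to your registry
    - run: k8s-deploy -f deploy/production.config  # roll it out to the cluster
```

The pattern is the same in Jenkins or GitLab CI: the pipeline owns build, push, and deploy, and the configuration files describe each environment.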
Absolutely. Terraform speeds up development by allowing teams to treat their infrastructure as code. Terraform is declarative and readable, which lowers the barrier to entry. It is also cloud agnostic and supports dozens of different provider APIs, so teams can mix and match providers based on their application's needs.
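To illustrate the declarative style, here is a minimal, hypothetical Terraform configuration (the region and bucket name are made-up examples). You describe the end state, and `terraform plan` shows the diff before anything changes:

```hcl
# Minimal illustrative Terraform config; region and names are examples
provider "aws" {
  region = "us-east-1"
}

# Terraform creates, updates, or deletes this bucket to match the declaration
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-team-artifacts" # hypothetical bucket name
}
```

Because the desired state lives in version control, infrastructure changes go through the same review process as application code.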
While we could spend hours on this topic, here are a few main benefits:
With Kubernetes on your local machine you can a) run it for free and b) test changes quickly. Whatever you do on your local machine, you’ll be able to reproduce across clusters.
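One common way to get that free local cluster is kind (Kubernetes in Docker); minikube works similarly. Assuming kind and kubectl are installed, and with an example cluster name and manifest path:

```shell
# Create a throwaway local cluster (cluster name is an example)
kind create cluster --name dev

# Point kubectl at it and verify the API server is up
kubectl cluster-info --context kind-dev

# Whatever you apply locally can later be applied to a remote cluster
kubectl apply -f ./manifests/

# Tear it down when you're done
kind delete cluster --name dev
```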
While understanding the basics is important, if you keep as much of the legwork as possible in your pipeline, your developers will only need to know basic kubectl commands. You maintain the pipeline; developers ship the code.
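In practice, the handful of kubectl commands a developer needs day to day looks something like this (the namespace and deployment name are hypothetical):

```shell
kubectl get pods -n my-app                      # is my workload running?
kubectl logs deploy/my-app -n my-app            # what is it printing?
kubectl describe pod my-app-7d4f9 -n my-app     # why is this pod failing to start?
kubectl rollout status deploy/my-app -n my-app  # did the pipeline's deploy finish?
```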
Like any new technology, Kubernetes is complex the first time you look at it. It can be a major paradigm shift for organizations. However, once the processes to deploy and monitor are in place, the benefits are significant. While there is definitely a learning curve, the best way to get over the curve is to take Kubernetes in chunks. Start by standing up a cluster, move on to deploying a small application, and then start to build in automation. You can also work with Kubernetes advisory services to get your team up to speed.
First, make sure you understand how to build and run a Docker container. Once you are comfortable, start with the Kubernetes documentation. It’s a good way to get from zero to application deployment. In addition, if you care about the ins and outs of Kubernetes, read Kelsey Hightower’s Kubernetes the Hard Way.
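Getting comfortable with building and running a container can be as simple as the following, assuming Docker is installed and there is a Dockerfile in the current directory (image name and port are examples):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app:dev .

# Run it locally, mapping container port 8080 to the host
docker run --rm -p 8080:8080 my-app:dev
```

Once this loop feels natural, the Kubernetes documentation's deployment tutorials are the logical next step.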
Microservices run inside Kubernetes (although Kubernetes isn't exclusively for microservices), and Kubernetes itself can run almost anywhere. Amazon EKS is a managed Kubernetes service (as are GKE and AKS). EKS does a great job of creating a basic cluster: Amazon manages the Kubernetes control plane, so operators only need to worry about the EC2 compute instances. To get to production, though, you still have to manage cluster add-ons and monitoring separately, regardless of the Kubernetes hosting provider.
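Standing up that basic EKS cluster is commonly done with the eksctl CLI; a minimal sketch, assuming AWS credentials are configured (the cluster name, region, and node count are examples):

```shell
# Create a small EKS cluster; Amazon runs the control plane,
# and the three nodes are the EC2 instances you manage
eksctl create cluster \
  --name demo \
  --region us-east-1 \
  --nodes 3
```

Everything beyond this point, such as monitoring, logging, and ingress, is yours to install and operate.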
If you are using ECS and your application runs without problems, don't move. However, if you find a capability in Kubernetes that you can't get with ECS, start evaluating.
There are two main arguments for adopting Kubernetes that ring true for any container service: first, if you want to manage your services better, and second, if you want to avoid vendor (ECS/AWS) lock-in by being able to move your application more freely.
In addition, a major benefit of adopting Kubernetes is the community. Kubernetes runs on every service provider, as well as locally. With ECS, you can only run your app on Amazon, and you don't get the benefit of community manifests, Helm charts, K8s automation, and so on.
With EKS, Amazon runs the Kubernetes API and control plane, so operators don't need to worry much about maintaining cluster uptime. However, there are a few small trade-offs. EKS doesn't follow the official Kubernetes release cycle, so you'll typically be a few versions behind the latest release. You'll also be unable to configure the control plane (turning Kubernetes API features on or off). This is true for all managed Kubernetes providers (EKS, GKE, AKS, etc.). EKS is great at handling basic cluster deployments, but as you move to production you'll have to manage Kubernetes add-ons like monitoring, logging, and ingress controllers. This is Fairwinds' bread and butter: we help EKS users manage their infrastructure so they can focus on running their applications.
You can listen to the rest of our Q&A session. We cover more questions including: