So you've decided that Kubernetes is the right container orchestration system for your application. Congratulations and welcome. You've made the right decision and joined the ranks of many successful companies running impressive applications.
As you might know, container orchestration is difficult. With countless variables, different configuration parameters, and monthly vulnerabilities, it's not a walk in the park. Some companies even end up stuck in cloud-native limbo and never actually ship an application. But that won't be you. You've come to the right place.
Here are a few essentials for implementing a Kubernetes container orchestration system:
The first thing you'll want to do is make sure you have a Docker image for each of your applications. This will make your code highly portable. If your applications are stateless, wrapping them in a Docker image shouldn't be too difficult. However, if your applications track long-term state in memory or on disk, you might need to re-architect. This is the difficult part: separating application state from application logic.
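For a stateless application, the containerization step can be as small as a short Dockerfile. Here's a minimal sketch for a hypothetical Node.js service; the base image, file names, and port are illustrative assumptions, not a prescription:

```dockerfile
# Sketch: containerizing a hypothetical stateless Node.js service.
# The base image tag, file names, and port are illustrative assumptions.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer caches independently of source changes.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code. Because the service is stateless,
# nothing durable is written inside the container at runtime.
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```

If your service instead writes state to local disk, this is exactly where the re-architecting shows up: that state has to move out to a database, object store, or persistent volume before a Dockerfile this simple will work.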
Next, you'll want to set up your orchestration architecture. This is also going to be difficult, as you'll have to make some decisions blind. Here, you should certainly begin with a battle-tested stack rather than trying to implement a bespoke solution. Remember, you're starting down the path toward successful container orchestration, so plan for it.
You should prepare a strategy for:
Monitoring and Alerting
Versioning and Updates
This way, when those variables present themselves, you'll be ready.
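On the versioning-and-updates side, for example, a Kubernetes Deployment can declare its update behavior up front. The sketch below shows a rolling-update strategy; the app name, replica counts, and image tag are illustrative assumptions:

```yaml
# Sketch of a Deployment with an explicit update strategy.
# The app name, replica counts, and registry/image tag are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # take down at most one replica at a time during an update
      maxSurge: 1        # allow one extra replica while new pods roll out
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3  # pin a version; avoid :latest
```

Pinning a concrete image tag rather than `:latest` is what makes updates (and rollbacks) deliberate, versioned events instead of surprises.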
Now that you have your application containerized and a cluster to deploy into, you'll want to define your orchestration rules. These rules should specify CPU and memory requirements, ingress rules, and a scaling policy. This prepares you for variables like horizontal scaling and outside access to your cluster. Further, you'll want to specify rules around access to secrets and environment variables. All of this can be done in YAML manifests for Kubernetes, but you'll also want to look into Helm charts.
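As a sketch of what those rules look like in YAML, here is a container's CPU and memory specification alongside a scaling policy. The app name, thresholds, and replica bounds are illustrative assumptions:

```yaml
# Sketch: CPU/memory requirements inside a Deployment's pod template.
# Names and numbers are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3
          resources:
            requests:          # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:            # the ceiling the container may not exceed
              cpu: "1"
              memory: 512Mi
---
# Sketch: a scaling policy that grows the Deployment under CPU pressure.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75  # add replicas when average CPU exceeds 75% of requests
```

Ingress rules and secret access follow the same pattern: each is its own YAML manifest (an Ingress, a Secret plus RBAC rules) applied to the cluster.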
Next, you'll need a process to deploy your applications. You should set up a strategy for deploying automatically through a CD pipeline (every time someone tags a release in GitHub, for example). Initially, it might be fine to have an ops engineer manually deploy every time you want to make a change, but an automated strategy will prove worthwhile over time.
To make sure everything meshes and works together, you'll want a container registry to hold the container images, a CI pipeline to test, build, and push those images, and a CD process to deploy them to your cluster.
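Those pieces can be wired together in a single tag-triggered workflow. Here's a sketch in GitHub Actions syntax; the registry URL, image name, secret names, and Deployment name are all assumptions, and a real pipeline would add a test stage and cluster credentials setup before the deploy step:

```yaml
# Sketch: tag-triggered build, push, and deploy (GitHub Actions syntax).
# Registry URL, image name, secret names, and Deployment name are hypothetical.
name: release
on:
  push:
    tags: ["v*"]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u ci --password-stdin
          docker build -t registry.example.com/my-app:${GITHUB_REF_NAME} .
          docker push registry.example.com/my-app:${GITHUB_REF_NAME}
      - name: Deploy to cluster
        # Assumes a kubeconfig for the cluster has already been configured on the runner.
        run: |
          kubectl set image deployment/my-app my-app=registry.example.com/my-app:${GITHUB_REF_NAME}
```

The tag name doubles as the image tag, so every deployed version traces back to a specific commit in Git.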
Finally, you'll want to monitor your application. Your cluster is an ever-changing environment, which means you'll want to keep an eye on it to ensure its health. Set up monitoring and alerting tools like Prometheus or Datadog that will let you know if your application is having trouble scaling or goes down. But be mindful that these alerts need rules of their own, or they'll wake you up in the middle of the night. It's important to note here that there are Kubernetes vendors like Fairwinds that can handle this monitoring (and so much more) for you.
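As an example of giving alerts rules of their own, a Prometheus alerting rule can require a condition to hold for a while before it fires. The sketch below alerts on unavailable replicas; the Deployment name, threshold, and duration are illustrative assumptions:

```yaml
# Sketch of a Prometheus alerting rule; names and thresholds are hypothetical.
groups:
  - name: app-availability
    rules:
      - alert: DeploymentReplicasUnavailable
        # kube-state-metrics exposes this metric per Deployment.
        expr: kube_deployment_status_replicas_unavailable{deployment="my-app"} > 0
        for: 10m  # require 10 minutes of sustained trouble, so a brief blip doesn't page anyone
        labels:
          severity: warning
        annotations:
          summary: "my-app has had unavailable replicas for 10 minutes"
```

The `for:` clause is what separates an actionable page from midnight noise: transient restarts resolve before the alert ever fires.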