
Kubernetes: 10 Technical Transformation Steps

Fairwinds released the Kubernetes Maturity Model to help teams self-assess their Kubernetes maturity, understand gaps in their environment and gain insight into how to improve their Kubernetes stack.

Phase two of the Kubernetes Maturity Model focuses on transformation. In this stage you are focused on shifting workloads into Kubernetes, including containerizing your applications if they aren't already running in containers. To navigate this technical transformation successfully, there are 10 main steps you will need to work through. What follows is a high-level overview of each step; be prepared to spend significant time on each one.

  1. Deep dive and project plan - Whether you are on-prem, in a datacenter or have already moved to the cloud, your first step is to take a deep dive into your existing stack. You'll want to investigate all aspects of the stack, from underlying networking, infrastructure, configuration and secret management to how you deploy applications and their dependencies, and determine your technology requirements for moving to Kubernetes. This step helps you avoid missing an important requirement. Based on this deep dive, you can put together a project plan that serves as your roadmap for the migration.
  2. Application containerization - Your application may already be containerized, in which case you are ready to move to step three. If not, you'll need to break down your application based on the twelve-factor app methodology. This is vital because your application needs to survive destruction (a container may be killed at any moment), and you need to be able to cleanly stand your application and containers back up. In this step, we advise you to extract your secrets and configuration from your build artifact. Kubernetes is ephemeral, so by doing this you'll maintain your standards and security and simply inject secrets and configuration at container runtime (see the runtime configuration sketch after this list).
  3. Build cloud infrastructure - You'll need to choose your cloud provider (AWS, GCP or Azure) and whether to use a managed Kubernetes service like EKS, GKE or AKS. If you select a managed Kubernetes service, you'll have less work when building your Kubernetes infrastructure. As part of this step, you'll set up the underlying cloud configuration: VPC, security groups, authentication and authorization, etc.
  4. Build Kubernetes infrastructure - In step four, there are design considerations to account for so you can avoid choices that could require time-consuming cluster rebuilds or carry network and cost implications. Some considerations include: How many clusters should you have, in which regions and with how many availability zones (AZs)? How many separate environments, clusters and namespaces are needed? How should services communicate with and discover one another? Will security be enforced at the VPC, cluster or pod level? Your focus should be on repeatability: use infrastructure as code (IaC) so you can build your clusters the same way every time (see the cluster configuration sketch after this list). In this step, be careful about your configuration options, using your deep dive / project plan from step one to make sure you don't miss any application requirements.
  5. Write YAML or Helm charts - At Fairwinds, we call this step application Kubernating. This is where you define your Kubernetes resources to get them into your cluster. You can write raw Kubernetes resource YAML files; however, most teams now use Helm charts to deploy applications into Kubernetes. You will write YAML or Helm charts specifically for your container images, along with templates for ConfigMaps, Secrets or any special application requirements (see the Deployment template sketch after this list).
  6. Plumb in external cloud dependencies - Your application will have external dependencies like its key store, libraries, databases or other assets. Kubernetes isn't a great place for these dependencies to live, so you'll want to manage your stateful dependencies outside of Kubernetes. For example, you can stand up a database in a service like Amazon RDS and then plumb it into Kubernetes (see the ExternalName Service sketch after this list). Your application can then run in a pod in Kubernetes and talk to those dependencies.
  7. Define Git workflow - A major benefit of Kubernetes is the ability to deploy code in a repeatable way without human intervention. You'll commit code to source control, generally via Git, which kicks off events and merges branches that move those changes to a non-production cluster. You then test and QA your code and merge to the master branch, which deploys your code to staging or production. At this stage you are simply defining what your Git workflow looks like, i.e. when a developer pushes code, what happens in Kubernetes?
  8. Build your CI/CD pipeline - Once you've defined your Git workflow, you'll set up your CI/CD platform using automation tools like Jenkins or CircleCI. This turns your defined workflow into an actual build pipeline (see the pipeline configuration sketch after this list).
  9. Non-production testing - After you've completed steps 1-8 for your monolithic application or microservices architecture, you'll deploy to non-production. Here you'll exercise the application to confirm that it runs, that it has appropriate resource requests and limits, that secrets are injected correctly and that people can access the application (see the resources and probes sketch after this list). You'll also test what happens when you kill your pods. Essentially, you'll kick the tires before moving to production. If you are running a monolithic application, you'll get through this stage much faster. If you are deploying a microservices architecture, you'll complete steps 1-8 for each service and deploy it to non-production. Once all the services are there, you can see how they work together to make sure your application will work once it's in production.
  10. Production promotion - Finally, once you have tested your application thoroughly in non-production and are happy with it, and as long as your production environment is built the same way as staging, you can deploy and send traffic to your application. Here you'll simply switch your load balancer or DNS. With DNS, you can fail back if required.
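
For step 2, here is a minimal sketch of injecting configuration and secrets at container runtime rather than baking them into the image. The names (my-app, app-config, app-secrets) and the registry URL are placeholders, not part of the Maturity Model itself.

```yaml
# Illustrative only: configuration and secrets are supplied at runtime,
# so the same image can run unchanged in any environment.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0   # placeholder image
      envFrom:
        - configMapRef:
            name: app-config    # non-sensitive configuration
        - secretRef:
            name: app-secrets   # sensitive values, e.g. database credentials
```

Because the image stays generic, promoting between environments only means pointing at a different ConfigMap and Secret.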
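
For step 4, one way to keep cluster builds repeatable is to capture them as code. The sketch below uses eksctl on AWS purely as an example; Terraform or another IaC tool works just as well, and the cluster name, region, AZs and node sizes are assumptions.

```yaml
# Illustrative eksctl cluster definition: running `eksctl create cluster -f`
# against a file like this rebuilds the same cluster every time.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: staging
  region: us-east-1
availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 2
    maxSize: 6
```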
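
For step 5, a Helm chart template for a Deployment might look roughly like the following. The value names (replicaCount, image.repository, image.tag, containerPort) are illustrative and would be supplied by the chart's values.yaml.

```yaml
# templates/deployment.yaml (illustrative Helm template)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
          envFrom:
            - configMapRef:
                name: {{ .Release.Name }}-config   # config injected per step 2
```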
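
For step 6, one common way to plumb an external database such as Amazon RDS into the cluster is an ExternalName Service, so pods can reach it by a stable in-cluster name. The service name and RDS hostname below are placeholders.

```yaml
# Illustrative: pods connect to "orders-db"; cluster DNS resolves it
# to the external RDS endpoint via a CNAME.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: orders.abc123xyz.us-east-1.rds.amazonaws.com  # placeholder endpoint
```

The database credentials themselves still arrive as Secrets at runtime, as described in step 2.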
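
For step 8, a CircleCI pipeline that follows the Git workflow from step 7 might be sketched like this. The images, registry URL, chart path and branch name are assumptions, and the deploy job presumes that registry credentials, cluster access and Helm are available in the job environment.

```yaml
# Illustrative CircleCI config: build and push an image on every commit,
# deploy to the non-production cluster only on the main branch.
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t registry.example.com/my-app:${CIRCLE_SHA1} .
      - run: docker push registry.example.com/my-app:${CIRCLE_SHA1}   # registry auth assumed
  deploy-staging:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: helm upgrade --install my-app ./chart --set image.tag=${CIRCLE_SHA1}
workflows:
  build-and-deploy:
    jobs:
      - build
      - deploy-staging:
          requires:
            - build
          filters:
            branches:
              only: main
```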
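
For step 9, two things worth verifying in non-production are resource requests/limits and health probes, since they determine how the application behaves when pods are killed or starved. This fragment of a pod template uses illustrative values and a hypothetical /healthz endpoint.

```yaml
# Fragment of a pod template spec: resources and probes to validate in non-production.
containers:
  - name: my-app
    image: registry.example.com/my-app:1.0.0   # placeholder image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
    readinessProbe:           # gates traffic until the app is ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:            # restarts the container if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
```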

In phase two, you’ll also clean up technical debt, make decisions around tooling, weigh productivity gains against losses, and start to look at flexibility vs. controls.

Learn more about the transformation stage of the Kubernetes Maturity Model and explore all stages of the model.

View the Kubernetes Maturity Model