
Phase 1: Transform

How do I set up Kubernetes infrastructure and shift workloads?

Introduction

Transform is the stage where you move to Kubernetes. In this phase, you will verify your foundational knowledge by deploying your first clusters and workloads. Entering the transformation phase, you should feel prepared on the basics, but you may still lack the hands-on expertise needed to complete it.

You will spend a lot of time in the transformation phase. It covers your initial implementation, migration and learning curve as you undertake key activities. As you adopt Kubernetes, don’t be fooled by “up and running” articles: there is a functionality gap between setting up your clusters and being production-ready. You may find it helpful to run a Kubernetes proof of concept in this phase, or to work with Kubernetes experts, to ensure your first clusters are set up to meet the demands of your workloads.

Adopting New Language and Architecture

As you adopt Kubernetes, you will not simply prepare; you will start to practice and understand the language, architecture and workload requirements.

Language learning

From the preparation phase, you will understand some terminology, but during transformation you will need to adopt the full Kubernetes vocabulary. Essential concepts include nodes, pods, namespaces, ReplicaSets, controllers and many more. While it is important to train on these beforehand, you will understand them better as you encounter each one in practice.

Application architecture and understanding requirements 

You’ll want to map your application architecture to the new cloud native context. Doing so will help you discover requirements and uncover dependencies for your application so you can embrace containers successfully. It will also let you revisit historical assumptions and decisions. For example:

Old World                                 | New World
------------------------------------------|---------------------------------------------
SSH to server                             | Deploy by code and immutable infrastructure
Dedicated workloads                       | Self-healing and ephemeral workloads
Add a server to scale                     | Add a pod or node to scale
Configuration management for deployment   | Containerization for deployment

Workload understanding 

You will come to understand the different types of workloads running in a cluster: Deployments, Jobs, CronJobs, ReplicaSets, DaemonSets, and Pods.
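As a concrete illustration, here is a minimal sketch of the most common workload type, a Deployment, which manages a ReplicaSet of identical pods (the names and image below are placeholders, not from any real application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
  namespace: default
spec:
  replicas: 3                  # the ReplicaSet this creates keeps 3 pods running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If any pod in this Deployment dies, the controller replaces it automatically, which is the self-healing behavior described in the table above.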

Technical Transformation

In phase one, you will start running workloads in Kubernetes. To navigate the technical transformation successfully, there are 10 main steps you will undertake. What follows is a high-level overview of each; be prepared to spend significant time on every step in this process.

  1. Deep dive and project plan - Whether you are on-prem, in a datacenter or have already moved to the cloud, your first step is a deep dive into your existing stack. Investigate every aspect, from underlying networking, infrastructure, configuration and secret management to how you deploy applications and their dependencies, and determine your technology requirements for moving to Kubernetes. This step helps you avoid missing an important requirement. Based on the deep dive, you can put together a project plan that becomes your roadmap into migration.
  2. Application containerization - Your application may already be containerized, in which case you are ready to move to step three. If not, you’ll need to break down your application based on the twelve-factor app methodology. This is vital because your application must survive destruction (your container may be killed at any moment), and you need to be able to cleanly stand your application and containers back up. In this step, we advise you to extract your secrets and configuration from your build artifact. Kubernetes workloads are ephemeral, so by injecting secrets and configuration at container runtime instead, you maintain your standards and security.
  3. Build cloud infrastructure - You’ll need to choose your cloud provider (AWS, GCP or Azure) and whether to use its managed Kubernetes service (EKS, GKE or AKS). If you select a managed Kubernetes service, you’ll have less work when building your Kubernetes infrastructure. As part of this step, you’ll set up the underlying cloud configuration: VPC, security groups, authentication and authorization, etc.
  4. Build Kubernetes infrastructure - In step four, there are design considerations to account for if you want to avoid choices that could require time-consuming cluster rebuilds or carry network and cost implications. Some considerations include: How many clusters should you have, in which regions, and with how many availability zones (AZs)? How many separate environments, clusters and namespaces are needed? How should services communicate with and discover one another? Will security be enforced at the VPC, cluster or pod level? Your focus should be on repeatability: use infrastructure as code (IaC) so that you can rebuild your clusters the same way every time. In this step, be careful about your configuration options, using your deep dive and project plan from step one to ensure you don’t miss application requirements.
  5. Write YAML or Helm charts - At Fairwinds, we call this step application Kubernating. This is where you define the Kubernetes resources that get your application into your cluster. You can write Kubernetes resource YAML files directly; however, most teams now use Helm charts for deploying applications into Kubernetes. You will write YAML or Helm charts specifically for your container images, plus templates for ConfigMaps, Secrets and any special application requirements.
  6. Plumb in external cloud dependencies - Your application will have external dependencies such as its key store, libraries, databases or other assets. Kubernetes isn’t a great place for these dependencies to live, so manage your stateful dependencies outside of Kubernetes. For example, you can stand up a database in a service like Amazon RDS and then plumb it into Kubernetes. Your application can then run in a pod in Kubernetes and talk to those dependencies.
  7. Define Git workflow - A major benefit of Kubernetes is the ability to deploy code in a repeatable way without human intervention. You’ll commit code to source control, generally via Git, which will kick off events and merge branches that move those changes to a non-production cluster. You will then test and QA your code and merge to the master branch, which deploys your code to staging or production. At this stage you are simply defining what your Git workflow looks like, i.e. when a developer pushes code, what happens in Kubernetes?
  8. Build your CI/CD pipeline - Once you’ve defined your Git workflow, you’ll set up your CI/CD platform using automation tools like Jenkins or CircleCI. This turns your defined workflow into an actual build pipeline.
  9. Non-production testing - After you’ve completed steps 1-8 for your monolithic application or microservices architecture, you’ll deploy to non-production. Here you’ll exercise the application to ensure it runs, verify that it has appropriate resource requests and limits, and confirm that your secrets are injected correctly and that people can access the application. You’ll also test what happens when you kill your pods. Essentially, you’ll kick the tires before moving to production. If you are running a monolithic application, you’ll get through this stage much faster. If you are deploying a microservices architecture, you will complete steps 1-8 for each service and deploy each to non-production; once all services are there, you can see how they work together to ensure your application will work once in production.
  10. Production promotion - Finally, once you have tested your application thoroughly in non-production and are happy with it, and as long as your production environment is built the same way as staging, you can deploy and send traffic to your application. Here you’ll simply change your load balancer or DNS; with DNS, you can fail back if required.
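Step two’s advice, extracting secrets and configuration from the build artifact and injecting them at container runtime, might look like this sketch of a pod spec fragment (the Secret, ConfigMap and key names are hypothetical):

```yaml
# Pod spec fragment: values arrive from a Secret and a ConfigMap at runtime,
# not baked into the container image.
containers:
  - name: example-app              # placeholder container
    image: example/app:1.0         # placeholder image
    env:
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: app-secrets      # hypothetical Secret
            key: db-password
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:
            name: app-config       # hypothetical ConfigMap
            key: log-level
```

The same image can then move unchanged from non-production to production, with only the injected values differing per environment.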
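For step five, teams using Helm template their YAML so that values like the image and replica count live in a chart’s values.yaml rather than being hard-coded. A minimal sketch, assuming hypothetical value names:

```yaml
# templates/deployment.yaml (fragment of a hypothetical Helm chart)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}            # set at install time
spec:
  replicas: {{ .Values.replicaCount }} # from values.yaml
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Overriding values per environment (e.g. `helm install --set replicaCount=5 ...`) is what makes one chart reusable across clusters.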
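One common way to “plumb in” an external dependency from step six is a Service of type ExternalName, which gives in-cluster workloads a stable DNS name for something like an RDS database (the endpoint below is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-database   # pods resolve app-database.<namespace>.svc.cluster.local
spec:
  type: ExternalName
  externalName: mydb.example123.us-east-1.rds.amazonaws.com  # placeholder RDS endpoint
```

The application connects to `app-database` and stays unaware of the real endpoint, which can change without redeploying the workload.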
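For steps seven and eight, the build half of a pipeline in a tool like CircleCI might be sketched as follows (job and image names are illustrative, not a prescribed setup):

```yaml
# .circleci/config.yml sketch: build a container image on every push
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable      # generic build environment
    steps:
      - checkout
      - setup_remote_docker          # enables docker build inside the job
      - run: docker build -t example/app:$CIRCLE_SHA1 .  # placeholder image name
workflows:
  build-and-deploy:
    jobs:
      - build
```

A real pipeline would add steps to push the image to a registry and roll it out to the non-production cluster, per your defined Git workflow.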
Clean Up Debt

As you implement Kubernetes, you will have a chance to clean up technical debt accrued in your existing systems. Technical debt can be summarized as any workflow, process, code, or piece of hardware that consistently pulls attention away from fulfilling your mission. This could include upgrades or updates that have been put off, code bugs that have been worked around, outdated dependency versions or incorrect configurations.

Naturally, as you move to Kubernetes, you can evaluate where this technical debt exists so that you don’t replicate it in your new environment.

Productivity Gains vs. Losses

Kubernetes offers the opportunity to change and improve how your team collaborates and delivers your app or services. As you transform, your team will change as well.

Kubernetes is a big change. You’ll be adopting a new way of working and each person on your team will learn the technology differently. In the beginning, there will be productivity hits as you and your team become comfortable with the technology. Be patient in this stage because the long term productivity gains will outweigh short term losses.

Tooling

You’ll need to decide how you make decisions about tooling including: 

  • How do you decide what problems need a tool solution?
  • Who is responsible for making that decision?
  • How do you vet tools?

For open source tools, you’ll want to verify that each tool is updated regularly and has enough community support to keep it from becoming outdated. Spend time during this stage answering these questions in collaboration with the developers using Kubernetes.

Balance Ultimate Flexibility with Controls

A benefit of Kubernetes is the ability for developers to commit code without involving the ops team. What’s required are good policies to ensure security, reliability and efficiency gains. The challenge is that as you set up your first clusters, you need to strike a balance between giving developers ultimate flexibility and controlling the infrastructure. Will you open Kubernetes to your developers from the beginning so as not to stunt productivity, or will you lock down the system? What configurations will you use to restrict changes? You must consider how you will balance developer needs with corporate policy.
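As one example of such a control, Kubernetes RBAC can give developers broad rights inside their own namespace while withholding cluster-wide access. A sketch, with illustrative namespace and group names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-edit
  namespace: team-a              # developers are confined to this namespace
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "deployments", "jobs", "configmaps", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit-binding
  namespace: team-a
subjects:
  - kind: Group
    name: developers             # illustrative group name from your auth provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-edit
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced rather than a ClusterRole, developers keep day-to-day flexibility without being able to alter cluster-level infrastructure.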

Challenges

During transformation, you are likely to be challenged with: 

  1. Becoming overwhelmed by complexity
  2. Decision paralysis due to the vast Kubernetes ecosystem
  3. A lack of in-house expertise or loss of talent
  4. Uncertainty around Kubernetes best practices / execution 

These challenges are the usual suspects for any IT environment. The same holds true with Kubernetes, but there is even less talent and expertise in this space. If these challenges arise, you can seek help from managed Kubernetes services.

In addition, there are existing behaviors on your team. Take time to understand what behaviors are critical, what is open to change, what is historical that is no longer required and where complexity exists that can be simplified.

Outcomes
  • You will invest a lot of time in the transformation phase and build your knowledge base
  • You will have identified which applications to run in Kubernetes
  • You will have considered holistic solutions vs. a patchwork of open source and proprietary technology 
  • You will understand Kubernetes fundamentals, strengths and weaknesses
  • You will have undertaken a successful Kubernetes proof of concept, so you will be able to decide whether or not to proceed
