At Fairwinds, we take an opinionated approach to implementing CI/CD pipelines when Docker and Kubernetes are involved. In the following sections, I will outline the guiding principles behind our opinions, and then lay out a typical CI/CD workflow that we use. I’ll also share some code that will enable you to give it a shot if you want. None of these principles are brand new or earth-shattering, but the combination makes for a repeatable, sustainable, and safe CI/CD pipeline that many of our customers love.
Docker allows you to overwrite a tag when you push an image. A common example of this is the latest tag. This is a dangerous practice because it presents a possible scenario where you don’t know what code is running in your container. We instead take the approach that all tags are immutable, so we tie them to an immutable, unique value that can be found in our codebase — the commit ID. This directly connects a container to the code that it was built from and erases any doubt that we know exactly what is inside that container.
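In practice, this can be as simple as deriving the tag from the current commit at build time. A minimal sketch (the registry and image name `registry.example.com/myapp` are placeholders):

```shell
# Derive the immutable tag from the current commit ID
COMMIT_SHA="$(git rev-parse --short HEAD)"
IMAGE="registry.example.com/myapp:${COMMIT_SHA}"  # placeholder registry/name

# Build and push; this tag is never reused or overwritten
docker build -t "${IMAGE}" .
docker push "${IMAGE}"
```

Because the tag is the commit ID, any running container can be traced back to the exact code it was built from.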
In line with the first principle, the immutable container that we have deployed to a staging/dev/QA environment and then (hopefully) tested should be the exact same container that gets deployed to production. This eliminates any concern that something might change when deployed to production. We handle this by triggering a release to production using a git tag and releasing the container with the commit ID of the tag.
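For example, cutting a production release might look like the following sketch (the tag name, deployment name, and image are illustrative):

```shell
# Tag the commit that was already built and tested in staging
git tag -a v1.2.3 -m "Release v1.2.3"
git push origin v1.2.3

# In the release pipeline, resolve the git tag back to its commit ID
COMMIT_SHA="$(git rev-parse --short v1.2.3)"

# Deploy the existing image for that commit -- nothing is rebuilt
kubectl set image deployment/myapp myapp="registry.example.com/myapp:${COMMIT_SHA}"
```

The key point is that the release step only re-points production at an image that already exists; no new build happens between testing and production.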
Since a container should be considered an immutable artifact of our build process, configuration should never be “baked in.” Configuration should be stored in a ConfigMap in the Kubernetes cluster, and secrets should be stored, you guessed it, in Kubernetes Secrets. This allows the deployment of the container to be configured per environment without making changes to the container itself.
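For instance, per-environment configuration might be created like this (the names and values are purely illustrative):

```shell
# Non-sensitive configuration lives in a ConfigMap
kubectl create configmap myapp-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=API_URL=https://api.staging.example.com

# Sensitive values live in a Secret
kubectl create secret generic myapp-secrets \
  --from-literal=DATABASE_PASSWORD='s3cr3t'
```

The Deployment then consumes these via environment variables or volume mounts, so the same image runs unchanged in every environment.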
At Fairwinds, we use Helm to template all of the YAML that gets deployed to Kubernetes. This allows easy configuration of multiple environments. In addition, it makes it easier to keep track of logical groupings, or “releases.”
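With one chart and a values file per environment, deploys reduce to a single command. A sketch (the chart path and values file names are assumptions):

```shell
# Same chart, different values files per environment
helm upgrade --install myapp ./chart -f values-staging.yaml --namespace staging
helm upgrade --install myapp ./chart -f values-prod.yaml --namespace production
```

Each `helm upgrade --install` creates or updates a named release, which is the logical grouping referred to above.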
All CI/CD pipelines should be triggered by Git operations. This falls in line with a normal developer workflow, and makes the pipeline more accessible to developers. In addition, it works well with the previous concept of immutability being tied to a commit ID.
We strongly discourage the use of what a former co-worker referred to as “artisanal hand-crafted build machines” that contain a random assortment of dependencies and tools. If you build inside the container, you can control all dependencies and tools in your code and never have to worry about which “builder” is running your pipeline. Docker’s multi-stage builds make it easy to do this while also keeping your deployed runtime image as small as possible.
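A minimal multi-stage Dockerfile sketch (a Go application is assumed purely for illustration; the same pattern applies to any compiled or bundled app):

```dockerfile
# Build stage: compilers and build tools live here, not in the final image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: only the compiled artifact ships
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Every dependency is pinned in the Dockerfile itself, so any machine with Docker produces the same build, and the runtime image contains nothing but the artifact.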
Docker images can be cached and re-used. We recommend using this feature as much as possible in order to keep build times to a minimum. Push cache images to a repository, or use the caching features of your CI/CD system, to maximize efficiency.
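One common pattern is a dedicated cache tag in the registry; since a cache image is never deployed, it is the one place a reusable tag is acceptable. A sketch (the tag name is an assumption):

```shell
# Pull the previous cache image to seed the layer cache (may not exist on the first build)
docker pull registry.example.com/myapp:build-cache || true

# Reuse its layers where possible, then refresh the cache image for the next build
docker build --cache-from registry.example.com/myapp:build-cache \
  -t registry.example.com/myapp:build-cache .
docker push registry.example.com/myapp:build-cache
```

Subsequent builds only rebuild the layers whose inputs actually changed, which can cut build times dramatically.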
I’m going to lay out, step by step, a typical pipeline that we would set up for a customer. It takes into account all of the principles in the previous section, and will provide a path from code to production “In X easy steps!”
The process goes like this:
Looking at the process, we can see several “loops.” These are feedback loops that can be used to improve quality, test code, etc. The goal is to make these small and fast so that we can release code that is tested and manageable.
If you like the way we do CI/CD at Fairwinds, you can have your very own pipelines just like ours! We use a set of bash scripts that do all of this for us. We just have to create some config files and we get a setup very similar to the one in this article. The repo is called rok8s-scripts (pronounced “rocket scripts”), and as always, PRs are accepted.