

Operate Kubernetes in a Multi-Cluster, Multi-Cloud World


When operating multiple Kubernetes clusters across multiple cloud accounts, how can you work efficiently while minimizing accidental changes? In this post, I’ll share some best practices and open-source tooling that the team at Fairwinds finds valuable as Kubernetes practitioners. You can also view an accompanying video for a narrated visual walkthrough.

Kubectl

While installing kubectl, the Kubernetes command-line interface, configure shell completion for Bash or Zsh. This allows you to press Tab to complete kubectl commands and Kubernetes resource names, such as Deployments or Pods. Be sure the bash-completion prerequisite package is installed. The above link to the kubectl installation document includes details for setting up its shell completion.
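As a quick sketch (assuming Bash on Linux with the bash-completion package already installed), wiring up completion can look like this; the `k` alias lines are an optional convenience shown in the kubectl documentation:

```shell
# Load kubectl completion in every new Bash session
echo 'source <(kubectl completion bash)' >> ~/.bashrc
# Optional: shorten kubectl to "k" and make completion work for the alias too
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```

Open a new shell (or `source ~/.bashrc`) for the completion to take effect.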

laptop:~$ kubectl get pod [tab]app-7f5fd6f95c-[tab twice]
app-7f5fd6f95c-bnkwj app-7f5fd6f95c-lg9pg
laptop:~$ kubectl get pod app-7f5fd6f95c-b[tab]nkwj

The above example uses tab to help complete a kubectl get pod command. Since two pods share the same “7f...5c” string in their name, pressing tab twice shows both pods. I then typed “b” as a tie-breaker character and pressed tab again to complete the name of the pod I want.

It’s also possible to tab-complete command names I might forget, such as kubectl api-resources.

Switching Between Clusters and Namespaces

Nothing impacts the trajectory of your day quite like accidentally making a change in the wrong Kubernetes cluster - “Oh no, I thought I was removing that LoadBalancer Service in staging!”

The Kubernetes client configuration, used by kubectl, groups access parameters into contexts. Each context refers to a cluster, user, and the current default namespace. You may have multiple contexts in a Kubernetes client configuration file, representing separate staging and production Kubernetes clusters.

The kubectl command can be used to switch between multiple contexts defined in a single Kubernetes client configuration (KubeConfig) file, as described in the Kubernetes documentation about multi-cluster access. However, the kubectx and kubens tools make switching between contexts and setting the current namespace easier. Thanks to a relatively recent rewrite in Go, these tools also support Microsoft Windows directly, with no dependency on Bash or Zsh.

Separate Kubernetes Contexts

Using separate KubeConfig files per environment requires a more explicit action to use a particular context or cluster: set the KUBECONFIG environment variable to the KubeConfig file you want to use. This avoids having all of your clusters accessible at the same time by default. It is also helpful to give Kubernetes contexts meaningful names, and to display the current context in the shell prompt - more on that below.
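For example (the file path here is illustrative), pointing KUBECONFIG at a single per-environment file scopes kubectl to just that cluster for the current shell session:

```shell
# Only the staging cluster is reachable while this variable is set
export KUBECONFIG="$HOME/.kube/config-staging"
# kubectl config current-context   # would now report the staging context
```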

Extend Kubectl

The kubectl command can be extended using plugins - for more tips on how to super-charge your kubectl command line, see our blog post about kubectl plugins and the krew plugin manager, and check out these Krew plugins:

  • rbac-lookup - Easily find roles and cluster roles attached to any user, service account, or group
  • who-can - Show which subjects have RBAC permissions to perform a given action
  • cert-manager - Help manage resources belonging to the cert-manager cluster add-on

Managing Different Versions of Tools

Upgrades are a constant way of life, and you may need different versions of Kubernetes, Helm, or other infrastructure management tooling such as Terraform. Ideally the version of kubectl matches the version of the Kubernetes API for each of your clusters, and everyone on your team uses the same version of Helm to manage cluster add-ons or applications. A version manager such as asdf manages multiple versions of command-line tools, as well as languages like Python, Node, or Ruby. An asdf tool-versions file can be included at the top of a project or repository directory to configure which versions of tools are used based on the current working directory. The asdf tool uses plugins to support many tools and languages.

laptop:~$ asdf plugin add kubectl
laptop:~$ asdf list all kubectl
…..
1.19.2
1.19.3
1.20.0-alpha.1
1.20.0-alpha.2
…..
laptop:~$ asdf install kubectl 1.19.3
Downloading kubectl from https://storage.googleapis.com/kubernetes-release/release/v1.19.3/bin/darwin/amd64/kubectl
laptop:~$ cd /path/to/project/directory
laptop:~$ asdf local kubectl 1.19.3
laptop:~$ cat .tool-versions
kubectl 1.19.3


The above example adds the asdf plugin for kubectl, lists available versions of kubectl, installs version 1.19.3, and configures a project directory to use that version of kubectl.

Asdf also supports using environment variables to select tool versions, if your workflow lends itself to altering your environment instead of relying on the current working directory. For example, set the ASDF_KUBECTL_VERSION environment variable to “1.19.3”.
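A minimal sketch of that approach; the environment variable takes precedence over any .tool-versions file for the current shell:

```shell
# Pin kubectl to 1.19.3 for this shell session, overriding .tool-versions
export ASDF_KUBECTL_VERSION=1.19.3
```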

Multiple Cloud Accounts

Sometimes it's necessary to inspect the compute instance of a Kubernetes node, potentially across multiple environments which may use different cloud accounts, projects, or subscriptions. The cloud command-line interfaces for AWS, Google, and Azure support controlling the configuration using environment variables. These environment variables can be changed based on your current working directory using the direnv tool, which sets environment variables when you “cd” into a directory.
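As a minimal sketch (the directory name and profile are hypothetical), a per-environment .envrc might look like this; once approved, direnv loads it automatically when you cd into the directory and unloads it when you leave:

```shell
# Create an example environment directory containing a .envrc file
mkdir -p infra-production
cat > infra-production/.envrc <<'EOF'
export AWS_PROFILE=production
export KUBECONFIG=$HOME/.kube/config-production
EOF
# cd infra-production && direnv allow   # approve the file once
```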

Amazon Web Services

Configure a profile per environment using the aws configure --profile … command, and use that profile via the --profile command-line argument or the AWS_PROFILE environment variable.

laptop:~$ aws configure --profile production
AWS Access Key ID [None]: xxxxx
AWS Secret Access Key [None]: yyyyy
Default region name [None]: us-east-2
Default output format [None]: text
laptop:~$ aws --profile production s3 ls
…..
laptop:~$ export AWS_PROFILE=production # Use this profile while the variable is set

Google Cloud

Create a separate configuration per environment, using the gcloud config configurations create command. Use gcloud config set commands to set properties like Google account and project. Specify which configuration is active via the CLOUDSDK_ACTIVE_CONFIG_NAME environment variable.

laptop:~$ gcloud config configurations create qa
Created [qa].
Activated [qa].
laptop:~$ gcloud config set account ivan@fairwinds.com
Updated property [core/account].
laptop:~$ gcloud config set project qa                
Updated property [core/project].
WARNING: You do not appear to have access to project [qa] or it does not exist.
laptop:~$ export CLOUDSDK_ACTIVE_CONFIG_NAME=qa # Use this configuration while the variable is set
laptop:~$ gcloud auth login
…..
laptop:~$ gcloud auth application-default login # Optional, if using tools like Terraform

Azure Cloud

Use a separate directory for command-line configuration files for each environment. Set the AZURE_CONFIG_DIR environment variable, then use the az login and az account set commands to configure the Azure command-line for that environment.

laptop:~$ export AZURE_CONFIG_DIR=~/infra/test/.azure
laptop:~$ az account show # demonstrates this is a new config
Please run 'az login' to setup account.
laptop:~$ az login
…..
laptop:~$ az account set --subscription xxxxx

Where am I? Reflecting Things in Your Prompt

The fast and flexible Starship prompt customization tool can display Kubernetes and cloud information in your shell prompt, helping you stay aware of which Kubernetes cluster and cloud account is active. The Kubernetes component displays the current context, and optionally the current namespace, as defined in your KubeConfig file. There are also AWS and gcloud components that include information about those clouds. The environment variable and custom command components can be used to provide additional context, such as the active Azure subscription.
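For instance (assuming Starship is already your prompt), enabling its Kubernetes component is a small config change, since the module is disabled by default:

```shell
# Append to Starship's configuration file; the [kubernetes] module then shows
# the current context from your KubeConfig in the prompt
mkdir -p ~/.config
cat >> ~/.config/starship.toml <<'EOF'
[kubernetes]
disabled = false
EOF
```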

Please see the accompanying video, and its example Starship configuration. We hope this helps you to operate your Kubernetes clusters across multiple clouds.