Terraform and EKS: a Step-by-Step Guide to Deploying Your First Cluster

In this four-part series, we’ve discussed Terraform and why it’s important, and provided step-by-step guides to Terraform with AKS and GKE. Now we run through EKS.

Using infrastructure as code to manage Kubernetes allows you to declare infrastructure components in configuration files that are then used to provision, adjust and tear down infrastructure in various cloud providers. Terraform is our tool of choice to manage the entire lifecycle of Kubernetes infrastructure. You can read about the benefits of Terraform in the first post of this series.

This blog provides a step-by-step guide on how to get started with Terraform and EKS by deploying your first cluster with infrastructure as code. 

Prerequisites

To follow along, you will need an AWS account with credentials configured locally, plus the Terraform CLI, the AWS CLI and kubectl installed.

Steps

Create a directory for the project, such as terraform-eks. Next, generate an SSH key pair in that directory with this command: ssh-keygen -t rsa -f ./eks-key.
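
The key pair generated above exists only on your workstation. If you also want Terraform to register the public key with AWS (for example, to attach it to worker nodes later), a minimal sketch could go in a file like keypair.tf once the AWS provider is configured in the next step. The resource name "eks" and key name "eks-key" here are assumptions, not part of the original guide:

resource "aws_key_pair" "eks" {
  # Hypothetical: uploads the public half of the key generated by ssh-keygen above
  key_name   = "eks-key"
  public_key = file("./eks-key.pub")
}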

We will now set up several Terraform files to contain the various resource configurations. The first file will be named provider.tf. Create the file and add these lines of code:

provider "aws" {
  version = "~> 2.57.0"
  region  = "us-east-1"
}
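
Note that this provider block carries no credentials. By default, the AWS provider reads them from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or from your shared credentials file. If you work with multiple AWS accounts, you can point the provider at a named profile; this is a sketch assuming a profile called "my-terraform-profile" exists in ~/.aws/credentials:

provider "aws" {
  version = "~> 2.57.0"
  region  = "us-east-1"
  profile = "my-terraform-profile" # assumed profile name
}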

Now, create a file called cluster.tf. This will include our modules for a virtual network, cluster and node group. First, we’ll add a locals block with a local value for the cluster name that can be reused across the modules:

locals {
  cluster_name = "my-eks-cluster"
}
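
If you’d prefer to make the cluster name configurable at plan time rather than hardcoding it, one option (a sketch, not part of the original guide) is to back the local with an input variable that defaults to the same name:

variable "cluster_name" {
  description = "Name for the EKS cluster"
  type        = string
  default     = "my-eks-cluster"
}

locals {
  # The modules below keep referencing local.cluster_name unchanged
  cluster_name = var.cluster_name
}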

Next, we’ll set up the network for the cluster using Fairwinds’ AWS VPC module. Please note that the module is hardcoded to a /16 CIDR block, with /21 CIDR blocks for the subnets. You can learn more about the module at its repo: https://github.com/FairwindsOps/terraform-vpc.

module "vpc" {
  source = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"

  aws_region = "us-east-1"
  az_count   = 3
  aws_azs    = "us-east-1a, us-east-1b, us-east-1c"

  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}

Finally, we’ll add a module for the cluster itself. Here we use the community-supported Terraform AWS EKS module:

module "eks" {
  source       = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v12.1.0"
  cluster_name = local.cluster_name
  vpc_id       = module.vpc.aws_vpc_id
  subnets      = module.vpc.aws_subnet_private_prod_ids

  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3

      instance_type = "t2.small"
    }
  }

  manage_aws_auth = false
}
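
Optionally, you can surface some of the EKS module’s outputs so Terraform prints them after apply. For example, the terraform-aws-eks module exposes cluster_endpoint and kubeconfig outputs; an outputs.tf along these lines is a sketch, not part of the original guide:

output "cluster_endpoint" {
  description = "Endpoint for the EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "kubeconfig" {
  description = "kubectl configuration generated by the module"
  value       = module.eks.kubeconfig
}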

Once the cluster.tf file is complete, initialize Terraform by running terraform init. Terraform will generate a directory named .terraform and download each module source declared in cluster.tf. Initialization will also pull in any providers required by these modules; in this example, it downloads the aws provider along with several helper providers used by the EKS module. If a backend is configured, Terraform will also initialize it for storing the state file. The output will look something like this:

$  terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "random" (hashicorp/random) 2.3.0...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.3...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

After Terraform has been successfully initialized, run terraform plan to review what will be created:

$  terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.eks.data.aws_partition.current: Refreshing state...
module.eks.data.aws_caller_identity.current: Refreshing state...
module.eks.data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state...
module.eks.data.aws_ami.eks_worker: Refreshing state...
module.eks.data.aws_ami.eks_worker_windows: Refreshing state...
module.eks.data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # module.eks.data.null_data_source.node_groups[0] will be read during apply
  # (config refers to values not yet known)
 <= data "null_data_source" "node_groups"  {
      + has_computed_default = (known after apply)
      + id                   = (known after apply)
      + inputs               = {
          + "aws_auth"        = ""
          + "cluster_name"    = "my-eks-cluster"
          + "role_CNI_Policy" = (known after apply)
          + "role_Container"  = (known after apply)
          + "role_NodePolicy" = (known after apply)
        }
      + outputs              = (known after apply)
      + random               = (known after apply)
    }

...

Plan: 59 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Please note that this snippet has been edited to cut down on the size of this article.

As shown in the example above, Terraform will add 59 resources, including a VPC, subnets, the EKS cluster and a managed node group.

After reviewing the plan, apply the changes by running terraform apply. As one last validation step, Terraform will output the plan again and prompt for confirmation before applying. This step will take around 15-20 minutes to complete.
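
As the note in the plan output above suggests, you can also save the reviewed plan to a file and have apply execute exactly that plan; the file name eks.tfplan below is arbitrary:

$  terraform plan -out=eks.tfplan
$  terraform apply eks.tfplan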

To interact with your cluster, update your kubeconfig by running this command in your terminal:

aws eks --region us-east-1 update-kubeconfig --name my-eks-cluster

Next, run kubectl get nodes and you will see the three worker nodes from your cluster!

$  kubectl get nodes
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-20-66-245.ec2.internal   Ready    <none>   3m16s   v1.16.8-eks-fd1ea7
ip-10-20-75-77.ec2.internal    Ready    <none>   3m16s   v1.16.8-eks-fd1ea7
ip-10-20-85-109.ec2.internal   Ready    <none>   3m15s   v1.16.8-eks-fd1ea7

That’s it for our series on setting up Kubernetes clusters in multiple cloud providers using Terraform. When you no longer need the cluster, run terraform destroy to tear it down. Have fun as you continue your Kubernetes journey!
