Originally posted November 13, 2019; Updated February 28, 2023
In our How-to-Kube series, we'd like to begin by covering pod basics. Like services, volumes, and namespaces, a pod is a basic Kubernetes object. A pod is a set of one or more containers scheduled onto the same physical or virtual machine and managed as a single unit. When you declare a pod, you can define any number of containers to run inside it.
Containers in a pod also share a network: whenever a pod is scheduled, a private network namespace is shared across all the containers inside it. They can also share filesystem volumes. Similar to Docker's --volumes-from, you can do the same in Kubernetes when running multiple containers inside a pod, sharing ephemeral or copy-on-write style storage from within the pod.
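To make this concrete, here is a minimal sketch of a two-container pod sharing an emptyDir volume; the pod name, images, and commands are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo    # illustrative name
spec:
  volumes:
    - name: shared-data        # ephemeral volume shared by both containers
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data

Both containers can read and write /data, and because they sit in the same pod they also share the same IP address and port space.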
Typically you won’t create a pod directly - instead you’ll create a higher level object like a Deployment or StatefulSet that includes a pod specification (see below).
A Deployment is an abstraction over pods. It gives you extra functionality and control on top of the pod: you can specify how many instances of a pod you want to run across nodes, and you can define a rolling update strategy (for example, roll one pod at a time and wait 30 seconds in between). This lets you control your deployments based on your requirements so you can achieve zero downtime as you bring up new processes and retire old ones.
Deployments offer the following functionality:
- Declarative scaling: you state how many replicas should exist, and Kubernetes maintains that count.
- Rolling updates with a configurable strategy, so new pods replace old ones gradually.
- Rollbacks to a previous revision if an update misbehaves.
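Below is a rough sketch of what such a Deployment manifest might look like; the nginx image, labels, and rolling update settings are our own illustrative choices:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2                  # desired number of pod instances
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # roll one pod at a time
      maxSurge: 1
  template:                    # the pod specification managed by the Deployment
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx         # illustrative image
          ports:
            - containerPort: 80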
Here you can see the Deployment is called "webapp" and declares that two replicas should exist.
In this post, you'll learn how to create a pod in Kubernetes using the nginx image, view the YAML that describes the pod, and then delete the pod you've created. We'll be using Minikube, a tool that lets you run a single-node Kubernetes cluster on your local machine.
For more help getting started with Kubernetes, read our series intended for engineers new to Kubernetes and GKE. It provides a basic overview of Kubernetes, covers architecture basics and definitions, and includes a quick start for building a Kubernetes cluster and deploying your first multi-tier webapp.
To begin, you need to launch a Kubernetes cluster (with Minikube locally, or in GKE). Once you're in your cluster environment, make sure you're connected to the cluster by executing kubectl get nodes on the command line; you should see the cluster's nodes listed in the terminal. If that worked, you're ready to create and run a pod.
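If you're starting from scratch with Minikube, the commands look roughly like this (the node name, role, age, and version in the sample output are illustrative and will differ on your machine):

# start a local single-node cluster
minikube start

# verify kubectl can reach the cluster
kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.26.1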
To create a pod using the nginx image, run the command kubectl run nginx --image=nginx --restart=Never. This will create a pod named nginx running the nginx image from Docker Hub. By setting the flag --restart=Never, we tell Kubernetes to create a single pod rather than a Deployment.
Once you hit enter, the pod will be created. You should see pod/nginx created appear in the terminal.
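If you prefer the declarative approach, a roughly equivalent manifest (saved as, say, nginx-pod.yaml, a filename we're just assuming here) can be applied with kubectl apply -f nginx-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  restartPolicy: Never         # mirrors the --restart=Never flag
  containers:
    - name: nginx
      image: nginx             # pulled from Docker Hub by default
      ports:
        - containerPort: 80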
You can now run the command kubectl get pods to see the status of your pod. To view the entire configuration of the pod, run kubectl describe pod nginx in your terminal.
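The get pods output should look something like this (the age and restart count shown are just an example):

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30s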
The terminal will now display the pod's full description, starting with the name nginx, its namespace, the Minikube node it's scheduled on, its start time, and its current status. You will also see in-depth information about the nginx container, including the container ID and where the image lives. (To see the raw YAML instead, run kubectl get pod nginx -o yaml.)
If you scroll all the way to the bottom of the output, you'll see the events that have occurred in the pod. In the case of this tutorial, you'll see that the pod was assigned to the Minikube node, the nginx image was pulled successfully, and the container was created and started.
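The events section looks roughly like this (timestamps and exact wording vary by Kubernetes version):

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  45s   default-scheduler  Successfully assigned default/nginx to minikube
  Normal  Pulling    44s   kubelet            Pulling image "nginx"
  Normal  Pulled     40s   kubelet            Successfully pulled image "nginx"
  Normal  Created    40s   kubelet            Created container nginx
  Normal  Started    39s   kubelet            Started container nginx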
Deleting the pod is simple. To delete the pod you created, run kubectl delete pod nginx. Be sure to confirm the name of the pod you want to delete before pressing Enter. If the deletion succeeds, pod "nginx" deleted will appear in the terminal.
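For example:

kubectl delete pod nginx
pod "nginx" deleted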
Pods are a vital unit for understanding the Kubernetes object model, as they represent the processes within an application. In most cases, pods serve as an indirect way to manage containers within Kubernetes. In more complex use cases, pods may encompass multiple containers that need to share resources, serving as the central location for container management.
For more Kubernetes best practices, get this guide. It walks you through many issues you’ll face and how you can configure Kubernetes to avoid mistakes.
One big area of concern for Kubernetes is a lack of visibility and consistent policy enforcement across multiple clusters and dev teams. As you begin your Kubernetes journey, you should consider Kubernetes guardrails: how will you get your team to use Kubernetes safely? Doing so early will ensure you don't introduce configuration drift when there are no established internal standards for Kube configurations. As you experiment, keep some Kubernetes security considerations in mind as well.
Just as you should consider guardrails, you should also consider Kubernetes best practices. By learning best practices as you learn Kubernetes, you’ll be well positioned to fully evaluate the platform and scale it.
Polaris, an open source project, runs a variety of Kubernetes best practice checks to ensure that pods and controllers are configured properly. Using it, you can evaluate your Kubernetes configurations and avoid problems down the road.
Another question that often arises when starting with Kubernetes is how to size your applications. Goldilocks, another open source project, lets you right-size your applications and get them "just right" by identifying a starting point for resource requests and limits.
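For context, requests and limits are set per container in the pod spec. Here is a minimal illustrative snippet; the names and numbers below are placeholders, not recommendations:

apiVersion: v1
kind: Pod
metadata:
  name: webapp-sized          # illustrative name
spec:
  containers:
    - name: webapp
      image: nginx
      resources:
        requests:
          cpu: 100m           # placeholder values; tools like Goldilocks
          memory: 128Mi       # help you find appropriate numbers
        limits:
          cpu: 250m
          memory: 256Mi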