Kubernetes organizes applications into pods, the basic building blocks of any workload. In our How-to-Kube series, we began by covering the pod basics. Like Services, Volumes, and Namespaces, a pod is a fundamental Kubernetes object. A pod is a set of one or more containers scheduled onto the same physical or virtual machine and managed as a single unit. For instance, a pod might host both an application server and a helper container responsible for collecting logs or metrics. When you write a pod's declaration, you can define any number of containers to run inside it.
Containers within a pod share the same network namespace (they can reach one another over localhost), and each pod gets its own unique IP address. Containers in a pod can also share filesystem volumes. Similar to Docker's --volumes-from flag, Kubernetes lets multiple containers inside a pod mount the same storage. You can share ephemeral or copy-on-write style storage from within the pod. Note that ephemeral volumes are deleted when the pod terminates, while persistent volumes keep data beyond the pod's lifecycle. To use persistent storage, create a PersistentVolumeClaim (PVC). PVCs let you request storage for pods without tying them to a specific storage provider, making your workloads more portable.
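For example, here is a minimal sketch of a pod whose two containers share an ephemeral emptyDir volume (the pod name, container names, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5; tail -f /data/out.log"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}    # deleted with the pod; use a PersistentVolumeClaim for durable data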
Pods are rarely created directly outside of simple tests or debugging; almost all production workloads use Deployments, StatefulSets, or DaemonSets for management and resilience (see below). A directly created pod is ephemeral: no controller manages it, so it won't be rescheduled or replaced if it fails or its node goes down. Deployments automatically replace failed pods, roll out configuration changes, and scale replicas.
Most pods run a single container; use multiple containers in one pod only when they must be closely coupled, sharing networking and storage tightly. Sidecars run alongside the main container to provide supporting features, ambassadors mediate external communication, and init containers prepare the environment before the app starts.
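As an illustration, here is a sketch of a pod with an init container that waits for a hypothetical database Service named db-service before the main container starts:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db       # must complete before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup db-service; do echo waiting for db-service; sleep 2; done"]
  containers:
  - name: app
    image: nginx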
A Deployment is an abstraction that adds functionality and control on top of pods: it lets you declare how many instances of a pod should run across your nodes and define a rolling update strategy (for example, roll out one pod at a time and wait 30 seconds in between). This lets you tailor deployments to your requirements and achieve zero downtime as you bring up new instances and retire old ones.
Deployments offer the following functionality:
- Declarative rollouts of new versions, with rollback when something goes wrong
- Scaling the number of replicas up or down
- Self-healing: failed or deleted pods are automatically replaced
- Configurable rolling update strategies for zero-downtime releases
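For instance, here is a minimal Deployment sketch that runs three nginx replicas and rolls out changes one pod at a time (minReadySeconds approximates the "wait between pods" pacing mentioned above; the values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  minReadySeconds: 30          # a new pod must stay ready for 30s before the rollout continues
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # replace at most one pod at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27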
In this post, you'll learn how to create a pod in Kubernetes using the nginx image, view the details that describe the pod, and then delete the pod that you created. We'll be using the Minikube tool, which enables you to run a single-node Kubernetes cluster on your local machine.
To begin, launch a Kubernetes cluster; in this tutorial we'll use Minikube, so run minikube start. Once the cluster is up, make sure you're connected to it by executing kubectl get nodes in the command line to see the cluster's nodes in the terminal. If that worked, you're ready to create and run a pod.
To create a pod using the nginx image, run the command kubectl run nginx --image=nginx --restart=Never. This creates a pod named nginx running the nginx image from Docker Hub. The --restart=Never flag tells Kubernetes to create a single pod rather than a Deployment.
Once you hit enter, the pod will be created. You should see the message pod/nginx created displayed in the terminal. (While the above command is great for learning, in real-world projects pods are usually defined in YAML files for version control and repeatability.)
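For reference, the declarative equivalent of the command above looks like this; you could save it as nginx-pod.yaml and create it with kubectl apply -f nginx-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  restartPolicy: Never    # equivalent to the --restart=Never flag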
You can now run the command kubectl get pods to see the status of your pod. To view the pod's full details, run kubectl describe pod nginx in your terminal (to see the raw YAML, use kubectl get pod nginx -o yaml).
The terminal will display a detailed description of the pod, starting with its name (nginx), namespace, the Minikube node it is scheduled on, start time, and current status. You'll also see in-depth information about the nginx container, including the container ID and where the image lives. Pay particular attention to the pod's status, start time, container state, and the recent events at the bottom; these are the first places to look when troubleshooting and monitoring.
If you scroll all the way to the bottom of the output, you can see the events that have occurred in the pod. In the case of this tutorial, you'll see that the pod was scheduled onto the Minikube node, the nginx image was pulled successfully, and the container was created and started.
Deleting the pod is simple. To delete the pod you created, just run kubectl delete pod nginx. Be sure to confirm the name of the pod you want to delete before pressing Enter. If the deletion succeeded, the message pod "nginx" deleted will display in the terminal.
Pods are an important unit for understanding the Kubernetes object model, as they represent the running processes of an application. In most cases, pods serve as an indirect way to manage containers within Kubernetes. In more complex use cases, a pod may encompass multiple containers that need to share resources, serving as the central unit of container management.
One big area of concern for Kubernetes teams is a lack of visibility and inconsistent policy enforcement across multiple clusters and dev teams. Pod-level security settings are a good place to start: a pod's securityContext defines security settings such as whether containers can run as root, or whether privilege escalation is allowed.
Example (YAML):

spec:
  containers:
  - name: main
    image: nginx:latest
    securityContext:
      allowPrivilegeEscalation: false
Namespaces and Role-Based Access Control (RBAC) are industry-standard tools for organizing apps and tightening security: organize resources into namespaces and restrict access with RBAC policies to improve security and manageability, especially in shared clusters. For production security, also implement NetworkPolicies to control which pods and namespaces can communicate.
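As a starting point, here is a common default-deny sketch that blocks all ingress traffic to every pod in a namespace until you explicitly allow it (enforcement requires a CNI plugin that supports NetworkPolicies, such as Calico or Cilium):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress           # with no ingress rules listed, all ingress is denied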
Always set imagePullPolicy: Always in pod specs to ensure your pods get the latest images. Regularly scan your container images for security vulnerabilities using tools such as Trivy or Clair, and keep base images patched.
Example:

containers:
- name: main
  image: nginx:latest
  imagePullPolicy: Always
Add liveness and readiness probes to each pod. Liveness probes check if your app is stuck and should be restarted; readiness probes signal when your pod is ready to receive traffic. This helps Kubernetes maintain a healthy application.
Example (a minimal sketch using nginx; the probe endpoints, ports, and timings are illustrative and should match your application):
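containers:
- name: main
  image: nginx:latest
  livenessProbe:            # restart the container if this check keeps failing
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
  readinessProbe:           # hold traffic from the pod until this check passes
    httpGet:
      path: /
      port: 80
    periodSeconds: 5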
Additional resource: How to Identify Missing Readiness Probes in Kubernetes
You can also use node affinity, anti-affinity, and topology spread constraints to distribute pods for resilience and high availability.
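For example, here is a sketch of a constraint in a pod template that spreads replicas labeled app: nginx evenly across nodes (the maxSkew value and topology key are illustrative):

spec:
  topologySpreadConstraints:
  - maxSkew: 1                            # tolerate at most one pod of imbalance
    topologyKey: kubernetes.io/hostname   # spread across individual nodes
    whenUnsatisfiable: ScheduleAnyway     # prefer, but do not require, an even spread
    labelSelector:
      matchLabels:
        app: nginx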
Consider using policy-as-code tools like Kyverno, Open Policy Agent (OPA), and Polaris to enforce organizational policies and best practices automatically in your CI/CD pipelines. These tools can run a variety of Kubernetes best practice checks to ensure that pods and controllers are configured properly. Using these open source projects, you can spot common mistakes before deploying, such as missing resource limits.
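To make this concrete, here is a minimal sketch of a Kyverno ClusterPolicy that audits pods missing CPU and memory limits (adapted from Kyverno's common require-limits pattern; verify the schema against your Kyverno version):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Audit    # report violations without blocking; use Enforce to block
  rules:
  - name: check-container-limits
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "CPU and memory limits are required for all containers."
      pattern:
        spec:
          containers:
          - resources:
              limits:
                cpu: "?*"       # any non-empty value
                memory: "?*"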
Another challenge that often comes up for those just starting with Kubernetes is how to size your applications. Set resource requests and limits for each pod to ensure predictable performance and reliability, especially in multi-tenant clusters. Goldilocks is an open source project that recommends initial CPU and memory values, helping you avoid both under-provisioning and unnecessary cost.
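For example, a sketch of per-container requests and limits (the values are illustrative; tools like Goldilocks can suggest values based on observed usage):

containers:
- name: main
  image: nginx:latest
  resources:
    requests:            # what the scheduler reserves for the pod
      cpu: 100m
      memory: 128Mi
    limits:              # the ceiling the container may consume
      cpu: 250m
      memory: 256Mi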
For dynamic workloads, enable the Horizontal Pod Autoscaler (HPA) to automatically adjust the number of pods based on actual CPU or memory demand. But avoid running the HPA and the Vertical Pod Autoscaler (VPA) in enforce mode on the same workload to prevent conflicts; instead, use VPA in recommendation mode to inform requests, and let HPA handle scaling.
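For instance, a sketch of an HPA that keeps a Deployment named nginx between 2 and 10 replicas, targeting 70% average CPU utilization (the target and bounds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70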
Store configuration values in ConfigMaps, and any credentials or keys in Kubernetes Secrets, instead of hardcoding them into your pod specifications. Where possible, mount Secrets as files rather than exposing them through environment variables.
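For example, a sketch that reads a non-sensitive value from a hypothetical ConfigMap named app-config and mounts a hypothetical Secret named app-api-key as a read-only file:

containers:
- name: main
  image: nginx:latest
  env:
  - name: APP_MODE                 # non-sensitive configuration via ConfigMap
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: mode
  volumeMounts:
  - name: api-key
    mountPath: /etc/secrets        # secret exposed as a file, not an env var
    readOnly: true
volumes:
- name: api-key
  secret:
    secretName: app-api-key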
For production settings, use Pod Disruption Budgets to limit how many pods can be unavailable at once during voluntary disruptions such as maintenance and upgrades, ensuring high availability.
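For example, a sketch of a PodDisruptionBudget that keeps at least two pods labeled app: nginx running during voluntary disruptions:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2        # never voluntarily evict below two running pods
  selector:
    matchLabels:
      app: nginx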
Label all resources consistently to make service discovery and monitoring easier. Regularly audit your clusters for unusual activity and keep all base images and dependencies up to date to minimize risk. Set up logging and monitoring (using tools like Prometheus and Grafana) to ensure ongoing security, performance, and cost control.
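A common starting point is the set of recommended labels from the Kubernetes documentation (the values here are illustrative):

metadata:
  labels:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: nginx-prod
    app.kubernetes.io/version: "1.27"
    app.kubernetes.io/managed-by: kubectl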
As you begin your Kubernetes journey, consider putting guardrails in place to help ensure your team is using Kubernetes safely. Establishing internal standards for Kubernetes configurations early helps you avoid configuration drift.
Just as guardrails are important for ensuring consistent K8s deployments that align with your internal policies, Kubernetes best practices are also essential for optimal performance. By following best practices as you learn Kubernetes, you’ll be well positioned to evaluate the technology and scale it effectively.
Originally posted November 13, 2019; Updated September 9, 2025