In my last blog post, I focused on why Docker is beneficial and offers significant value across a broad range of use cases. In part II of this series, I’ll focus on why using it is much less difficult than you might think.
Below, I’ll share some basic commands and examples to show you how easy Docker is to use. To get you started, you’ll need Docker running on your system. Go here and follow the instructions that match your system. When you’re ready, we’ll begin with the most common Docker function, docker run.
Docker runs processes in isolated containers (processes that run on a local or remote host). When you execute the command docker run, the container process that runs is isolated – it has its own file system, its own networking and its own isolated process tree separate from the host.
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Here’s a very simple example that runs a redis image:
docker run redis
If you don’t already have the redis image on your host, it will pull the image for you. Watch as each of the file system layers is downloaded in parallel then a container running redis starts. Ctrl-C and run the command again, and you’ll see how fast subsequent runs are now that the image has already been downloaded.
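You can also fetch an image ahead of time without starting a container at all:
$ docker pull redis
Running docker images afterwards (covered below) should show redis sitting in your local cache, ready to go.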
Here’s another, more complex example showing how to run a Python 3 (python:3) image:
docker run -p 8080:8080 python:3 python3 -m http.server 8080
Let’s break down the command above:
- docker run: run a Docker container.
- -p 8080:8080: publish port 8080 inside the container on port 8080 of the host.
- python:3: use the python image with the tag 3. In this case, the image:tag combination indicates that this image runs Python 3.
- python3 -m http.server 8080: run this command inside the container.
The command runs the python3 binary, loads the http.server module and creates a web server available on port 8080. The -p option from above, which maps the port, should allow you to access http://localhost:8080 and see the directory listing inside the running container.
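A quick way to verify the mapping from the command line, assuming you have curl installed, is:
$ curl http://localhost:8080/
You should get back the HTML directory listing that http.server generates.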
If you weren’t already running Python 3 in your development environment, how many steps would it take to get the equivalent of that one command?
There are plenty more options and commands, but these will get you started using Docker quickly:
- docker images: show which images are on your host
- docker ps: show which containers are currently running
- docker ps -a: show all containers that are running or have been run and not yet removed
- docker stop <container id>: stop a running container
- docker rm <container id>: remove a container
- docker rmi <image>: remove an image
To start a container in detached mode, use the -d flag. By design, containers started in detached mode exit when the root process used to run the container exits.
docker run -d -p 6379:6379 redis
will start a redis container, run it in the background and make it available at localhost:6379.
Use docker ps to find the container ID, docker stop to stop it and docker rm to remove it.
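As a convenience, docker ps -q prints only the container IDs, which makes it easy to combine these commands. For example, to stop every running container on your host (use with care):
$ docker stop $(docker ps -q)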
To publish or expose a port, use the -p flag.
- docker run python:3 python3 -m http.server 8080 → http://localhost:8080/ is not reachable
- docker run -p 8080:8080 python:3 python3 -m http.server 8080 → http://localhost:8080/ works
In the first example above, you’re not opening the port in the container (you can’t access the Python server you’re running), whereas the second example demonstrates network address translation (you can access the server).
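Note that the -p flag takes the form HOST_PORT:CONTAINER_PORT, so the two sides don’t have to match. As a sketch, this publishes the container’s port 8080 on port 5000 of the host, so the listing would appear at http://localhost:5000 instead:
$ docker run -p 5000:8080 python:3 python3 -m http.server 8080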
To bind mount a volume, use the -v flag.
Here’s an example showing the use of these three flags:
docker run -d -v /docker/redis/data/:/data/ -p 6379:6379 redis
In this case, the path /data inside the container is mapped to /docker/redis/data on your host computer. This is one way to save state between docker runs.
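Here’s a rough sketch of that persistence in action, assuming the official redis image (which writes its dump file to /data) and using redis-test as a placeholder container name:
$ docker run -d --name redis-test -v /docker/redis/data/:/data/ -p 6379:6379 redis
$ docker exec -it redis-test redis-cli set greeting hello
$ docker exec -it redis-test redis-cli save
$ docker rm -f redis-test
$ docker run -d --name redis-test -v /docker/redis/data/:/data/ -p 6379:6379 redis
$ docker exec -it redis-test redis-cli get greeting
The final get should return "hello", because the dump file survived on the host even though the original container was destroyed.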
The docker logs command fetches the logs of a container.
The command takes this form:
docker logs [OPTIONS] CONTAINER
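A couple of options worth knowing: -f follows the log output (like tail -f), and --tail limits how many lines come back. For example:
$ docker logs -f --tail 100 <container id>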
The -it flags enable you to interact with and open a shell on a running container: -i keeps the container’s standard input open, while -t allocates a pseudo-terminal, giving you an interactive, attached session.
Here’s an example showing how to run an interactive shell in an Ubuntu image:
docker run -it ubuntu bash
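You can also open a shell inside a container that is already running, rather than starting a new one, with docker exec:
$ docker exec -it <container id> bash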
A Docker image is a collection of file system layers and amounts to a fixed starting point. When you run an image, it creates a container. The set of changes made from the initial image becomes an additional file system layer, and the container may be committed, at which point it becomes a new image. A repository is a place where your images are stored.
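For example, you could commit the changes made in an interactive session into a new image (my-ubuntu:v1 here is just a placeholder name):
$ docker commit <container id> my-ubuntu:v1
That said, for anything you want to reproduce reliably, a Dockerfile (covered below) is the better route.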
The docker pull and docker push commands enable you to interact with a registry. Use docker pull to download a container image from a registry into a local cache so you can start containers based on the image. Use docker push to share your images to the Docker Hub registry or to a self-hosted registry.
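A typical push sequence looks roughly like this, assuming a Docker Hub account and using <username> and my-image as placeholders:
$ docker login
$ docker tag my-image <username>/my-image:latest
$ docker push <username>/my-image:latest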
An image is usually defined with a Dockerfile, which contains all of the commands you could call on the command line to assemble an image. Every image starts from a base image, such as ubuntu (a base Ubuntu image).
docker build . -t <whatever you want to name the image>
Docker builds images by reading the instructions in the Dockerfile. This part is important: each instruction in the Dockerfile adds a file system layer. Each time you build the image, Docker reuses the cached layers from previous builds. Any line in the Dockerfile that changes invalidates the cache for ALL instructions (and layers) below it.
What does that mean?
Let’s assume you have a simple project. You want to put a simple_script.sh into a Docker image, and simple_script.sh relies on telnet. You need to copy the script into the image and install telnet.
If our Dockerfile is:
FROM ubuntu
COPY simple_script.sh /
RUN apt-get update
RUN apt-get install -y telnet
CMD bash simple_script.sh
Then every time you update simple_script.sh, the build process will run apt-get update and apt-get install again. That will get old really fast, so instead you can reorganize your Dockerfile to be:
FROM ubuntu
RUN apt-get update && apt-get install -y telnet
COPY simple_script.sh /
CMD bash simple_script.sh
You have achieved two things. First, by using the && operator to combine the apt-get update and apt-get install commands, you have reduced the number of file system layers by one. Second, when you change simple_script.sh and run docker build now, the apt layers will come from the cache, and subsequent builds will only need to copy in the new simple_script.sh. This speeds up development, which makes everyone a bit happier.
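Conversely, if you ever need to force the apt layers to be rebuilt (say, to pick up newer package versions), you can bypass the cache entirely:
$ docker build --no-cache -t <whatever you want to name the image> .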
You might still be hesitant to use Docker. It’s understandable. To get over any perceived hurdles, start small, and start simple. Begin with simple processes, and use those processes in confined ways – and remember that you want to be running one process per container.
Docker is a really great tool. I hope you’ll experiment with the commands above and decide to take advantage of all of the benefits Docker offers.