Kevin Ashcraft

Linux & Radio Tutorials

Docker Basics

Docker is a container system that provides operating-system-level virtualization. Containers have many advantages over virtual machines, including reproducible environments so development and production always match, the ability to quickly scale a service by adding more containers, and the ability to store the server configuration alongside your codebase.

In this tutorial we'll walk through the basics of using Docker, from running an ad-hoc container to defining a container with a Dockerfile, using docker-compose to set up a development environment, and finally preparing for production.

Install Docker

Docker provides official installation packages for Mac, Windows, and Ubuntu.
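On Ubuntu, for example, one quick route is Docker's convenience install script (a sketch; the script at get.docker.com is Docker's official installer, and the usermod step is optional):

```shell
# Download and run Docker's convenience install script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optionally allow your user to run docker without sudo
# (takes effect after logging out and back in).
sudo usermod -aG docker $USER

# Verify the installation.
docker --version
```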

Ad-hoc Containers

docker run -it ubuntu bash

You can run any published image (anything on Docker Hub) with this command. The image will be downloaded (in layers), then launched as a container and used to execute the command.

This command runs bash on the latest version of Ubuntu, giving you access to the command line. The -it flags tell Docker to allocate an interactive TTY, displaying the command's output and allowing you to type additional input.
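A few other commonly used docker run flags are worth knowing; here is a short sketch (the image names are just examples):

```shell
# Run detached (-d) with a name, publishing container port 80 on host port 8080.
docker run -d --name web -p 8080:80 nginx

# --rm removes the container automatically when the command exits,
# so one-off shells don't accumulate as stopped containers.
docker run --rm -it ubuntu bash

# Stop and remove the named container when finished.
docker stop web
docker rm web
```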

Postgres Example

docker run -e POSTGRES_PASSWORD=mysecret postgres
docker ps
docker exec -it $container_id psql -U postgres

Another example would be running a Postgres container. This one is a bit more complicated because the Postgres server must initialize before it is usable. The first command downloads and starts the postgres container (recent postgres images require the POSTGRES_PASSWORD environment variable to be set), the next (to be run in another window) will show all of the currently running containers (the one you just started should be at the top), and finally the third command will launch psql in the container. exec is like run, but used on already-running containers.
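Giving the container a name avoids having to look up its ID with docker ps; a sketch of the same workflow (the name mydb and password mysecret are illustrative):

```shell
# Start Postgres detached (-d) with a fixed name.
docker run -d --name mydb -e POSTGRES_PASSWORD=mysecret postgres

# Connect with psql inside the running container, by name.
docker exec -it mydb psql -U postgres

# Clean up when finished.
docker stop mydb
docker rm mydb
```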



Dockerfile

FROM node:latest

RUN mkdir /app
WORKDIR /app

COPY package*.json ./
COPY ./webpack.config.js ./

RUN npm install

EXPOSE 8080 3001

CMD ["npm","run","dev"]

A Dockerfile is used to define an image. FROM states which image to start building from, RUN executes one-time commands during the build, WORKDIR sets the working directory, COPY copies files and directories from your host into the image, EXPOSE documents which ports the container listens on (publishing them to the host still happens at run time), and CMD states the default command the container is built to execute.

This file builds a new image from the latest official Node.js image: it copies package.json and webpack.config.js, installs all of the listed dependencies, and sets npm run dev as the default command. With the ports published to the host (for example with -p 8080:8080 -p 3001:3001), you could access the running services via localhost:8080 and localhost:3001.

We don't copy the entire node_modules directory because we want to keep the image as light as possible; giving it the instructions to install the dependencies is better than copying them all. Otherwise, when you start moving this thing from development to testing or production, it would have to carry all of the modules as well, instead of just the instructions on how to install them.
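Along the same lines, a .dockerignore file keeps unneeded files out of the build context entirely, so COPY can never pick them up by accident. A minimal sketch (the entries are illustrative for a Node.js project like this one):

```
# .dockerignore
node_modules
dist
.git
*.log
```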


Docker-Compose

Docker-Compose provides a convenient way to launch a multi-container environment by defining which containers to run, along with their volumes, environment variables, and port forwarding.


version: '3'
services:
  app:
    build: .
    ports:
      - 8080:8080
      - 40801:3001
    depends_on:
      - postgres
    environment:
      - NODE_ENV
    volumes:
      - ../src:/app/src
      - ../.babelrc:/app/.babelrc
      - ../package.json:/app/package.json
      - ../node_modules:/app/node_modules
  postgres:
    build:
      context: .
      dockerfile: Dockerfile-postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:3.2.11
volumes:
  postgres_data:


The docker-compose.yml file defines the containers and their interactions. In this file we're launching three containers: a Node.js app for development, a Postgres database, and a Redis database.

Most of this is self-explanatory: the depends_on property states that the app container depends on the postgres one and should be started after it, and the ports property shows which host ports to bind to which container ports. One part worth highlighting is volumes, since we didn't cover it before. Volumes are files or directories on the host machine that are mounted inside the container. In this example, the node_modules and src directories are mounted, along with .babelrc and package.json. That way, if any file is changed on the host, it's also changed inside the container (install a new module, for example, and it'll be available to the app).

Environment variables listed without values pass through variables that already exist on the host. They could also be defined explicitly, such as NODE_ENV=development.
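With the file in place, a few Compose commands cover the whole development workflow (a sketch; the service name app matches the app container described above):

```shell
# Build images if needed and start all services in the background.
docker-compose up -d

# Follow the logs of just the app service.
docker-compose logs -f app

# Stop and remove the containers (named volumes are kept).
docker-compose down
```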

Production Considerations

docker build -t myusername/myapp:latest .
docker push myusername/myapp:latest

Generally speaking, the first step in using containers for production is publishing your images to a container registry. These two commands build an image and then publish it on Docker Hub (where image names must be prefixed with your username). From there, they'd be retrieved and run by your production server. This is why we want to keep everything as lightweight as possible.

One nice way to go about this is to use multi-stage builds to create an image with only the essential files.


FROM node:9.4 as builder

RUN mkdir /app/src -p
WORKDIR /app

COPY package*.json ./
COPY ./webpack.config.js ./
COPY .babelrc ./
COPY ./src ./src/

RUN npm install
RUN npm run site:build

FROM nginx:1.13

COPY --from=builder /app/dist/site /usr/share/nginx/html


In this example we first build a stage with all of the source files and package.json, and then install all of the node modules. This produces a large amount of data, most of which is not needed in production and is only there to build the static distribution files.

The second FROM command tells Docker to start a new stage, and that the first will eventually be discarded from the final image. In the second stage, the --from=builder flag says the files are being copied from the initial stage (which the as builder clause named; you could name it anything, btw).

This will result in a single nginx image containing only the static site files, making it nice and light, and easy to transport.
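Building and running the final image looks like this (a sketch; the tag mysite is illustrative):

```shell
# Build the multi-stage image; only the final nginx stage ends up in the tag.
docker build -t mysite:latest .

# Serve the static site on http://localhost:8080.
docker run -d --rm -p 8080:80 mysite:latest
```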