Tips for writing Dockerfiles - basic setup, caching, the root user, service organization, networking and resource limits
Docker has revolutionized how developers build and deploy applications, and is one of the most popular container engines. Docker supports different programming languages and runs natively on Linux. As opposed to virtual machines, which mimic an entire operating system, Docker containers run on Linux namespaces, removing the overhead that virtual machines have — boot time, for example. A virtual machine needs time to boot, while Docker is a service that starts on the host operating system.
As opposed to the official best practices on writing Dockerfiles, the goal here is to share tips on how to approach writing them; this is not a beginner's guide on how Docker works or how to use it.
For context, this post is loosely connected to the playlist I've built to keep track of topics related to Docker.
NOTE: if you are interested in the Node.js Docker image (how it is built), have a look at the official git repository.
Docker images and services
In this section, the focus is to go through tips on building a Docker image and on Docker Compose services.
1. Make the basic setup with the standard image
Starting to build Docker images requires some previous knowledge of the Docker platform — at least enough to understand a few instructions such as RUN, COPY and FROM. Depending on how those instructions are used, the generated image can be big or small.
Docker Hub offers ready-to-use images without the need to build one, and they are commonly classified as (from the biggest to the smallest): the standard image, slim and alpine.
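One quick way to see the size difference, assuming Docker is installed locally, is to pull the three tags and compare them (the exact sizes vary by release):

```shell
docker pull node:12
docker pull node:12-slim
docker pull node:12-alpine
# the SIZE column shows the difference across the three tags
docker images node
```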
Dockerfile with standard image:
```dockerfile
FROM node:12 # <--- standard image, and also the biggest compared to the next two
WORKDIR /var/www/app
COPY . .
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```
Dockerfile with slim image:
```dockerfile
FROM node:12-slim # <--- slim image, smaller, but also has fewer dependencies installed by default
WORKDIR /var/www/app
COPY . .
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```
Dockerfile with alpine image:
```dockerfile
FROM node:12-alpine # <--- alpine image, the smallest, but with drawbacks such as dependencies the code needs being missing
WORKDIR /var/www/app
COPY . .
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```
Usually the setup using the standard image is faster, as it comes with almost everything needed to run the program. The alpine version, on the other hand, has almost nothing: just the core, nothing else. In many cases that will prevent the program from running, depending on its dependencies.
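When the alpine image is missing something the code needs, it can usually be installed with alpine's package manager, apk. A minimal sketch, assuming a native npm module that needs a build toolchain (the exact packages depend on the project):

```dockerfile
FROM node:12-alpine
# alpine ships almost nothing; install the build tools native modules often need
RUN apk add --no-cache python3 make g++
WORKDIR /var/www/app
COPY . .
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```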
2. Caching

Caching in Docker is used to avoid refetching dependencies over and over again even when they don't change. The Docker layer system can be used to take advantage of the build cache.
```dockerfile
FROM node:12-slim
WORKDIR /var/www/app
COPY package*.json ./ # <--- caches the npm dependencies
RUN npm install
COPY . .
RUN npm run build
EXPOSE 5000
CMD npm run serve
```
3. The root user
The root user is the default user the container runs as, which makes it easier to set up permissions to access files or to set up configuration. This is usually a bad practice: the container should not run as root, due to security issues.
The process of setting up the Docker image can be a bit harder this way, though, given that a user with fewer permissions can make the image setup more difficult.
If no user is given (as in the last three Dockerfiles shown in the previous section), Docker will build and run the image as root, which of course has security issues. To fix this, Docker offers the USER instruction:
```dockerfile
FROM node:12-slim
USER node # <--- specify the user for docker to build and run the image
WORKDIR /var/www/app
COPY package*.json ./
COPY . .
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```
This tip relies on the same approach as the previous one: first make it work with the root user, then work out the permissions for a specific user.
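A sketch of that second step, using COPY's --chown option (available since Docker 17.09) so the copied files are owned by the unprivileged node user from the start and npm can write to the work directory:

```dockerfile
FROM node:12-slim
WORKDIR /var/www/app
# files land already owned by the node user, not root
COPY --chown=node:node package*.json ./
COPY --chown=node:node . .
USER node
RUN npm install && npm run build
EXPOSE 5000
CMD npm run serve
```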
4. Separate concerns, avoid building different services into one image
As a best practice, the recommended way to build containers is: one container equals one process. This avoids problems when it comes to managing them, as the official best practices describe in the section "Decouple applications".
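A sketch of what that separation looks like in a compose file — the application and its database each get their own service rather than being baked into one image (service names and images here are hypothetical):

```yaml
version: '2'
services:
  app:
    build: .             # the application process only
    ports:
      - 5000:5000
  database:
    image: postgres:12   # state lives in its own container (or an external provider)
```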
5. Set up the Dockerfile first, then move to docker compose (if needed)
Usually, Docker Compose is the next step when building services with Docker, though developers tend to skip the first step, which is to understand how the image works on its own, before moving on to Compose.
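Once the Dockerfile builds and runs on its own, wrapping it in compose is a small step; a minimal sketch for the Dockerfiles from the previous sections:

```yaml
version: '2'
services:
  app:
    build: .     # reuses the existing Dockerfile in the current directory
    ports:
      - 5000:5000
```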
6. Networking and sharing hosts
Docker creates its own network interface, through which containers communicate with each other. However, there are scenarios in which this behavior is not desired. A database is one example: as the database holds state (the data), an external provider (RDS, MongoDB Atlas etc.) is often used instead.
By default the container can't access ports on the host, which in turn will block the database connection. There are two possible options for that: the --network flag or the --add-host flag.
```shell
# using --network flag
docker run --rm --network=host nginx
```
There is a side effect to using the network flag: it ignores the network created automatically by Docker, and the container runs as if it were on the host. This impacts the port the application runs on and therefore prevents blue-green deployments, which require two instances of the same app running, each on its own port.
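With the default bridge network, by contrast, two instances of the same image (my-app here is hypothetical) can each be mapped to a different host port, which is exactly what a blue-green setup needs:

```shell
# both containers listen on 5000 internally; only the host port differs
docker run -d --name app-blue  -p 5001:5000 my-app
docker run -d --name app-green -p 5002:5000 my-app
```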
add-host gives the flexibility needed to overcome the port issue. The flag maps a specific host name to an IP; the following example maps localhost to the host machine's IP.
```shell
# using --add-host flag
docker run --rm --add-host=localhost:192.168.1.102 nginx
```
Docker compose

This section focuses on Docker Compose only.
1. Different docker compose files for different environments
Docker Compose files are used to define the container orchestration, and sometimes different behavior is needed based on the environment the application is in. For example, in development mode the database container might be needed, but in production that might not be the case.
For that, it is possible to create a different Docker Compose file for each environment — for example, one for development and one for staging and production. It is also possible to share configuration between them, so it might make sense to keep a docker-compose.yml as the base for the environment-specific files.
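A sketch of that layout, with a hypothetical file name: an override file that only adds what development needs, combined with the base file at run time via docker-compose -f docker-compose.yml -f docker-compose.dev.yml up:

```yaml
# docker-compose.dev.yml (hypothetical name) — development-only additions,
# layered on top of the shared docker-compose.yml base
version: '2'
services:
  database:
    image: postgres:12
```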
2. CPU and memory limit
Sometimes we want to limit the resources of a given container. This fits scenarios where we need to measure the application's performance, or environments with constrained resources.
docker-compose offers two properties for that: mem_limit and cpus. mem_limit is a hard limit, meaning the container will not be able to consume more memory even if it is available. cpus, on the other hand, limits how much of the machine's CPU cores the container can use.
```yaml
version: '2'
services:
  webserver:
    image: nginx
    mem_limit: 1024m # memory (RAM) limit
    cpus: 0.8        # cpu limit
    ports:
      - 80:80
      - 443:443
  testable:
    build:
      context: ./webapp
      args:
        user: 'node'
```
- B. Burns, J. Beda, and K. Hightower, Kubernetes: up and running: dive into the future of infrastructure. O’Reilly Media, 2019.
- Docker, “Best practices for writing Dockerfiles” [Online]. Available at: https://docs.docker.com/develop/develop-images/dockerfile_best-practices. [Accessed: 23-May-2020]
- M. Marabesi, “Docker,” 2020 [Online]. Available at: https://youtube.com/playlist?list=PLN7yVcqYnDlX7EzsleJ1jD_D0q-cYIP7r. [Accessed: Dec-2020]
- P. Srivastav, “A comprehensive tutorial on getting started with Docker!,” 2021 [Online]. Available at: https://docker-curriculum.com. [Accessed: 01-Apr-2021]
- M. Marabesi, “Docker 101 - Getting started,” 2017 [Online]. Available at: https://www.slideshare.net/marabesi/docker-101-getting-started. [Accessed: 03-May-2021]
- Nodejs, “docker-node,” 2021 [Online]. Available at: https://github.com/nodejs/docker-node. [Accessed: 11-Apr-2021]
- Amigoscode and TechWorld with Nana, “Docker and Kubernetes - Full Course for Beginners,” 2020 [Online]. Available at: https://youtu.be/Wf2eSG3owoA?t=6332. [Accessed: 03-May-2021]
- L. Tal and Y. Goldberg, “10 best practices to containerize Node.js web applications with Docker,” 2021 [Online]. Available at: https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker. [Accessed: 13-Jan-2021]
- B. Fisher, “Docker and Node.js Best Practices from Bret Fisher at DockerCon,” 2019 [Online]. Available at: http://www.youtube.com/watch?v=Zgx0o8QjJk4. [Accessed: 23-May-2020]
- B. Fisher, “Top 4 Tactics To Keep Node.js Rockin’ in Docker,” 2021 [Online]. Available at: https://www.docker.com/blog/keep-nodejs-rockin-in-docker. [Accessed: 30-Jul-2019]
- Marcus, “How to do Zero Downtime Deployments of Docker Containers,” 2019 [Online]. Available at: https://coderbook.com/@marcus/how-to-do-zero-downtime-deployments-of-docker-containers. [Accessed: 11-Jun-2020]