Modern Docker Best Practices - Multi-Stage Builds, Optimization, Security, and Deployment
The content here is under the Attribution 4.0 International (CC BY 4.0) license
Docker has revolutionized how developers build and deploy applications, becoming the industry standard for containerization (Burns et al., 2019). Unlike virtual machines that require full operating systems, Docker containers leverage Linux namespaces and cgroups, providing lightweight isolation with minimal overhead and instant startup times.
This guide covers modern Docker best practices for building efficient, secure, and optimized container images. Beyond the fundamentals, we’ll explore multi-stage builds, image layer optimization, security scanning, and production-ready Dockerfile patterns that reduce image size and improve build performance.
For context, this post is connected to the playlist (Marabesi, 2020) I’ve built to keep track of Docker-related topics.
NOTE: if you are starting with Docker, have a look at the curriculum (Srivastav, 2021) first to get used to the basics. Personally, I also have a presentation on Docker basics available (Marabesi, 2017).
NOTE 2: if you are interested in how the Node.js Docker image is built, have a look at the official git repository (Nodejs, 2021).
Docker images and services
Docker images are the base for containers, which in turn are the running instances of an image. An image is built from a Dockerfile, a set of instructions that the Docker engine uses to assemble it.
1. Make the basic setup with a standard image
Building Docker images requires some prior knowledge of the Docker platform and an understanding of at least a few instructions such as RUN, COPY and FROM. Depending on how those instructions are used, the generated image can be large or small. Docker Hub offers ready-to-use images without the need to build one, classified (from the biggest to the smallest) as: standard, slim and alpine. The following snippet depicts a Dockerfile with the standard image:
# <--- standard image, the largest compared to the next two variants
FROM node:23
WORKDIR /var/www/app
COPY . .
RUN npm ci && npm run build
EXPOSE 5000
CMD npm run serve
Dockerfile with slim image:
# <--- slim image, smaller, but with fewer dependencies installed by default
FROM node:23-slim
WORKDIR /var/www/app
COPY . .
RUN npm ci && npm run build
EXPOSE 5000
CMD npm run serve
Dockerfile with alpine image:
# <--- alpine image, the smallest, but it may be missing dependencies your code needs
FROM node:23-alpine
WORKDIR /var/www/app
COPY . .
RUN npm ci && npm run build
EXPOSE 5000
CMD npm run serve
Usually, the setup using the standard image is the fastest, as it comes with almost everything needed to run the program. The alpine version, on the other hand, ships with just the core and nothing else, which in many cases will prevent the program from running, depending on its dependencies.
npm ci
In this example, the npm ci command is used to install the dependencies. Here are the reasons for that:
- It is optimized for continuous integration: it installs exactly what package-lock.json specifies and skips checks and steps that npm install performs, making installs faster and reproducible.
- It automatically removes any existing node_modules directory before installing, guaranteeing a clean install.
The recommended approach is to start with the standard image, then move to the slim image and finally to the alpine image. This way, it is easier to debug issues that might arise from missing dependencies or configurations.
Modern Practice: For production builds, consider using distroless images (from Google’s distroless project) or scratch images as the final stage in multi-stage builds. These eliminate OS overhead entirely, keeping only your application and its runtime dependencies.
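A minimal sketch of what that final stage could look like with a distroless base, assuming a builder stage like the one shown in the next section and that the gcr.io/distroless/nodejs22-debian12 tag matches your Node.js version (distroless Node.js images already use node as their entrypoint, so CMD only lists the script):
# Final (runtime) stage on a distroless base -- assumes a "builder" stage as in the next section
FROM gcr.io/distroless/nodejs22-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# The entrypoint is already node, so only the script path is needed
CMD ["dist/index.js"]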
2. Multi-Stage Builds (Modern Approach)
Multi-stage builds are a crucial modern Docker pattern that dramatically reduces final image size by separating the build environment from the runtime environment. Build dependencies (compilers, build tools, dev packages) are excluded from the final image.
# Stage 1: Build
FROM node:23 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Runtime
FROM node:23-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
USER node
EXPOSE 5000
CMD ["node", "dist/index.js"]
Benefits:
- Dramatically smaller images: Build tools are excluded from final image
- Faster deployments: Smaller images = shorter push/pull cycles
- Better security: Fewer tools = smaller attack surface
- Cleaner separation: Build concerns (dependencies, compilers) are isolated
3. Layer Caching Optimization
Docker caches layers to speed up rebuilds. Order your Dockerfile instructions strategically to maximize cache hits—dependencies change less frequently than application code.
FROM node:23-alpine
WORKDIR /app
# Cache this layer if package files haven't changed
COPY package*.json ./
RUN npm ci
# This layer invalidates more frequently (application code changes)
COPY . .
RUN npm run build
EXPOSE 5000
CMD npm run serve
Strategy: Copy dependencies first, application code last. This way, dependency installation (usually slow) is cached unless package files change.
4. The root user
The root user is the default user the container runs as, which makes it easier to set up permissions to access files or to apply configurations. This is usually a bad practice (Tal & Goldberg, 2021): the container should not run as root due to security issues (Fisher, 2019; Fisher, 2021). However, setting up the image with a less-privileged user can be a bit harder, since that user may lack the permissions the build steps need and make the image setup fail.
If no user is given (as in the last three Dockerfiles shown in the previous section), Docker will build and run the image as root, which of course has security issues. To fix this, Docker offers the USER instruction.
FROM node:23-slim
# <--- specify the user for docker to build and run the image
USER node
WORKDIR /var/www/app
# --chown keeps the copied files owned by (and writable for) the non-root user
COPY --chown=node:node package*.json ./
COPY --chown=node:node . .
RUN npm ci && npm run build
EXPOSE 5000
CMD npm run serve
This tip relies on the same approach as the previous one: first make it work with the root user, then tighten permissions around a specific user. This is a common approach when building Docker images, as it is easier to get the image working first and then improve it. However, current images are not always built to run as root, so omitting the user is feasible for some of them. Make sure to check it before building the image.
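One quick way to check which user an image defaults to (the node:23-slim tag here is just an example):
docker image inspect node:23-slim --format '{{.Config.User}}'
# an empty result means the image defaults to root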
5. Separate concerns, avoid building different services into one image
As a best practice, the recommended way to build containers is: one container equals one process. This avoids problems when it comes to managing them, as (DockerHub, n.d.) describes in the “Decouple applications” section. Each container should run a single service, such as a web server, a database, or a message broker. This approach allows for better scalability, maintainability, and flexibility in deploying and managing applications, including for local development.
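As a rough sketch of this decoupling (the service names, image tag and credentials below are placeholders), each concern becomes its own service instead of being baked into a single image:
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data: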
6. Setup dockerfile first, then docker-compose
Usually, docker-compose is the next step when building services with Docker, but developers tend to skip the first step, which is to understand how the image itself works, before moving on to compose.
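A minimal sketch of that first step, assuming the image is tagged myapp and exposes port 5000 as in the earlier examples; build and run it on its own before wiring it into compose:
# Build and smoke-test the image by itself first
docker build -t myapp .
docker run --rm -p 5000:5000 myapp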
7. Networking and sharing hosts
Docker creates its own network interface through which containers communicate with each other. There are scenarios in which this behavior is not desired, for example a database. As the database holds state (the data), it usually lives outside the container, either with an external provider (RDS, MongoDB Atlas, etc.) or directly on the host. By default, the container cannot reach services listening on the host’s localhost, which in turn blocks the database connection. There are two possible options for that: using a network flag or using the add-host flag.
# using --network flag
docker run --rm --network=host nginx
There is a side effect of using the network flag: it bypasses the network that Docker creates automatically, and the container runs as if it were on the host. This impacts the port the application runs on and therefore prevents blue-green deployments (Marcus, 2019), which require two instances of the same app running, each on its own port.
host network mode
The host networking is not available on Docker Desktop for Mac/Windows.
The add-host flag gives the flexibility needed to overcome the port issue. The flag maps a hostname to an IP address; the following example maps localhost inside the container to the host machine’s IP.
# using --add-host flag
docker run --rm --add-host=localhost:192.168.1.102 nginx
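On recent Docker versions (20.10+), the special host-gateway value avoids hard-coding the host’s IP address; the example below maps the conventional host.docker.internal name to the host:
# using the host-gateway special value instead of a hard-coded IP
docker run --rm --add-host=host.docker.internal:host-gateway nginx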
8. .dockerignore — Reduce Build Context
The .dockerignore file excludes unnecessary files and directories from the Docker build context. A large build context can slow builds significantly and potentially leak sensitive information.
# Dependencies and build artifacts
node_modules/
*.npm
.npm/
# Version control
.git
.gitignore
.gitattributes
# Environment
.env
.env.local
.env.*.local
# IDEs and editors
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
# CI/CD
.github/
.gitlab-ci.yml
.circleci/
# Documentation and tests (often not needed in the final image)
README.md
Dockerfile
test/
specs/
# Only exclude src/ if the application is built outside the container,
# otherwise the build stage will have nothing to compile
# src/
# OS
Thumbs.db
Impact: Excluding node_modules alone can reduce build context by 90%+ for Node.js projects, dramatically speeding up builds.
9. Security: Scanning and Hardening
Security is critical for container images. Vulnerabilities in base images or dependencies can compromise your deployments.
Tools & Practices:
- Docker Scout: Scans images for known vulnerabilities and provides remediation recommendations
- Trivy: Fast, comprehensive vulnerability scanner (open-source)
- Snyk: Continuous vulnerability monitoring and automated patching
Hardening Practices:
# Use specific base image tags (never 'latest')
FROM node:23.3.0-alpine3.19
# Don't run as root (Alpine uses adduser; official node images also ship a 'node' user with uid 1000)
RUN adduser -D -u 1001 appuser
USER appuser
# Scan regularly during CI/CD pipelines
Pin base image tags to specific versions (not latest) to ensure reproducible, auditable builds.
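A sketch of how these scanners are typically invoked (the myapp:1.2.3 image name is a placeholder):
# Scan a built image for known CVEs with Trivy
trivy image myapp:1.2.3
# Or with Docker Scout
docker scout cves myapp:1.2.3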
10. Build Performance and CI/CD Integration
For faster builds in CI/CD pipelines:
- Use build caching: Mount cache volumes in Docker buildx
- Leverage buildkit: Modern builder with better layer caching
- Parallel stages: Multi-stage builds allow parallel execution
# Enable buildkit
export DOCKER_BUILDKIT=1
# Build with a local layer cache
docker buildx build \
  --cache-from=type=local,src=.buildcache \
  --cache-to=type=local,dest=.buildcache \
  -t myapp:latest .
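The cache-mount bullet above refers to BuildKit’s RUN --mount=type=cache; a minimal sketch that keeps npm’s download cache (/root/.npm by default) across builds without storing it in an image layer:
# syntax=docker/dockerfile:1
FROM node:23-alpine
WORKDIR /app
COPY package*.json ./
# npm's download cache persists between builds but is not baked into the image
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build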
Docker compose
This section focuses on docker-compose only.
1. Different docker compose files for different environments
Docker Compose files describe how the containers are orchestrated together, and sometimes different behavior is needed depending on the environment the application runs in. For example, in development mode the database container might be needed, but in production it might not be the case.
For that, it is possible to create different docker-compose files for each environment. For example, for development, staging and production we might have:
- development: docker-compose-dev.yml
- staging and production: docker-compose-deploy.yml
It is also possible to share configuration between the files; it might make sense to create a base docker-compose.yml that the two files previously mentioned build on.
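With a shared base file, compose can layer the environment-specific file on top of it; later -f files extend and override the earlier ones:
# development: base file plus the dev overrides
docker compose -f docker-compose.yml -f docker-compose-dev.yml up -d
# staging/production: base file plus the deploy overrides
docker compose -f docker-compose.yml -f docker-compose-deploy.yml up -d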
2. CPU and memory limit
Sometimes we want to limit the resources of a given container. This fits scenarios where we need to measure the application’s performance, or environments with constrained resources.
docker-compose offers two properties for that: mem_limit and cpus.
mem_limit is a hard limit, meaning the container cannot consume more memory than that even if more is available (processes are killed if they exceed it). cpus limits how much CPU time the container may use, expressed as a fraction of the host’s cores (0.8 allows at most 80% of one core).
version: '2.2'
services:
  webserver:
    image: nginx
    mem_limit: 1024m # memory (RAM) limit
    cpus: 0.8 # CPU limit
    ports:
      - 80:80
      - 443:443
  testable:
    build:
      context: ./webapp
    user: 'node'
Limiting the resources of a container can help to avoid resource contention and ensure that the container does not consume more resources than it needs. This can be especially important in production environments where multiple containers are running on the same host. In addition, limiting the resources of a container can help to prevent performance issues and ensure that the container runs smoothly even under low resources. I personally use this approach to check how the application behaves under low resources.
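To observe the effect of these limits at runtime, docker stats shows live CPU and memory usage (and the configured memory limit) per running container:
# Live CPU %, MEM USAGE / LIMIT, and network I/O per container
docker stats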
Related subjects
- Studying the Practices of Deploying Machine Learning Projects on Docker
- Docker Caching — Introduction to Docker Layers
- The Magic of Docker Desktop is Now Available on Linux (official documentation available at Install Docker Desktop on Ubuntu)
References
- Burns, B., Beda, J., & Hightower, K. (2019). Kubernetes: up and running: dive into the future of infrastructure. O’Reilly Media.
- Marabesi, M. (2020). Docker. https://youtube.com/playlist?list=PLN7yVcqYnDlX7EzsleJ1jD_D0q-cYIP7r
- Srivastav, P. (2021). A comprehensive tutorial on getting started with Docker! https://docker-curriculum.com
- Marabesi, M. (2017). Docker 101 - Getting started. https://www.slideshare.net/marabesi/docker-101-getting-started
- Nodejs. (2021). docker-node. https://github.com/nodejs/docker-node
- Tal, L., & Goldberg, Y. (2021). 10 best practices to containerize Node.js web applications with Docker. https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker
- Fisher, B. (2019). Docker and Node.js Best Practices from Bret Fisher at DockerCon. http://www.youtube.com/watch?v=Zgx0o8QjJk4
- Fisher, B. (2021). Top 4 Tactics To Keep Node.js Rockin’ in Docker. https://www.docker.com/blog/keep-nodejs-rockin-in-docker
- DockerHub. (n.d.). Best practices for writing Dockerfiles. Retrieved May 23, 2020, from https://docs.docker.com/develop/develop-images/dockerfile_best-practices
- Marcus. (2019). How to do Zero Downtime Deployments of Docker Containers. https://coderbook.com/@marcus/how-to-do-zero-downtime-deployments-of-docker-containers
Changelog
- Feb 15, 2026 - Added multi-stage builds