MakeSpace Association
12/03/2026
Imagine you're developing a killer web app that has three main components: a React frontend, a Python API, and a PostgreSQL database. If you wanted to work on this project, you'd have to install Node, Python, and PostgreSQL.
How do you make sure you have the same versions as your team? Or your CI/CD system? Or what's used in production?
Docker is a containerization platform that packages an application and its dependencies into a standardized unit called a container. Containers ensure consistency across different environments by providing an isolated runtime environment.
Unlike virtual machines, containers share the host system's kernel, making them lightweight and efficient.
Key characteristics:
- Containers do not require a separate operating system (OS) instance
- They share the underlying OS kernel with the host system
- This eliminates the need for additional OS resources and overhead, making containers lightweight and fast to start
Docker relies on Linux kernel features:
- Namespaces: Provide isolated environments for processes so that containers do not interfere with each other. Each container has its own network, filesystem, process tree, and other isolated components.
- cgroups (control groups): Allow fine-grained control over resource allocation and management, ensuring that each container gets the right amount of CPU, memory, and other resources.
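As an illustration of cgroups in action, docker run exposes flags that set per-container resource limits. A quick sketch (the image, container name, and limit values below are arbitrary examples):

```shell
# Cap the container at half a CPU core and 256 MB of RAM (enforced via cgroups)
docker run -d --cpus="0.5" --memory="256m" --name limited-nginx nginx:alpine

# Inspect the limits Docker recorded for the container
docker inspect limited-nginx --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
```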
Requirements:
- Enable Virtual Machine Platform
- Enable Windows Subsystem for Linux (WSL)
- Open Terminal (PowerShell) and run
docker --version
Recommendation: Install a Linux distribution from the Microsoft Store and integrate it into Docker Desktop
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
# Download docker-desktop-amd64.deb from the Docker website, then install it:
sudo apt update
sudo apt install gnome-terminal
sudo apt install ./docker-desktop-amd64.deb
Recommended VS Code Extension: Docker (Microsoft)
If you want to try Docker without installing it locally, you can use Play with Docker (https://labs.play-with-docker.com), a free browser-based playground.
Containers are:
- Self-contained: Each container has everything it needs to function with no reliance on any pre-installed dependencies on the host machine
- Isolated: They have minimal influence on the host and other containers, increasing security
- Independent: Each container is independently managed. Deleting one container won't affect any others
- Portable: Containers can run anywhere - the same container works on your development machine, in a data center, or in the cloud
If you're new to container images, think of them as a standardized package that contains everything needed to run an application, including its files, configuration, and dependencies. These packages can be distributed and shared with others.
For a PostgreSQL image, that image will package the database binaries, config files, and other dependencies. For a Python web app, it'll include the Python runtime, your app code, and all of its dependencies.
- Images are immutable: Once an image is created, it can't be modified. You can only make a new image or add changes on top of it.
- Images are composed of layers: Each layer represents a set of filesystem changes that add, remove, or modify files.
These principles let you extend or add to existing images. For example, if you're building a Python app, you can start from the Python image and add additional layers to install your app's dependencies and add your code.
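For example, a minimal Dockerfile for a hypothetical Python app might stack layers like this (file names and the base image tag are illustrative):

```dockerfile
# Base layer: official Python image
FROM python:3.12-slim
WORKDIR /app
# Dependency layer: cached as long as requirements.txt is unchanged
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application layer: your code, rebuilt on every change
COPY . .
CMD ["python", "app.py"]
```

Copying requirements.txt before the rest of the code is a common trick: the slow dependency-install layer is reused from cache when only your application code changes.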
To share your Docker images, you need a place to store them. This is where registries come in.
Docker Hub is the default and go-to registry for images. It provides both a place for you to store your own images and to find images from others to either run or use as the bases for your own images.
Download an image from Docker Hub:
docker pull <image_name>
List available images:
docker image ls
Remove an image:
docker image rm <image_name>
Remove all unused images:
docker image prune
View image digests:
docker images --digests
View image history:
docker history <image_name>
A Dockerfile is a text-based script that provides the instruction set on how to build an image. It contains all the commands needed to create a specific image.
Example for a simple password generator web app
Download the password generator web app from GitHub:
https://github.com/makespacemadrid/docker-workshop/tree/main/pass-generator
docker build -t pwd-gen .
docker run -d -p 8080:80 pwd-gen
The Dockerfile code:
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker run -d -p 8080:80 docker/welcome-to-docker
Your container is now accessible on port 8080. Visit http://localhost:8080
docker run -p HostPort:ContainerPort --name container_name image_name
- docker run: creates and starts a container
- -p HostPort:ContainerPort: creates a port mapping between the host and the container
- --name container_name: assigns a specific name to the container
- image_name: specifies the image to be used
When you run this command:
- It searches for the image locally; if not found, it downloads it from Docker Hub
- It creates and starts the container:
- Assigns a virtual IP within the Docker network
- Opens and maps the specified ports
To stop a container running in the foreground, use Ctrl + C.
Run in background:
docker run -d -p 8080:80 httpd
List running containers:
docker ps
# or
docker container ls
Stop a container:
docker stop <id/name>
Docker sends a SIGTERM first; if the container doesn't respond, it sends a SIGKILL.
Start a stopped container:
docker start <id>
Delete a container:
docker rm <id>
# or force delete
docker rm -f <id>
View container logs:
docker logs <id>
Show processes inside container:
docker top <id>
Inspect container details:
docker inspect <container_id>
View resource usage:
docker stats
Show disk space usage:
docker system df
Stop all running containers:
docker stop $(docker ps -q)
Remove all stopped containers:
docker container prune
Launch a container with an interactive shell:
docker run -it -p 8080:80 httpd bash
Once inside, you can execute commands directly within the container. Verify the environment:
cat /etc/os-release
Note: The main process is launched when you execute docker run. However, if an additional command is provided (such as bash), it replaces the default command, so the container will not exhibit its default behavior.
Run Ubuntu container:
docker run -it ubuntu
Inside the container, you can install packages:
apt-get update
apt-get install wget
apt-get install curl
Tip: Alpine Linux is a good option for containers because it is very lightweight and secure.
Docker Desktop allows you to:
- View container information, including logs and files
- Access the shell via the Exec tab
- Perform actions like pause, resume, start, or stop
One best practice for containers is that each container should do one thing and do it well.
You can use multiple docker run commands to start multiple containers. But you'll need to manage networks, all the flags needed to connect containers to those networks, and more. Cleanup is also more complicated.
With Docker Compose, you can define all of your containers and their configurations in a single YAML file. If you include this file in your code repository, anyone that clones your repository can get up and running with a single command.
It's important to understand that Compose is a declarative tool: you define the desired state of your application, and Compose makes it happen. If you make a change, run docker compose up again and Compose will reconcile the changes intelligently.
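For instance, the three-component app from the introduction could be sketched in a docker-compose.yml like this (service names, build paths, ports, and credentials are illustrative, not a definitive setup):

```yaml
services:
  frontend:
    build: ./frontend          # React app
    ports:
      - "3000:3000"
  api:
    build: ./api               # Python API
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://postgres:example@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

With this file in the repository, a teammate only needs `docker compose up -d` to bring up all three services.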
docker compose up -d --build
docker compose down
Note: By default, volumes aren't automatically removed when you tear down a Compose stack. If you want to remove the volumes, add the --volumes flag:
docker compose down --volumes
- Network: A communication path that allows containers to talk to each other or to the outside world.
- Subnet: A segmented piece of a larger network. In Docker, it defines the range of IP addresses available for containers within that network (e.g., 172.18.0.0/16).
- Gateway: The "doorway" or router for a network. It is the IP address that allows traffic to exit the current subnet to reach other networks or the internet.
- Bridge (default): If no network is specified, Docker uses the default bridge. Docker sets up firewall rules (iptables) and NAT (Network Address Translation) to provide internet access via the host.
Commands:
- docker network ls: List all available networks.
- docker network inspect <network_name>: Displays network configuration in JSON format, including IPAM (IP Address Management), subnet, gateway, and connected containers.
Network drivers:
- Bridge: The standard driver for standalone containers.
- Host: Removes network isolation between the container and the Docker host. The container uses the host's networking namespace. Note: Only fully functional on Linux; on macOS/Windows, it shares the IP of the internal VM.
- None: No network connectivity. Useful for isolated processes.
To test connectivity, install tools inside a container:
apt-get update
apt-get install iproute2 iputils-ping wget -y
ip addr # Check local IP
ping <other_container_IP>
wget <other_container_IP:PORT>
Create:
docker network create <network_name>
Manage:
- docker run -d --name <name> --network <network_name> <image>
- docker network connect <network> <container>
- docker network disconnect <network> <container>
Key Concept: When containers share a custom network, they can use their container names as hostnames instead of IP addresses. Docker's embedded DNS server handles the resolution.
Important: This feature only works on user-defined networks. It does not work on the default bridge network. Using user-defined networks is a best practice to achieve proper network isolation and easy service discovery.
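A quick sketch of this name-based discovery (the network and container names here are arbitrary):

```shell
# Create a user-defined network
docker network create demo-net

# Start a web server on it
docker run -d --name web --network demo-net nginx:alpine

# A second container on the same network can reach it by name,
# resolved by Docker's embedded DNS server
docker run --rm --network demo-net alpine ping -c 1 web
```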
When you delete a container, its internal files are lost. However, there are two main ways to keep your data safe:
- Volumes: Data is stored outside the container's lifecycle. Even if the container is removed, the data persists.
- Bind Mounts: This method links a specific directory on your host machine directly to a directory inside the container.
Working with Volumes
In a Dockerfile, you can use the VOLUME instruction to declare a persistent point. For example, in a MariaDB image:
VOLUME /var/lib/mysql
When the container starts, Docker creates a volume and mounts it at /var/lib/mysql.
By default, Docker assigns a random, cryptic name to the volume. To make it identifiable, assign a custom name when running the container (e.g., -v my-db-data:/var/lib/mysql).
Useful Commands:
- docker volume ls: Lists all available volumes.
- docker volume inspect <name>: Shows detailed information about a specific volume.
Verifying the Data:
If you want to see the files inside, use docker exec -it <container_id> bash to enter the container. Once inside, navigate to the directory (cd /var/lib/mysql) and run ls to list all the persistent files.
Bind Mounts
To use a host folder inside a container, specify the path with the -v flag. For an Apache container:
docker run -d -p 80:80 -v C:\hostDir:/usr/local/apache2/htdocs/ httpd
This command maps your local directory C:\hostDir to the container's document root. Any changes you make to the files on your host will immediately reflect inside the container. This is particularly useful for development, as you don't need to rebuild the image to see your code changes.
🚀 Challenge: The "Live Code" PHP Server
PHP requires a server-side engine to process the code, whereas HTML is simply rendered by the browser. By using a bind mount, we can edit our code on the host machine and see the results instantly inside the container.
You will use Docker to spin up a professional web server in seconds and link your local folder to it using a bind mount.
<?php
echo "<h1>Hi! You are in MakeSpace :) </h1>";
echo "<p>Current server time: " . date('H:i:s') . "</p>";
echo "<p>Try changing this text in your editor and refreshing the page.</p>";
?>
docker run -d -p 8080:80 --name php-server -v C:\hostDir:/var/www/html php:apache
Docmost is an enterprise-ready, open-source collaborative wiki and documentation platform. It is designed for seamless real-time collaboration, allowing multiple users to edit the same page simultaneously without conflicts or data loss.
First, create a dedicated directory for your Docmost installation and download the official configuration file.
mkdir docmost
cd docmost
# Download the official docker-compose file
curl -O https://raw.githubusercontent.com/docmost/docmost/main/docker-compose.yml
Open the file with your preferred text editor (e.g., Nano or Vi):
nano docker-compose.yml
# OR
vi docker-compose.yml
You must replace the default environment variables to ensure the application starts correctly and securely.
- APP_URL: Replace this with your domain or IP (e.g., http://localhost:3000 for local testing or https://wiki.example.com for production).
- APP_SECRET: This must be a long, random string (minimum 32 characters).
- Database credentials: Replace STRONG_DB_PASSWORD in both POSTGRES_PASSWORD and the DATABASE_URL string with a secure password of your choice.
Caution
If you keep the default values, the application will fail to start for security reasons.
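An edited environment section might look roughly like this (the values are placeholders for illustration; the exact keys and connection string come from the downloaded docker-compose.yml):

```yaml
environment:
  APP_URL: "http://localhost:3000"
  APP_SECRET: "<paste a 32+ character random string here>"
  DATABASE_URL: "postgresql://postgres:<your-db-password>@db:5432/docmost?schema=public"
```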
[!TIP] You can quickly generate a secure 32-character secret using OpenSSL:
openssl rand -hex 32
Run the following command in the directory containing your docker-compose.yml file:
docker compose up -d
The -d flag runs the containers in detached mode, meaning they will keep running in the background.
[!IMPORTANT] Troubleshooting: If you see the error
Cannot connect to the Docker daemon, ensure that the Docker service is running on your machine:
Linux:
sudo systemctl start docker
Docker Desktop: Open the Docker Desktop application.
Docker will pull the necessary images, create a default network (docmost_default), and set up the persistent volumes. To check the status of your containers, run:
docker compose ps
Once all containers show as running or healthy, you can access Docmost through your browser at: http://localhost:3000