Containers: Docker (Part 1)
We will be using a virtual machine in the faculty's cloud.
When creating a virtual machine in the Launch Instance window:
- Name your VM using the following convention: cc_lab_<username>, where <username> is your institutional account.
- Select Boot from image in the Instance Boot Source section.
- Select CC 2024-2025 in the Image Name section.
- Select the m1.large flavor.
In the virtual machine:
- Download the laboratory archive from here into the /home/student directory. Use wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-containers-part-1.zip to download the archive.
- Extract the archive using unzip lab-containers-part-1.zip.
- Run chmod u+x ./lab-container-part-1.sh && ./lab-container-part-1.sh to create the lab-container-part-1 directory with the necessary files.
- Navigate to the corresponding directory, where you will solve all the tasks, using cd ./lab-container-part-1.
$ # Download the laboratory archive
$ wget https://repository.grid.pub.ro/cs/cc/laboratoare/lab-containers-part-1.zip
$ # Extract the archive
$ unzip lab-containers-part-1.zip
$ # Change permissions and execute the setup script
$ chmod u+x ./lab-container-part-1.sh
$ # Run the setup script
$ ./lab-container-part-1.sh
$ # Navigate to the working directory
$ cd ./lab-container-part-1
Note that in this laboratory, you have all the required software on the working infrastructure. If you want to install Docker on your local system, follow this tutorial.
Needs / use-cases
- easy service install
- isolated test environments
- local replicas of production environments
Objectives
- container management (start, stop, build)
- service management
- container configuration and generation
What are containers?
Containers are an environment in which we can run applications isolated from the host system.
On Linux-based operating systems, a container runs as an application which has access to the resources of the host machine, but which cannot interact with processes outside its isolated environment.
The advantage of using a container for running applications is that it can be easily turned on and off and modified. Thus, we can install applications in a container, configure them and run them without affecting the other system components.
A real use case for containers is setting up a server that depends on fixed, old versions of certain libraries. We don't want to run that server directly on our physical system, as conflicts with other applications may occur. By containerizing the server, we can have one version of the library installed on the physical machine and another version installed in the container, without any conflict between them.
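As a quick illustration of this isolation, the same interpreter can be run at two different versions, side by side, without touching the host installation. The image tags below are illustrative examples from DockerHub, not part of the lab files:
docker run --rm python:3.8 python3 --version
docker run --rm python:3.12 python3 --version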
Containers versus virtual machines?
Both containers and virtual machines allow us to run applications in an isolated environment. However, there are fundamental differences between the two mechanisms. A container runs directly on top of the host operating system's kernel, while a virtual machine runs its own kernel and then runs the applications on top of that. This added abstraction layer adds overhead to running the desired applications, and the overhead slows down the applications.
Another plus for running containers is the ability to build and pack them iteratively. We can easily download a container image from a public repository, modify it, and upload it to a public repository without uploading the entire image. We can do that because changes to a container image are made iteratively, saving only the differences between the original and the modified version of the image.
There are also cases where we want to run applications inside a virtual machine. For example, if we want to run an application compiled for an operating system other than Linux, we cannot do this in a container, because containers can only run applications compiled for the host operating system. Virtual machines, on the other hand, can also run operating systems other than the host operating system.
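A quick way to see that a container shares the host's kernel (while a virtual machine boots its own) is to compare the kernel release reported on the host and inside a container; debian:bookworm is used here only as an example image:
uname -r                                    # kernel release on the host
docker run --rm debian:bookworm uname -r    # the same kernel release, reported from inside a container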
Before we start
If you do not have a DockerHub account, create one before starting this laboratory. It will be used to push or pull Docker images.
Inspect Docker Instances
Let's start with inspecting the Docker installation and instances on the virtual machine.
Follow the steps below:
- See the available docker commands:
docker help
- Check the docker version:
docker version
- Find out the currently running Docker containers:
docker ps
You will see the Docker containers that are currently running, namely an Nginx container:
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS          PORTS                                     NAMES
fbfe1d0b5870   nginx:latest   "/docker-entrypoint.…"   6 hours ago   Up 38 seconds   0.0.0.0:8080->80/tcp, [::]:8080->80/tcp   cdl-nginx
- Find out all containers, including those that are stopped:
docker ps -a
A new container, named ctf-piece_of_pie, is now visible:
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS                        PORTS                                     NAMES
16a526c7c94c   ctf-piece_of_pie   "/usr/local/bin/run.…"   24 minutes ago   Exited (137) 51 seconds ago                                             ctf-piece_of_pie
fbfe1d0b5870   nginx:latest       "/docker-entrypoint.…"   6 hours ago      Up 40 seconds                 0.0.0.0:8080->80/tcp, [::]:8080->80/tcp   cdl-nginx
- Find out port-related information about the cdl-nginx container that is running:
docker port cdl-nginx
You can see the port forwarding:
80/tcp -> 0.0.0.0:8080
80/tcp -> [::]:8080
You can check the current install by querying the server:
curl localhost:8080
You will see the default HTML page of Nginx.
No information is shown for containers that are not running:
docker port ctf-piece_of_pie
- Get detailed information about the Docker instances, either started or stopped (a field-extraction example follows this list):
docker inspect cdl-nginx
docker inspect ctf-piece_of_pie
- Find out the runtime logging information of the containers:
docker logs cdl-nginx
docker logs ctf-piece_of_pie
- Find out runtime statistics and resource consumption of the running Nginx container:
docker stats cdl-nginx
Close the screen by pressing Ctrl+c.
- Find out the internal processes of the running Nginx container:
docker top cdl-nginx
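Besides dumping the full JSON description, docker inspect can extract individual fields with the --format option. Two examples (the field paths follow the standard inspect output; the IP address is only populated for running containers on the default bridge network):
docker inspect --format '{{.State.Status}}' cdl-nginx
docker inspect --format '{{.NetworkSettings.IPAddress}}' cdl-nginx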
Exercise: Inspect Docker Instances
Repeat the steps above, at least 2-3 times.
Now, let's use the steps above on different containers.
Start two new containers, named cdl-caddy and cdl-debian-bash, by running the corresponding scripts:
./vanilla-caddy/run-caddy-container.sh
./debian-bash/run-debian-bash-container.sh
Inspect the two newly started containers using the commands above.
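For instance, the checks for the cdl-caddy container might look like this (the container names come from the scripts above; the exact output depends on your setup):
docker ps
docker port cdl-caddy
docker logs cdl-caddy
docker top cdl-caddy
docker stats cdl-caddy    # close with Ctrl+c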
Interact with Docker Instances
Let's now interact with Docker container instances: starting and stopping containers, copying files to / from containers, getting a shell inside containers, etc.
Follow the steps below.
Start Instances
Start the ctf-piece_of_pie instance:
docker start ctf-piece_of_pie
Now check it is started:
docker ps
You can see it appears as a started container.
Check the ports and the processes:
docker port ctf-piece_of_pie
docker top ctf-piece_of_pie
Connect locally to test the service:
nc localhost 31337
Stop Instances
Stop the cdl-nginx instance:
docker stop cdl-nginx
If you run docker ps, you can see it no longer appears as a started container.
Check to see the list of stopped containers:
docker ps -a
Remove Containers
A stopped container can be removed. Once this is done, the container is gone forever. It will have to be re-instantiated if needed, as we'll see in section "Images and Containers".
Remove the cdl-nginx container:
docker rm cdl-nginx
The container is now gone. You can use different commands to check that it is gone:
docker ps -a
docker inspect cdl-nginx
docker stats cdl-nginx
Connect to a Container
You can connect to a container by using docker exec.
Typically, you want to start a shell.
Start a shell on the ctf-piece_of_pie container by using:
docker exec -it ctf-piece_of_pie /bin/bash
More than that, you can run different commands inside the container:
docker exec -it ctf-piece_of_pie ls
docker exec -it ctf-piece_of_pie ls /proc
docker exec -it ctf-piece_of_pie cat /etc/shadow
docker exec -it ctf-piece_of_pie id
Copy Files To / From a Container
You can copy files or entire directories to or from a container.
For example, to copy the README.md file to the /root/ directory of the cdl-nginx container, use:
docker cp README.md cdl-nginx:/root/
Likewise, if we want to copy the index.html file from the container to the current directory, we use:
docker cp cdl-nginx:/usr/share/nginx/html/index.html .
There is a period (.) at the end of the command above. It is required; it points to the current directory.
You can see that the container doesn't need to be running.
Exercise: Interact with Docker instances
Make sure all four containers are started: cdl-nginx, ctf-piece_of_pie, cdl-caddy, cdl-debian-bash.
Start them if they are not started.
Copy files to and from containers (a loop-based approach is sketched after this exercise).
- Copy the README.md and install-docker.sh files from the current directory to the /usr/local/ directory in all available containers (listed via docker ps -a).
- Copy the ctf/ local directory to the /usr/local/ directory in all available containers (listed via docker ps -a).
- Create a directory for each available container:
mkdir container-cdl-nginx
mkdir container-ctf-piece_of_pie
mkdir container-cdl-caddy
mkdir container-cdl-debian-bash
Copy the /bin/bash binary from each available container to its respective directory.
Copy the /etc/os-release file from each available container to its respective directory. Check the contents to see which Linux distro was used to construct the filesystem.
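A possible way to script the copy tasks above is to loop over the container names reported by docker ps -a; this is only a sketch, following the destinations in the exercise statement:
for c in $(docker ps -a --format '{{.Names}}'); do
    docker cp README.md "$c":/usr/local/
    docker cp install-docker.sh "$c":/usr/local/
    docker cp ctf/ "$c":/usr/local/
    mkdir -p container-"$c"
    docker cp "$c":/bin/bash container-"$c"/
    docker cp "$c":/etc/os-release container-"$c"/
done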
Docker Images
Images are stored locally either by being pulled from a container registry such as DockerHub (see section "Getting Images") or by being built from a Dockerfile (see section "Dockerfile").
List the available Docker images by using:
docker image ls
You will get an output such as:
REPOSITORY TAG IMAGE ID CREATED SIZE
ctf-piece_of_pie latest 1f844c4f935b 9 hours ago 209MB
<none> <none> 99ba2c76892a 9 hours ago 216MB
<none> <none> e81d4254c928 13 hours ago 209MB
<none> <none> 2d74afaf7b34 13 hours ago 209MB
debian bookworm 617f2e89852e 2 weeks ago 117MB
nginx latest 3b25b682ea82 4 weeks ago 192MB
gcc 14.2 d0b5d902201b 3 months ago 1.42GB
The <none> entries store intermediate versions of an image.
You can also inspect an image, such as debian:bookworm:
docker image inspect debian:bookworm
Images and Containers
As stated above, containers are created from images.
Let's re-create the Nginx container, starting from the nginx:latest image:
docker create --rm --name cdl-nginx nginx:latest
Check that it was created by running:
docker ps -a
The container is currently stopped. In order to start the container, run:
docker start cdl-nginx
Check that it was started by running:
docker ps
docker logs cdl-nginx
docker inspect cdl-nginx
docker stats cdl-nginx
The create and start commands can be combined into a single command, docker run.
Create two more Nginx containers by running docker run (the -d option keeps them running in the background):
docker run -d --rm --name cdl2-nginx -p 8882:80 nginx:latest
docker run -d --rm --name cdl3-nginx -p 8883:80 nginx:latest
Check whether they are running:
docker ps
docker stats cdl2-nginx
docker stats cdl3-nginx
curl localhost:8882
curl localhost:8883
The --rm option will remove an Nginx instance once it is stopped.
Stop the instances:
docker stop cdl2-nginx
docker stop cdl3-nginx
Now the containers are gone forever (because of the --rm option):
docker ps -a
Exercise: Create more Nginx instances
Create more Nginx instances from available images:
- Use docker run to create 5 more Nginx containers from the nginx:latest image. Make sure you use different public ports. Use the --rm option of docker run. (A loop-based sketch follows this exercise.)
- Stop the containers you have just started.
- Check they are gone forever.
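One possible way to start the five instances is a small shell loop; the container names and host ports below are only an example, any unused ports work:
for i in $(seq 1 5); do
    docker run -d --rm --name cdl-extra-$i-nginx -p 888$i:80 nginx:latest
done
docker ps
curl localhost:8881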
Getting Images
Images are stored locally either by being pulled from a container registry such as DockerHub or by being built from a Dockerfile (see section "Dockerfile").
To search for an image you like, use the command below:
docker search database
To pull images locally, use:
docker pull <container-image-name-and-path-in-registry>
such as:
docker pull nginx:latest
docker pull gcc:14.2
Exercise: Download Docker images
Download and instantiate other images:
- Download images for the applications MongoDB and MariaDB. Use the names mongodb:latest and mariadb:latest.
- Create 5 container instances for MongoDB and 5 container instances for MariaDB. Use the --rm option for docker run. (A sketch follows this exercise.)
- Check that the container instances are running.
- After a while, stop the newly created instances.
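A possible sketch for this exercise is shown below. Note that the official DockerHub images are named mongo and mariadb, so you may pull those and tag them with the requested names; the MARIADB_ROOT_PASSWORD variable is required by the official MariaDB image, and the container names are illustrative:
docker pull mongo:latest
docker tag mongo:latest mongodb:latest
docker pull mariadb:latest
for i in $(seq 1 5); do
    docker run -d --rm --name cdl-mongo-$i mongodb:latest
    docker run -d --rm --name cdl-mariadb-$i -e MARIADB_ROOT_PASSWORD=somepassword mariadb:latest
done
docker ps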
Inspect the docker service
Docker runs as a service (docker.service) under Linux (dockerd is the Docker daemon). You can inspect its status by using systemctl status docker.
student@work:~$ systemctl status docker
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2025-02-18 18:38:17 EET; 6 days ago
TriggeredBy: docker.socket
Docs: https://docs.docker.com
Main PID: 7580 (dockerd)
Tasks: 21
Memory: 538.7M
CGroup: /system.slice/docker.service
├─ 7580 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
└─23702 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 3000 -con>
You can restart the service (usually when changing the Docker daemon configuration) by running systemctl restart docker.
docker system info (or docker info) shows general information about the Docker installation (version, plugins), data regarding containers (number of containers, number of images), the runtime solution, security options, and details about the current system (operating system, architecture, resources).
student@work:~$ docker system info
Client: Docker Engine - Community
Version: 27.1.2
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.1
Path: /usr/libexec/docker/cli-plugins/docker-compose
scan: Docker Scan (Docker Inc.)
Version: v0.23.0
Path: /usr/libexec/docker/cli-plugins/docker-scan
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 2
Server Version: 27.1.2
...
Security Options:
apparmor
seccomp
Profile: builtin
Kernel Version: 5.15.0-118-generic
Operating System: Ubuntu 20.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 5.748GiB
Using docker system df, you can see the total space used by containers, images, volumes etc., including the space that can be reclaimed (unused data).
student@work:~$ docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 2 1 965.7MB 396.8MB (41%)
Containers 1 1 3.869kB 0B (0%)
Local Volumes 10 3 632.6MB 347.5MB (54%)
Build Cache 16 0 0B 0B
To reclaim the space, you can use docker system prune. It's always a good idea to clean up your working space.
student@work:~$ docker system prune
To check the system-wide events, you can use docker system events. The command below limits the events to the ones that happened in the last hour (this helps with filtering and debugging).
student@work:~$ docker system events --since $(echo $(date +"%s") - 3600 | bc)
2025-02-25T10:31:38.911766282+02:00 container prune (reclaimed=0)
2025-02-25T10:31:38.913897032+02:00 network prune (reclaimed=0)
2025-02-25T10:31:38.914807392+02:00 image prune (reclaimed=0)
2025-02-25T10:31:38.986330104+02:00 builder prune (reclaimed=0)
...
Another method of inspecting the logs associated with the docker service is by using journalctl. Run it yourself and compare the results with the ones displayed by docker system events.
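For example, to limit the journal to the Docker service unit and roughly the same one-hour window as above:
journalctl -u docker.service --since "1 hour ago"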
Building a container
Most times, just running a container interactively and connecting to it when the need arises is not enough. We want a way to automatically build and distribute single-use containers. For example, we want to use purpose-built containers when running a CI/CD system that builds a website and publishes it to the web. Each website has its own setup requirements, and we'd like to automate this. We could add automation by running a script, but in that case we would lose one of the advantages of running containers, the iterative nature of images, because the Docker images would be monolithic.
In order to create a container image, we need to define a Dockerfile as follows:
FROM gitlab.cs.pub.ro:5050/scgc/cloud-courses/ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBCONF_NONINTERACTIVE_SEEN=true
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y firefox
Each line contains commands that will be interpreted by Docker when building the image:
- FROM specifies the base container image
- RUN runs a command inside the container
This Dockerfile will then be used to build a container image which can run Firefox.
It should be noted that in the process of building containers we have to use non-interactive commands, because we do not have access to the terminal where the image is built, so we cannot provide keyboard input.
To build the container we will use the following command:
student@lab-docker:~$ docker build -t firefox-container .
When we run the command, we assume that the Dockerfile is in the current directory (~). The -t option will generate a container image named firefox-container.
To list container images on the machine use the following command:
student@lab-docker:~$ docker image list
This list contains both downloaded and locally built container images.
Exercise: Generate a container image
- Write a Dockerfile.centos file containing a recipe for generating a container image based on the gitlab.cs.pub.ro:5050/scgc/cloud-courses/centos:7 container image, in which you install the bind-utils tool.
To generate a container image using a file other than the default Dockerfile, we use the -f option.
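For example, assuming Dockerfile.centos is in the current directory (the image name centos-bind-utils is just an illustration):
docker build -f Dockerfile.centos -t centos-bind-utils .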
- Start the container generated in the previous exercise and run the command nslookup hub.docker.com to verify the installation of the package.
Downloading containers
Another important principle, both in the use of containers and in programming in general, is reusability. Instead of developing a new solution for every problem we encounter, we can use a solution that has already been implemented and submitted to a public repository.
For example, if we want to use a MySQL database to store information, instead of using a basic Ubuntu container and installing and configuring the server ourselves, we can download a container that already has the package installed.
Running commands in a downloaded container
As an example, we will use a set of containers consisting of a MySQL database and a WordPress service.
To start the two containers we will use the following commands:
student@lab-docker:~$ sudo docker network remove test-net
test-net
student@lab-docker:~$ sudo docker network create test-net
69643d63f7a785c07d4b93cf77a8b921e97595da778344e9aa8f62ac9cb6909a
student@lab-docker:~$ sudo docker run -d --hostname db --network test-net -e "MYSQL_ROOT_PASSWORD=somewordpress" -e "MYSQL_DATABASE=wordpress" -e "MYSQL_USER=wordpress" -e "MYSQL_PASSWORD=wordpress" mysql:5.7
657e3c4a23e120adf0eb64502deead82e156e070f7e9b47eff522d430279d3e1
student@lab-docker:~$ sudo docker run -d --hostname wordpress --network test-net -p "8000:80" -e "WORDPRESS_DB_HOST=db" -e "WORDPRESS_DB_USER=wordpress" -e "WORDPRESS_DB_PASSWORD=wordpress" gitlab.cs.pub.ro:5050/scgc/cloud-courses/wordpress:latest
Unable to find image 'wordpress:latest' locally
latest: Pulling from library/wordpress
c229119241af: Pull complete
47e86af584f1: Pull complete
e1bd55b3ae5f: Pull complete
1f3a70af964a: Pull complete
0f5086159710: Pull complete
7d9c764dc190: Pull complete
ec2bb7a6eead: Pull complete
9d9132470f34: Pull complete
fb23ab197126: Pull complete
cbdd566be443: Pull complete
be224cc1ae0f: Pull complete
629912c3cae4: Pull complete
f1bae9b2bf5b: Pull complete
19542807523e: Pull complete
59191c568fb8: Pull complete
30be9b012597: Pull complete
bb41528d36dd: Pull complete
bfd3efbb7409: Pull complete
7f19a53dfc12: Pull complete
23dc552fade0: Pull complete
5133d8c158a7: Pull complete
Digest: sha256:df2edd42c943f0925d4634718d1ed1171ea63e043a39201c0b6cbff9d470d571
Status: Downloaded newer image for wordpress:latest
b019fd009ad4bf69a9bb9db3964a4d446e9681b64729ffb850af3421c1df070c
The useful options above are:
- -e sets an environment variable; this variable will be received by the container;
- -p exposes an internal port of the container (80) to a port on the host machine (8000);
- --hostname makes the container use a specific hostname;
- --network connects the container to a network other than the default one.
We noticed in the output that we created the test-net network. We did this because, in Docker's default configuration, containers cannot communicate with each other by name; a user-defined network such as test-net provides name resolution, which the WordPress container needs in order to reach the database at the db hostname.
We can connect using the Firefox browser to the virtual machine on port 8000
to configure the WordPress server.
Exercise: Running commands in the container
Start a container that hosts the NextCloud file sharing service. To connect to the NextCloud service, you need to expose the HTTP server running in the container on a port of the virtual machine. To do this, follow the example above. The container image name is nextcloud.
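A possible starting point, following the WordPress example above (the host port 8001 and the container name are illustrative; the official nextcloud image serves HTTP on port 80):
docker run -d --name cdl-nextcloud -p 8001:80 nextcloud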
Build More Images from Dockerfiles
Let's build the following Docker images:
- Build the CTF Docker image:
docker build -f dockerfile/ctf.Dockerfile -t my-ctf ctf/
The options in the above command are:
- -f dockerfile/ctf.Dockerfile: the path to the Dockerfile used to build the image
- -t my-ctf: the image name (also called a tag)
- ctf/: the directory used as the build context, i.e. the base for COPY commands
Running the command above results in the creation of the my-ctf image.
- Build the linux-kernel-labs Docker image:
docker build -f dockerfile/linux-kernel-labs.Dockerfile -t linux-kernel-labs .
Running the command above results in an error:
 => ERROR [32/36] RUN groupadd -g $ARG_GID ubuntu
------
 > [32/36] RUN groupadd -g $ARG_GID ubuntu:
0.207 groupadd: invalid group ID 'ubuntu'
------
linux-kernel-labs.Dockerfile:42
--------------------
  40 |     ARG ARG_GID
  41 |
  42 | >>> RUN groupadd -g $ARG_GID ubuntu
This is caused by missing build arguments ARG_UID and ARG_GID. We provide these arguments via the --build-arg option:
docker build -f dockerfile/linux-kernel-labs.Dockerfile --build-arg ARG_GID=$(id -g) --build-arg ARG_UID=$(id -u) -t linux-kernel-labs .
Running the command above results in the creation of the linux-kernel-labs image.
- Build the uso-lab Docker image:
docker build -f dockerfile/uso-lab.Dockerfile -t uso-lab .
Running the command above results in an error:
 => ERROR [15/16] COPY ./run.sh /usr/local/bin/run.sh
------
 > [15/16] COPY ./run.sh /usr/local/bin/run.sh:
------
uso-lab.Dockerfile:20
--------------------
  18 |     RUN rm -rf /var/lib/apt/lists/*
  19 |
  20 | >>> COPY ./run.sh /usr/local/bin/run.sh
  21 |     CMD ["run.sh"]
This is because the run.sh script is not available in the local filesystem. You will fix that as a task below.
- Build the dropbox Docker image:
docker build -f dockerfile/dropbox.Dockerfile -t dropbox .
Running the command above results in a similar error as above:
 => ERROR [9/9] COPY ./run.sh /usr/local/bin/run.sh
------
 > [9/9] COPY ./run.sh /usr/local/bin/run.sh:
------
dropbox.Dockerfile:80
--------------------
  78 |
  79 |     # Install init script and dropbox command line wrapper
  80 | >>> COPY ./run.sh /usr/local/bin/run.sh
  81 |     CMD ["run.sh"]
This is because the run.sh script is not available in the local filesystem. You will fix that as a task below.
Exercise: Fix Build Issue
First, fix the issue with the creation of the uso-lab image.
That is:
- Copy the run.sh script locally.
- Run the docker build command again. Be sure to pass the correct path as the final argument to the docker build command. This is the path where the run.sh script is located locally.
Follow similar steps to fix the issue with the creation of the dropbox image.
Exercise: Images from Other Dockerfiles
Search the Internet (GitHub or otherwise) for two Dockerfiles. Build images from those two Dockerfiles.
Exercise: Python Server
Go to the python-server directory and build the container image using the following command:
docker build -t python-server:1.0 .
The command builds the container image according to the specification in the Dockerfile.
Test the container functionality by running:
curl localhost:8080
Change the base image to Debian and rebuild the container image, tagging it python-server-debian:1.0.
Create a Makefile which has the following rules:
- build: creates a new image using the Dockerfile;
- start: starts a container based on the python-server image, named python-workspace, in the background;
- stop: stops the python-workspace container;
- connect: connects to the container in an interactive shell.
A possible sketch is shown after this list.
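The sketch below makes some assumptions that depend on your Dockerfile and base image: the server listens on port 8080 inside the container (matching the curl test above), and the image provides /bin/bash for the interactive shell.
build:
	docker build -t python-server:1.0 .
start:
	docker run -d --rm --name python-workspace -p 8080:8080 python-server:1.0
stop:
	docker stop python-workspace
connect:
	docker exec -it python-workspace /bin/bash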
Exercise: Assignment Checker
A common use case for using containers is platform-agnostic testing.
The assignment-checker directory contains a bash script which runs tests on an application by running it and comparing its output with a reference.
Create a Docker image which is able to run this script, compile the application, and run the tests.
Exercise: Build Program With GCC13
An advantage of using containers is the fact that they offer a flexible environment for testing and building applications.
Based on this Dockerfile, create a Docker image which compiles an application based on a Makefile located in the /workdir path.
The container must be able to compile applications using GCC 13.
The application to be compiled is located in assignment-checker/src. Use the included Makefile to compile it.
Container Registries
Now that we have created a set of container images, we want to publish them so they are available to the world and can be downloaded on other systems.
To push the python-server image that we built earlier, we need to tag it so that it has an associated namespace, as follows:
docker tag python-server:1.0 <dockerhub-username>/python-server:1.0
where <dockerhub-username> is your DockerHub username.
To push the container image, you will use the docker push command:
docker push <dockerhub-username>/python-server:1.0
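Note that pushing to DockerHub requires being authenticated with the account created at the beginning of the lab; if the push is rejected, log in first:
docker login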
Tag the assignment-checker container image and push it to DockerHub.
Using GitHub Container Registry
While using DockerHub offers great visibility for projects and container images, it limits the number of image pulls from a specific IP. To bypass this issue, we will create a GitHub Container Registry (GHCR) account and log in to it.
Follow the GHCR tutorial to create a GHCR account.
Log in to the account the same way you did with the DockerHub account, and tag the assignment-checker image so it can be pushed to GHCR.
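A possible sketch of the GHCR workflow, assuming a personal access token with the write:packages scope is stored in the GHCR_TOKEN environment variable; <github-username> and the image tag are placeholders:
echo $GHCR_TOKEN | docker login ghcr.io -u <github-username> --password-stdin
docker tag assignment-checker:1.0 ghcr.io/<github-username>/assignment-checker:1.0
docker push ghcr.io/<github-username>/assignment-checker:1.0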