In this segment, you will learn how to install and use Docker on an Ubuntu Linux VM.
A tutorial series on building a complete home lab system from the ground up that is beginner-friendly, versatile, and maintainable.
Important note
I will soon be covering the process of migrating the home lab pfSense instance to serve the entire home network, so we can make better use of pfSense and make accessing VMs a lot easier. This is best done with some extra hardware for pfSense; it can be done while keeping pfSense as a VM, but I don’t recommend that unless you have a cluster of Proxmox hosts to enable high availability (HA) for the pfSense VM.
If you don’t have any extra hardware but are up for doing a little shopping, I have some recommendations here.
What is Docker
Docker is a containerization tool. Before we get into all that, let’s first talk a bit about how virtualization works, just to highlight the differences from containerization.
With virtualization, we use a hypervisor on the operating system (like we’re doing with Proxmox, which uses the Linux KVM hypervisor). The hypervisor creates a “virtual machine” by providing virtual RAM, CPU cores, hard drives, network interfaces, and other hardware. On top of that virtual machine, we install an operating system (and kernel, the software that communicates with the hardware and is considered the “core” of the operating system).
Docker, on the other hand, doesn’t virtualize anything at all. The kernel and hardware are shared with the host machine (or host VM in our case, which is virtualized, but let’s ignore that fact for now). These resources are of course sandboxed to keep them separated from the host machine, but it’s the methodology that sets Docker apart. Here are some of the core conceptual components:
- Images: A Docker image is kind of like a snapshot of a container; it contains the software and filesystem. Images are immutable, meaning they cannot be modified, but you can start a container from an image, make changes, and commit that container as a new image.
- Containers: A container is a runnable state: when you “run” an image, it runs as a container, separate from the original image. Containers are generally regarded as expendable unless you’re putting together a new image. This may seem counter-intuitive, but you’ll see why it’s so powerful later on.
- Volumes: A volume is for persistent storage of files; think of it as an external drive for a container. You can also bind-mount a directory on the host machine into a container as a volume.
- Networks: By default, Docker attaches containers to internal private networks instead of giving them virtual network interfaces on the outside network, but you can “expose” (publish) ports from a container on the Docker host, as shown in the example below.
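For a quick sketch of publishing a port (this uses the official nginx image from Docker Hub; the container name web is just a placeholder):

docker run -d --name web -p 8080:80 nginx:latest
curl http://localhost:8080
docker rm -f web

The -p 8080:80 flag maps port 8080 on the Docker host to port 80 inside the container.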
With me so far? One other important concept is the use of environment variables for any instance-specific configuration: the image should check for whatever relevant environment variables it might utilize and fall back to sane defaults.
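For example, here’s a sketch of passing an environment variable into a container (MESSAGE is a made-up variable, just for illustration):

docker run --rm -e MESSAGE='hello from the environment' alpine:latest sh -c 'echo "$MESSAGE"'

What about those expendable containers, though?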
In a production environment, you’d have a container spawned from an image, with volumes storing its persistent data and configuration details in environment variables. When you want to make changes, you build a new image. Now you can test the new image in a staging environment with the same configuration and maybe some mock data, then just wipe out the running production container and spawn a new container from the new image. This is really cool, because your staging environment can match production exactly for accurate testing, and updating production is near-instant!
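That update workflow boils down to just a few commands. As a rough sketch (the image and container names here are hypothetical):

# Build a new image from the updated source
docker build -t myapp:v2 .
# Wipe out the running production container
docker stop myapp
docker rm myapp
# Spawn a new container from the new image, reattaching the same volume
docker run -d --name myapp -v myapp_data:/data myapp:v2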
Better still, you can build your image from a pre-existing image on the Docker Hub, with no need to run through any OS installations or other provisioning!
One more important concept: a running container executes a command when it starts, and when that command terminates, the container stops. This pairs nicely with Docker’s restart policy settings: if your application crashes, Docker will restart it automatically if it’s configured to do so.
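Restart policies are set with the --restart flag; for example, this container (using the official nginx image as a stand-in for your application) will be restarted automatically unless you stop it yourself:

docker run -d --restart unless-stopped nginx:latest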
Most of this can be configured on the command line or in a “Dockerfile”, a file that contains instructions on how Docker should build an image.
We’ll get into building images and pulling them from the Docker Hub down below; I have a few more things to explain first.
What is Docker-Compose
If Docker is a tool for running containers, the best way to explain Docker-Compose is as a tool for running multiple containers. It also provides more configuration options than a Dockerfile, like attaching named volumes or bind-mounts and named networks, and, most importantly, attaching multiple containers to the same network.
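As a small sketch of what a Docker-Compose file looks like (the service name and image here are just placeholders), save the following as docker-compose.yml and run docker-compose up in the same directory:

version: '3.7'
services:
  web:
    image: nginx:latest
    ports:
      - '8080:80'
    volumes:
      - web-data:/usr/share/nginx/html
volumes:
  web-data:

We’ll work through a real multi-container example at the end of this segment.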
What is Portainer
Portainer is a nice web interface for Docker. I’ll mostly be covering the command line usage in this segment, but Portainer is pretty easy to figure out once you understand Docker itself. If you are a visual learner, this will be a useful tool for you.
Installation
As usual, we’ll want to create a new VM and do all the boilerplate configuration. I’ll omit that here, but you can refer to part 5 (don’t forget to increase the RAM and CPU count).
Add Disk Space
I’ve shown you how to add a new “disk” and combine it with a logical volume, but let’s grow the existing disk this time. I’ll increase mine to 32GiB, which should be adequate for our needs here.
Start by going to the hardware tab of the VM in Proxmox, select the hard disk, then click the “Resize Disk” button up top; whatever number you choose will be added to the existing size.
Then, SSH into the new VM. We need to make sure the system is aware of the new disk size; the easiest way to do this without rebooting, in my experience, is with parted:
sudo parted /dev/sda
If you don’t already have it installed, run sudo apt install -y parted first.
Type p to print the current partition table, and you should be presented with this message:
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space
(an extra 50331648 blocks) or continue with the current setting?
Fix/Ignore? fix
Type fix, then q to quit parted.
You could do the next step in parted, but I prefer using fdisk since changes are stored in memory until you tell it to apply them; it’s just a bit safer.
sudo fdisk /dev/sda
Type p to print the current table; you should have something like this:
Command (m for help): p
Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E32F132B-3FFE-4657-B64A-DD6D8F88ED62
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 16775167 14673920 7G Linux filesystem
We’re going to delete partition 3 and re-create it at a larger size (don’t panic!). Note that partition 3 starts on sector 2101248 and its type is “Linux filesystem”; we’ll want to make sure these don’t change.
Type d to delete a partition, then hit enter to select partition 3 (which should be the default), then n to create a new partition, and enter three times to accept the default values (just make sure it’s the same partition number and starting sector as before).
Command (m for help): d
Partition number (1-3, default 3):
Partition 3 has been deleted.
Command (m for help): n
Partition number (3-128, default 3):
First sector (2101248-67108830, default 2101248):
Last sector, +sectors or +size{K,M,G,T,P} (2101248-67108830, default 67108830):
Created a new partition 3 of type 'Linux filesystem' and of size 31 GiB.
You’ll then be asked if you want to remove the existing filesystem signature; choose no here.
Partition #3 contains a LVM2_member signature.
Do you want to remove the signature? [Y]es/[N]o: n
Type p one more time to verify that everything looks good, then wq to write the changes to disk and quit fdisk.
Command (m for help): p
Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E32F132B-3FFE-4657-B64A-DD6D8F88ED62
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 67108830 65007583 31G Linux filesystem
Command (m for help): wq
You should see this message:
The partition table has been altered.
Syncing disks.
If you get a warning about the disk being in use, run sudo partprobe /dev/sda to force the kernel to re-read the active partition table.
Next, we need to grow the LVM physical volume with the command sudo pvresize /dev/sda3; you should see:
Physical volume "/dev/sda3" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Then run sudo lvs to see the names of your volume group and logical volume; in my case, ubuntu-vg and ubuntu-lv, respectively.
Run sudo lvresize -l +100%FREE /dev/ubuntu-vg/ubuntu-lv to extend the logical volume.
Size of logical volume ubuntu-vg/ubuntu-lv changed from 4.00 GiB (1024 extents) to <31.00 GiB (7935 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
Finally, the command sudo resize2fs /dev/ubuntu-vg/ubuntu-lv will extend the filesystem to the new size.
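You can confirm the new space is available by checking the mounted filesystem; this assumes the default Ubuntu LVM layout, where the root filesystem lives on ubuntu-lv:

df -h /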
Install Docker
These are the recommended installation instructions from docs.docker.com:
sudo apt install -y apt-transport-https \
ca-certificates curl gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
If you want to be able to run Docker as a non-root user, you must add your user to the “docker” group by running the command sudo usermod -aG docker YOURUSERNAME (you’ll have to log out and back in for the change to take effect). Note that this is a security risk: it effectively gives your user passwordless root access to the host VM.
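Once you’ve logged back in, a quick way to verify the installation is the official hello-world test image from Docker Hub:

docker run --rm hello-world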
Install Docker-Compose
Again, from the Docker documentation site:
sudo apt install -y curl
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.3/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Install Portainer
The following commands will get it up and running with all the bells and whistles enabled:
docker pull portainer/portainer
docker volume create portainer_data
docker run -d --name portainer_gui \
--restart always \
-e "CAP_HOST_MANAGEMENT=1" \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
-v /:/host \
portainer/portainer
Note: After setting a username and password under the Users menu at http://IPADDRESS:9000, head to Endpoints > local and set the Public IP to the host’s IP address (or hostname, if it’s configured with DNS on your network), so that when you click the links on published ports in the containers list, they open the correct URL.
Usage
Now we can dig into it. Start by running docker version and docker-compose version to ensure they’ve been successfully installed.
I won’t be covering every possible command here; I recommend taking a look at the Docker CLI documentation when you’re finished here.
CLI
Let’s start simply by spinning up a Fedora container:
docker run -it fedora:latest /bin/bash
This command means we want to run a container from the image fedora:latest. The -it flags (same as -i -t) give us an interactive TTY: the -i flag is for “interactive”, which keeps standard input open so we can type commands into the container, and the -t flag allocates a “TTY”, a pseudo-terminal that makes the session behave like a normal shell. Lastly, /bin/bash is the command we will run within the container.
You’re now logged into a new Fedora container as the root user, running a bash session. There’s not much to do here, as it’s just an empty container; type exit to get out of the container.
The command docker ps will show a list of running containers. You’ll note that only Portainer is running; our Fedora container stopped when the command it was started with, /bin/bash, exited.
Try running docker container list -a to show all containers, both running and stopped. You’ll find the Fedora container there; it was given a random name since we didn’t specify one. Let’s get rid of this one with docker container rm CONTAINER_NAME.
Another neat trick: if we had run the command docker run --rm -it fedora:latest /bin/bash, the --rm flag means “remove the container when it stops”, which makes for easy cleanup.
Running docker image list will show you what images you have on the system; so far we have portainer/portainer:latest and fedora:latest. In the case of Portainer, the image belongs to the organization portainer and is named portainer, hence portainer/portainer. In both cases the image is tagged latest; there are many other tags (think of tags similarly to releases), but latest is the default if you don’t specify one.
You can find ready-made images for tons of software at the Docker Hub; try searching for Fedora there and see what tags are available.
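For example, to run a specific release rather than the default, specify its tag (assuming that tag exists on Docker Hub):

docker run --rm -it fedora:31 /bin/bash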
Now let’s try bind-mounting. First, create a JavaScript file in the current directory:
hello.js
console.log('hello world');
Then run the command docker run --rm -it -v "${PWD}:/app" node:10-alpine /usr/local/bin/node /app/hello. The -v flag specifies a volume; here we’re mounting ${PWD} (which resolves to your current working directory) on the host to /app in the container.
The command then executes node /app/hello within the container, which tells NodeJS to run the /app/hello.js file (from our working directory, via the bind-mount). It’ll take a minute to download the node:10-alpine image, then you should see “hello world” printed in the terminal before the process ends and the container stops. Running this again will be much faster, since the image has already been downloaded from the Docker Hub.
It’s common practice to specify the full path of the command to run (e.g. /usr/local/bin/node instead of node); I found that path by running docker run --rm -it node:10-alpine which node.
Alpine Linux is commonly used in Docker because it’s designed to be very small; Alpine achieves its small size by shipping with only a minimal set of command-line tools.
One last useful command is docker exec -it CONTAINER_NAME /bin/bash, which gets you into a bash session in a running container. Note that on Alpine Linux you need to use sh instead of bash, and this won’t work at all on some images, like Portainer, which ships without a shell.
Dockerfile
Let’s take it up a notch now. Create a file in your working directory:
Dockerfile
FROM alpine:latest
RUN apk update && \
apk add --no-cache git perl && \
cd /tmp && \
git clone https://github.com/jasonm23/cowsay.git && \
cd cowsay ; ./install.sh /usr/local && \
rm -rf /var/cache/apk/* /var/tmp/* /tmp/* && \
apk del git
CMD [ "/usr/local/bin/cowsay", "I am a Docker cow!" ]
The RUN command here is a bit complex: it installs git and perl, clones the cowsay repository, installs it, then removes all the unneeded software and cache files to save space. The commands are all chained into one RUN instruction to reduce the number of layers in the final image (each instruction in a Dockerfile creates a new layer).
The CMD line is given in “exec form” as an array of strings, so the actual command it will run in the container is /usr/local/bin/cowsay "I am a Docker cow!".
Now we can run the command docker build -t mycoolapp:mytag . to build the image mycoolapp with the tag mytag, using the current directory (.) as the build context.
If we run the command docker run --rm mycoolapp:mytag, you should see a neat cow delivering your message declaring his Docker-ness. While this isn’t all that useful, hopefully you’re beginning to see some of the things that make Docker such a powerful tool.
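If you’re curious about the layers mentioned above, you can list the layers of any image, along with the Dockerfile instruction that created each one:

docker history mycoolapp:mytag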
This barely even scratches the surface though: you can copy files into the image, declare volumes, even build your application in one container and then pass the finished build off to a fresh container, all in the same Dockerfile, as sketched below. I’ll leave you with the Dockerfile reference to look through as well.
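That last trick is known as a multi-stage build. Here’s a minimal sketch of the idea (the project layout, npm scripts, and paths here are hypothetical placeholders):

# First stage: build the application using the full Node image
FROM node:10 AS builder
WORKDIR /src
COPY . .
RUN npm install && npm run build

# Second stage: copy only the finished build into a slim image
FROM node:10-alpine
WORKDIR /app
COPY --from=builder /src/dist ./dist
CMD [ "node", "dist/index.js" ]

The final image contains only the finished build from the first stage, not the source code or build tools.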
Docker-Compose
Here’s where it gets really fun! We’re going to stand up a quick-and-dirty issue tracker I slapped together for another article, to demonstrate just how portable Dockerized applications are. It’ll be the easiest thing you’ve done all day.
First, download the source code by running git clone https://github.com/dlford/example-cloud-native-fullstack-nextjs.git, then cd example-cloud-native-fullstack-nextjs.
Run docker-compose up, then sit back and watch as four containers are created: a MongoDB database, a GraphQL API server, a NextJS frontend server, and an NGINX reverse proxy server to tie it all together. We did all that in just three commands! Go ahead and point your web browser to http://IP_ADDRESS_OF_DOCKER_VM:3000 to check it out. (Hit ctrl + c when you want to shut it all down.)
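Tip: you can also run docker-compose up -d to start everything in the background, docker-compose logs -f to follow the container logs, and docker-compose stop to shut the stack down when you’re done.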
Note: The images, volumes, and networks are still on the system. If you want to remove them, you can run docker-compose down --rmi local --volumes from the project directory, or use docker container prune, docker image prune, docker volume prune, and docker network prune, which will remove any containers, images, volumes, and networks that aren’t running or in use.
How did all of that even work, you might ask? Here’s your homework assignment: I’ll put the Docker-Compose file down below. It may seem pretty long, but we just configured four systems in those 60 lines of configuration! See if you can decipher what each line does; the documentation should have all the information you’ll need to reference.
If you want to know how I created this issue tracker app, you can read all about it here.
docker-compose.yml
version: '3.7'
services:
  db_prod:
    volumes:
      - type: volume
        source: db-data_prod
        target: /data/db
      - type: volume
        source: db-config_prod
        target: /data/configdb
    networks:
      net1:
        aliases:
          - db
    build:
      context: ./db
      dockerfile: Dockerfile
  server_prod:
    depends_on:
      - db_prod
    build:
      context: ./server
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=mongodb://db:27017/cloud-native-next
      - PORT=3000
    networks:
      net1:
        aliases:
          - server
  client_prod:
    depends_on:
      - server_prod
    build:
      context: ./client
      dockerfile: Dockerfile
    networks:
      net1:
        aliases:
          - client
  nginx_prod:
    depends_on:
      - server_prod
      - client_prod
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    networks:
      net1:
        aliases:
          - nginx
networks:
  net1:
    name: cloud-native-next
volumes:
  db-data_prod:
  db-config_prod: