Docker is just cgroups and namespaces with a nice CLI and extra steps
Under the hood
- cgroups: a Linux kernel feature that limits, accounts for, and isolates the resource usage of process groups (Wikipedia)
- namespaces: a Linux kernel feature that partitions system resources so each process group has its own isolated view (Wikipedia)
- union filesystem: a filesystem that layers multiple directories on top of each other, presenting a merged view (Wikipedia)
What they say
Docker “revolutionized software delivery” by introducing containers — “lightweight, portable, self-sufficient units” that package an application and its dependencies together. Containers are presented as a fundamentally new way to deploy software, distinct from virtual machines.
What it actually is
A container is a regular Linux process with two kernel features applied to it:¹
- Namespaces — the process gets its own isolated view of PIDs, network interfaces, mount points, and hostnames. It thinks it’s the only thing running.
- cgroups (control groups) — the process gets resource limits: max memory, CPU shares, I/O bandwidth.
That’s it. There’s no hypervisor, no hardware virtualization, no separate kernel. It’s one process, on the host kernel, with isolation and limits.
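You can see this on any Linux box without Docker installed: every process already belongs to one namespace of each type, visible under /proc/<pid>/ns. A minimal sketch (Linux only, Python standard library):

```python
import os

# /proc/self/ns/* are symlinks whose targets encode namespace IDs.
# Two processes in the same namespace see the same inode number;
# a "containerized" process simply has different entries here.
for ns in ("pid", "net", "mnt", "uts", "ipc"):
    target = os.readlink(f"/proc/self/ns/{ns}")
    print(ns, "->", target)  # e.g. pid -> pid:[4026531836]
```

A container is just a process whose entries here differ from the host's; unshare/clone with the right flags is what creates those new entries.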
The pattern in pseudocode
```shell
# 1. Create isolated namespaces (the "container" part)
unshare --mount --uts --ipc --net --pid --fork bash

# 2. Set up a layered filesystem (the "image" part)
mount -t overlay overlay \
  -o lowerdir=/ubuntu-base,upperdir=/my-app,workdir=/work \
  /merged

# 3. Apply resource limits (the "resource management" part)
# (cgroup v1 shown; on cgroup v2 the knob is memory.max)
mkdir /sys/fs/cgroup/memory/mycontainer
echo "512m" > /sys/fs/cgroup/memory/mycontainer/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/mycontainer/cgroup.procs

# 4. chroot into the merged filesystem
chroot /merged /bin/bash
```
You now have a “container.” docker run does these four steps (plus networking, logging, and a lot of convenience) behind a single command.²
The “extra steps”
- Image format — a layered filesystem built from a Dockerfile, stored as a tarball of diffs (union filesystem / overlay)
- Image registry — Docker Hub, a central place to push/pull images (just an HTTP API serving tarballs)
- Networking — virtual bridges and iptables rules so containers can talk to each other (standard Linux networking)
- Build system — Dockerfiles: a DSL that runs shell commands in sequence, snapshotting the filesystem after each step (cached, layered builds)
- CLI and daemon — the UX layer that made all of the above a one-liner
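The image-format idea from the list above, layers as tarballs of diffs, can be sketched in plain Python: each layer maps paths to contents, and the merged view is upper layers shadowing lower ones. A toy model of the semantics, not overlayfs itself:

```python
# Toy model of a union/overlay filesystem: each layer is a dict of
# path -> content, applied lowest-first, with upper entries
# overwriting (shadowing) lower ones.
def merged_view(*layers):
    view = {}
    for layer in layers:   # lowest layer first
        view.update(layer) # upper layers win on conflicts
    return view

ubuntu_base = {"/bin/bash": "bash binary", "/etc/os-release": "Ubuntu"}
my_app      = {"/app/server.py": "print('hi')", "/etc/os-release": "Ubuntu (patched)"}

fs = merged_view(ubuntu_base, my_app)
print(fs["/app/server.py"])   # present only in the upper layer
print(fs["/etc/os-release"])  # upper layer shadows the base
```

This is why image layers are cheap to share: the base directory is stored once, and each image contributes only its diff on top.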
What you already know
If you’ve ever used chroot to set up a build environment, or set ulimit to restrict a process’s resources, you’ve used the same kernel features that containers are built on.
```shell
# chroot — you've seen this
chroot /path/to/rootfs /bin/bash

# docker run — same idea, more isolation
docker run -it ubuntu bash

# Both give you a shell in an isolated filesystem.
# Docker adds namespace isolation, cgroup limits,
# and a layered image format on top.
```
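The ulimit half of that comparison works from a program too: setrlimit applies per-process limits, the same "constrain a process, then run it" idea that cgroups generalize to process groups. A sketch using Python's standard resource module (Unix only):

```python
import resource

# Per-process limits — the ulimit ancestor of cgroups.
# Cap the number of open file descriptors for this process;
# opening more than the soft limit will then raise OSError.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])  # the new soft limit
```

cgroups do the same kind of thing, but for a whole tree of processes at once and for resources ulimit never covered, like total memory and I/O bandwidth.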
Docker’s real contribution was packaging these kernel primitives into a workflow that developers actually wanted to use. The Dockerfile, the image registry, docker build && docker push — that’s the product.³
Footnotes
1. Linux namespaces (Wikipedia) — the kernel feature that provides process isolation. Mount, PID, network, UTS, IPC, and user namespaces landed in the kernel between 2002 and 2013. Docker uses all of them. ↩
2. cgroups (Wikipedia) — control groups for resource limiting, introduced in Linux 2.6.24 (2008). Google engineers originally developed them for internal process management, years before Docker existed. ↩
3. Solomon Hykes’ dotCloud demo at PyCon 2013 — the original Docker presentation. The pitch wasn’t “we invented containers”; it was “we made them easy to use.” The Dockerfile and the docker push/docker pull workflow were the breakthrough, not the isolation primitives. ↩