Understanding how Docker containers communicate — with each other, the host, and the outside world — through bridges, overlays, DNS, and network policies.
Docker uses a pluggable networking architecture built on top of Linux kernel primitives. Every container gets its own network namespace, with virtual ethernet pairs connecting it to a network driver.
Each container has an isolated network stack: its own interfaces, routing table, iptables rules, and /proc/net entries.
Virtual ethernet device pairs act as tunnels between the container namespace and the host/bridge namespace.
Pluggable drivers (bridge, host, overlay, macvlan, none) determine how traffic is routed and isolated.
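To see these primitives directly, you can locate a container's network namespace and the host-side peer of its veth pair. A minimal sketch, assuming a running container named web (paths and interface indexes are illustrative):
# Path to the container's network namespace on the host
docker inspect -f '{{.NetworkSettings.SandboxKey}}' web
# /var/run/docker/netns/3f2ab41c9e01
# Run ip inside that namespace without entering the container
sudo nsenter --net=/var/run/docker/netns/3f2ab41c9e01 ip addr show eth0
# eth0 in the container is one end of a veth pair; find its host-side peer
docker exec web cat /sys/class/net/eth0/iflink
# 17
ip -o link | grep '^17:'
# 17: veth3a1b2c@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> ...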
# List all Docker networks
docker network ls
# Output:
# NETWORK ID     NAME     DRIVER   SCOPE
# a1b2c3d4e5f6   bridge   bridge   local
# f6e5d4c3b2a1   host     host     local
# 9a8b7c6d5e4f   none     null     local
Every Docker installation creates a default bridge network (the docker0 interface). Containers connect here automatically unless you specify otherwise.
docker0 is created at daemon start with the subnet 172.17.0.0/16 and gateway 172.17.0.1. Containers on it can reach each other only via --link (deprecated) or by IP address.
# Run a container on the default bridge
docker run -d --name web nginx
# Inspect the network
docker network inspect bridge
# Check the container's IP
docker inspect -f \
'{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
web
# 172.17.0.2
# Containers can ping by IP (not name)
docker exec web ping 172.17.0.3
# View the bridge on the host
ip addr show docker0
# docker0: <BROADCAST,MULTICAST,UP>
# inet 172.17.0.1/16 scope global docker0
brctl show docker0
# bridge name   interfaces
# docker0       veth3a1b2c
User-defined bridges are the recommended way to run containers. They provide automatic DNS, better isolation, and can be connected/disconnected on the fly.
# docker-compose.yml (automatic user-defined bridge)
services:
  web:
    image: nginx
    networks: [frontend]
  api:
    image: node:20
    networks: [frontend, backend]
  db:
    image: postgres:16
    networks: [backend]
networks:
  frontend:
  backend:
# Create a user-defined bridge
docker network create my-app
# Run containers on it
docker run -d --name api \
--network my-app node:20
docker run -d --name db \
--network my-app postgres:16
# DNS works automatically!
docker exec api ping db
# PING db (172.18.0.3): 56 bytes
# Connect a running container
docker network connect my-app web
# Disconnect
docker network disconnect my-app web
# Custom subnet and gateway
docker network create \
--subnet 10.10.0.0/24 \
--gateway 10.10.0.1 \
--ip-range 10.10.0.128/25 \
my-custom-net
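To confirm the custom IPAM settings took effect, inspect the network's IPAM block (a quick check using the standard docker network inspect fields):
# Show the configured subnet, IP range and gateway
docker network inspect -f \
  '{{range .IPAM.Config}}{{.Subnet}} {{.IPRange}} {{.Gateway}}{{end}}' \
  my-custom-net
# 10.10.0.0/24 10.10.0.128/25 10.10.0.1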
With --network host, the container shares the host’s network namespace directly. No network isolation, no NAT — the container is the host from a networking perspective.
Port mapping is unnecessary (the -p flag is ignored).
# Run with host networking
docker run -d --network host --name monitor \
nicolaka/netshoot
# The container sees all host interfaces
docker exec monitor ip addr
# 1: lo ...
# 2: eth0: 192.168.1.100/24
# 3: docker0: 172.17.0.1/16
# nginx on host mode binds directly to port 80
docker run -d --network host nginx
curl http://localhost:80 # Works!
# docker-compose.yml
services:
  prometheus:
    image: prom/prometheus
    network_mode: host
    # No ports: mapping needed
Overlay networks enable multi-host communication by encapsulating container traffic in VXLAN tunnels. Essential for Docker Swarm and multi-node deployments.
Overlay networks require Swarm mode (docker swarm init), can encrypt the data plane with --opt encrypted, and accept standalone containers when created with --attachable.
# Initialise Swarm
docker swarm init --advertise-addr 10.0.1.10
# Create an overlay network
docker network create \
--driver overlay \
--attachable \
--opt encrypted \
my-overlay
# Deploy a service across nodes
docker service create \
--name web \
--network my-overlay \
--replicas 3 \
nginx
# Standalone container on overlay
docker run -d --network my-overlay \
--name debug nicolaka/netshoot
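The encapsulation is observable on the underlay: overlay data travels as VXLAN over UDP 4789, with node gossip on 7946 and cluster management on 2377. A rough way to watch it from a node (a sketch; eth0 is an assumed interface name):
# Ports that must be open between Swarm nodes:
#   2377/tcp       cluster management
#   7946/tcp+udp   node gossip
#   4789/udp       VXLAN data plane
# Watch VXLAN-encapsulated overlay traffic on the underlay interface
docker run --rm --net host nicolaka/netshoot \
  tcpdump -ni eth0 udp port 4789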
The macvlan and ipvlan drivers assign real IP addresses from the physical network to containers, making them appear as physical hosts on the LAN. No NAT, no bridge.
# Create a macvlan network
docker network create -d macvlan \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
-o parent=eth0 \
my-macvlan
# Run container with LAN IP
docker run -d --network my-macvlan \
--ip 192.168.1.50 \
--name lan-server nginx
# IPvlan L2 mode
docker network create -d ipvlan \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
-o parent=eth0 \
-o ipvlan_mode=l2 \
my-ipvlan
# IPvlan L3 mode (routed)
docker network create -d ipvlan \
--subnet 10.10.0.0/24 \
-o parent=eth0 \
-o ipvlan_mode=l3 \
my-ipvlan-l3
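One macvlan caveat: in bridge mode the host cannot reach its own macvlan containers through the parent interface. A common workaround is to give the host a macvlan interface of its own; a sketch with assumed names and addresses (adjust eth0 and the IPs to your LAN):
# Give the host its own macvlan interface and route container IPs through it
sudo ip link add mvlan-host link eth0 type macvlan mode bridge
sudo ip addr add 192.168.1.223/32 dev mvlan-host
sudo ip link set mvlan-host up
sudo ip route add 192.168.1.50/32 dev mvlan-host  # lan-server from above
# The host can now reach the container
ping -c 1 192.168.1.50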
| Driver | Scope | Container IP | DNS | Isolation | Best For |
|---|---|---|---|---|---|
| bridge (default) | Single host | Private (172.17.x.x) | No | Low | Quick testing |
| bridge (user-defined) | Single host | Private (custom) | Yes | Medium | Most workloads |
| host | Single host | Host IP | Host | None | Performance, monitoring |
| overlay | Multi-host | Private (10.0.x.x) | Yes | High | Swarm services |
| macvlan | Single host | LAN IP (unique MAC) | No | High | LAN-direct access |
| ipvlan | Single host | LAN IP (shared MAC) | No | High | Cloud VMs, L3 routing |
| none | Single host | None | No | Complete | Batch jobs, security |
Rule of thumb: Use user-defined bridges for single-host apps, overlay for multi-host, and host only when you need raw performance or host-level network access.
Docker runs an embedded DNS server at 127.0.0.11 inside every user-defined network. This enables automatic service discovery by container name.
The resolver listens on 127.0.0.11:53 and forwards lookups for external names to the DNS servers configured on the host.
# Network aliases for round-robin
docker run -d --network my-app \
--network-alias api worker1
docker run -d --network my-app \
--network-alias api worker2
# "api" resolves to both IPs
# DNS in action
docker network create app-net
docker run -d --name db \
--network app-net postgres:16
docker run -d --name api \
--network app-net node:20
# Automatic resolution
docker exec api nslookup db
# Server: 127.0.0.11
# Name: db
# Address: 172.19.0.2
docker exec api ping db
# PING db (172.19.0.2): 56 bytes
# Custom DNS configuration
docker run -d \
--dns 8.8.8.8 \
--dns-search example.com \
--hostname myapp.local \
--name api my-image
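To confirm the api alias from the round-robin example really resolves to both workers, query the embedded resolver from a throwaway container on the same network (a sketch; the IPs shown are illustrative):
# Both worker IPs come back for the shared alias
docker run --rm --network my-app nicolaka/netshoot dig +short api
# 172.18.0.4
# 172.18.0.5
# The answering server is Docker's embedded DNS
docker run --rm --network my-app nicolaka/netshoot cat /etc/resolv.conf
# nameserver 127.0.0.11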
Port mapping uses iptables DNAT rules to forward traffic from a host port to a container port. This is how external clients reach containerised services.
| Flag | Meaning | Notes |
|---|---|---|
| -p 8080:80 | Host 8080 → container 80 | Binds on all interfaces |
| -p 127.0.0.1:8080:80 | Localhost only | Dev safety |
| -p 8080:80/udp | UDP protocol | DNS, QUIC |
| -p 8080-8090:80-90 | Port range | Multi-port apps |
| -P | Publish all EXPOSEd ports | Random host ports |
Docker port mappings bypass UFW/firewalld by default. Bind to 127.0.0.1, or set "iptables": false in /etc/docker/daemon.json and manage the rules manually.
# Basic port mapping
docker run -d -p 8080:80 nginx
# Bind to localhost only (secure)
docker run -d -p 127.0.0.1:3000:3000 \
my-api
# Multiple port mappings
docker run -d \
-p 80:80 \
-p 443:443 \
-p 127.0.0.1:8443:8443 \
my-proxy
# Check published ports
docker port my-proxy
# 80/tcp -> 0.0.0.0:80
# 443/tcp -> 0.0.0.0:443
# 8443/tcp -> 127.0.0.1:8443
# docker-compose.yml
services:
  web:
    image: nginx
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
Containers on the same user-defined network can communicate freely using container names. Cross-network communication requires connecting containers to multiple networks.
# Multi-network Compose pattern
services:
  nginx:
    image: nginx
    networks: [frontend]
    ports: ["80:80"]
  api:
    image: node:20
    networks: [frontend, backend]
    # api talks to both nginx and db
  db:
    image: postgres:16
    networks: [backend]
    # db is isolated from nginx
networks:
  frontend:
    name: app-frontend
  backend:
    name: app-backend
    internal: true # No external access
The result is two networks: frontend, reachable from outside via the published port, and backend, which is internal.
--internal blocks all external access for a network, --network none gives a container complete isolation, and published ports should bind to 0.0.0.0 only when they genuinely need to be public.
# Internal network (no external access)
docker network create --internal backend
# Disable inter-container communication
dockerd --icc=false
# No network at all
docker run --network none alpine
# Encrypt overlay traffic
docker network create --driver overlay \
--opt encrypted secure-overlay
# Restrict daemon-level networking
# /etc/docker/daemon.json
{
  "icc": false,
  "iptables": true,
  "default-address-pools": [
    {"base": "10.10.0.0/16", "size": 24}
  ]
}
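With default-address-pools set, networks created after a daemon restart draw their subnets from the pool; a quick check (the network name is made up for illustration):
# New networks get /24 subnets carved out of 10.10.0.0/16
docker network create pool-test
docker network inspect -f \
  '{{range .IPAM.Config}}{{.Subnet}}{{end}}' pool-test
# 10.10.0.0/24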
| Command | Description |
|---|---|
| docker network ls | List all networks |
| docker network create <name> | Create a bridge network (default driver) |
| docker network create -d overlay <name> | Create an overlay network |
| docker network inspect <name> | Show network details (subnet, containers, config) |
| docker network connect <net> <container> | Attach a running container to a network |
| docker network disconnect <net> <container> | Detach a container from a network |
| docker network rm <name> | Remove a network (must have no containers) |
| docker network prune | Remove all unused networks |
| docker port <container> | Show port mappings for a container |
| docker inspect --format '...' <ctr> | Extract networking info from a container |
# Useful inspect format strings
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-ctr
docker inspect -f '{{json .NetworkSettings.Ports}}' my-ctr | jq .
docker inspect -f '{{.NetworkSettings.Gateway}}' my-ctr
When containers cannot communicate, use these tools and techniques to diagnose the problem systematically.
nicolaka/netshoot is a container packed with networking tools: curl, dig, nslookup, tcpdump, iperf, nmap, and more.
# Attach netshoot to a network
docker run --rm -it \
--network my-app \
nicolaka/netshoot
# Debug from inside a container's namespace
docker run --rm -it \
--network container:my-api \
nicolaka/netshoot
# Capture traffic on docker0
docker run --rm -it --net host \
nicolaka/netshoot \
tcpdump -i docker0 -nn port 80
Start with the basics: does docker network inspect show both containers on the same network? Is the service listening on 0.0.0.0 (not 127.0.0.1)? Do the expected rules appear in iptables -L -n -t nat?
# Quick diagnostics from host
docker exec my-api cat /etc/resolv.conf
docker exec my-api nslookup db
docker exec my-api curl -v http://db:5432
iptables -L DOCKER -n -v
Docker supports third-party network plugins for advanced use cases. In Kubernetes, the Container Network Interface (CNI) standard replaces Docker’s built-in networking.
Third-party plugins are installed with docker plugin install <name>. The sidecar pattern attaches a helper container with --network container:main so both share the same network namespace and localhost.
# Reverse proxy with Traefik
services:
  traefik:
    image: traefik:v3.0
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [proxy]
  api:
    image: my-api:latest
    labels:
      - "traefik.http.routers.api.rule=Host(`api.example.com`)"
    networks: [proxy, backend]
  db:
    image: postgres:16
    networks: [backend]
networks:
  proxy:
  backend:
    internal: true
# Sidecar: envoy shares api's network
docker run -d --name api my-api
docker run -d --name envoy \
--network container:api \
envoyproxy/envoy
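To check that the sidecar really joined api's namespace, inspect its network mode and poke around with netshoot in the shared namespace (a sketch; output format may vary):
# envoy has no network of its own; its mode points at api's namespace
docker inspect -f '{{.HostConfig.NetworkMode}}' envoy
# container:<id of api>
# A debug shell in the same namespace sees whatever api bound on localhost
docker run --rm -it --network container:api nicolaka/netshoot ss -ltn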
Mark backend networks as --internal, and bind published ports to 127.0.0.1 in development.