Chapter 12. Setting container network modes
This chapter provides information about how to set different container network modes.
12.1. Running containers with a static IP
The podman run command with the --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). To verify that you set the IP address correctly, run the podman inspect command.
Prerequisites
- The container-tools meta-package is installed.
Procedure
Set the container network interface to the IP address 10.88.0.44:
# podman run -d --name=myubi --ip=10.88.0.44 registry.access.redhat.com/ubi9/ubi
efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85
Verification
Check that the IP address is set properly:
# podman inspect --format='{{.NetworkSettings.IPAddress}}' myubi
10.88.0.44
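The --ip option also works with user-defined networks. The following is a minimal sketch; the network name mynet, the subnet, and the address are illustrative and assume a rootful setup:

# podman network create --subnet 10.89.0.0/24 mynet
mynet
# podman run -d --name myubi2 --network mynet --ip 10.89.0.5 registry.access.redhat.com/ubi9/ubi
# podman inspect --format='{{.NetworkSettings.Networks.mynet.IPAddress}}' myubi2
10.89.0.5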
12.2. Running the DHCP plugin for Netavark using systemd
The netavark-dhcp-proxy systemd socket acts as a proxy DHCP client, so that containers on a macvlan network can obtain IP address leases from a DHCP server on your network.
Prerequisites
- The container-tools meta-package is installed.
Procedure
Enable the DHCP proxy by using the systemd socket:
# systemctl enable --now netavark-dhcp-proxy.socket
Created symlink /etc/systemd/system/sockets.target.wants/netavark-dhcp-proxy.socket → /usr/lib/systemd/system/netavark-dhcp-proxy.socket.

Optional: Display the socket unit file:

# cat /usr/lib/systemd/system/netavark-dhcp-proxy.socket
[Unit]
Description=Netavark DHCP proxy socket

[Socket]
ListenStream=%t/podman/nv-proxy.sock
SocketMode=0660

[Install]
WantedBy=sockets.target

Create a macvlan network and specify your host interface. Typically, it is your external interface:
# podman network create -d macvlan --interface-name <LAN_INTERFACE> mv1
mv1

Run the container by using the newly created network:

# podman run --rm --network mv1 -d --name test alpine top
894ae3b6b1081aca2a5d90a9855568eaa533c08a174874be59569d4656f9bc45
Verification
Confirm that the container has an IP address on your local subnet:

# podman exec test ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 5a:30:72:bf:13:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.188.36/24 brd 192.168.188.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5830:72ff:febf:1376/64 scope link
       valid_lft forever preferred_lft forever

Inspect the container to verify that it uses the correct IP address:

# podman container inspect test --format {{.NetworkSettings.Networks.mv1.IPAddress}}
192.168.188.36
When attempting to connect to this IP address, ensure the connection is made from a different host. Connections from the same host are not supported when using macvlan networking.
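Because a DHCP lease is keyed to the container's MAC address, and Podman generates a random MAC address for each container by default, re-creating the container can yield a different IP address. As a sketch, you can pass a fixed MAC address with the --mac-address option so that the DHCP server hands out a stable lease; the address value and container name here are illustrative:

# podman run --rm --network mv1 --mac-address 92:d0:c6:0a:29:33 -d --name test2 alpine top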
12.3. Running the DHCP plugin for CNI using systemd
You can use the systemd unit file to run the dhcp plugin.
Prerequisites
- The container-tools meta-package is installed.
Procedure
Optional: Verify that you are using the CNI network stack:

# podman info --format "{{.Host.NetworkBackend}}"
cni

Enable the DHCP proxy by using the systemd socket:
# systemctl enable --now cni-dhcp.socket
Created symlink /etc/systemd/system/sockets.target.wants/cni-dhcp.socket → /usr/lib/systemd/system/cni-dhcp.socket.

Optional: Display the socket unit file:

# cat /usr/lib/systemd/system/cni-dhcp.socket
[Unit]
Description=CNI DHCP service socket
Documentation=https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp
PartOf=cni-dhcp.service

[Socket]
ListenStream=/run/cni/dhcp.sock
SocketMode=0660
SocketUser=root
SocketGroup=root
RemoveOnStop=true

[Install]
WantedBy=sockets.target
Verification
Check the status of the socket:
# systemctl status cni-dhcp.socket
● cni-dhcp.socket - CNI DHCP service socket
   Loaded: loaded (/usr/lib/systemd/system/cni-dhcp.socket; enabled; vendor preset: disabled)
   Active: active (listening) since Mon 2025-01-06 08:39:35 EST; 33s ago
     Docs: https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp
   Listen: /run/cni/dhcp.sock (Stream)
    Tasks: 0 (limit: 11125)
   Memory: 4.0K
   CGroup: /system.slice/cni-dhcp.socket
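The socket-activated proxy serves CNI networks whose IPAM type is dhcp. As an illustrative sketch of what such a configuration can look like, the following conflist pairs the macvlan CNI plugin with the dhcp IPAM plugin; the file name, the network name mvcni, and the master interface eth0 are assumptions, not values from this procedure:

# cat /etc/cni/net.d/90-mvcni.conflist
{
  "cniVersion": "0.4.0",
  "name": "mvcni",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "dhcp"
      }
    }
  ]
}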
12.4. The macvlan plugin
Most container images do not have a DHCP client; the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server.

The host system does not have network access to the container. To allow network connections from outside the host to the container, the container must have an IP address on the same network as the host. The macvlan plugin enables you to connect a container to the same network as the host.

This procedure applies only to rootful containers. Rootless containers are not able to use the macvlan and dhcp plugins.

You can create a macvlan network by using the podman network create --driver=macvlan command.
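For example, the following sketch creates a macvlan network that is attached to a host interface; the interface name eth0 and the network name mvnet are illustrative:

# podman network create --driver=macvlan --interface-name eth0 mvnet
mvnet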
12.5. Switching the network stack from CNI to Netavark
Previously, containers were able to use DNS only when connected to a single Container Network Interface (CNI) network. Netavark is a network stack for containers. You can use Netavark with Podman and other Open Container Initiative (OCI) container management applications. The Netavark network stack for Podman is also compatible with advanced Docker functionality. Containers in multiple networks can now reach containers on any of those networks.
Netavark is capable of the following:
- Creating, managing, and removing network interfaces, including bridge and macvlan interfaces.
- Configuring firewall settings, such as network address translation (NAT) and port mapping rules.
- Supporting IPv4 and IPv6.
- Improving support for containers in multiple networks.
The CNI network stack is deprecated and will be removed in a future RHEL release. Use the Netavark network stack instead.
Prerequisites
- The container-tools meta-package is installed.
Procedure
If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

# cp /usr/share/containers/containers.conf /etc/containers/

Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

network_backend="netavark"

If you have any containers or pods, reset the storage back to the initial state:

# podman system reset

Reboot the system:

# reboot
Verification
Verify that the network stack is changed to Netavark:
# cat /etc/containers/containers.conf
...
[network]
network_backend="netavark"
...
If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
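For example:

# podman info --format "{{.Host.NetworkBackend}}"
netavark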
12.6. Switching the network stack from Netavark to CNI
You can switch the network stack from Netavark to CNI.
The CNI network stack is deprecated and will be removed in a future RHEL release. Use the Netavark network stack instead.
Prerequisites
- The container-tools meta-package is installed.
Procedure
If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

# cp /usr/share/containers/containers.conf /etc/containers/

Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

network_backend="cni"

If you have any containers or pods, reset the storage back to the initial state:

# podman system reset

Reboot the system:

# reboot
Verification
Verify that the network stack is changed to CNI:
# cat /etc/containers/containers.conf
...
[network]
network_backend="cni"
...
If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
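For example:

# podman info --format "{{.Host.NetworkBackend}}"
cni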