Chapter 12. Setting container network modes

This chapter describes how to set different network modes for containers.

12.1. Running containers with a static IP

The podman run command with the --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). To verify that you set the IP address correctly, run the podman inspect command.

Prerequisites

  • The container-tools module is installed.

Procedure

  • Set the container network interface to the IP address 10.88.0.44:

    # podman run -d --name=myubi --ip=10.88.0.44 registry.access.redhat.com/ubi8/ubi
    efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85

Verification

  • Check that the IP address is set properly:

    # podman inspect --format='{{.NetworkSettings.IPAddress}}' myubi
    10.88.0.44
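
You can also assign a static IP address on a user-defined network. The following is a minimal sketch, assuming the hypothetical network name mynet and the subnet 10.89.0.0/24:

    # podman network create --subnet 10.89.0.0/24 mynet
    # podman run -d --name=myubi2 --network=mynet --ip=10.89.0.5 registry.access.redhat.com/ubi8/ubi
    # podman inspect --format='{{.NetworkSettings.Networks.mynet.IPAddress}}' myubi2

If the address was assigned, the podman inspect command prints 10.89.0.5.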

12.2. Running the DHCP plugin without systemd

Use the podman run --network command to connect to a user-defined network. Because most container images do not have a DHCP client, the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server.
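
The procedure below assumes that a user-defined network named example already exists. One way to create such a network is shown in the following sketch, which assumes that eth0 is the host interface attached to the DHCP-served network; with the CNI backend, a macvlan network created this way uses the dhcp IPAM plugin, which is why the dhcp daemon must be running:

    # podman network create --macvlan eth0 example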

Note

This procedure applies only to rootful containers. Rootless containers do not use the dhcp plugin.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Manually run the dhcp plugin:

    # /usr/libexec/cni/dhcp daemon &
    [1] 4966
  2. Check that the dhcp plugin is running:

    # ps -a | grep dhcp
    4966 pts/1    00:00:00 dhcp
  3. Run the alpine container:

    # podman run -it --rm --network=example alpine ip addr show eth0
    Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
    Trying to pull docker.io/library/alpine:latest...
    ...
    Storing signatures
    
    2: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
        link/ether f6:dd:1b:a7:9b:92 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
        ...

    In this example:

    • The --network=example option connects the container to the network named example.
    • The ip addr show eth0 command inside the alpine container checks the IP address of the container network interface eth0.
    • The host network is 192.168.1.0/24.
    • The eth0 interface leases the IP address 192.168.1.22 for the alpine container.
Note

This configuration may exhaust the available DHCP addresses if you have a large number of short-lived containers and a DHCP server with long leases.

12.3. Running the DHCP plugin using systemd

You can use systemd unit files to run the dhcp plugin.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. Create the socket unit file:

    # cat /usr/lib/systemd/system/io.podman.dhcp.socket
    [Unit]
    Description=DHCP Client for CNI
    
    [Socket]
    ListenStream=%t/cni/dhcp.sock
    SocketMode=0600
    
    [Install]
    WantedBy=sockets.target
  2. Create the service unit file:

    # cat /usr/lib/systemd/system/io.podman.dhcp.service
    [Unit]
    Description=DHCP Client CNI Service
    Requires=io.podman.dhcp.socket
    After=io.podman.dhcp.socket
    
    [Service]
    Type=simple
    ExecStart=/usr/libexec/cni/dhcp daemon
    TimeoutStopSec=30
    KillMode=process

    [Install]
    WantedBy=multi-user.target
    Also=io.podman.dhcp.socket
  3. Start the service immediately:

    # systemctl --now enable io.podman.dhcp.socket
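
Note

If you create or modify these unit files manually, reload the systemd manager configuration first so that systemd registers the new units:

    # systemctl daemon-reload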

Verification

  • Check the status of the socket:

    # systemctl status io.podman.dhcp.socket
    io.podman.dhcp.socket - DHCP Client for CNI
    Loaded: loaded (/usr/lib/systemd/system/io.podman.dhcp.socket; enabled; vendor preset: disabled)
    Active: active (listening) since Mon 2022-01-03 18:08:10 CET; 39s ago
    Listen: /run/cni/dhcp.sock (Stream)
    CGroup: /system.slice/io.podman.dhcp.socket
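
Because the socket unit activates the service on demand, the io.podman.dhcp.service unit starts when a container first requests a lease. You can check its status in the same way:

    # systemctl status io.podman.dhcp.service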

12.4. The macvlan plugin

Most container images do not have a DHCP client; therefore, the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server.

The host system does not have network access to the container. To allow network connections from outside the host to the container, the container must have an IP address on the same network as the host. The macvlan plugin enables you to connect a container to the same network as the host.

Note

This procedure applies only to rootful containers. Rootless containers cannot use the macvlan and dhcp plugins.

Note

You can create a macvlan network using the podman network create --macvlan command.
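
For example, the following sketch creates a macvlan network named webnetwork, assuming that eth0 is the host interface to attach to and that the dhcp plugin is running to assign the address, and then runs a container on it:

    # podman network create --macvlan eth0 webnetwork
    # podman run -it --rm --network=webnetwork alpine ip addr show eth0

The container interface receives an address on the host's network, so systems outside the host can reach the container directly.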

12.5. Switching the network stack from CNI to Netavark

Previously, containers could use DNS only when they were connected to a single Container Network Interface (CNI) network. Netavark is a network stack for containers. You can use Netavark with Podman and other Open Container Initiative (OCI) container management applications. Netavark, the advanced network stack for Podman, is compatible with advanced Docker functionalities. With Netavark, containers in multiple networks can access containers on any of those networks.

Netavark is capable of the following:

  • Creating, managing, and removing network interfaces, including bridge and macvlan interfaces.
  • Configuring firewall settings, such as network address translation (NAT) and port mapping rules.
  • Supporting IPv4 and IPv6.
  • Improving support for containers in multiple networks.
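
For example, after you switch to Netavark by using the following procedure, containers on the same user-defined network can reach each other by name. A minimal sketch, assuming the hypothetical network name mynet and container name web:

    # podman network create mynet
    # podman run -d --name=web --network=mynet registry.access.redhat.com/ubi8/ubi sleep infinity
    # podman run --rm --network=mynet alpine ping -c 1 web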

Prerequisites

  • The container-tools module is installed.

Procedure

  1. If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

    # cp /usr/share/containers/containers.conf /etc/containers/
  2. Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

    network_backend="netavark"
  3. If you have any containers or pods, reset the storage back to the initial state:

    # podman system reset
  4. Reboot the system:

    # reboot

Verification

  • Verify that the network stack is changed to Netavark:

    # cat /etc/containers/containers.conf
    ...
    [network]
    network_backend="netavark"
    ...
Note

If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
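
For example, assuming Podman 4.0.0 or later:

    # podman info --format "{{.Host.NetworkBackend}}"
    netavark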

12.6. Switching the network stack from Netavark to CNI

You can switch the network stack from Netavark to CNI.

Prerequisites

  • The container-tools module is installed.

Procedure

  1. If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory:

    # cp /usr/share/containers/containers.conf /etc/containers/
  2. Edit the /etc/containers/containers.conf file, and add the following content to the [network] section:

    network_backend="cni"
  3. If you have any containers or pods, reset the storage back to the initial state:

    # podman system reset
  4. Reboot the system:

    # reboot

Verification

  • Verify that the network stack is changed to CNI:

    # cat /etc/containers/containers.conf
    ...
    [network]
    network_backend="cni"
    ...
Note

If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting.
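
For example, assuming Podman 4.0.0 or later:

    # podman info --format "{{.Host.NetworkBackend}}"
    cni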
