Chapter 6. Managing project networks


Project networks help you to isolate network traffic between cloud workloads. To create a project network, you plan and create the network, and then add subnets and routers.

6.1. VLAN planning

When you plan for VLANs in your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets, you can segregate traffic between systems into VLANs.

For example, management or API traffic should not share a network with systems that serve web traffic. Traffic between VLANs travels through a router, where you can implement firewalls to govern traffic flow.

You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment.

Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.

Control plane network
Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
Designate network
Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
Designateext network
Used to provide external access to the DNS service resolver and the DNS servers.
External network

An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:

  • To provide virtual machine instances with Internet access.
  • To create flat provider networks that are separate from the control plane.
  • To configure VLAN provider networks on a separate bridge from the control plane.
  • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.

    Note

    When an external network is used for workloads, an OVN gateway is required in some use cases. For more information on use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.

Internal API network
Used for internal communication between RHOSO components.
Octavia network
Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
Storage network
Used for block storage, RBD, NFS, FC, and iSCSI.
Storage Management network

An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

Note

For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.

Tenant (project) network
Used for data communication between virtual machine instances within the cloud deployment.

Figure 6.1. Physical networks for RHOSO

The following table details the default networks used in a RHOSO deployment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 6.1. Default RHOSO networks

| Network name | CIDR | NetConfig allocation range | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
| --- | --- | --- | --- | --- | --- |
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |

6.3. IP address consumption

In Red Hat OpenStack Services on OpenShift (RHOSO) environments the following systems consume IP addresses from your allocated range:

  • Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connecting to different switches for redundancy.
  • Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share.
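When you size these allocations, you can quickly check the usable host count for a given prefix length. The following minimal shell calculation uses a /24 prefix as an example, matching the subnet masks in the tables in this chapter:

```shell
# Usable host addresses in a subnet: 2^(32 - prefix), minus the
# network address and the broadcast address.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"
```

For a /24 this prints 254, from which the allocation ranges for physical nodes and VIPs are carved out.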

6.4. Virtual networking

The following virtual resources consume IP addresses in OpenStack Networking in Red Hat OpenStack Services on OpenShift (RHOSO) environments. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network:

  • Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances.
  • Virtual routers - Each router interface plugging into a subnet requires one IP address.
  • Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network.
  • Management traffic - Includes OpenStack service and API traffic. All services share a small number of VIPs. API, RPC, and database services communicate on the internal API VIP.
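As an illustration of router interface consumption, attaching a router to a subnet allocates one address from that subnet's pool. The router and subnet names below are hypothetical:

```shell
# Create a router and attach it to an existing project subnet; the
# router interface consumes one IP address from the subnet pool.
openstack router create demo-router
openstack router add subnet demo-router demo-subnet

# The allocated interface address is visible on the router's port:
openstack port list --router demo-router -c "Fixed IP Addresses"
```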

6.5. Example network plan

This example shows a number of networks in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that accommodate multiple subnets, with each subnet being assigned a range of IP addresses:

Example subnet plan

| Subnet name | Address range | Number of addresses | Subnet mask |
| --- | --- | --- | --- |
| Provisioning network | 192.168.100.1 - 192.168.100.250 | 250 | 255.255.255.0 |
| Internal API network | 172.16.1.10 - 172.16.1.250 | 241 | 255.255.255.0 |
| Storage | 172.16.2.10 - 172.16.2.250 | 241 | 255.255.255.0 |
| Storage Management | 172.16.3.10 - 172.16.3.250 | 241 | 255.255.255.0 |
| Tenant network (GRE/VXLAN) | 172.16.4.10 - 172.16.4.250 | 241 | 255.255.255.0 |
| External network (incl. floating IPs) | 10.1.2.10 - 10.1.3.222 | 469 | 255.255.254.0 |
| Provider network (infrastructure) | 10.10.3.10 - 10.10.3.250 | 241 | 255.255.252.0 |
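The address counts follow directly from the range endpoints. A quick shell check for the external network range, which spans two /24 boundaries:

```shell
# Convert a dotted-quad IPv4 address to an integer, then count the
# addresses in an inclusive range.
ip2int() {
  IFS=. read -r a b c d <<<"$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

start=$(ip2int 10.1.2.10)
end=$(ip2int 10.1.3.222)
echo $(( end - start + 1 ))
```

This prints 469, matching the table.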

6.6. Working with subnets

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, use subnets to grant network connectivity to instances. A subnet is a pool of IP addresses. Instances are assigned to a Networking service (neutron) network. One network can have multiple subnets, and a port can have IP addresses from multiple subnets.

You can create subnets only in pre-existing networks. Remember that project networks in the Networking service can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them.

You can lessen network latency and load by grouping systems in the same subnet that require a high volume of traffic between each other.
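For example, you can create a network and add two subnets to it with the OpenStack client. The network and subnet names and CIDRs below are hypothetical:

```shell
# Create a project network, then add two subnets to it. Systems on
# the two subnets share the network but draw from separate address pools.
openstack network create web-net

openstack subnet create web-subnet-1 \
  --network web-net \
  --subnet-range 192.0.2.0/24

openstack subnet create web-subnet-2 \
  --network web-net \
  --subnet-range 198.51.100.0/24
```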

6.7. Configuring floating IP port forwarding

In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to enable users to set up port forwarding for floating IPs, you must enable the Networking service (neutron) port_forwarding service plug-in.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.

Procedure

  • Update the control plane:

    $ oc patch -n openstack openstackcontrolplane openstack-galera-network-isolation --type=merge --patch "
    ---
    spec:
      neutron:
        template:
          customServiceConfig: |
            [DEFAULT]
            service_plugins=ovn-router,port_forwarding
    "
    Note

    The port_forwarding service plug-in requires that you also set the ovn-router service plug-in.

    RHOSO users can now set up port forwarding for floating IPs.

Verification

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Ensure that the Networking service has successfully loaded the port_forwarding and router service plug-ins:

    $ openstack --os-cloud <cloud_name> extension list --network \
    -c Name -c Alias --max-width 74 | \
    grep -i -e 'Neutron L3 Router' -e floating-ip-port-forwarding
    • Replace <cloud_name> with the name of the cloud on which you are running the command.

      Sample output

      A successful verification produces output similar to the following:

      | Floating IP Port Forwarding       | floating-ip-port-forwarding        |
      | Neutron L3 Router                 | router                             |
  3. Exit the openstackclient pod:

    $ exit
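With the plug-in loaded, a user can create a port forwarding rule on a floating IP. For example, to forward external TCP port 2222 to port 22 on an instance (the placeholder values are hypothetical):

```shell
# Forward TCP port 2222 on the floating IP to SSH (port 22) on the
# instance's internal address.
openstack floating ip port forwarding create \
  --internal-ip-address <instance_ip> \
  --port <internal_port_id> \
  --internal-protocol-port 22 \
  --external-protocol-port 2222 \
  --protocol tcp \
  <floating_ip>
```

Replace <instance_ip>, <internal_port_id>, and <floating_ip> with values from your environment.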

6.8. Bridging the physical network

In Red Hat OpenStack Services on OpenShift (RHOSO) environments you can bridge your virtual network to the physical network to enable connectivity to and from virtual instances.

In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge acts as the intermediary between the physical network and any virtual networks.

As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.

To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:

Procedure

  1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 in a text editor, and update the following parameters with values appropriate for the network at your site:

    • IPADDR
    • NETMASK
    • GATEWAY
    • DNS1 (name server)

      Here is an example:

      DEVICE=eth0
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-ex
      ONBOOT=yes
  2. Open /etc/sysconfig/network-scripts/ifcfg-br-ex in a text editor and update the virtual bridge parameters with the IP address values that were previously allocated to eth0:

    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.120.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.120.1
    DNS1=192.168.120.1
    ONBOOT=yes

    You can now assign floating IP addresses to instances and make them available to the physical network.
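After br-ex is mapped to the physical interface, an administrator can expose it to the Networking service as a flat provider network and allocate floating IPs from it. The following sketch assumes a bridge mapping of physnet1:br-ex and hypothetical address ranges:

```shell
# Create a flat provider network backed by the br-ex bridge mapping,
# with a subnet whose allocation pool supplies floating IP addresses.
openstack network create external-net \
  --external \
  --provider-network-type flat \
  --provider-physical-network physnet1

openstack subnet create external-subnet \
  --network external-net \
  --subnet-range 192.168.120.0/24 \
  --gateway 192.168.120.1 \
  --no-dhcp \
  --allocation-pool start=192.168.120.100,end=192.168.120.200

# Allocate a floating IP from the pool and attach it to an instance:
openstack floating ip create external-net
openstack server add floating ip <instance_name> <floating_ip>
```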
