Chapter 3. Creating networks with director Operator

To create networks and bridges on OpenShift Virtualization worker nodes and connect your virtual machines (VMs) to these networks, you define your OpenStackNetConfig custom resource (CR) and specify all the subnets for the overcloud networks. You must create one control plane network for your overcloud. You can also optionally create additional networks to implement network isolation for your composable networks.

3.1. Creating an overcloud network with the OpenStackNetConfig CRD

You must use the OpenStackNetConfig CRD to define at least one control plane network for your overcloud. You can also optionally define VLAN networks to create network isolation for composable networks such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment and the mapping information for the OpenStackNetAttachment CRD. OpenShift Virtualization uses the network definition to attach the virtual machines (VMs) to the control plane and VLAN networks.

Tip

Use the following commands to view the OpenStackNetConfig CRD definition and specification schema:

$ oc describe crd openstacknetconfig

$ oc explain openstacknetconfig.spec
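
If the CRD publishes a structural schema, oc explain can also drill into nested fields with a dotted path. For example, a sketch based on the fields used in this chapter:

$ oc explain openstacknetconfig.spec.networks.subnets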

Procedure

  1. Create a file named openstacknetconfig.yaml on your workstation.
  2. Add the following configuration to openstacknetconfig.yaml to create the OpenStackNetConfig custom resource (CR):

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
  3. Configure network attachment definitions for the bridges you require for your network. For example, add the following configuration to openstacknetconfig.yaml to create the RHOSP bridge network attachment definition br-osp, and set the nodeNetworkConfigurationPolicy option to create a Linux bridge:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
    spec:
      attachConfigurations:
        br-osp: 1
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
              - bridge:
                  options:
                    stp:
                      enabled: false
                  port:
                  - name: enp6s0 2
                description: Linux bridge with enp6s0 as a port
                name: br-osp 3
                state: up
                type: linux-bridge
                mtu: 1500 4
      # optional DnsServers list
      dnsServers:
      - 192.168.25.1
      # optional DnsSearchDomains list
      dnsSearchDomains:
      - osptest.test.metalkube.org
      - some.other.domain
      # DomainName of the OSP environment
      domainName: osptest.test.metalkube.org
    1  The network attachment definition for br-osp.
    2  The NIC or Ethernet device on each host to attach to the bridge.
    3  The name of the bridge interface.
    4  The maximum amount of data that can be transferred in a single network packet.
  4. Optional: To use Jumbo Frames for a bridge, update the value of mtu for the bridge and configure the bridge's port interface to use Jumbo Frames:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackNetConfig
    metadata:
      name: openstacknetconfig
    spec:
      attachConfigurations:
        br-osp:
          nodeNetworkConfigurationPolicy:
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            desiredState:
              interfaces:
              - bridge:
                  options:
                    stp:
                      enabled: false
                  port:
                  - name: enp6s0
                description: Linux bridge with enp6s0 as a port
                name: br-osp
                state: up
                type: linux-bridge
                mtu: 9000 1
              - name: enp6s0 2
                description: Configuring enp6s0 on workers
                type: ethernet
                state: up
                mtu: 9000
      ...
    1  The updated maximum amount of data that can be transferred in a single network packet when you use Jumbo Frames.
    2  The bridge's port interface (enp6s0), configured to use Jumbo Frames.
  5. Define each overcloud network. The following example creates a control plane network and an isolated network for InternalAPI traffic:

    spec:
      ...
      networks:
      - name: Control 1
        nameLower: ctlplane 2
        subnets: 3
        - name: ctlplane 4
          ipv4: 5
            allocationEnd: 172.22.0.250
            allocationStart: 172.22.0.100
            cidr: 172.22.0.0/24
            gateway: 172.22.0.1
          attachConfiguration: br-osp 6
      - name: InternalApi 7
        nameLower: internal_api
        mtu: 1350
        subnets:
        - name: internal_api
          attachConfiguration: br-osp
          vlan: 20 8
          ipv4:
            allocationEnd: 172.17.0.250
            allocationStart: 172.17.0.10
            cidr: 172.17.0.0/24
      ...
    1  The name of the network, for example, Control.
    2  The lowercase version of the network name, for example, ctlplane.
    3  The subnet specifications.
    4  The name of the subnet, for example, ctlplane.
    5  Details of the IPv4 subnet: allocationStart, allocationEnd, cidr, gateway, and an optional list of routes with destination and nexthop.
    6  The network attachment definition to connect the network to. In this example, the RHOSP bridge, br-osp, is connected to a NIC on each worker.
    7  The network definition for a composable network. To use the default RHOSP networks, define each network in the OpenStackNetConfig CR. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. To use different networks, you must create a custom network_data.yaml file. For information on creating a custom network_data.yaml file, see Configuring overcloud networking.
    8  The network VLAN. For information on the default RHOSP networks, see Default Red Hat OpenStack Platform networks. For more information on virtual machine bridging with the OpenStackNetConfig CRD, see Understanding virtual machine bridging with the OpenStackNetConfig CRD.
  6. Optional: Reserve static IP addresses for networks on specific nodes:

    spec:
      ...
      reservations:
        controller-0:
          ipReservations:
            ctlplane: 172.22.0.120
        compute-0:
          ipReservations:
            ctlplane: 172.22.0.140
            internal_api: 172.17.0.40
            storage: 172.18.0.40
            tenant: 172.20.0.40
    Note

    Reservations take precedence over any autogenerated IP addresses.

  7. Save the openstacknetconfig.yaml definition file.
  8. Create the overcloud network:

    $ oc create -f openstacknetconfig.yaml -n openstack
  9. To verify that the overcloud network is created, view the resources for the overcloud network:

    $ oc get openstacknetconfig/openstacknetconfig -n openstack
  10. View the OpenStackNetConfig API and child resources:

    $ oc get openstacknetconfig/openstacknetconfig -n openstack
    $ oc get openstacknetattachment -n openstack
    $ oc get openstacknet -n openstack

    If you see errors, check the underlying network-attachment-definitions and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp
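
After you apply the configuration, the nmstate operator reports readiness through conditions on each NodeNetworkConfigurationPolicy. The following sketch waits for the policies to become available and then inspects the OpenStackNetConfig resource; the condition name assumes the kubernetes-nmstate default:

$ oc wait nncp --all --for=condition=Available --timeout=300s
$ oc describe openstacknetconfig/openstacknetconfig -n openstack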

3.1.1. Default Red Hat OpenStack Platform networks

Network      VLAN  CIDR           Allocation
External     10    10.0.0.0/24    10.0.0.10 - 10.0.0.250
InternalApi  20    172.17.0.0/24  172.17.0.10 - 172.17.0.250
Storage      30    172.18.0.0/24  172.18.0.10 - 172.18.0.250
StorageMgmt  40    172.19.0.0/24  172.19.0.10 - 172.19.0.250
Tenant       50    172.20.0.0/24  172.20.0.10 - 172.20.0.250

3.2. Understanding virtual machine bridging with the OpenStackNetConfig CRD

When you create virtual machines (VMs) with the OpenStackVMSet CRD, you must connect these VMs to the relevant Red Hat OpenStack Platform (RHOSP) networks. You can use the OpenStackNetConfig CRD to create the required bridges on the Red Hat OpenShift Container Platform (RHOCP) worker nodes and connect your Controller VMs to your RHOSP overcloud networks. RHOSP requires dedicated NICs for deployment.

The OpenStackNetConfig CRD includes an attachConfigurations option, which is a hash of nodeNetworkConfigurationPolicy. Each specified attachConfiguration in an OpenStackNetConfig custom resource (CR) creates a NetworkAttachmentDefinition object, which passes network interface data to the NodeNetworkConfigurationPolicy resource in the RHOCP cluster. The NodeNetworkConfigurationPolicy resource uses the nmstate API to configure the end state of the network configuration on each RHOCP worker node. The NetworkAttachmentDefinition object for each network defines the Multus CNI plugin configuration. When you specify the VLAN ID for the NetworkAttachmentDefinition object, the Multus CNI plugin enables vlan-filtering on the bridge. Each network configured in the OpenStackNetConfig CR references one of the attachConfigurations. Inside the VMs, there is one interface for each network.
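
For illustration, the NetworkAttachmentDefinition that the operator generates for a VLAN network resembles the following sketch. The exact contents vary by version; the values here are assumptions based on the InternalApi example in this chapter:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
  namespace: openstack
spec:
  # Multus CNI configuration: a cnv-bridge plugin bound to br-osp.
  # Setting "vlan" enables vlan-filtering on the bridge.
  config: '{"cniVersion": "0.3.1", "name": "internalapi", "type": "cnv-bridge", "bridge": "br-osp", "vlan": 20}'

Do not create or edit these objects manually; the operator manages them from the OpenStackNetConfig CR.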

The following example creates a br-osp attachConfiguration, and configures the nodeNetworkConfigurationPolicy option to create a Linux bridge and connect the bridge to a NIC on each worker. When you apply this configuration, the NodeNetworkConfigurationPolicy object configures each RHOCP worker node to match the required end state: each worker contains a new bridge named br-osp, which is connected to the enp6s0 NIC on each host. All RHOSP Controller VMs can connect to the br-osp bridge for control plane network traffic.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
…
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 192.168.25.250
        allocationStart: 192.168.25.100
        cidr: 192.168.25.0/24
        gateway: 192.168.25.1
      attachConfiguration: br-osp

If you specify an Internal API network through VLAN 20, you can set the attachConfiguration option to modify the networking configuration on each RHOCP worker node and connect the VLAN to the existing br-osp bridge:

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
  ...
  networks:
  ...
  - isControlPlane: false
    mtu: 1500
    name: InternalApi
    nameLower: internal_api
    subnets:
    - attachConfiguration: br-osp
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        gateway: 172.17.0.1
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.0.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.0.1
      name: internal_api
      vlan: 20

The br-osp bridge already exists and is connected to the enp6s0 NIC on each host, so no change occurs to the bridge itself. However, the InternalApi OpenStackNet associates VLAN 20 with this network, which means RHOSP Controller VMs can connect to VLAN 20 on the br-osp bridge for Internal API network traffic.
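
To confirm that VLAN filtering is active on a worker node, you can inspect the bridge from a debug pod. A sketch; the node name is an assumption:

$ oc debug node/worker-0 -- chroot /host ip -d link show br-osp
$ oc debug node/worker-0 -- chroot /host bridge vlan show

The ip -d output shows the vlan_filtering flag on the bridge, and bridge vlan show lists the VLAN IDs allowed on each port.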

When you create VMs with the OpenStackVMSet CRD, each VM uses multiple Virtio devices, one connected to each network. OpenShift Virtualization sorts the network names in alphabetical order, except for the default network, which is always the first interface. For example, if you create the default RHOSP networks with OpenStackNetConfig, the following interface configuration is generated for Controller VMs:

interfaces:
  - masquerade: {}
    model: virtio
    name: default
  - bridge: {}
    model: virtio
    name: ctlplane
  - bridge: {}
    model: virtio
    name: external
  - bridge: {}
    model: virtio
    name: internalapi
  - bridge: {}
    model: virtio
    name: storage
  - bridge: {}
    model: virtio
    name: storagemgmt
  - bridge: {}
    model: virtio
    name: tenant

This configuration results in the following network-to-interface mapping for Controller nodes:

Table 3.1. Default network-to-interface mapping
Network      Interface
default      nic1
ctlplane     nic2
external     nic3
internalapi  nic4
storage      nic5
storagemgmt  nic6
tenant       nic7

Note

The role NIC template that OpenStackVMSet uses is generated automatically. You can override the default configuration in a custom NIC template file, <role>-nic-template.j2, for example, controller-nic-template.j2. You must add your custom NIC file to the tarball file that contains your overcloud configuration, which is implemented by using an OpenShift ConfigMap object, as shown in the sketch after this note. For more information, see Chapter 4. Customizing the overcloud with director Operator.
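
For example, packaging a custom Controller NIC template into the overcloud configuration tarball and loading it as a ConfigMap might look like the following sketch. The ConfigMap name tripleo-tarball-config is an assumption based on the workflow in Chapter 4:

$ tar -czf tripleo-tarball-config.tar.gz controller-nic-template.j2
$ oc create configmap tripleo-tarball-config --from-file=tripleo-tarball-config.tar.gz -n openstack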

3.3. Example OpenStackNetConfig custom resource file

The following example OpenStackNetConfig custom resource (CR) file defines an overcloud network that includes a control plane network and isolated VLAN networks for a default RHOSP deployment. The example also reserves static IP addresses for networks on specific nodes.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 9000
          - name: enp7s0
            description: Configuring enp7s0 on workers
            type: ethernet
            state: up
            mtu: 9000
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 172.22.0.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 172.22.0.250
        allocationStart: 172.22.0.10
        cidr: 172.22.0.0/24
        gateway: 172.22.0.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        gateway: 172.17.0.1
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.0.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.0.1
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1500
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
      vlan: 30
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1500
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
      vlan: 40
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: false
    mtu: 1500
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
      vlan: 50
      attachConfiguration: br-osp
  reservations:
    compute-0:
      ipReservations:
        ctlplane: 172.22.0.140
        internal_api: 172.17.0.40
        storage: 172.18.0.40
        tenant: 172.20.0.40
      macReservations: {}
    controller-0:
      ipReservations:
        ctlplane: 172.22.0.120
        external: 10.0.0.20
        internal_api: 172.17.0.20
        storage: 172.18.0.20
        storage_mgmt: 172.19.0.20
        tenant: 172.20.0.20
      macReservations: {}
    controller-1:
      ipReservations:
        ctlplane: 172.22.0.130
        external: 10.0.0.30
        internal_api: 172.17.0.30
        storage: 172.18.0.30
        storage_mgmt: 172.19.0.30
        tenant: 172.20.0.30
      macReservations: {}
    # The key for the ctlplane VIPs
    controlplane:
      ipReservations:
        ctlplane: 172.22.0.110
        external: 10.0.0.10
        internal_api: 172.17.0.10
        storage: 172.18.0.10
        storage_mgmt: 172.19.0.10
      macReservations: {}
    openstackclient-0:
      ipReservations:
        ctlplane: 172.22.0.251
        external: 10.0.0.251
        internal_api: 172.17.0.251
      macReservations: {}
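
To create this resource, save the file as openstacknetconfig.yaml and apply it to the openstack namespace, as in the procedure earlier in this chapter:

$ oc create -f openstacknetconfig.yaml -n openstack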