Chapter 1. Planning the new deployment

Just like when you installed your director-deployed Red Hat OpenStack Platform, the upgrade and migration to the new control plane requires planning various aspects of the environment, such as node roles, network topology, and storage.

This document covers some of this planning, but it is recommended that you read the whole adoption guide before starting the process, so that you have a global understanding of the whole process.

1.1. Service configurations

There is a fundamental difference between the director and operator deployments regarding the configuration of the services.

In director deployments many of the service configurations are abstracted by director-specific configuration options. A single director option may trigger changes for multiple services, and support for drivers, for example in the Block Storage service (cinder), requires patches to the director code base.

In operator deployments this approach has changed: the goal is to reduce installer-specific knowledge and to leverage Red Hat OpenShift Container Platform (RHOCP) and Red Hat OpenStack Platform (RHOSP) service-specific knowledge whenever possible.

To this effect, RHOSP services have sensible defaults for RHOCP deployments, and human operators supply configuration snippets to provide the necessary configuration, such as the Block Storage service backend configuration, or to override the defaults.

This shortens the distance between a service specific configuration file (such as cinder.conf) and what the human operator provides in the manifests.

These configuration snippets are passed to the operators in the different customServiceConfig sections available in the manifests, and they are then layered into the services at the following levels. To illustrate this, if you set a configuration option at the top Block Storage service level (spec: cinder: template:), it is applied to all the Block Storage services. For example, to enable debug in all the Block Storage services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        debug = True
< . . . >

If you only want to set it for one of the Block Storage services, for example the scheduler, then you use the cinderScheduler section instead:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          debug = True
< . . . >

In Red Hat OpenShift Container Platform it is not recommended to store sensitive information, such as the credentials to the Block Storage service storage array, in the CRs. For this reason, most RHOSP operators have a mechanism to store sensitive service configuration parameters in Red Hat OpenShift Container Platform Secrets and to reference them in the customServiceConfigSecrets section, which is analogous to customServiceConfig.

The contents of the Secrets referenced in customServiceConfigSecrets have the same format as customServiceConfig: a snippet with the section(s) and configuration options.

When there is sensitive information in the service configuration, it becomes a matter of preference whether to store all the configuration in the Secret or only the sensitive parts. However, if you split the configuration between the Secret and customServiceConfig, the section header (for example, [DEFAULT]) must be present in both places.
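
For example, a minimal sketch of this pattern, assuming a hypothetical Secret named cinder-volume-backend-secret that carries the sensitive part of a backend configuration (the backend name, driver options, and values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: cinder-volume-backend-secret
type: Opaque
stringData:
  backend.conf: |
    [backend1]
    san_login = admin
    san_password = example-password
---
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        backend1:
          customServiceConfig: |
            [backend1]
            volume_backend_name = backend1
          customServiceConfigSecrets:
          - cinder-volume-backend-secret
< . . . >

Note how the [backend1] section header appears both in the Secret and in customServiceConfig, as described above.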

Attention should be paid to each service’s adoption process as they may have some particularities regarding their configuration.

1.2. About node roles

In director deployments you had four different standard roles for the nodes: Controller, Compute, Ceph Storage, and Swift Storage. In the new control plane you instead make a distinction based on where things are running: in Red Hat OpenShift Container Platform (RHOCP) or external to it.

When adopting a director-deployed Red Hat OpenStack Platform (RHOSP), your Compute nodes directly become external nodes, so there should not be much additional planning needed there.

In many deployments being adopted, the Controller nodes require some thought because you have many RHOCP nodes where the Controller services could run, and you have to decide which ones you want to use, how you are going to use them, and make sure those nodes are ready to run the services.

In most deployments, running RHOSP services on master nodes can have a serious adverse impact on the RHOCP cluster, so it is recommended that you place RHOSP services on non-master nodes.

By default, RHOSP Operators deploy RHOSP services on any worker node, but that is not necessarily what is best for all deployments, and some services might not even work when deployed like that.

When planning a deployment, remember that not all the services in an RHOSP deployment are the same; they have very different requirements.

Looking at the Block Storage service (cinder) component, you can clearly see the different requirements of its services: the cinder-scheduler service is very light, with low memory, disk, network, and CPU usage; the cinder-api service has higher network usage due to resource listing requests; the cinder-volume service has high disk and network usage because many of its operations are in the data path (offline volume migration, create volume from image, and so on); and the cinder-backup service has high memory, network, and CPU (to compress data) requirements.

The Image Service (glance) and Object Storage service (swift) components are also in the data path, as are the RabbitMQ and Galera services.

Given these requirements, it may be preferable not to let these services wander across all your RHOCP worker nodes, where they could impact other workloads. Alternatively, you may not mind the light services moving around, but want to pin the heavy ones to a specific set of infrastructure nodes.

There are also hardware restrictions to take into consideration. For example, if you are using a Fibre Channel (FC) Block Storage service backend, the cinder-volume and cinder-backup services, and possibly the Image Service (glance) if it uses the Block Storage service as a backend, must run on an RHOCP host that has an HBA.

The RHOSP Operators allow a great deal of flexibility in where to run the RHOSP services, because you can use node labels to define which RHOCP nodes are eligible to run the different RHOSP services. Refer to About node selector to learn more about using labels to define the placement of the RHOSP services.

1.3. About node selector

There are a variety of reasons why you might want to restrict the nodes where Red Hat OpenStack Platform (RHOSP) services can be placed:

  • Hardware requirements: System memory, Disk space, Cores, HBAs
  • Limit the impact of the RHOSP services on other Red Hat OpenShift Container Platform workloads.
  • Avoid collocating RHOSP services.

The mechanism provided by the RHOSP operators to achieve this is through the use of labels.

You either label the RHOCP nodes or use existing labels, and then use those labels in the RHOSP manifests in the nodeSelector field.

The nodeSelector field in the RHOSP manifests follows the standard RHOCP nodeSelector field. For more information, see About node selectors in OpenShift Container Platform 4.15 Documentation.

This field is present at all the different levels of the RHOSP manifests:

  • Deployment: The OpenStackControlPlane object.
  • Component: For example the cinder element in the OpenStackControlPlane.
  • Service: For example the cinderVolume element within the cinder element in the OpenStackControlPlane.

This allows a fine grained control of the placement of the RHOSP services with minimal repetition.

Values of the nodeSelector are propagated to the next levels unless they are overwritten. This means that a nodeSelector value at the deployment level will affect all the RHOSP services.

For example, you can add the label type: openstack to any 3 RHOCP nodes:

$ oc label nodes worker0 type=openstack
$ oc label nodes worker1 type=openstack
$ oc label nodes worker2 type=openstack

And then in your OpenStackControlPlane you can use the label to place all the services on those 3 nodes:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  nodeSelector:
    type: openstack
< . . . >

You can use the selector for specific services. For example, you might want to place the Block Storage service (cinder) volume and backup services on certain nodes if you are using FC and only have HBAs on a subset of nodes. The following example assumes that you have the label fc_card: "true":

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  cinder:
    template:
      cinderVolumes:
          pure_fc:
            nodeSelector:
              fc_card: "true"
< . . . >
          lvm-iscsi:
            nodeSelector:
              fc_card: "true"
< . . . >
      cinderBackup:
          nodeSelector:
            fc_card: "true"
< . . . >

The Block Storage service operator does not currently allow you to define a nodeSelector globally for all the cinderVolumes, so you need to specify it for each of the backends.

It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place RHOSP services. For more information, see Node Feature Discovery Operator in OpenShift Container Platform 4.15 Documentation.
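
For example, a minimal sketch that pins the Block Storage backup service to nodes where NFD reported a specific PCI device, assuming NFD has been configured to label the Fibre Channel controller class (0c04 is the FC controller class and 1077 is an illustrative vendor ID; the default NFD device class whitelist may need to be extended to produce such a label):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderBackup:
        nodeSelector:
          feature.node.kubernetes.io/pci-0c04_1077.present: "true"
< . . . >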

1.4. About machine configs

Some services require additional services or kernel modules to be running on the hosts where they run, for example the iscsid or multipathd daemons, or the nvme-fabrics kernel module.

For those cases you use MachineConfig manifests. If you are restricting the nodes where you place the Red Hat OpenStack Platform services by using a nodeSelector, then you also want to limit where the MachineConfig is applied.

To define where the MachineConfig can be applied, you need to use a MachineConfigPool that links the MachineConfig to the nodes.

For example, to limit the MachineConfig to the 3 Red Hat OpenShift Container Platform (RHOCP) nodes that you marked with the type: openstack label, you create the MachineConfigPool like this:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: openstack
spec:
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: openstack
  nodeSelector:
    matchLabels:
      type: openstack

And then you use it in the MachineConfig:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: openstack
< . . . >
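
For example, a minimal sketch of a complete MachineConfig that loads the nvme-fabrics kernel module on the nodes in that pool (the 99-openstack-nvme-fabrics name is illustrative):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-openstack-nvme-fabrics
  labels:
    machineconfiguration.openshift.io/role: openstack
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      # systemd-modules-load reads this file at boot and loads the module
      - path: /etc/modules-load.d/nvme-fabrics.conf
        mode: 420
        overwrite: true
        contents:
          source: data:,nvme-fabrics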

For more information, see Postinstallation machine configuration tasks in OpenShift Container Platform 4.15 Documentation.

Warning

Applying a MachineConfig to an RHOCP node makes the node reboot.

1.5. Key Manager service support for crypto plug-ins

The Key Manager service (barbican) does not yet support all of the crypto plug-ins available in director.

1.6. Configuring the network for the RHOSO deployment

With Red Hat OpenShift Container Platform (RHOCP), the network is a very important aspect of the deployment, and you must plan it carefully. The general network requirements for the Red Hat OpenStack Platform (RHOSP) services are not much different from the ones in a director deployment, but the way you handle them is different.

Note

For more information about the network architecture and configuration, see Deploying Red Hat OpenStack Platform 18.0 Development Preview 3 on Red Hat OpenShift Container Platform and About networking in OpenShift Container Platform 4.15 Documentation. This document addresses concerns specific to adoption.

When adopting a new RHOSP deployment, it is important to align the network configuration with the adopted cluster to maintain connectivity for existing workloads.

The following logical configuration steps will incorporate the existing network configuration:

  • Configure RHOCP worker nodes to align VLAN tags and IPAM configuration with the existing deployment.
  • Configure control plane services to use compatible IP ranges for service and load balancing IPs.
  • Configure data plane nodes to use a compatible configuration for VLAN tags and IPAM.

Specifically,

  • IPAM configuration will either be reused from the existing deployment or, depending on IP address availability in the existing allocation pools, new ranges will be defined for the new control plane services. If new ranges are defined, IP routing will be configured between the old and new ranges. For more information, see Planning your IPAM configuration.
  • VLAN tags will be reused from the existing deployment.

1.6.1. Retrieving the network configuration from your existing deployment

First, determine which isolated networks are defined in the existing deployment. You can find the network configuration in the network_data.yaml file. For example,

- name: InternalApi
  mtu: 1500
  vip: true
  vlan: 20
  name_lower: internal_api
  dns_domain: internal.mydomain.tld.
  service_net_map_replace: internal
  subnets:
    internal_api_subnet:
      ip_subnet: '172.17.0.0/24'
      allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]

You should make a note of the VLAN tag used (vlan key) and the IP range (ip_subnet key) for each isolated network. The IP range will later be split into separate pools for control plane services and load balancer IP addresses.

You should also determine the list of IP addresses that are already consumed in the adopted environment. Consult the tripleo-ansible-inventory.yaml file to find this information. In the file, for each listed host, note the IP and VIP addresses consumed by the node.

For example,

Standalone:
  hosts:
    standalone:
      ...
      internal_api_ip: 172.17.0.100
    ...
  ...
standalone:
  children:
    Standalone: {}
  vars:
    ...
    internal_api_vip: 172.17.0.2
    ...

In the example above, note that 172.17.0.2 and 172.17.0.100 are consumed and are not available for the new control plane services, at least until the adoption is complete.

Repeat the process for each isolated network and each host in the configuration.
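
For example, a quick way to collect the consumed addresses is to filter the inventory for the conventional *_ip and *_vip keys (a sketch; adjust the patterns to match your environment):

$ grep -E '_(ip|vip):' tripleo-ansible-inventory.yaml | sort -u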

At the end of this process, you should have the following information:

  • A list of isolated networks used in the existing deployment.
  • For each of the isolated networks, the VLAN tag and IP ranges used for dynamic address allocation.
  • A list of existing IP address allocations used in the environment. You will later exclude these addresses from allocation pools available for the new control plane services.

1.6.2. Planning your IPAM configuration

The new deployment model puts an additional burden on the size of the IP allocation pools available for Red Hat OpenStack Platform (RHOSP) services. This is because each service deployed on Red Hat OpenShift Container Platform (RHOCP) worker nodes now requires an IP address from the IPAM pool. (In the previous deployment model, all services hosted on a Controller node shared the same IP address.)

Since the new control plane deployment has different requirements as to the number of IP addresses available for services, it may even be impossible to reuse the existing IP ranges used in the adopted environment, depending on its size. Prudent planning is required to determine which options are available in your particular case.

The total number of IP addresses required for the new control plane services, in each isolated network, is calculated as a sum of the following:

  • The number of RHOCP worker nodes. (Each node will require 1 IP address in NodeNetworkConfigurationPolicy custom resources (CRs).)
  • The number of IP addresses required for the data plane nodes. (Each node will require an IP address from NetConfig CRs.)
  • The number of IP addresses required for control plane services. (Each service will require an IP address from NetworkAttachmentDefinition CRs.) This number depends on the number of replicas for each service.
  • The number of IP addresses required for load balancer IP addresses. (Each service will require a VIP address from IPAddressPool CRs.)

As of the time of writing, the simplest single worker node RHOCP deployment (CRC) has the following IP ranges defined (for the internalapi network):

  • 1 IP address for the single worker node;
  • 1 IP address for the data plane node;
  • NetworkAttachmentDefinition CRs for control plane services: X.X.X.30-X.X.X.70 (41 addresses);
  • IPAddressPool CRs for load balancer IPs: X.X.X.80-X.X.X.90 (11 addresses).

This comes to a total of 54 IP addresses allocated to the internalapi allocation pools.

The exact requirements may differ depending on the list of RHOSP services to be deployed, their replica numbers, as well as the number of RHOCP worker nodes and data plane nodes.

Additional IP addresses may be required in future RHOSP releases, so it is advised to plan for some extra capacity, for each of the allocation pools used in the new environment.

Once you know the required IP pool size for the new deployment, you can choose one of the following scenarios to handle IPAM allocation in the new environment.

The first listed scenario is more general and implies using new IP ranges, while the second scenario implies reusing the existing ranges. The end state of the former scenario is using the new subnet ranges for control plane services, but keeping the old ranges, with their node IP address allocations intact, for data plane nodes.

Regardless of the IPAM scenario, the VLAN tags used in the existing deployment will be reused in the new deployment. Depending on the scenario, the IP address ranges to be used for control plane services will be either reused from the old deployment or defined anew. Adjust the configuration as described in Configuring isolated networks.

1.6.2.1. Scenario 1: Using new subnet ranges

This scenario is compatible with any existing subnet configuration, and can be used even when the existing cluster subnet ranges don’t have enough free IP addresses for the new control plane services.

The general idea here is to define new IP ranges for the control plane services that belong to a different subnet that was not used in the existing cluster. Then, configure link-local IP routing between the old and new subnets to allow the old and new service deployments to communicate. This involves using the director mechanism on the pre-adopted cluster to configure additional link-local routes there, which allows the data plane deployment to reach the adopted nodes using their old subnet addresses.

The new subnet should be sized appropriately to accommodate the new control plane services, but otherwise it has no specific requirements regarding the allocation pools already consumed by the existing deployment. In fact, the requirements for the size of the new subnet are lower than in the second scenario, because the old subnet ranges are kept for the adopted nodes, which means they don't consume any IP addresses from the new range.

In this scenario, you will configure NetworkAttachmentDefinition custom resources (CRs) to use a different subnet from what will be configured in NetConfig CR for the same networks. The former range will be used for control plane services, while the latter will be used to manage IPAM for data plane nodes.

During the process, you need to make sure that the adopted node IP addresses don't change during the adoption. This is achieved by listing the addresses in the fixedIP fields in the per-node section of the OpenstackDataplaneNodeSet CR.

Before proceeding, configure host routes on the adopted nodes for the control plane subnets.

To achieve this, you will need to re-run tripleo deploy with additional routes entries added to network_config. (This change should be applied for every adopted node configuration.) For example, you may add the following to net_config.yaml:

network_config:
  - type: ovs_bridge
    name: br-ctlplane
    routes:
    - ip_netmask: 0.0.0.0/0
      next_hop: 192.168.1.1
    - ip_netmask: 172.31.0.0/24  # <- new ctlplane subnet
      next_hop: 192.168.122.100   # <- adopted node ctlplane IP address

Do the same for other networks that will need to use different subnets for the new and old parts of the deployment.

Once done, run tripleo deploy to apply the new configuration.

Note that network configuration changes are not applied by default to avoid the risk of network disruption. You must enforce the changes by setting the StandaloneNetworkConfigUpdate: true parameter in the director configuration files.
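
For example, a minimal sketch of an environment file entry that enforces the update (assuming your standalone deployment reads its parameters from such a file):

parameter_defaults:
  StandaloneNetworkConfigUpdate: true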

Once tripleo deploy is complete, you should see new link local routes to the new subnet on each node. For example,

# ip route | grep 172
172.31.0.0/24 via 192.168.122.100 dev br-ctlplane

The next step is to configure similar routes for the old subnet for control plane services attached to the networks. This is done by adding routes entries to NodeNetworkConfigurationPolicy CRs for each network. For example,

      - destination: 192.168.122.0/24
        next-hop-interface: ospbr
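
In a NodeNetworkConfigurationPolicy CR, such entries live under the desiredState routes configuration; a minimal sketch, assuming the control plane bridge is named ospbr as above:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
spec:
  desiredState:
    routes:
      config:
      - destination: 192.168.122.0/24
        next-hop-interface: ospbr
< . . . >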

Once applied, you should eventually see the following route added to your Red Hat OpenShift Container Platform (RHOCP) nodes.

# ip route | grep 192
192.168.122.0/24 dev ospbr proto static scope link

At this point, you should be able to ping the adopted nodes from RHOCP nodes using their old subnet addresses; and vice versa.

Finally, during the data plane adoption, you will have to take care of several aspects:

  • in network_config, add link local routes to the new subnets, for example:
  nodeTemplate:
    ansible:
      ansibleUser: root
      ansibleVars:
        additional_ctlplane_host_routes:
        - ip_netmask: 172.31.0.0/24
          next_hop: '{{ ctlplane_ip }}'
        edpm_network_config_template: |
          network_config:
          - type: ovs_bridge
            routes: {{ ctlplane_host_routes + additional_ctlplane_host_routes }}
            ...
  • list the old IP addresses as ansibleHost and fixedIP, for example:
  nodes:
    standalone:
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: ""
      hostName: standalone
      networks:
      - defaultRoute: true
        fixedIP: 192.168.122.100
        name: ctlplane
        subnetName: subnet1
  • expand SSH range for the firewall configuration to include both subnets:
        edpm_sshd_allowed_ranges:
        - 192.168.122.0/24
        - 172.31.0.0/24

This is to allow SSH access to the adopted nodes from the new subnet as well as from the old one.

Since you are applying new network configuration to the nodes, consider also setting edpm_network_config_update: true to enforce the changes.

Note that the examples above are incomplete and should be incorporated into your general configuration.

1.6.2.2. Scenario 2: Reusing existing subnet ranges

This scenario is only applicable when the existing subnet ranges have enough IP addresses for the new control plane services. On the other hand, it allows you to avoid the additional routing configuration between the old and new subnets that is required in Scenario 1: Using new subnet ranges.

The general idea here is to instruct the new control plane services to use the same subnet as in the adopted environment, but define allocation pools used by the new services in a way that would exclude IP addresses that were already allocated to existing cluster nodes.

This scenario implies that the remaining IP addresses in the existing subnet are enough for the new control plane services. If they are not, Scenario 1: Using new subnet ranges should be used instead. For more information, see Planning your IPAM configuration.

No special routing configuration is required in this scenario; the only thing to pay attention to is to make sure that already consumed IP addresses don’t overlap with the new allocation pools configured for Red Hat OpenStack Platform control plane services.

If you are especially constrained by the size of the existing subnet, you may have to apply elaborate exclusion rules when defining allocation pools for the new control plane services. For more information, see Configuring isolated networks.

1.6.3. Configuring isolated networks

At this point, you should have a good idea about the VLAN and IPAM configuration that you want to replicate in the new environment.

Before proceeding, you should have a list of the following IP address allocations to be used for the new control plane services:

  • 1 IP address, per isolated network, per RHOCP worker node. (These addresses translate to NodeNetworkConfigurationPolicy custom resources (CRs).)
  • An IP range, per isolated network, for the data plane nodes. (These ranges translate to NetConfig CRs.)
  • An IP range, per isolated network, for the control plane services. (These ranges translate to NetworkAttachmentDefinition CRs.)
  • An IP range, per isolated network, for the load balancer IP addresses. (These ranges translate to IPAddressPool CRs for MetalLB.)

Important

Make sure you have the information listed above before proceeding with the next steps.

Note

The exact list and configuration of isolated networks in the examples below should reflect the actual adopted environment. The number of isolated networks may differ from the example, and the IPAM scheme may differ. Only relevant parts of the configuration are shown. The examples are incomplete and should be incorporated into the general configuration for the new deployment, as described in the general Red Hat OpenStack Platform documentation.

1.6.3.1. Configuring Red Hat OpenShift Container Platform worker nodes

Red Hat OpenShift Container Platform worker nodes that run Red Hat OpenStack Platform services need a way to connect the service pods to isolated networks. This requires physical network configuration on the hypervisor.

This configuration is managed by the NMState operator, which uses custom resources (CRs) to define the desired network configuration for the nodes.

For each node, define a NodeNetworkConfigurationPolicy CR that describes the desired network configuration. See the example below.

apiVersion: v1
items:
- apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  spec:
    desiredState:
      interfaces:
      - description: internalapi vlan interface
        ipv4:
          address:
          - ip: 172.17.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: enp6s0.20
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 20
          reorder-headers: true
      - description: storage vlan interface
        ipv4:
          address:
          - ip: 172.18.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: enp6s0.21
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 21
          reorder-headers: true
      - description: tenant vlan interface
        ipv4:
          address:
          - ip: 172.19.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: enp6s0.22
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 22
          reorder-headers: true
    nodeSelector:
      kubernetes.io/hostname: ocp-worker-0
      node-role.kubernetes.io/worker: ""

1.6.3.2. Configuring the networking for control plane services

Once the NMState operator has created the desired hypervisor network configuration for the isolated networks, you need to configure the Red Hat OpenStack Platform (RHOSP) services to use the configured interfaces. This is achieved by defining NetworkAttachmentDefinition custom resources (CRs) for each isolated network. (In some clusters, these CRs are managed by the Cluster Network Operator, in which case Network CRs should be used instead. For more information, see Cluster Network Operator in OpenShift Container Platform 4.15 Documentation.)

For example,

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "enp6s0.20",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.20",
        "range_end": "172.17.0.50"
      }
    }

Make sure that the interface name and IPAM range match the configuration used in NodeNetworkConfigurationPolicy CRs.

When reusing existing IP ranges, you can exclude parts of the range defined by range_start and range_end that were already consumed in the existing deployment. Use exclude as follows:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "enp6s0.20",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.20",
        "range_end": "172.17.0.50",
        "exclude": [
          "172.17.0.24/32",
          "172.17.0.44/31"
        ]
      }
    }

The example above would exclude addresses 172.17.0.24 as well as 172.17.0.44 and 172.17.0.45 from the allocation pool.

Some RHOSP services require load balancer IP addresses. These IP addresses belong to the same IP range as the control plane services, and are managed by MetalLB. The IP address pool is defined by IPAddressPool CRs. This pool should also be aligned with the adopted configuration.

For example,

- apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  spec:
    addresses:
    - 172.17.0.60-172.17.0.70

Define IPAddressPool CRs for each isolated network that requires load balancer IP addresses.

When reusing existing IP ranges, you can exclude part of the range by listing multiple entries in the addresses field.

For example,

- apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  spec:
    addresses:
    - 172.17.0.60-172.17.0.64
    - 172.17.0.66-172.17.0.70

The example above would exclude the 172.17.0.65 address from the allocation pool.

1.6.3.3. Configuring data plane nodes

A complete Red Hat OpenStack Platform (RHOSP) cluster consists of Red Hat OpenShift Container Platform (RHOCP) nodes and data plane nodes. The former use NodeNetworkConfigurationPolicy custom resources (CRs) to configure physical interfaces. Since data plane nodes are not RHOCP nodes, a different approach is used to configure their network connectivity.

Instead, data plane nodes are configured by the dataplane-operator and its CRs, which define the desired network configuration for the nodes.

In the case of adoption, the configuration should reflect the existing network setup. You should be able to pull the net_config.yaml file from each node and reuse it when defining the OpenstackDataplaneNodeSet. The format of the configuration hasn't changed (os-net-config is still used under the hood), so you should be able to put the network templates under the edpm_network_config_template variable (either common for all nodes, or per-node).

To make sure the latest network configuration is used during the data plane adoption, you should also set edpm_network_config_update: true in the nodeTemplate.
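
For example, a minimal sketch of where this variable sits in the OpenstackDataplaneNodeSet CR (other fields omitted):

  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_network_config_update: true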

You will proceed with the data plane adoption once the new RHOSP control plane is deployed in the RHOCP cluster. When doing so, you will configure NetConfig and OpenstackDataplaneNodeSet CRs, using the same VLAN tags and IPAM configuration as determined in the previous steps.

For example,

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      cidr: 172.17.0.0/24
      vlan: 20
  - name: storage
    dnsDomain: storage.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.18.0.250
        start: 172.18.0.100
      cidr: 172.18.0.0/24
      vlan: 21
  - name: tenant
    dnsDomain: tenant.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.19.0.250
        start: 172.19.0.100
      cidr: 172.19.0.0/24
      vlan: 22

List multiple allocationRanges entries to exclude some of the IP addresses, for example to accommodate addresses already consumed by the adopted environment:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.199
        start: 172.17.0.100
      - end: 172.17.0.250
        start: 172.17.0.201
      cidr: 172.17.0.0/24
      vlan: 20

The example above would exclude the 172.17.0.200 address from the pool.

1.7. Storage requirements

When looking at the storage in a Red Hat OpenStack Platform (RHOSP) deployment, you can differentiate two kinds: the storage requirements of the services themselves, and the storage used for the RHOSP users that the services manage.

These requirements may drive your Red Hat OpenShift Container Platform (RHOCP) node selection, as mentioned above, and may require you to do some preparations on the RHOCP nodes before you can deploy the services.

1.7.1. Storage driver certification

Before you adopt your Red Hat OpenStack Platform 17.1 deployment to a Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment, confirm that your deployed storage drivers are certified for use with RHOSO 18.0.

1.7.2. Block Storage service requirements

The Block Storage service (cinder) has both local storage used by the service and Red Hat OpenStack Platform (RHOSP) user requirements.

Local storage is used, for example, when downloading an Image Service (glance) image for the create volume from image operation, and this can require considerable space when there are many concurrent operations and the Block Storage service volume cache is not used.

In operator-deployed RHOSP, you can configure the location of the conversion directory to be an NFS share by using the extra volumes feature, something that previously had to be done manually.

Even though this is an adoption, and it may seem that there is nothing to consider regarding the Block Storage service backends because you are using the same ones as in your current deployment, you should still evaluate them, because it may not be so straightforward.

First you need to check the transport protocol the Block Storage service backends are using: RBD, iSCSI, FC, NFS, NVMe-oF, etc.

Once you know all the transport protocols that you are using, you can take them into consideration when placing the Block Storage services (as mentioned above in About node roles) and make sure that the right storage transport related binaries are running on the Red Hat OpenShift Container Platform nodes.
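
For example, if you use iSCSI backends, a minimal sketch of a MachineConfig that enables the iscsid service on the nodes in the openstack MachineConfigPool described earlier (the 99-openstack-iscsid name is illustrative):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-openstack-iscsid
  labels:
    machineconfiguration.openshift.io/role: openstack
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      # Enables the iscsid service on every node in the pool
      - enabled: true
        name: iscsid.service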

Detailed information about the specifics for each storage transport protocol can be found in the Red Hat OpenShift Container Platform preparation for Block Storage service adoption.

1.8. Comparing configuration files between deployments

To help you handle the configuration of the director and Red Hat OpenStack Platform services, the os-diff tool (https://github.com/openstack-k8s-operators/os-diff) has been developed to compare the configuration files between the director deployment and the Red Hat OpenStack Services on OpenShift (RHOSO) cloud. Make sure Golang is installed and configured in your environment:

dnf install -y golang-github-openstack-k8s-operators-os-diff

Then configure the /etc/os-diff/os-diff.cfg file and the /etc/os-diff/ssh.config file according to your environment. To allow os-diff to connect to your clouds and pull files from the services that you describe in the config.yaml file, you need to properly set the options in the os-diff.cfg file:

[Default]

local_config_dir=/tmp/
service_config_file=config.yaml

[Tripleo]

ssh_cmd=ssh -F ssh.config
director_host=standalone
container_engine=podman
connection=ssh
remote_config_path=/tmp/tripleo
local_config_path=/tmp/

[Openshift]

ocp_local_config_path=/tmp/ocp
connection=local
ssh_cmd=""

os-diff uses the ssh_cmd option to access your director host through SSH, or the host where your cloud is accessible and where the podman or docker binary is installed and allowed to interact with the running containers. This option can take different forms:

ssh_cmd=ssh -F ssh.config standalone
director_host=

or

ssh_cmd=ssh -F ssh.config
director_host=standalone

or without an SSH config file:

ssh_cmd=ssh -i /home/user/.ssh/id_rsa stack@my.undercloud.local
director_host=

or

ssh_cmd=ssh -i /home/user/.ssh/id_rsa stack@
director_host=my.undercloud.local

Note that the combination of ssh_cmd and director_host must result in a successful SSH connection to the director host.

Configure or generate the ssh.config file from inventory or hosts file, for example:

Host *
    IdentitiesOnly yes

Host virthost
    Hostname virthost
    IdentityFile ~/.ssh/id_rsa
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null


Host standalone
    Hostname standalone
    IdentityFile <path to SSH key>
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Host crc
    Hostname crc
    IdentityFile ~/.ssh/id_rsa
    User stack
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

os-diff can use an ssh.config file to access your Red Hat OpenStack Platform environment. The following command can help you generate this SSH config file from your Ansible inventory, for example the tripleo-ansible-inventory.yaml file:

os-diff configure -i tripleo-ansible-inventory.yaml -o ssh.config --yaml

Note

You must set the IdentityFile key in the file to get full working access:

Host standalone
  HostName standalone
  User root
  IdentityFile ~/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host undercloud
  HostName undercloud
  User root
  IdentityFile ~/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

And test your connection:

ssh -F ssh.config standalone