Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster, version 4.12 or later. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field.
For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
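As a minimal sketch, the following incomplete OpenStackControlPlane CR shows how a top-level nodeSelector can pin all services to labeled nodes, with a per-service override for one Block Storage volume back end. The labels type: openstack and type: storage, the CR name, and the volume1 back-end name are illustrative assumptions, not required values:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  # Assumed label: deploy all RHOSO services on nodes labeled type=openstack
  nodeSelector:
    type: openstack
  cinder:
    template:
      cinderVolumes:
        volume1:
          # Assumed label: override the default and pin this cinder-volume
          # back end to nodes labeled type=storage
          nodeSelector:
            type: storage
```

Label the nodes before you apply the CR, for example with oc label node worker-1 type=openstack.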
2.2. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
Procedure
- Create a Secret CR file on your workstation, for example, openstack-service-secret.yaml. Add the following initial configuration to openstack-service-secret.yaml:

  apiVersion: v1
  data:
    AdminPassword: <base64_password>
    AodhPassword: <base64_password>
    AodhDatabasePassword: <base64_password>
    BarbicanDatabasePassword: <base64_password>
    BarbicanPassword: <base64_password>
    BarbicanSimpleCryptoKEK: <base64_password>
    CeilometerPassword: <base64_password>
    CinderDatabasePassword: <base64_password>
    CinderPassword: <base64_password>
    DatabasePassword: <base64_password>
    DbRootPassword: <base64_password>
    DesignateDatabasePassword: <base64_password>
    DesignatePassword: <base64_password>
    GlanceDatabasePassword: <base64_password>
    GlancePassword: <base64_password>
    HeatAuthEncryptionKey: <base64_password_heat>
    HeatDatabasePassword: <base64_password>
    HeatPassword: <base64_password>
    IronicDatabasePassword: <base64_password>
    IronicInspectorDatabasePassword: <base64_password>
    IronicInspectorPassword: <base64_password>
    IronicPassword: <base64_password>
    KeystoneDatabasePassword: <base64_password>
    ManilaDatabasePassword: <base64_password>
    ManilaPassword: <base64_password>
    MetadataSecret: <base64_password>
    NeutronDatabasePassword: <base64_password>
    NeutronPassword: <base64_password>
    NovaAPIDatabasePassword: <base64_password>
    NovaAPIMessageBusPassword: <base64_password>
    NovaCell0DatabasePassword: <base64_password>
    NovaCell0MessageBusPassword: <base64_password>
    NovaCell1DatabasePassword: <base64_password>
    NovaCell1MessageBusPassword: <base64_password>
    NovaPassword: <base64_password>
    OctaviaDatabasePassword: <base64_password>
    OctaviaPassword: <base64_password>
    PlacementDatabasePassword: <base64_password>
    PlacementPassword: <base64_password>
    SwiftPassword: <base64_password>
  kind: Secret
  metadata:
    name: osp-secret
    namespace: openstack
  type: Opaque
- Replace <base64_password> with a base64-encoded string. Use the following command to generate a base64-encoded password:

  $ echo -n <password> | base64

- Replace <base64_password_heat> with a base64-encoded password for Orchestration service (heat) authentication that is at least 32 characters long.
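The two replacement steps can be sketched as a short shell snippet. The variable names and the use of openssl rand to produce random values are illustrative assumptions, not part of the product tooling:

```shell
# Sketch: generate random values and base64-encode them for the Secret CR.
# Assumes the openssl and base64 utilities are available on the workstation.

# A 24-character random password for a <base64_password> field:
PASSWORD=$(openssl rand -hex 12)
B64_PASSWORD=$(printf '%s' "$PASSWORD" | base64)

# The Orchestration service (heat) key must be at least 32 characters
# long before encoding, so generate 16 random bytes (32 hex characters):
HEAT_KEY=$(openssl rand -hex 16)
B64_HEAT_KEY=$(printf '%s' "$HEAT_KEY" | base64)

echo "$B64_PASSWORD"
echo "$B64_HEAT_KEY"
```

Paste each encoded value into the corresponding field of openstack-service-secret.yaml.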
- Create the Secret CR in the cluster:

  $ oc create -f openstack-service-secret.yaml

- Verify that the Secret CR is created:

  $ oc describe secret osp-secret -n openstack
2.3. Default Red Hat OpenStack Platform networks
The following physical data center networks are typically implemented on the control plane:
- Control plane network: This network is used by the DataPlane Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment.
- External network: (Optional) You can configure an external network if one is required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: This network is used for internal communication between Red Hat OpenStack Services on OpenShift (RHOSO) components.
- Storage network: This network is used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: This network is used for data communication between virtual machine instances within the cloud deployment.
- Storage Management network: (Optional) This network is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

  Note: For more information about Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | VLAN | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | nad ipam range | OCP worker nncp range |
|---|---|---|---|---|---|---|
| ctlplane | n/a | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | n/a | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 20 | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 21 | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 22 | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| storagemgmt | 23 | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
2.4. Preparing RHOCP for RHOSO network isolation
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You create a NetworkAttachmentDefinition (nad) custom resource (CR) for each isolated network to attach service pods to the isolated networks, where needed. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

You must also create an L2Advertisement resource to define how the VIPs are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:
$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
- Discover the network configuration:

  $ oc get nns/<worker_node> -o yaml | more

  Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Platform networks.

  In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLANs for network isolation:

  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: osp-enp6s0-worker-1
  spec:
    desiredState:
      interfaces:
      - description: internalapi vlan interface
        ipv4:
          address:
          - ip: 172.17.0.10
            prefix-length: 24
          enabled: true
          dhcp: false
        ipv6:
          enabled: false
        name: enp6s0.20
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 20
      - description: storage vlan interface
        ipv4:
          address:
          - ip: 172.18.0.10
            prefix-length: 24
          enabled: true
          dhcp: false
        ipv6:
          enabled: false
        name: enp6s0.21
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 21
      - description: tenant vlan interface
        ipv4:
          address:
          - ip: 172.19.0.10
            prefix-length: 24
          enabled: true
          dhcp: false
        ipv6:
          enabled: false
        name: enp6s0.22
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 22
      - description: Configuring enp6s0
        ipv4:
          address:
          - ip: 192.168.122.10
            prefix-length: 24
          enabled: true
          dhcp: false
        ipv6:
          enabled: false
        mtu: 1500
        name: enp6s0
        state: up
        type: ethernet
    nodeSelector:
      kubernetes.io/hostname: worker-1
      node-role.kubernetes.io/worker: ""
- Create the nncp CR in the cluster:

  $ oc apply -f openstack-nncp.yaml

- Verify that the nncp CR is created:

  $ oc get nncp -w
  NAME                  STATUS        REASON
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Available     SuccessfullyConfigured
- Create a NetworkAttachmentDefinition (nad) CR file on your workstation, for example, openstack-nad.yaml.
- In the nad CR file, configure a nad resource for each isolated network to attach a service deployment pod to the network. The following examples create a nad resource for the internalapi, storage, ctlplane, and tenant networks of type macvlan:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: internalapi
    namespace: openstack 1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "internalapi",
        "type": "macvlan",
        "master": "enp6s0.20", 2
        "ipam": { 3
          "type": "whereabouts",
          "range": "172.17.0.0/24",
          "range_start": "172.17.0.30", 4
          "range_end": "172.17.0.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    labels:
      osp/net: ctlplane
    name: ctlplane
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "ctlplane",
        "type": "macvlan",
        "master": "enp6s0",
        "ipam": {
          "type": "whereabouts",
          "range": "192.168.122.0/24",
          "range_start": "192.168.122.30",
          "range_end": "192.168.122.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: storage
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "storage",
        "type": "macvlan",
        "master": "enp6s0.21",
        "ipam": {
          "type": "whereabouts",
          "range": "172.18.0.0/24",
          "range_start": "172.18.0.30",
          "range_end": "172.18.0.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    labels:
      osp/net: tenant
    name: tenant
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "tenant",
        "type": "macvlan",
        "master": "enp6s0.22",
        "ipam": {
          "type": "whereabouts",
          "range": "172.19.0.0/24",
          "range_start": "172.19.0.30",
          "range_end": "172.19.0.70"
        }
      }
- 1: The namespace where the services are deployed.
- 2: The worker node interface to use for the VLAN.
- 3: The whereabouts CNI IPAM plugin, which assigns IPs to the created pods from the range .30 - .70.
- 4: The IP address pool range. It must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
- Create the nad CR in the cluster:

  $ oc apply -f openstack-nad.yaml

- Verify that the nad CR is created:

  $ oc get network-attachment-definitions -n openstack
- Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: internalapi
    namespace: metallb-system
  spec:
    addresses:
    - 172.17.0.80-172.17.0.90 1
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    namespace: metallb-system
    name: ctlplane
  spec:
    addresses:
    - 192.168.122.80-192.168.122.90
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    namespace: metallb-system
    name: storage
  spec:
    addresses:
    - 172.18.0.80-172.18.0.90
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    namespace: metallb-system
    name: tenant
  spec:
    addresses:
    - 172.19.0.80-172.19.0.90
    autoAssign: true
    avoidBuggyIPs: false
- 1: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools.

- Create the IPAddressPool CR in the cluster:

  $ oc apply -f openstack-ipaddresspools.yaml

- Verify that the IPAddressPool CR is created:

  $ oc describe -n metallb-system IPAddressPool
- Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
- In the L2Advertisement CR file, configure an L2Advertisement resource to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

  In the following example, the L2Advertisement CR specifies that the VIPs requested from the internalapi address pool are announced on the interface that is attached to the internalapi VLAN:

  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: l2advertisement
    namespace: metallb-system
  spec:
    ipAddressPools:
    - internalapi
    interfaces:
    - enp6s0.20 1
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: ctlplane
    namespace: metallb-system
  spec:
    ipAddressPools:
    - ctlplane
    interfaces:
    - enp6s0
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: storage
    namespace: metallb-system
  spec:
    ipAddressPools:
    - storage
    interfaces:
    - enp6s0.21
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: tenant
    namespace: metallb-system
  spec:
    ipAddressPools:
    - tenant
    interfaces:
    - enp6s0.22
- 1: The interface that the VIPs requested from the VLAN address pool are announced on.
- Create the L2Advertisement CR in the cluster:

  $ oc apply -f openstack-l2advertisement.yaml

- Verify that the L2Advertisement CR is created:

  $ oc describe -n metallb-system L2Advertisement l2advertisement
If your cluster is RHOCP 4.14 or later and it has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.
Check the network back end used by your cluster:
$ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:
$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
2.5. Configuring the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure
- Create a file named openstacknetconfig.yaml on your workstation.
- Add the following configuration to openstacknetconfig.yaml to create the NetConfig CR:

  apiVersion: network.openstack.org/v1beta1
  kind: NetConfig
  metadata:
    name: openstacknetconfig
    namespace: openstack

- In the openstacknetconfig.yaml file, define the topology for each data plane network. To use the default RHOSO networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Platform networks. The following example creates isolated networks for the data plane:

  spec:
    networks:
    - name: CtlPlane 1
      dnsDomain: ctlplane.example.com
      subnets: 2
      - name: subnet1 3
        allocationRanges: 4
        - end: 192.168.122.120
          start: 192.168.122.100
        - end: 192.168.122.200
          start: 192.168.122.150
        cidr: 192.168.122.0/24
        gateway: 192.168.122.1
    - name: InternalApi
      dnsDomain: internalapi.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.17.0.250
          start: 172.17.0.100
        excludeAddresses:
        - 172.17.0.10
        - 172.17.0.12
        cidr: 172.17.0.0/24
        vlan: 20 5
    - name: External
      dnsDomain: external.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 10.0.0.250
          start: 10.0.0.100
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
    - name: Storage
      dnsDomain: storage.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.18.0.250
          start: 172.18.0.100
        cidr: 172.18.0.0/24
        vlan: 21
    - name: StorageMgmt
      dnsDomain: storagemgmt.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.20.0.250
          start: 172.20.0.100
        cidr: 172.20.0.0/24
        vlan: 23
    - name: Tenant
      dnsDomain: tenant.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.19.0.250
          start: 172.19.0.100
        cidr: 172.19.0.0/24
        vlan: 22
- 1: The name of the network, for example, CtlPlane.
- 2: The IPv4 subnet specification.
- 3: The name of the subnet, for example, subnet1.
- 4: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the nad IP address pool range.
- 5: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Platform networks.
- Save the openstacknetconfig.yaml definition file.
- Create the data plane network:

  $ oc create -f openstacknetconfig.yaml

- To verify that the data plane network is created, view the openstacknetconfig resource:

  $ oc get netconfig/openstacknetconfig -n openstack

- View the NetConfig API and child resources:

  $ oc get netconfig/openstacknetconfig

- If you see errors, check the underlying network-attachment-definitions and node network configuration policies:

  $ oc get network-attachment-definitions -n openstack
  $ oc get nncp
2.6. Creating a storage class
You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Red Hat recommends that you use the Logical Volume Manager Storage (LVMS) storage class with RHOSO, although you can use other implementations, such as Container Storage Interface (CSI) or OpenShift Data Foundation (ODF). You specify this storage class as the cluster storage back end for the RHOSO deployment.
For more information about how to configure the LVMS storage class, see Persistent storage using Logical Volume Manager Storage in Configuring and managing storage in OpenShift Container Platform.
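As a sketch, you later reference the storage class by name in the OpenStackControlPlane CR. The class name lvms-local-storage below is an assumption; use the name reported by oc get storageclass for your cluster:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  # Assumed storage class name; the actual name depends on your
  # LVMCluster device class (or other storage back-end) configuration.
  storageClass: lvms-local-storage
```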