Deploying Red Hat OpenStack Services on OpenShift
Deploying a Red Hat OpenStack Services on OpenShift environment on a Red Hat OpenShift Container Platform cluster
Abstract
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Installing and preparing the Operators
You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.
For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat knowledge base article at https://access.redhat.com/articles/7125383.
1.1. Prerequisites
- An operational RHOCP cluster, version 4.18. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  - For the minimum RHOCP hardware requirements for hosting your RHOSO control plane, see Minimum RHOCP hardware requirements.
  - For the minimum RHOCP network requirements, see RHOCP network requirements.
- For a list of the Operators that must be installed before you install the openstack-operator, see RHOCP software requirements.
- The oc command line tool is installed on your workstation.
- You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
1.2. Installing the OpenStack Operator
You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource to start the OpenStack Operator on your cluster.
Procedure
- Log in to the RHOCP web console as a user with cluster-admin permissions.
- Select Operators → OperatorHub.
- In the Filter by keyword field, type OpenStack.
- Click the OpenStack Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
- On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
- On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
- Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
- Click Create OpenStack to open the Create OpenStack page.
- On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
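You can also confirm the result from the command line. The following is a minimal verification sketch; it assumes the Operator was installed into the openstack-operators namespace as described above:

  $ oc get csv -n openstack-operators     # the OpenStack Operator ClusterServiceVersion should report phase Succeeded
  $ oc get pods -n openstack-operators    # the Operator pods should be Running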
Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.
You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.
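As an illustration of this kind of pinning, the following sketch labels a worker node and pins the cinder-volume pods to nodes that carry that label. The label name, node name, and volume back-end key are hypothetical, and the exact field nesting must be checked against the schema that oc explain openstackcontrolplane.spec prints for your release:

  $ oc label node worker-3 osp/cinder-volume-node=true

  # Excerpt from an OpenStackControlPlane CR (sketch)
  cinder:
    template:
      cinderVolumes:
        volume1:
          nodeSelector:
            osp/cinder-volume-node: "true"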
2.2. Creating a storage class
You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. You specify this storage class as the cluster storage back end for the RHOSO control plane deployment. Use a storage back end based on SSD or NVMe drives for the storage class.
If you do not have an existing storage class that can provide persistent volumes, you can use the Logical Volume Manager Storage Operator to provide a storage class for RHOSO. If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete by adding the volume group capacity annotation to each worker node object. If you deploy pods before all the worker nodes are annotated, then multiple PVCs and pods are scheduled on the same nodes.
To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage":
$ oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'
The storage is ready when this command returns three non-zero values.
Additional resources
2.3. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
Procedure
- Create the openstack project for the deployed RHOSO environment:

  $ oc new-project openstack

- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators. If the security context constraint (SCC) is not "privileged", use the following commands to change it:

  $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
  $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

- Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

  $ oc project openstack
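To check the result, you can inspect the pod-security labels on the namespace; a quick sketch:

  $ oc get namespace openstack -o yaml | grep pod-security
  # After the relabeling commands above, expect:
  #   pod-security.kubernetes.io/enforce: privileged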
2.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.
For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.
You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.
Prerequisites
- You have installed python3-cryptography.
Procedure
- Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
- Add the initial configuration to openstack_service_secret.yaml.
  - Replace <base64_password> with a 32-character key that is base64 encoded.

    Note: The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

    You can use the following command to manually generate a base64 encoded password:

    $ echo -n <password> | base64

    Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

    $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

  - Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

    $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)

- Create the Secret CR in the cluster:

  $ oc create -f openstack_service_secret.yaml -n openstack

- Verify that the Secret CR is created:

  $ oc describe secret osp-secret -n openstack
2.4.1. Example Secret CR for secure access to the RHOSO service pods
You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:
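The full generating command is not reproduced here. The following is a reduced sketch of the pattern; the field names shown are only a representative subset, and the full set of keys must match the passwords that your services require:

  cat <<EOF > openstack_service_secret.yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: osp-secret
    namespace: openstack
  type: Opaque
  data:
    AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    # ...one entry per service password, plus the base64-encoded fernet key field
    # generated with the python3 cryptography command shown above
  EOF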
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane. For information about adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.
3.1. Default Red Hat OpenStack Services on OpenShift networks
The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
- Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: used for internal communication between RHOSO components.
- Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
- Octavia controller network: used to connect Load-balancing service (octavia) controllers running in the control plane.
- Designate network: used internally by designate to manage the DNS servers.
- Designateext network: used to provide external access to the DNS service resolver and the DNS servers.
- Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

  Note: For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
The following table specifies the networks that establish connectivity to the fabric by using eth2 and eth3, with different IP addresses per zone and rack, and a global bgpmainnet that is used as the source for the traffic:
| Network name | Zone 0 | Zone 1 | Zone 2 |
|---|---|---|---|
| BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |
3.2. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack IPv4/6 is not available. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:
3.2.1. Preparing RHOCP with isolated network interfaces
You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:

  $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

- Discover the network configuration:

  $ oc get nns/<worker_node> -o yaml | more

  Replace <worker_node> with the name of a worker node retrieved in the previous step, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks. For example, an nncp CR named osp-enp6s0-worker-1 can configure the enp6s0 interface for worker node 1 to use VLAN interfaces with IPv4 addresses for network isolation; a sketch follows this procedure.
- Create the nncp CR in the cluster:

  $ oc apply -f openstack-nncp.yaml

- Verify that the nncp CR is created:

  $ oc get nncp -w
  NAME                  STATUS        REASON
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Available     SuccessfullyConfigured
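The sketch below illustrates the shape that such an nncp CR can take for a single VLAN on one worker node. The interface name, VLAN ID, and IP address are assumptions that must match your environment and the default network table; repeat the interface entry for each isolated network:

  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: osp-enp6s0-worker-1
  spec:
    nodeSelector:
      kubernetes.io/hostname: worker-1
    desiredState:
      interfaces:
      - name: enp6s0.20                 # internalapi VLAN interface (assumed VLAN ID 20)
        type: vlan
        state: up
        vlan:
          base-iface: enp6s0
          id: 20
        ipv4:
          enabled: true
          dhcp: false
          address:
          - ip: 172.17.0.10
            prefix-length: 24
        ipv6:
          enabled: false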
3.2.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. Create resources for the following networks (a sketch follows this procedure):
  - internalapi, storage, ctlplane, and tenant networks of type macvlan.
  - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
  - designate, the network used internally by the DNS service (designate) to manage the DNS servers.
  - designateext, the network used to provide external access to the DNS service resolver and the DNS servers.

  In each NetworkAttachmentDefinition resource:
  - metadata.namespace: The namespace where the services are deployed.
  - "master": The node interface name associated with the network, as defined in the nncp CR.
  - "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
  - "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
- Create the NetworkAttachmentDefinition CR in the cluster:

  $ oc apply -f openstack-net-attach-def.yaml

- Verify that the NetworkAttachmentDefinition CR is created:

  $ oc get net-attach-def -n openstack
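As an illustration of the pattern, a NetworkAttachmentDefinition for the internalapi network might look like the following sketch; the master interface name and the address range are assumptions taken from the default network table:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: internalapi
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "internalapi",
        "type": "macvlan",
        "master": "enp6s0.20",
        "ipam": {
          "type": "whereabouts",
          "range": "172.17.0.0/24",
          "range_start": "172.17.0.30",
          "range_end": "172.17.0.70"
        }
      }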
3.2.3. Preparing RHOCP for RHOSO network VIPs
You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority (a sketch follows this procedure).
  - spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

  For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
- Create the IPAddressPool CR in the cluster:

  $ oc apply -f openstack-ipaddresspools.yaml

- Verify that the IPAddressPool CR is created:

  $ oc describe -n metallb-system IPAddressPool

- Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
- In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network. In each L2Advertisement CR, the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN.
  - spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

  For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.
- Create the L2Advertisement CRs in the cluster:

  $ oc apply -f openstack-l2advertisement.yaml

- Verify that the L2Advertisement CRs are created.
- If your cluster has OVNKubernetes as the network back end, you must enable global forwarding so that MetalLB can work on a secondary network interface.
  - Check the network back end used by your cluster:

    $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

  - If the back end is OVNKubernetes, run the following command to enable global IP forwarding:

    $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
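To show how the two resources relate, here is a minimal sketch for the internalapi network. The pool name, address range, and interface name are assumptions that must line up with your nncp interfaces and must not overlap the whereabouts range or the NetConfig allocationRange:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: internalapi
    namespace: metallb-system
  spec:
    addresses:
    - 172.17.0.80-172.17.0.90
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: l2advertisement-internalapi
    namespace: metallb-system
  spec:
    ipAddressPools:
    - internalapi
    interfaces:
    - enp6s0.20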
3.3. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:
$ oc describe crd netconfig
$ oc explain netconfig.spec
Procedure
- Create a file named openstack_netconfig.yaml on your workstation.
- Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

  apiVersion: network.openstack.org/v1beta1
  kind: NetConfig
  metadata:
    name: openstacknetconfig
    namespace: openstack

- In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks. A sketch of isolated networks for the data plane follows this procedure. In each network definition:
  - spec.networks.name: The name of the network, for example, CtlPlane.
  - spec.networks.subnets: The IPv4 subnet specification.
  - spec.networks.subnets.name: The name of the subnet, for example, subnet1.
  - spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
  - spec.networks.subnets.excludeAddresses: Optional: A list of IP addresses from the allocation range that must not be used by data plane nodes.
  - spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
- Save the openstack_netconfig.yaml definition file.
- Create the data plane network:

  $ oc create -f openstack_netconfig.yaml -n openstack

- To verify that the data plane network is created, view the openstacknetconfig resource:

  $ oc get netconfig/openstacknetconfig -n openstack

  If you see errors, check the underlying network-attach-definition and node network configuration policies:

  $ oc get network-attachment-definitions -n openstack
  $ oc get nncp
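A partial sketch of the topology definition is shown below. The values are assumptions drawn from the default network table; verify the field names against the schema that oc explain netconfig.spec prints for your release:

  apiVersion: network.openstack.org/v1beta1
  kind: NetConfig
  metadata:
    name: openstacknetconfig
    namespace: openstack
  spec:
    networks:
    - name: CtlPlane
      dnsDomain: ctlplane.example.com
      subnets:
      - name: subnet1
        cidr: 192.168.122.0/24
        gateway: 192.168.122.1
        allocationRanges:
        - start: 192.168.122.100
          end: 192.168.122.250
        excludeAddresses:
        - 192.168.122.80
    - name: InternalApi
      dnsDomain: internalapi.example.com
      subnets:
      - name: subnet1
        cidr: 172.17.0.0/24
        vlan: 20
        allocationRanges:
        - start: 172.17.0.100
          end: 172.17.0.250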
3.4. Additional resources
Chapter 4. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.
4.1. Prerequisites
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
- The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

  $ oc get networkpolicy -n openstack

  This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
4.2. Creating the control plane
You must define an OpenStackControlPlane custom resource (CR) to create the control plane and enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
- Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack

- Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services, and the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

  spec:
    secret: osp-secret
    storageClass: <RHOCP_storage_class>

  Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
Note-
The following service examples use IP addresses from the default RHOSO MetalLB
IPAddressPoolrange for theloadBalancerIPsfield. Update theloadBalancerIPsfield with the IP address from the MetalLBIPAddressPoolrange that you created. - You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
Block Storage service (cinder):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
cinderBackup.replicas: You can deploy the initial control plane without activating thecinderBackupservice. To deploy the service, you must set the number ofreplicasfor the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage. -
cinderVolumes.replicas: You can deploy the initial control plane without activating thecinderVolumesservice. To deploy the service, you must set the number ofreplicasfor the service and configure the back end for the service. For information about the recommended replicas for thecinderVolumesservice and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage.
-
Compute service (nova):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteA full set of Compute services (nova) are deployed by default for each of the default cells,
cell0andcell1:nova-api,nova-metadata,nova-scheduler, andnova-conductor. Thenovncproxyservice is also enabled forcell1by default.DNS service for the data plane:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
dns.template.options: Defines thednsmasqinstances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to. dns.template.options.key: Specifies thednsmasqparameter to customize for the deployeddnsmasqinstance. Set to one of the following valid values:-
server -
rev-server -
srv-host -
txt-record -
ptr-record -
rebind-domain-ok -
naptr-record -
cname -
host-record -
caa-record -
dns-rr -
auth-zone -
synth-domain -
no-negcache -
local
-
dns.template.options.values: Specifies the values for thednsmasqparameter. You can specify a generic DNS server as the value, for example,1.1.1.1, or a DNS server for a specific domain, for example,/google.com/8.8.8.8.NoteThis DNS service,
dnsmasq, provides DNS services for nodes on the RHOSO data plane.dnsmasqis different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
-
Identity service (keystone)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Image service (glance):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number ofreplicasfor the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
-
Key Management service (barbican):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Networking service (neutron):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Object Storage service (swift):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow OVN:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Placement service (placement):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Telemetry service (ceilometer, prometheus):
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplanenode_exporterendpoints. -
telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using theNetworkAttachmentDefinitionresource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create anetworkAttachmentthat matches the network that you specify as thedataplaneNetwork, so that Prometheus can scrape data from the dataplane nodes. -
telemetry.template.autoscaling: You must have theautoscalingfield present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
-
- Add the following service configurations to implement high availability (HA):
  - A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1).
  - A single memcached cluster that contains three memcached servers:

    memcached:
      templates:
        memcached:
          replicas: 3

  - A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1).

    Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- Create the control plane:

  $ oc create -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

  Note: Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

  $ oc rsh -n openstack openstackclient

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service.
- Exit the OpenStackClient pod:

  $ exit
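One way to check the registered endpoints from inside the OpenStackClient pod is the following sketch; the column selection is illustrative:

  $ openstack endpoint list --interface internal -c 'Service Name' -c 'Service Type' -c URL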
4.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

  Note: If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.
- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation.

  Note: You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
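The complete example CR is not reproduced here. The following partial sketch shows how the annotated fields fit together for one API service and the RabbitMQ clusters; the same override pattern applies to the Compute API service. The IP addresses, pool name, and storage class are assumptions that must come from your own environment, and the exact nesting should be checked against oc explain openstackcontrolplane.spec:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-control-plane
    namespace: openstack
  spec:
    secret: osp-secret
    storageClass: local-storage            # assumed storage class name
    keystone:
      apiOverride:
        route: {}
      template:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
    rabbitmq:
      templates:
        rabbitmq:
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.85
              spec:
                type: LoadBalancer
        rabbitmq-cell1:
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.86
              spec:
                type: LoadBalancer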
4.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. Create a backup of the service database before removing a service.
Procedure
- Open the OpenStackControlPlane CR file on your workstation.
- Locate the service you want to remove from the control plane and disable it:

    cinder:
      enabled: false
      apiOverride:
        route: {}
      ...

- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

- Check that the service is removed:

  $ oc get cinder -n openstack

  This command returns the following message when the service is successfully removed:

  No resources found in openstack namespace.

- Check that the API endpoints for the service are removed from the Identity service (keystone):

  $ oc rsh -n openstack openstackclient
  $ openstack endpoint list --service volumev3

  This command returns the following message when the API endpoints for the service are successfully removed:

  No service with a type, name or ID of 'volumev3' exists.
4.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- About advertising for the IP address pools
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)
Chapter 5. Creating the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.
You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
- An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
- For unprovisioned nodes, create the SSH key pair for Ansible:

  $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096

  Replace <key_file_name> with the name to use for the key pair.
- Create the Secret CR for Ansible and apply it to the cluster (see the sketch after this procedure).
  - Replace <key_file_name> with the name and location of your SSH key pair file.
  - Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
- If you are creating Compute nodes, create a secret for migration.
  - Create the SSH key pair for instance migration:

    $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''

  - Create the Secret CR for migration and apply it to the cluster (see the sketch after this procedure).
- For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

  $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

  - Replace <subscription_manager_username> with the username you set for subscription-manager.
  - Replace <subscription_manager_password> with the password you set for subscription-manager.
- Create a Secret CR that contains the Red Hat registry credentials:

  $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

  Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
- If you are creating Compute nodes, create a secret for libvirt.
  - Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret.

    Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

    $ echo -n <password> | base64

    Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  - Create the Secret CR:

    $ oc apply -f secret_libvirt.yaml -n openstack

- Verify that the Secret CRs are created:

  $ oc describe secret dataplane-ansible-ssh-private-key-secret
  $ oc describe secret nova-migration-ssh-key
  $ oc describe secret subscription-manager
  $ oc describe secret redhat-registry
  $ oc describe secret libvirt-secret
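The create commands for the Ansible and migration key Secret CRs are not reproduced above. The following sketch shows one common way to create them from the generated key pairs; the secret names match the names used in the verification step, while the data keys and options should be checked against your release documentation:

  $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
      --from-file=ssh-privatekey=<key_file_name> \
      --from-file=ssh-publickey=<key_file_name>.pub \
      -n openstack
    # For bare-metal nodes that must be provisioned, also add:
    #   --from-file=authorized_keys=<key_file_name>.pub

  $ oc create secret generic nova-migration-ssh-key \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack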
5.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet CR that defines a node set of pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.
Procedure
Create a file on your workstation named
openstack_preprovisioned_node_set.yamlto define theOpenStackDataPlaneNodeSetCR:Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
- metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- spec.env: An optional list of environment variables to pass to the pod.
Connect the data plane to the control plane network:

  spec:
    ...
    networkAttachments:
      - ctlplane

Specify that the nodes in this set are pre-provisioned:

  preProvisioned: true

Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

  nodeTemplate:
    ansibleSSHPrivateKeySecret: <secret-key>
Replace <secret-key> with the name of the SSH key Secret CR that you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin: NFS is incompatible with FIFO, and the ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
Enable persistent logging for the data plane nodes:
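A sketch of the corresponding nodeTemplate.extraMounts entry; the extraVolType value, volume name, and mount path are assumptions based on the ansible-runner artifact location referenced later in this guide:

nodeTemplate:
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
            readOnly: false
      mounts:
        - name: ansible-logs
          mountPath: /runner/artifacts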
Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:

  nodeTemplate:
    ...
    managementNetwork: ctlplane

Specify the Secret CRs that are used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to the Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
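A condensed sketch of such a configuration, assuming the subscription-manager and redhat-registry secrets created earlier; the prefix, derived variable names, and bootstrap command are illustrative only:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
      - prefix: subscription_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars:
      edpm_bootstrap_command: |
        subscription-manager register --username {{ subscription_username }} --password {{ subscription_password }}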
- ansibleUser: The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:
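As an illustration only, the ansibleVars keys that the callouts below describe might be laid out as in the following heavily truncated sketch; this is not the full single NIC VLANs template:

nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
      edpm_network_config_template: |
        ---
        network_config:
          - type: interface
            name: nic1
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            routes: {{ ctlplane_host_routes }}
            addresses:
              - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}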
- nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node.
- edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. In a future release, after the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
- edpm_network_config_update: When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.
  Important: After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in the servicesOverride list.
- dns_servers: Autogenerated from IPAM and DNS; no user input is required.
- domain: Autogenerated from IPAM and DNS; no user input is required.
- routes: Autogenerated from IPAM and DNS; no user input is required.
Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.
Define each node in this node set:
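A condensed sketch of a nodes entry; the host name, network names, IP address, and secret names are placeholders that you replace with your own values:

nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
    networkData:
      name: edpm-compute-0-network-data
    userData:
      name: edpm-compute-0-user-data
    ansible:
      ansibleHost: 192.168.122.100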
- edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- networks: Defines the IPAM and the DNS records for the node.
- fixedIP: Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
- networkData: The Secret that contains node-specific network configuration.
- userData.name: The Secret that contains node-specific user data.
- ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node, or the node definition reference, for example, edpm-compute-0.
- ansibleVars: Node-specific Ansible variables that customize the node.
Note:
nodessection can configure the same Ansible variables that are configured in thenodeTemplatesection. Where an Ansible variable is configured for both a specific node and within thenodeTemplatesection, the node-specific values override those from thenodeTemplatesection. -
You do not need to replicate all the
nodeTemplateAnsible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many
ansibleVarsincludeedpmin the name, which stands for "External Data Plane Management".For information about the properties you can use to configure node attributes, see
OpenStackDataPlaneNodeSetCR properties.
-
Nodes defined within the
-
-
Save the openstack_preprovisioned_node_set.yaml definition file.
Create the data plane resources:

  $ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack
Verification
Verify that the data plane resources have been created by confirming that the status is SetupReady:

  $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:

  $ oc get secret | grep openstack-data-plane
  dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

Verify that the services were created:
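For example, assuming the default openstack namespace, the following command lists the registered data plane services:

  $ oc get openstackdataplaneservice -n openstack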
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
5.4. Creating a data plane with unprovisioned nodes
To create a data plane with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
- Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.
5.4.1. Prerequisites
- Your RHOCP cluster supports provisioning bare-metal nodes. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- Your Cluster Baremetal Operator (CBO) is configured for provisioning. For more information, see Provisioning [metal3.io/v1alpha1] in the RHOCP API Reference.
5.4.2. Creating the BareMetalHost CRs for unprovisioned nodes
You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing.
Procedure
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

  $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'

If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

  $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'

Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:
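A minimal sketch of such a Secret, assuming one Secret per node and the username and password keys expected by the Bare Metal Operator; the secret name is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>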
Replace <base64_username> and <base64_password> with base64-encoded strings. You can use the following command to generate a base64-encoded string:

  $ echo -n <string> | base64

Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:
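A sketch of one such BareMetalHost CR; the BMC address, credentials secret name, MAC address, and labels are placeholders that you replace with the values for your node:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-baremetal-00
  namespace: openstack
  labels:
    app: openstack
    workload: compute
spec:
  bmc:
    address: redfish-virtualmedia://<bmc_ip>/redfish/v1/Systems/1
    credentialsName: edpm-compute-0-bmc-secret
    disableCertificateVerification: true
  bootMACAddress: <nic_mac_address>
  bootMode: UEFI
  online: false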
- metadata.labels: Key-value pairs, such as app, workload, and nodeName, that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
- spec.bmc.address: The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
- spec.bmc.credentialsName: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
- preprovisioningNetworkDataName: An optional field that specifies the name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.
For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.
Create the BareMetalHost resources:

  $ oc create -f bmh_nodes.yaml

Verify that the BareMetalHost resources have been created and are in the Available state:

  $ oc wait --for=jsonpath='{.status.provisioning.state}'=available bmh/edpm-compute-baremetal-00 --timeout=<timeout_value>
Replace <timeout_value> with a value in minutes that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. Use a value that is appropriate to the size of your deployment. Give large deployments more time to complete deployment tasks.
5.4.3. Creating an OpenStackDataPlaneNodeSet CR with unprovisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
To set a root password for the data plane nodes during provisioning, use the passwordSecret field in the OpenStackDataPlaneNodeSet CR. For more information, see How to set a root password for the Dataplane Node on Red Hat OpenStack Services on OpenShift.
For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes.
Procedure
Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:
metadata.name: TheOpenStackDataPlaneNodeSetCR name must be unique, contain only lower case alphanumeric characters and-(hyphens) or.(periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set. -
spec.env: An optional list of environment variables to pass to the pod.
-
Connect the data plane to the control plane network:
spec: ... networkAttachments: - ctlplanespec: ... networkAttachments: - ctlplaneCopy to Clipboard Copied! Toggle word wrap Toggle overflow Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:
  preProvisioned: false

Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:
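A sketch of the baremetalSetTemplate section, assuming these field names and using the placeholder values that the callouts below describe:

baremetalSetTemplate:
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>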
- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
OpenStackProvisionServerCR, add it to yourbaremetalSetTemplatedefinition:baremetalSetTemplate: ... provisionServerName: my-os-provision-serverbaremetalSetTemplate: ... provisionServerName: my-os-provision-serverCopy to Clipboard Copied! Toggle word wrap Toggle overflow Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:
nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace
<secret-key>with the name of the SSH keySecretCR you created in Creating the data plane secrets, for example,dataplane-ansible-ssh-private-key-secret.
-
Replace
-
Create a Persistent Volume Claim (PVC) in the
openstacknamespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set thevolumeModetoFilesystemandaccessModestoReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and theansible-runnercreates a FIFO file to write to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment. Enable persistent logging for the data plane nodes:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace
<pvc_name>with the name of the PVC storage on your RHOCP cluster.
-
Replace
Specify the management network:
nodeTemplate: ... managementNetwork: ctlplanenodeTemplate: ... managementNetwork: ctlplaneCopy to Clipboard Copied! Toggle word wrap Toggle overflow Specify the
SecretCRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
ansibleUser: The user associated with the secret you created in Creating the data plane secrets. -
ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log intoregistry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
-
Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node. -
edpm_network_config_nmstate: Sets theos-net-configprovider tonmstate. The default value istrue. Change it tofalseonly if a specific limitation of thenmstateprovider requires you to use theifcfgprovider now. In a future release afternmstatelimitations are resolved, theifcfgprovider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with thenmstateprovider is not supported. For this and other limitations of RHOSOnmstatesupport, see https://issues.redhat.com/browse/OSPRH-11309. edpm_network_config_update: When deploying a node set for the first time, set theedpm_network_config_updatevariable tofalse. When updating or adopting a node set, setedpm_network_config_updatetotrue.ImportantAfter an update or an adoption, you must reset
edpm_network_config_updatetofalse. Otherwise, the nodes could lose network access. Wheneveredpm_network_config_updateistrue, the updated network configuration is reapplied every time anOpenStackDataPlaneDeploymentCR is created that includes theconfigure-networkservice that is a member of theservicesOverridelist.-
dns_servers: Autogenerated from IPAM and DNS and no user input is required. -
domain: Autogenerated from IPAM and DNS and no user input is required. -
routes: Autogenerated from IPAM and DNS and no user input is required.
-
-
Add the common configuration for the set of nodes in this group under the
nodeTemplatesection. Each node in thisOpenStackDataPlaneNodeSetinherits this configuration. For information about the properties you can use to configure common node attributes, seeOpenStackDataPlaneNodeSetCR properties. Define each node in this node set:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
edpm-compute-0: The node definition reference, for example,edpm-compute-0. Each node in the node set must have a node definition. -
networks: Defines the IPAM and the DNS records for the node. -
fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in theNetConfigCR. -
networkData: TheSecretthat contains network configuration that is node-specific. -
userData.name: TheSecretthat contains user data that is node-specific. -
ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for thehostNamefield for the node or the node definition reference, for example,edpm-compute-0. -
ansibleVars: Node-specific Ansible variables that customize the node. bmhLabelSelector: Metadata labels, such asapp,workload, andnodeNameare key-value pairs for labelling nodes. Set thebmhLabelSelectorfield to select data plane nodes based on one or more labels that match the labels in the correspondingBareMetalHostCR.Note-
Nodes defined within the
nodessection can configure the same Ansible variables that are configured in thenodeTemplatesection. Where an Ansible variable is configured for both a specific node and within thenodeTemplatesection, the node-specific values override those from thenodeTemplatesection. -
You do not need to replicate all the
nodeTemplateAnsible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many
ansibleVarsincludeedpmin the name, which stands for "External Data Plane Management".For information about the properties you can use to configure node attributes, see
OpenStackDataPlaneNodeSetCR properties.
-
Nodes defined within the
-
-
Save the
openstack_unprovisioned_node_set.yamldefinition file. Create the data plane resources:
oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstackCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the data plane resources have been created by confirming that the status is
SetupReady:oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10mCopy to Clipboard Copied! Toggle word wrap Toggle overflow When the status is
SetupReadythe command returns acondition metmessage, otherwise it returns a timeout error.For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the
Secretresource was created for the node set:oc get secret -n openstack | grep openstack-data-plane
$ oc get secret -n openstack | grep openstack-data-plane dataplanenodeset-openstack-data-plane Opaque 1 3m50sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the nodes have transitioned to the
provisionedstate:oc get bmh
$ oc get bmh NAME STATE CONSUMER ONLINE ERROR AGE edpm-compute-0 provisioned openstack-data-plane true 3d21hCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the services were created:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.4.4. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
-
ctlplane_dns_nameservers -
dns_search_domains -
ctlplane_host_routes
5.4.5. How to prevent asymmetric routing
If the Red Hat OpenShift Container Platform (RHOCP) cluster nodes have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning, it causes asymmetric traffic. Therefore, if you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. Implement one of the following methods to prevent traffic being dropped because of the RPF filter:
- Use a dedicated NIC on the network where RHOCP binds the provisioning service, that is, the RHOCP machine network or a dedicated RHOCP provisioning network.
- Use a dedicated NIC on a network that is reachable through routing on the RHOCP master nodes. For information about how to add routes to your RHOCP networks, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- Use a shared NIC for provisioning and the RHOSO ctlplane interface. You can use one of the following methods to configure a shared NIC:
  - Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range that you use for the ctlplane network and the other in the address range that you use for provisioning.
Configure a DHCP server to allocate an address range for provisioning that is different from the
ctlplaneaddress range. -
If a DHCP server is not available, configure the
preprovisioningNetworkDatafield on theBareMetalHostCRs. For information about how to configure thepreprovisioningNetworkDatafield, see ConfiguringpreprovisioningNetworkDataon theBareMetalHostCRs.
-
Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range you use for the
-
If your environment has RHOCP master and worker nodes that are not connected to the network used by the EDPM nodes, you can set the
nodeSelectorfield on theOpenStackProvisionServerCR to place it on a worker node that does not have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning.
5.4.6. Configuring preprovisioningNetworkData on the BareMetalHost CRs
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. To prevent traffic being dropped because of the RPF filter, you can configure the preprovisioningNetworkData field on the BareMetalHost CRs.
Procedure
Create a Secret CR with preprovisioningNetworkData in nmstate format for each BareMetalHost CR:
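A minimal sketch, assuming the secret_leaf0-0.yaml file name used in the next step, a node-specific secret name, and a networkData key that carries the nmstate configuration; the interface name and addresses are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: leaf0-0-network-data
  namespace: openstack
type: Opaque
stringData:
  networkData: |
    interfaces:
      - name: enp1s0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.122.100
              prefix-length: 24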
Create the Secret resources:

  $ oc create -f secret_leaf0-0.yaml
Open the
BareMetalHostCR file, for example,bmh_nodes.yaml. Add the
preprovisioningNetworkDataNamefield to eachBareMetalHostCR defined for each node in thebmh_nodes.yamlfile:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the
BareMetalHostCRs:oc apply -f bmh_nodes.yaml
$ oc apply -f bmh_nodes.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
5.5. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.
5.5.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.
| Field | Description |
|---|---|
| ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret |
| managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane |
| networks | Network definitions for the OpenStackDataPlaneNodeSet. |
| ansible | Ansible configuration options. For more information, see ansible. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| userData | UserData configuration for the OpenStackDataPlaneNodeSet. |
| networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet. |
5.5.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.
| Field | Description |
|---|---|
| ansible | Ansible configuration options. For more information, see ansible. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| hostName | The node name. |
| managementNetwork | Name of the network to use for management (SSH/Ansible). |
| networkData | NetworkData configuration for the node. |
| networks | Instance networks. |
| userData | Node-specific user data. |
5.5.3. ansible
Defines the group of Ansible configuration options.
| Field | Description |
|---|---|
| ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin |
| ansibleHost | SSH host for the Ansible connection. |
| ansiblePort | SSH port for the Ansible connection. |
| ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. Note: The ansibleVars that you set for a node override the ansibleVars that you set in the nodeTemplate section. |
| ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars entry with a duplicate key take precedence. |
5.5.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
| Field | Description |
|---|---|
| prefix | An optional identifier to prepend to each key in the ConfigMap or Secret. |
| configMapRef | The ConfigMap CR that contains the Ansible variables to populate. |
| secretRef | The Secret CR that contains the Ansible variables to populate. |
5.6. Deploying the data plane
You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.
When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one so that the new OpenStackDataPlaneDeployment runs Ansible with an updated Secret.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: data-plane-deploy
    namespace: openstack
apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: data-plane-deploy namespace: openstackCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
- metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
-
Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:
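For example, assuming the node set name used earlier in this guide, the nodeSets list might look like this:

spec:
  nodeSets:
    - openstack-data-plane
    - <nodeSet_name>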
- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
-
Replace
-
Save the openstack_data_plane_deploy.yaml deployment file.
Deploy the data plane:
oc create -f openstack_data_plane_deploy.yaml -n openstack
$ oc create -f openstack_data_plane_deploy.yaml -n openstackCopy to Clipboard Copied! Toggle word wrap Toggle overflow You can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:
error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

Verify that the data plane is deployed:
$ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
$ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>
Replace <timeout_value> with a value in minutes that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. If the completion status of SetupReady for oc wait openstackdataplanedeployment or NodeSetReady for oc wait openstackdataplanenodeset is not returned in this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment. Give larger deployments more time to complete deployment tasks. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Map the Compute nodes to the Compute cell that they are connected to:
$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to
cell1.Access the remote shell for the
openstackclientpod and verify that the deployed Compute nodes are visible on the control plane:oc rsh -n openstack openstackclient openstack hypervisor list
$ oc rsh -n openstack openstackclient $ openstack hypervisor listCopy to Clipboard Copied! Toggle word wrap Toggle overflow If some Compute nodes are missing form the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the
nova-computeservices on the deployed data plane nodes.Verify that the hypervisor hostname is a fully qualified domain name (FQDN):
hostname -f
$ hostname -fCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.
5.7. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
| Condition | Description |
|---|---|
|
|
|
|
| "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
|
| "True": The NodeSet has been successfully deployed. |
|
| "True": The required inputs are available and ready. |
|
| "True": DNSData resources are ready. |
|
| "True": The IPSet resources are ready. |
|
| "True": Bare-metal nodes are provisioned and ready. |
| Status field | Description |
|---|---|
|
|
|
|
| |
|
|
| Condition | Description |
|---|---|
|
|
|
|
| "True": The data plane is successfully deployed. |
|
| "True": The required inputs are available and ready. |
|
|
"True": The deployment has succeeded for the named |
|
|
"True": The deployment has succeeded for the named |
| Status field | Description |
|---|---|
|
|
|
| Condition | Description |
|---|---|
|
| "True": The service has been created and is ready for use. "False": The service has failed to be created. |
5.8. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
5.8.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:

  $ oc get openstackdataplanedeployment

The following example output shows a deployment currently in progress:
$ oc get openstackdataplanedeploymentCopy to Clipboard Copied! Toggle word wrap Toggle overflow The following example output shows two deployments currently in progress:
oc get openstackdataplanedeployment
$ oc get openstackdataplanedeployment NAME NODESETS STATUS MESSAGE edpm-compute ["openstack-edpm-ipam"] False Deployment in progressCopy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve and inspect Ansible execution jobs.
The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list the jobs for each OpenStackDataPlaneDeployment by using the label:
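For example, assuming the openstackdataplanedeployment label key applied by the OpenStack Operator, the command is likely similar to the following:

  $ oc get jobs -l openstackdataplanedeployment=<deployment_name> -n openstack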
You can check logs by using oc logs -f job/<job-name>. For example, to check the logs from the configure-network job:

  $ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
  PLAY RECAP *********************************************************************
  edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
5.8.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
5.8.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve pods with the OpenStackAnsibleEE label:

  $ oc get pods -l app=openstackansibleee
  configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
  validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
  validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
  validate-network-edpm-compute-6g7n9    1/1   Running             0   13s

SSH into the pod you want to check:
- Pod that is running:

  $ oc rsh validate-network-edpm-compute-6g7n9

- Pod that is not running:

  $ oc debug configure-network-edpm-compute-j6r4l
List the directories in the /runner/artifacts mount:

  $ ls /runner/artifacts
  configure-network-edpm-compute
  validate-network-edpm-compute
stdoutfor the required artifact:cat /runner/artifacts/configure-network-edpm-compute/stdout
$ cat /runner/artifacts/configure-network-edpm-compute/stdoutCopy to Clipboard Copied! Toggle word wrap Toggle overflow