Chapter 5. Creating the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.
You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
- An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:

$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096

- Replace <key_file_name> with the name to use for the key pair.
Create the Secret CR for Ansible and apply it to the cluster, as shown in the sketch after this list:

- Replace <key_file_name> with the name and location of your SSH key pair file.
- Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
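A sketch of the command, assuming the secret name dataplane-ansible-ssh-private-key-secret that the verification step later in this section inspects, and the ssh-privatekey key format noted in OpenStackDataPlaneNodeSet CR spec properties. Include the authorized_keys line only when it applies:

$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --save-config --dry-run=client \
  --from-file=authorized_keys=<key_file_name>.pub \
  --from-file=ssh-privatekey=<key_file_name> \
  --from-file=ssh-publickey=<key_file_name>.pub \
  -n openstack -o yaml | oc apply -f -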
If you are creating Compute nodes, create a secret for migration.

Create the SSH key pair for instance migration:

$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''

Create the Secret CR for migration and apply it to the cluster, as shown in the sketch below.
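A sketch of the command. The secret name nova-migration-ssh-key matches the verification step later in this section; the ssh-privatekey and ssh-publickey key names are an assumption consistent with the Ansible SSH secret format used elsewhere in this chapter:

$ oc create secret generic nova-migration-ssh-key \
  --from-file=ssh-privatekey=nova-migration-ssh-key \
  --from-file=ssh-publickey=nova-migration-ssh-key.pub \
  -n openstack -o yaml | oc apply -f -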
For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

$ oc create secret generic subscription-manager \
  --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry \
  --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

- Replace <username> and <password> with your Red Hat registry username and password credentials.

For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
If you are creating Compute nodes, create a secret for libvirt.

Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret, as shown in the sketch below.
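A sketch of the definition. The secret name libvirt-secret matches the verification step later in this section; the LibvirtPassword key name is an assumption:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>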
- Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64

Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack
Verify that the Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
5.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.
Procedure
Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR, starting with a definition similar to the sketch below.
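A minimal sketch of the opening of the definition. The name openstack-data-plane matches the verification steps later in this section; the spec.env entry is an illustrative example of passing an environment variable to the pod:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
  - name: ANSIBLE_FORCE_COLOR
    value: "True"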
- metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- spec.env: An optional list of environment variables to pass to the pod.
Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
  - ctlplane

Specify that the nodes in this set are pre-provisioned:

preProvisioned: true
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin: NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.

Enable persistent logging for the data plane nodes, as shown in the sketch below.
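A sketch of the logging configuration, assuming an extraMounts entry that mounts the PVC at /runner/artifacts, the path that the troubleshooting section later in this chapter inspects:

nodeTemplate:
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"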
- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:

nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories, as shown in the sketch below. The example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
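A sketch of the registration configuration, assuming the subscription-manager and redhat-registry secrets created in Creating the data plane secrets and an edpm_bootstrap_command variable for CDN registration; the release and repository names are illustrative and depend on your environment:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
    ansibleVarsFrom:
    - prefix: subscription_manager_
      secretRef:
        name: subscription-manager
    - secretRef:
        name: redhat-registry
    ansibleVars:
      edpm_bootstrap_command: |
        subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
        subscription-manager release --set=9.4
        subscription-manager repos --disable=*
        subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms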
- ansibleUser: The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The sketch below applies the single NIC VLANs network configuration to the data plane nodes.
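A condensed sketch of the template; a complete single NIC VLANs template also adds a vlan member for each isolated network, and the template values shown here resolve from the autogenerated IPAM and DNS variables:

nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
      edpm_network_config_template: |
        ---
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: interface
            name: nic1
            primary: true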
- nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node.
- edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider now. In a future release, after nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
- edpm_network_config_update: When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.

  Important: After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

- dns_servers: Autogenerated from IPAM and DNS; no user input is required.
- domain: Autogenerated from IPAM and DNS; no user input is required.
- routes: Autogenerated from IPAM and DNS; no user input is required.
Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.

Define each node in this node set, as shown in the sketch below.
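A sketch of a node definition; the IP addresses, subnet name, hostname, and variables are illustrative:

nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
    ansible:
      ansibleHost: 192.168.122.100
      ansibleVars:
        fqdn_internal_api: edpm-compute-0.example.com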
- edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- networks: Defines the IPAM and the DNS records for the node.
- fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
- networkData: The Secret that contains network configuration that is node-specific.
- userData.name: The Secret that contains user data that is node-specific.
- ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
- ansibleVars: Node-specific Ansible variables that customize the node.

Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
Save the openstack_preprovisioned_node_set.yaml definition file.

Create the data plane resources:

$ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack
Verification
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:

$ oc get secret | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1      3m50s

Verify that the services were created, as shown in the sketch below.
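A sketch of this verification; the exact list of default services varies by release:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           6d7h
configure-network   6d7h
configure-os        6d7h
install-certs       6d7h
install-os          6d7h
...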
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
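A condensed example that combines the snippets from the previous procedure; all names, IP addresses, and variables are illustrative:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  networkAttachments:
  - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
      ansibleVars:
        edpm_network_config_nmstate: true
        edpm_network_config_update: false
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      ansible:
        ansibleHost: 192.168.122.100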
5.4. Creating a data plane with unprovisioned nodes
To create a data plane with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
- Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.
5.4.1. Prerequisites
- Your RHOCP cluster supports provisioning bare-metal nodes. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- Your Cluster Baremetal Operator (CBO) is configured for provisioning. For more information, see Provisioning [metal3.io/v1alpha1] in the RHOCP API Reference.
5.4.2. Creating the BareMetalHost CRs for unprovisioned nodes
You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing.
Procedure
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'

If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'

Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set, as shown in the sketch below.
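A sketch of a BMC credentials secret; the secret name is illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>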
- Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

$ echo -n <string> | base64

Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The sketch below creates a BareMetalHost CR with the provisioning method Redfish virtual media.
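A sketch of a BareMetalHost definition; the BMC address, MAC address, labels, and secret name are illustrative, and the node name matches the oc wait verification later in this section:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-baremetal-00
  namespace: openstack
  labels:
    app: openstack
    workload: compute
spec:
  bmc:
    address: redfish-virtualmedia+https://<bmc_ip>/redfish/v1/Systems/1
    credentialsName: edpm-compute-0-bmc-secret
  bootMACAddress: <nic1_mac_address>
  bootMode: UEFI
  online: false
  preprovisioningNetworkDataName: <network_data_secret_name>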
- metadata.labels: Key-value pairs, such as app, workload, and nodeName, that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
- spec.bmc.address: The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
- spec.bmc.credentialsName: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
- preprovisioningNetworkDataName: An optional field that specifies the name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.
Create the BareMetalHost resources:

$ oc create -f bmh_nodes.yaml

Verify that the BareMetalHost resources have been created and are in the Available state:

$ oc wait --for=jsonpath='{.status.provisioning.state}'=available bmh/edpm-compute-baremetal-00 --timeout=<timeout_value>
- Replace <timeout_value> with the length of time, in minutes, that you want the command to wait for completion of the task. For example, to wait 60 minutes, use the value 60m. Use a value that is appropriate to the size of your deployment; give large deployments more time to complete deployment tasks.
5.4.3. Creating an OpenStackDataPlaneNodeSet CR with unprovisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
To set a root password for the data plane nodes during provisioning, use the passwordSecret field in the OpenStackDataPlaneNodeSet CR. For more information, see How to set a root password for the Dataplane Node on Red Hat OpenStack Services on OpenShift.
For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes.
Procedure
Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR, starting with a definition similar to the sketch below.
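A minimal sketch of the opening of the definition. The name openstack-data-plane matches the verification steps later in this section; the spec.env entry is an illustrative example of passing an environment variable to the pod:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
  - name: ANSIBLE_FORCE_COLOR
    value: "True"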
- metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- spec.env: An optional list of environment variables to pass to the pod.
Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
  - ctlplane

Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

preProvisioned: false

Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource, as shown in the sketch below.
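A sketch of the baremetalSetTemplate definition; the field names are an assumption based on the replacement values described in the list that follows:

baremetalSetTemplate:
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>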
- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

baremetalSetTemplate:
  ...
  provisionServerName: my-os-provision-server

Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin: NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.

Enable persistent logging for the data plane nodes, as shown in the sketch below.
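A sketch of the logging configuration, assuming an extraMounts entry that mounts the PVC at /runner/artifacts, the path that the troubleshooting section later in this chapter inspects:

nodeTemplate:
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"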
- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:

nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories, as shown in the sketch below. The example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
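A sketch of the registration configuration, assuming the subscription-manager and redhat-registry secrets created in Creating the data plane secrets and an edpm_bootstrap_command variable for CDN registration; the release and repository names are illustrative and depend on your environment:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
    ansibleVarsFrom:
    - prefix: subscription_manager_
      secretRef:
        name: subscription-manager
    - secretRef:
        name: redhat-registry
    ansibleVars:
      edpm_bootstrap_command: |
        subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
        subscription-manager release --set=9.4
        subscription-manager repos --disable=*
        subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms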
- ansibleUser: The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The sketch below applies the single NIC VLANs network configuration to the data plane nodes.
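A condensed sketch of the template; a complete single NIC VLANs template also adds a vlan member for each isolated network, and the template values shown here resolve from the autogenerated IPAM and DNS variables:

nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
      edpm_network_config_template: |
        ---
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: interface
            name: nic1
            primary: true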
- nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node.
- edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider now. In a future release, after nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
- edpm_network_config_update: When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.

  Important: After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

- dns_servers: Autogenerated from IPAM and DNS; no user input is required.
- domain: Autogenerated from IPAM and DNS; no user input is required.
- routes: Autogenerated from IPAM and DNS; no user input is required.
Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.

Define each node in this node set, as shown in the sketch below.
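A sketch of a node definition; the IP addresses, subnet name, hostname, and label are illustrative:

nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    ansible:
      ansibleHost: 192.168.122.100
    bmhLabelSelector:
      nodeName: edpm-compute-0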
- edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- networks: Defines the IPAM and the DNS records for the node.
- fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
- networkData: The Secret that contains network configuration that is node-specific.
- userData.name: The Secret that contains user data that is node-specific.
- ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
- ansibleVars: Node-specific Ansible variables that customize the node.
- bmhLabelSelector: Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.

Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
Save the openstack_unprovisioned_node_set.yaml definition file.

Create the data plane resources:

$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
Verification
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1      3m50s

Verify that the nodes have transitioned to the provisioned state:

$ oc get bmh
NAME             STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0   provisioned   openstack-data-plane   true             3d21h

Verify that the services were created, as shown in the sketch below.
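A sketch of this verification; the exact list of default services varies by release:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           6d7h
configure-network   6d7h
configure-os        6d7h
install-certs       6d7h
install-os          6d7h
...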
5.4.4. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
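A condensed example that combines the snippets from the previous procedure; all names, labels, and IP addresses are illustrative:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  networkAttachments:
  - ctlplane
  preProvisioned: false
  baremetalSetTemplate:
    bmhNamespace: openstack
    cloudUserName: cloud-admin
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp6s0
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      bmhLabelSelector:
        nodeName: edpm-compute-0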
5.4.5. How to prevent asymmetric routing
If the Red Hat OpenShift Container Platform (RHOCP) cluster nodes have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning, it causes asymmetric traffic. Therefore, if you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. Implement one of the following methods to prevent traffic being dropped because of the RPF filter:
- Use a dedicated NIC on the network where RHOCP binds the provisioning service, that is, the RHOCP machine network or a dedicated RHOCP provisioning network.
- Use a dedicated NIC on a network that is reachable through routing on the RHOCP master nodes. For information about how to add routes to your RHOCP networks, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- Use a shared NIC for provisioning and the RHOSO ctlplane interface. You can use one of the following methods to configure a shared NIC:
  - Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range you use for the ctlplane network and the other in the address range that you use for provisioning.
  - Configure a DHCP server to allocate an address range for provisioning that is different from the ctlplane address range.
  - If a DHCP server is not available, configure the preprovisioningNetworkData field on the BareMetalHost CRs. For information about how to configure the preprovisioningNetworkData field, see Configuring preprovisioningNetworkData on the BareMetalHost CRs.
- If your environment has RHOCP master and worker nodes that are not connected to the network used by the EDPM nodes, you can set the nodeSelector field on the OpenStackProvisionServer CR to place it on a worker node that does not have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning.
5.4.6. Configuring preprovisioningNetworkData on the BareMetalHost CRs
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. To prevent traffic being dropped because of the RPF filter, you can configure the preprovisioningNetworkData field on the BareMetalHost CRs.
Procedure
Create a Secret CR with preprovisioningNetworkData in nmstate format for each BareMetalHost CR, as shown in the sketch below.
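A sketch of such a secret. The secret name matches the oc create command that follows, the networkData key name is an assumption, and the interface name and addresses are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: leaf0-0-network-data
  namespace: openstack
type: Opaque
stringData:
  networkData: |
    interfaces:
    - name: enp6s0
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.122.100
          prefix-length: 24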
Create the Secret resources:

$ oc create -f secret_leaf0-0.yaml
Open the BareMetalHost CR file, for example, bmh_nodes.yaml.

Add the preprovisioningNetworkDataName field to each BareMetalHost CR defined for each node in the bmh_nodes.yaml file, as shown in the sketch below.
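A sketch of the change; the node and secret names are illustrative:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
spec:
  ...
  preprovisioningNetworkDataName: leaf0-0-network-data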
Update the BareMetalHost CRs:

$ oc apply -f bmh_nodes.yaml
5.5. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.
5.5.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.
| Field | Description |
|---|---|
| ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret |
| managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane |
| networks | Network definitions for the OpenStackDataPlaneNodeSet. |
| ansible | Ansible configuration options. For more information, see ansible properties. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| userData | UserData configuration for the OpenStackDataPlaneNodeSet. |
| networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet. |
5.5.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.
| Field | Description |
|---|---|
| ansible | Ansible configuration options. For more information, see ansible properties. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| hostName | The node name. |
| managementNetwork | Name of the network to use for management (SSH/Ansible). |
| networkData | NetworkData configuration for the node. |
| networks | Instance networks. |
| userData | Node-specific user data. |
5.5.3. ansible
Defines the group of Ansible configuration options.
| Field | Description |
|---|---|
| ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin |
| ansibleHost | SSH host for the Ansible connection. |
| ansiblePort | SSH port for the Ansible connection. |
| ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each service. Note: The ansibleVars parameters that you can configure for a node set depend on the services that are defined for the node set. |
| ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties. |
5.5.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
| Field | Description |
|---|---|
| prefix | An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. |
| configMapRef | The ConfigMap CR to select the ansibleVars from. |
| secretRef | The Secret CR to select the ansibleVars from. |
5.6. Deploying the data plane
You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.
When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one so that the new OpenStackDataPlaneDeployment can run Ansible with an updated Secret.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack

- metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy, as shown in the sketch below.
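A sketch of the nodeSets list; the node set names are illustrative:

spec:
  nodeSets:
  - openstack-data-plane
  - <nodeSet_name>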
- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
Save the openstack_data_plane_deploy.yaml deployment file.

Deploy the data plane:

$ oc create -f openstack_data_plane_deploy.yaml -n openstack

You can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

Verify that the data plane is deployed:
$ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
$ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>

Replace <timeout_value> with the length of time, in minutes, that you want the command to wait for completion of the task. For example, to wait 60 minutes, use the value 60m. If the Ready condition is not met for the OpenStackDataPlaneDeployment or the OpenStackDataPlaneNodeSet CR in this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment; give larger deployments more time to complete deployment tasks.

For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Map the Compute nodes to the Compute cell that they are connected to:

$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to cell1.

Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list

If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.

Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

$ hostname -f

If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.
5.7. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
| Condition | Description |
|---|---|
| Ready | "True": The most recent deployment of the node set has succeeded. "False": A deployment is in progress or has not yet succeeded. |
| SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
| DeploymentReady | "True": The NodeSet has been successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| NodeSetDNSDataReady | "True": DNSData resources are ready. |
| NodeSetIPReservationReady | "True": The IPSet resources are ready. |
| NodeSetBaremetalProvisionReady | "True": Bare-metal nodes are provisioned and ready. |
| Condition | Description |
|---|---|
| Ready | "True": The deployment has succeeded. "False": The deployment has failed, or one of the other conditions is "False". |
| DeploymentReady | "True": The data plane is successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| <NodeSet> Deployment Ready | "True": The deployment has succeeded for the named NodeSet, indicating that all services for the NodeSet have succeeded. |
| <NodeSet> <Service> Deployment Ready | "True": The deployment has succeeded for the named NodeSet and Service. |
| Condition | Description |
|---|---|
| Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created. |
5.8. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
5.8.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:

$ oc get openstackdataplanedeployment

The following example output shows a deployment in progress:

$ oc get openstackdataplanedeployment
NAME           NODESETS                  STATUS   MESSAGE
edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
The Kubernetes jobs are labelled with the name of the
OpenStackDataPlaneDeployment. You can list jobs for eachOpenStackDataPlaneDeploymentby using the label:Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can check logs by using
oc logs -f job/<job-name>, for example, if you want to check the logs from the configure-network job:oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
$ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2 PLAY RECAP ********************************************************************* edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.8.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
5.8.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve pods with the OpenStackAnsibleEE label:

$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s
Pod that is running:
oc rsh validate-network-edpm-compute-6g7n9
$ oc rsh validate-network-edpm-compute-6g7n9Copy to Clipboard Copied! Toggle word wrap Toggle overflow Pod that is not running:
oc debug configure-network-edpm-compute-j6r4l
$ oc debug configure-network-edpm-compute-j6r4lCopy to Clipboard Copied! Toggle word wrap Toggle overflow
List the directories in the /runner/artifacts mount:

$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute

View the stdout for the required artifact:

$ cat /runner/artifacts/configure-network-edpm-compute/stdout