Chapter 6. Creating the data plane with a routed spine-leaf network topology
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type.
To create and deploy a data plane with a routed spine-leaf network topology, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create a BareMetalHost CR for each node in each node set, with virtual media as the boot method. You must configure the BareMetalHost CRs to use one of the following options to provide the base network connectivity for your spine-leaf environment:
  - External base networking: An external DHCP or Stateless Address Auto-Configuration (SLAAC) server that is not managed by Metal3, and that routes IP traffic to the Red Hat OpenShift Container Platform (RHOCP) cluster. If you use this option, the DHCP and SLAAC are used for the Ironic Python Agent (IPA), but they are not required in the final configuration of the deployed data plane node.
  - Network configuration on the ramdisk: The network configuration is embedded in the virtual media ramdisk. You can use this method if your RHOSO deployment does not use automatic network configuration through DHCP or SLAAC. You provide the network configuration for the ramdisk and the bare-metal node in advance to configure the interface addresses and to allow network traffic to flow to facilitate deployment.
- Create the OpenStackDataPlaneNodeSet CRs for each group of unprovisioned nodes in a leaf. You can define as many node sets as necessary for your deployment.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
6.1. Prerequisites
- An operational control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- A Provisioning CR is available in RHOCP. For more information about creating a Provisioning CR, see Configuring a provisioning resource to scale user-provisioned clusters in the Red Hat OpenShift Container Platform (RHOCP) Installing on bare metal guide.
- IP connectivity exists between the Red Hat OpenShift Container Platform (RHOCP) cluster and the Baseboard Management Controller (BMC) of the bare-metal node, so that commands can be transmitted to the BMC, and the BMC can download the Virtual Media image.
- Your DHCP environment must match the cluster IP version.
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
6.2. Creating the data plane secrets
The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality:
- To enable secure access between nodes:
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes.
- To register the operating system of the nodes that are not registered to the Red Hat Customer Portal.
- To enable repositories for the nodes.
- To provide Compute nodes with access to libvirt.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:
$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
- Replace <key_file_name> with the name to use for the key pair.
Create the Secret CR for Ansible and apply it to the cluster, as shown in the sketch after this step.
- Replace <key_file_name> with the name and location of your SSH key pair file.
- Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
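The original command is not reproduced in this copy. A minimal sketch of such a command, assuming the secret name dataplane-ansible-ssh-private-key-secret (the name checked in the verification step at the end of this procedure) and ssh-privatekey, ssh-publickey, and authorized_keys entries:
# Sketch only: the secret name and key-entry names are assumptions.
$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    --from-file=authorized_keys=<key_file_name>.pub \
    -n openstack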
If you are creating Compute nodes, create a secret for migration.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
Create the Secret CR for migration and apply it to the cluster, as shown in the sketch after this step.
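The original command is not reproduced in this copy. A minimal sketch, assuming the secret is named nova-migration-ssh-key (the name checked in the verification step) and that the key pair generated above is stored under ssh-privatekey and ssh-publickey entries:
# Sketch only: the key-entry names are assumptions.
$ oc create secret generic nova-migration-ssh-key \
    --from-file=ssh-privatekey=nova-migration-ssh-key \
    --from-file=ssh-publickey=nova-migration-ssh-key.pub \
    -n openstack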
For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:
$ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
Create a Secret CR that contains the Red Hat registry credentials:
$ oc create secret generic redhat-registry \
    --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
- Replace <username> and <password> with your Red Hat registry username and password credentials.
For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
If you are creating Compute nodes, create a secret for libvirt.
Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret, as shown in the sketch after this step.
- Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:
$ echo -n <password> | base64
Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
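The original secret_libvirt.yaml content is not reproduced in this copy. A minimal sketch, assuming the secret is named libvirt-secret (the name checked in the verification step) and that the password is stored under a LibvirtPassword data key (the key name is an assumption):
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>   # key name is an assumption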
Create the Secret CR:
$ oc apply -f secret_libvirt.yaml -n openstack
Verify that the Secret CRs are created:
$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
6.3. Creating the BareMetalHost CRs with external base networking
You can use Redfish Virtual Media to create your spine-leaf network topology with base connectivity provided by an external DHCP or Stateless Address Auto-Configuration (SLAAC) server that is not managed by Metal3. The network must route IP traffic to the Red Hat OpenShift Container Platform (RHOCP) cluster. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range that is different from the ctlplane address range, to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.
Procedure
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
Update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
Create a file on your workstation named bmh_leaf1_nodes.yaml that defines the Secret CR with the credentials for accessing the BMC of each bare-metal data plane node in the node set, as shown in the sketch after this step.
- Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:
$ echo -n <string> | base64
Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
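The original example is not reproduced in this copy. A minimal sketch of one such Secret CR, assuming a node named edpm-compute-0 (the node name shown in the verification output later in this section) and the standard Metal3 username and password data keys:
apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret   # assumed naming convention
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>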
Create a file on your workstation that defines the BareMetalHost CR for each bare-metal data plane node, with virtual media as the boot method. A sketch follows the callouts below.
- 1: The URL for communicating with the node's Baseboard Management Controller (BMC). For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
- 2: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
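The original example is not reproduced in this copy. A minimal sketch of one BareMetalHost CR, assuming a node named edpm-compute-0, a Redfish Virtual Media BMC address, and an app: openstack label that the node set selects later in this chapter; adjust all values to your environment:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
  labels:
    app: openstack                                                             # example label referenced by <bmh_label>
spec:
  bmc:
    address: redfish-virtualmedia://<bmc_ip>/redfish/v1/Systems/<system_id>    # 1
    credentialsName: edpm-compute-0-bmc-secret                                 # 2
  bootMACAddress: <node_boot_mac_address>
  online: false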
Create the BareMetalHost resources:
$ oc create -f bmh_leaf1_nodes.yaml
Verify that the BareMetalHost resources have been created and are in the Available state:
$ oc get bmh
NAME             STATE       CONSUMER         ONLINE   ERROR   AGE
edpm-compute-0   Available   openstack-edpm   true             2d21h
6.4. Creating the BareMetalHost CRs with network configuration on the ramdisk
You can use Redfish Virtual Media to create your spine-leaf topology with network configuration on the ramdisk if your Red Hat OpenStack Services on OpenShift (RHOSO) deployment does not use automatic network configuration through DHCP or SLAAC.
Red Hat OpenShift Container Platform (RHOCP) uses nmstate to report on and configure the state of the node network. You create a Secret custom resource (CR) for each bare-metal data plane node and use the nmstate schema to configure the pre-provisioning network configuration data that the ramdisk requires to add the bare-metal data plane node on the network. For more information on Nmstate, see Introduction to Nmstate.
If you use the ctlplane interface for provisioning, configure the ramdisk network to use an address range that is different from the ctlplane address range, to prevent the kernel rp_filter logic from dropping traffic. This ensures that when the ramdisk connects to the provisioning service, the return traffic remains on the machine network interface.
Procedure
Create a Secret CR for each bare-metal data plane node in the node set that defines the pre-provisioning network configuration data for the ramdisk in nmstate format, as shown in the sketch after this step.
- Replace <bmh-name> with the name of the BareMetalHost CR the secret is for, for example, edpm-compute-0-preprovision-network-data.
For more information about the nmstate schema, see https://nmstate.io/devel/yaml_api.html.
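The original example is not reproduced in this copy. A minimal sketch of such a Secret CR, assuming the nmstate content is stored under a networkData key (verify the key name that your RHOCP version expects) and using hypothetical interface and address values:
apiVersion: v1
kind: Secret
metadata:
  name: <bmh-name>-preprovision-network-data
  namespace: openstack
type: Opaque
stringData:
  networkData: |                      # key name is an assumption
    interfaces:
    - name: enp1s0                    # hypothetical interface
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.122.100         # hypothetical address
          prefix-length: 24
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.122.1
        next-hop-interface: enp1s0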
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
Update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
Create a file on your workstation that defines the Secret CR with the credentials for accessing the BMC of each bare-metal data plane node in the node set.
- Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:
$ echo -n <string> | base64
Tip: If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create a file on your workstation named bmh_leaf1_nodes.yaml that defines the BareMetalHost CR for each bare-metal data plane node, with virtual media as the boot method:
- 1: The URL for communicating with the node's Baseboard Management Controller (BMC). For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
- 2: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
Add the preprovisioningNetworkDataName field to each BareMetalHost CR to specify the pre-provisioning network configuration data Secret CR, as shown in the sketch after this step.
- Replace <pre_provision_network_secret> with the name of the Secret CR you created in step 1 for the pre-provisioning network configuration data.
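A minimal sketch of the field placement within an existing BareMetalHost CR (all other spec fields omitted):
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
spec:
  # ...existing spec fields for the node...
  preprovisioningNetworkDataName: <pre_provision_network_secret>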
Create the BareMetalHost resources:
$ oc create -f bmh_leaf1_nodes.yaml
Verify that the BareMetalHost resources have been created and are in the Available state:
$ oc get bmh
NAME             STATE       CONSUMER         ONLINE   ERROR   AGE
edpm-compute-0   Available   openstack-edpm   true             2d21h
6.5. Creating the OpenStackDataPlaneNodeSet CRs for a routed spine-leaf topology
Create an OpenStackDataPlaneNodeSet custom resource (CR) for each leaf on your data plane that defines the unprovisioned leaf nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
Procedure
Create YAML files on your workstation that define the OpenStackDataPlaneNodeSet CRs for each leaf in the spine-leaf topology. A sketch follows the callouts below.
- 1: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
- 2: Optional: A list of environment variables to pass to the pod.
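The original example is not reproduced in this copy. A minimal sketch of the top of one such file, assuming a node set named openstack-data-plane (the name used in the verification commands later in this procedure) and a hypothetical environment variable; the following steps add the remaining spec fields:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane          # 1
  namespace: openstack
spec:
  env:                                # 2
  - name: ANSIBLE_FORCE_COLOR         # hypothetical example variable
    value: "True"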
Connect the data plane to the control plane network:
spec:
  ...
  networkAttachments:
  - ctlplane
Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:
preProvisioned: false
Use the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that are provisioned when the data plane is deployed. A sketch follows the replacements below.
- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
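The original example is not reproduced in this copy. A minimal sketch, assuming the baremetalSetTemplate fields bmhNamespace, cloudUserName, bmhLabelSelector, and ctlplaneInterface correspond to the replacements above (verify the field names against your operator version):
spec:
  baremetalSetTemplate:
    bmhNamespace: <bmh_namespace>
    cloudUserName: <ansible_ssh_user>
    bmhLabelSelector:
      app: <bmh_label>
    ctlplaneInterface: <interface>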
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:
nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
Enable persistent logging for the data plane nodes, as shown in the sketch after this step.
- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
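The original example is not reproduced in this copy. A minimal sketch of the logging configuration, assuming an extraMounts entry that mounts the PVC at the ansible-runner artifacts path (the field layout is an assumption; verify it against the extraMounts documentation):
nodeTemplate:
  extraMounts:
  - extraVolType: Logs               # assumed volume type label
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: /runner/artifacts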
Specify the management network:
nodeTemplate:
  ...
  managementNetwork: ctlplane
Specify the Secret CRs that Ansible uses to source the usernames and passwords to register the operating system of the nodes to the Red Hat Customer Portal, and enable repositories for your nodes. The following example sketch demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.
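The original example is not reproduced in this copy. A minimal sketch of the relevant nodeTemplate fragment, assuming the subscription-manager and redhat-registry secrets created earlier are consumed through ansibleVarsFrom, and using edpm_bootstrap_command as a hypothetical place to run the registration commands:
nodeTemplate:
  ansible:
    ansibleUser: cloud-admin                   # 1
    ansibleVarsFrom:
    - secretRef:
        name: subscription-manager             # supplies rhc_auth
    - secretRef:
        name: redhat-registry                  # supplies edpm_container_registry_logins
    ansibleVars:                               # 2
      edpm_bootstrap_command: |                # variable name is an assumption
        subscription-manager register --username <subscription_manager_username> --password <subscription_manager_password>
        subscription-manager repos --disable=* --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms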
- 1: The user associated with the secret you created in Creating the data plane secrets.
- 2: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes; a sketch follows.
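The original template is not reproduced in this copy. A heavily abbreviated sketch, assuming the template is supplied through the edpm_network_config_template Ansible variable (verify the variable name against the edpm-ansible documentation) and a hypothetical bridge layout:
nodeTemplate:
  ansible:
    ansibleVars:
      edpm_network_config_template: |          # variable name is an assumption
        ---
        network_config:
        - type: ovs_bridge
          name: br-ex                          # hypothetical bridge name
          use_dhcp: false
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          members:
          - type: interface
            name: nic1
            primary: true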
For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
Define each node in this node set. A sketch follows the callouts below.
- 1: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- 2: Defines the IPAM and the DNS records for the node.
- 3: Specifies a predictable IP address for the network, which must be in the allocation range defined for the network in the NetConfig CR.
- 4: Node-specific Ansible variables that customize the node.
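The original example is not reproduced in this copy. A minimal sketch of a nodes entry, assuming the hostName, networks, fixedIP, and ansible fields and hypothetical addresses:
nodes:
  edpm-compute-0:                    # 1
    hostName: edpm-compute-0
    networks:                        # 2
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100       # 3
    - name: internalapi
      subnetName: subnet1
    ansible:
      ansibleHost: 192.168.122.100
      ansibleVars:                   # 4
        fqdn_internal_api: edpm-compute-0.example.com   # hypothetical node-specific variable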
Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific value overrides the value from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
- Save the openstack_unprovisioned_node_set.yaml definition file.
Create the data plane resources:
$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
Verify that the data plane resources have been created by confirming that the status is SetupReady:
$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.
For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:
$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s
Verify that the nodes have transitioned to the provisioned state:
$ oc get bmh
NAME             STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0   provisioned   openstack-data-plane   true             3d21h
Verify that the services were created, as shown in the sketch after this step.
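The original command and output are not reproduced in this copy. A sketch of a command that lists the data plane services, assuming the OpenStackDataPlaneService resource type registered by the operator:
$ oc get openstackdataplaneservice -n openstack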
6.6. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.
6.6.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.
Field | Description |
---|---|
ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret |
managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane |
networks | Network definitions for the OpenStackDataPlaneNodeSet. |
ansible | Ansible configuration options. For more information, see ansible. |
extraMounts | The files to mount into an Ansible Execution Pod. |
userData | UserData configuration for the OpenStackDataPlaneNodeSet. |
networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet. |
6.6.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.
Field | Description |
---|---|
ansible | Ansible configuration options. For more information, see ansible. |
extraMounts | The files to mount into an Ansible Execution Pod. |
hostName | The node name. |
managementNetwork | Name of the network to use for management (SSH/Ansible). |
networkData | NetworkData configuration for the node. |
networks | Instance networks. |
userData | Node-specific user data. |
6.6.3. ansible
Defines the group of Ansible configuration options.
Field | Description |
---|---|
ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin |
ansibleHost | SSH host for the Ansible connection. |
ansiblePort | SSH port for the Ansible connection. |
ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. Note: The ansibleVars that you configure for a specific node override the ansibleVars configured in the nodeTemplate section. |
ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars entry with a duplicate key take precedence. |
6.6.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
Field | Description |
---|---|
prefix | An optional identifier to prepend to each key in the ConfigMap or Secret. |
configMapRef | The ConfigMap CR to select the Ansible variables from. |
secretRef | The Secret CR to select the Ansible variables from. |
6.7. Deploying the data plane
You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.
When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically run Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one, so that the new OpenStackDataPlaneDeployment can run Ansible with an updated Secret.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack
- metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
- Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy, as shown in the sketch after this step.
- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
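The original example is not reproduced in this copy. A minimal sketch of the spec addition, assuming the node sets are listed under spec.nodeSets:
spec:
  nodeSets:
  - <nodeSet_name>
  - <nodeSet_name>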
- Save the openstack_data_plane_deploy.yaml deployment file.
Deploy the data plane:
$ oc create -f openstack_data_plane_deploy.yaml -n openstack
You can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10
If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:
error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Verify that the data plane is deployed:
$ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
$ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>
Replace <timeout_value> with the length of time, in minutes, that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. If the Ready condition is not met within this time frame for the oc wait openstackdataplanedeployment or oc wait openstackdataplanenodeset command, the command returns a timeout error. Use a value that is appropriate to the size of your deployment, and give larger deployments more time to complete deployment tasks.
For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Map the Compute nodes to the Compute cell that they are connected to:
$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
If you did not create additional cells, this command maps the Compute nodes to cell1.
Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:
$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.
Verify that the hypervisor hostname is a fully qualified domain name (FQDN):
$ hostname -f
If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.
6.8. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicate the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
Condition | Description |
---|---|
Ready | "True": The node set deployment has succeeded. "False": The deployment has not started, is in progress, or has failed. |
SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
DeploymentReady | "True": The NodeSet has been successfully deployed. |
InputReady | "True": The required inputs are available and ready. |
NodeSetDNSDataReady | "True": DNSData resources are ready. |
NodeSetIPReservationReady | "True": The IPSet resources are ready. |
NodeSetBaremetalProvisionReady | "True": Bare-metal nodes are provisioned and ready. |
Status field | Description |
---|---|
|
|
| |
|
Condition | Description |
---|---|
Ready | "True": The deployment has completed successfully. "False": The deployment is in progress or has failed. |
DeploymentReady | "True": The data plane is successfully deployed. |
InputReady | "True": The required inputs are available and ready. |
<NodeSet> Deployment Ready | "True": The deployment has succeeded for the named NodeSet. |
<NodeSet> <Service> Deployment Ready | "True": The deployment has succeeded for the named NodeSet and service. |
Status field | Description |
---|---|
|
|
Condition | Description |
---|---|
Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created. |
6.9. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
6.9.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:
$ oc get openstackdataplanedeployment
The following example output shows a deployment in progress:
$ oc get openstackdataplanedeployment
NAME           NODESETS                  STATUS   MESSAGE
edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
Retrieve and inspect Ansible execution jobs.
The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list jobs for each OpenStackDataPlaneDeployment by using the label, as shown in the sketch after this step.
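The original command is not reproduced in this copy. A sketch of such a command, assuming the label key openstackdataplanedeployment carries the deployment name:
$ oc get jobs -l openstackdataplanedeployment=<deployment_name> -n openstack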
You can check logs by using oc logs -f job/<job-name>. For example, to check the logs from the configure-network job:
$ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
PLAY RECAP *********************************************************************
edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
6.9.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
6.9.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve pods with the OpenStackAnsibleEE label:
$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s
SSH into the pod you want to check:
- Pod that is running:
$ oc rsh validate-network-edpm-compute-6g7n9
- Pod that is not running:
$ oc debug configure-network-edpm-compute-j6r4l
List the directories in the /runner/artifacts mount:
$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute
View the stdout for the required artifact:
$ cat /runner/artifacts/configure-network-edpm-compute/stdout