Chapter 5. Creating the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet
custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet
CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet
CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet
CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap
CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality:
- To enable secure access between nodes:
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each node set in your data plane.
  - You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes.
- To register the operating system of the nodes that are not registered to the Red Hat Customer Portal.
- To enable repositories for the nodes.
- To provide access to libvirt.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For information, see Configuring reserved user and group IDs in the RHEL Configuring basic system settings guide.
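For example, the following commands sketch one way to prepare such a user on a pre-provisioned node. The cloud-admin user name matches the examples in this chapter; the key file name is a placeholder:

$ sudo useradd -m cloud-admin
# Grant passwordless sudo privileges
$ echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cloud-admin
# Install the SSH public key that Ansible will use
$ sudo mkdir -p /home/cloud-admin/.ssh
$ cat <key_file_name>.pub | sudo tee -a /home/cloud-admin/.ssh/authorized_keys
$ sudo chown -R cloud-admin:cloud-admin /home/cloud-admin/.ssh
$ sudo chmod 700 /home/cloud-admin/.ssh
$ sudo chmod 600 /home/cloud-admin/.ssh/authorized_keys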
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:
$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
- Replace <key_file_name> with the name to use for the key pair.
Create the Secret CR for Ansible and apply it to the cluster:

$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --save-config \
  --dry-run=client \
  [--from-file=authorized_keys=<key_file_name>.pub \]
  --from-file=ssh-privatekey=<key_file_name> \
  --from-file=ssh-publickey=<key_file_name>.pub \
  -n openstack \
  -o yaml | oc apply -f -
- Replace <key_file_name> with the name and location of your SSH key pair file.
- Include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
Create the Secret CR for migration and apply it to the cluster:

$ oc create secret generic nova-migration-ssh-key \
  --save-config \
  --from-file=ssh-privatekey=nova-migration-ssh-key \
  --from-file=ssh-publickey=nova-migration-ssh-key.pub \
  -n openstack \
  -o yaml | oc apply -f -
Create a file on your workstation named secret_subscription.yaml that contains the subscription-manager credentials for registering the operating system of the nodes that are not registered to the Red Hat Customer Portal:

apiVersion: v1
kind: Secret
metadata:
  name: subscription-manager
  namespace: openstack
data:
  username: <base64_username>
  password: <base64_password>
Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

$ echo -n <string> | base64
Tip: If you don’t want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
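For example, a minimal sketch of the same Secret CR defined with stringData instead of data; the values shown are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: subscription-manager
  namespace: openstack
stringData:
  username: <username>
  password: <password>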
Create the Secret CR:

$ oc create -f secret_subscription.yaml -n openstack
Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
- Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>
Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64
Tip: If you don’t want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack
Verify that the Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
5.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes
Define an OpenStackDataPlaneNodeSet
custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet
CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1
. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet
CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
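For example, a minimal sketch of this override behavior; the alternative user name is illustrative, not a value from this guide:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin   # applies to every node in the node set
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0   # inherits ansibleUser: cloud-admin
  edpm-compute-1:
    hostName: edpm-compute-1
    ansible:
      ansibleUser: alt-admin   # node-specific value overrides the template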
For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.
Procedure
Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane 1
  namespace: openstack
spec:
  env: 2
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.
2 Optional: A list of environment variables to pass to the pod.
Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
    - ctlplane
Specify that the nodes in this set are pre-provisioned:
preProvisioned: true
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
- Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeType to Filesystem and accessModes to ReadWriteOnce. For information on how to create a PVC, see Understanding persistent storage in the RHOCP Storage guide.
- Enable persistent logging for the data plane nodes:

nodeTemplate:
  ...
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
      mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
- Replace <pvc_name> with the name of the Persistent Volume Claim (PVC) storage on your RHOCP cluster.
Specify the management network:

nodeTemplate:
  ...
  managementNetwork: ctlplane
Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ...
  ansible:
    ansibleUser: cloud-admin 1
    ansiblePort: 22
    ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars: 2
      edpm_bootstrap_command: |
        subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
        subscription-manager release --set=9.4
        subscription-manager repos --disable=*
        subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
      edpm_bootstrap_release_version_package: []
1 The user associated with the secret you created in Creating the data plane secrets.
2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      edpm_network_config_os_net_config_mappings:
        edpm-compute-0:
          nic1: 52:54:04:60:55:22 1
      neutron_physical_bridge_name: br-ex
      neutron_public_interface_name: eth0
      edpm_network_config_update: false 2
      edpm_network_config_template: |
        ---
        {% set mtu_list = [ctlplane_mtu] %}
        {% for network in nodeset_networks %}
        {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
        {%- endfor %}
        {% set min_viable_mtu = mtu_list | max %}
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          mtu: {{ min_viable_mtu }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: interface
            name: nic1
            mtu: {{ min_viable_mtu }}
            # force the MAC address of the bridge to this interface
            primary: true
          {% for network in nodeset_networks %}
          - type: vlan
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
Note: You must reset the edpm_network_config_update variable to false after the updated network configuration is applied in a new OpenStackDataPlaneDeployment CR, otherwise the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service.

For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.
- Define each node in this node set:

nodes:
  edpm-compute-0: 1
    hostName: edpm-compute-0
    networks: 2
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100 3
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars: 4
        fqdn_internal_api: edpm-compute-0.example.com
  edpm-compute-1:
    hostName: edpm-compute-1
    networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-1.example.com
Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

- Save the openstack_preprovisioned_node_set.yaml definition file.
- Create the data plane resources:

$ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:

$ oc get secret | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s
Verify the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           46m
ceph-client         46m
ceph-hci-pre        46m
configure-network   46m
configure-os        46m
...
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
          - name: ansible-logs
            persistentVolumeClaim:
              claimName: <pvc_name>
        mounts:
          - name: ansible-logs
            mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - prefix: subscription_manager_
          secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
            {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.100
        - name: internalapi
          subnetName: subnet1
          fixedIP: 172.17.0.100
        - name: storage
          subnetName: subnet1
          fixedIP: 172.18.0.100
        - name: tenant
          subnetName: subnet1
          fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.101
        - name: internalapi
          subnetName: subnet1
          fixedIP: 172.17.0.101
        - name: storage
          subnetName: subnet1
          fixedIP: 172.18.0.101
        - name: tenant
          subnetName: subnet1
          fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com
5.4. Creating a data plane with unprovisioned nodes
To create a data plane with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
- Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.
For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
Prerequisites
- Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- To provision data plane nodes with PXE network boot, a bare-metal provisioning network must be available in your Red Hat OpenShift Container Platform (RHOCP) cluster.

Note: You do not need a provisioning network to provision nodes with virtual media.
- A Provisioning CR is available in RHOCP. For more information about creating a Provisioning CR, see Configuring a provisioning resource to scale user-provisioned clusters in the Red Hat OpenShift Container Platform (RHOCP) Installing on bare metal guide.
5.4.1. Creating the BareMetalHost CRs for unprovisioned nodes
You must create a BareMetalHost
custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range different from the ctlplane address range to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.
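For example, a minimal sketch of a Provisioning CR that keeps the DHCP range on a dedicated provisioning subnet, outside the ctlplane address range; the interface name and addresses are illustrative, not prescriptive:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed
  provisioningInterface: enp2s0          # assumption: dedicated provisioning NIC
  provisioningIP: 172.22.0.3
  provisioningNetworkCIDR: 172.22.0.0/24
  # DHCP range on the provisioning subnet, not the ctlplane range
  provisioningDHCPRange: 172.22.0.10,172.22.0.100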
Procedure
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>
Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

$ echo -n <string> | base64
Tip: If you don’t want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
  labels:
    app: openstack
    workload: compute
spec:
  bmc:
    address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 1
    credentialsName: edpm-compute-0-bmc-secret 2
  bootMACAddress: 00:c7:e4:a7:e7:f3
  bootMode: UEFI
  online: false
1 The URL for communicating with the node’s BMC controller. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
2 The name of the Secret CR you created in the previous step for accessing the BMC of the node.
For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide.

Create the BareMetalHost resources:

$ oc create -f bmh_nodes.yaml
Verify that the BareMetalHost resources have been created and are in the Available state:

$ oc get bmh
NAME             STATE       CONSUMER         ONLINE   ERROR   AGE
edpm-compute-0   Available   openstack-edpm   true             2d21h
edpm-compute-1   Available   openstack-edpm   true             2d21h
...
5.4.2. Creating an OpenStackDataPlaneNodeSet CR with unprovisioned nodes
Define an OpenStackDataPlaneNodeSet
custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet
CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1
. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet
CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
For an example OpenStackDataPlaneNodeSet
CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet
CR for unprovisioned nodes.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes.
Procedure
Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane 1
  namespace: openstack
spec:
  tlsEnabled: true
  env: 2
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), must start and end with an alphanumeric character, and must have a maximum length of 20 characters. Update the name in this example to a name that reflects the nodes in the set.
2 Optional: A list of environment variables to pass to the pod.
Connect the data plane to the control plane network:

spec:
  ...
  networkAttachments:
    - ctlplane
Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:
preProvisioned: false
Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>
  dnsSearchDomains:
    - osptest.openstack.org
- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>
- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) on your RHOCP cluster to store logs. Set the volumeType to Filesystem and accessModes to ReadWriteOnce. For information on how to create a PVC, see Understanding persistent storage in the RHOCP Storage guide.
- Enable persistent logging for the data plane nodes:

nodeTemplate:
  ...
  extraMounts:
    - extraVolType: Logs
      volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
      mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
- Replace <pvc_name> with the name of the Persistent Volume Claim (PVC) storage on your RHOCP cluster.
Specify the management network:

nodeTemplate:
  ...
  managementNetwork: ctlplane
Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin 1
    ansiblePort: 22
    ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
    ansibleVars: 2
      edpm_bootstrap_command: |
        subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
        subscription-manager release --set=9.4
        subscription-manager repos --disable=*
        subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
      edpm_bootstrap_release_version_package: []
1 The user associated with the secret you created in Creating the data plane secrets.
2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      edpm_network_config_os_net_config_mappings:
        edpm-compute-0:
          nic1: 52:54:04:60:55:22 1
        edpm-compute-1:
          nic1: 52:54:04:60:55:22
      neutron_physical_bridge_name: br-ex
      neutron_public_interface_name: eth0
      edpm_network_config_update: false 2
      edpm_network_config_template: |
        ---
        {% set mtu_list = [ctlplane_mtu] %}
        {% for network in nodeset_networks %}
        {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
        {%- endfor %}
        {% set min_viable_mtu = mtu_list | max %}
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          mtu: {{ min_viable_mtu }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: interface
            name: nic1
            mtu: {{ min_viable_mtu }}
            # force the MAC address of the bridge to this interface
            primary: true
          {% for network in nodeset_networks %}
          - type: vlan
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
Note: You must reset the edpm_network_config_update variable to false after the updated network configuration is applied in a new OpenStackDataPlaneDeployment CR, otherwise the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service.

For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
- Define each node in this node set:

nodes:
  edpm-compute-0: 1
    hostName: edpm-compute-0
    networks: 2
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100 3
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars: 4
        fqdn_internal_api: edpm-compute-0.example.com
    bmhLabelSelector: 5
      nodeName: edpm-compute-0
  edpm-compute-1:
    hostName: edpm-compute-1
    networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-1.example.com
    bmhLabelSelector:
      nodeName: edpm-compute-1
Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
- Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".
5 Optional: The BareMetalHost CR label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node.
For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

- Save the openstack_unprovisioned_node_set.yaml definition file.
- Create the data plane resources:

$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states.
Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s
Verify that the nodes have transitioned to the provisioned state:

$ oc get bmh
NAME             STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0   provisioned   openstack-data-plane   true             3d21h
Verify that the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           8m40s
ceph-client         8m40s
ceph-hci-pre        8m40s
configure-network   8m40s
configure-os        8m40s
...
5.4.3. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: false
  baremetalSetTemplate:
    deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
    bmhNamespace: openshift-machine-api
    cloudUserName: cloud-admin
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp1s0
    dnsSearchDomains:
      - osptest.openstack.org
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
          - name: ansible-logs
            persistentVolumeClaim:
              claimName: <pvc_name>
        mounts:
          - name: ansible-logs
            mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - prefix: subscription_manager_
          secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
          edpm-compute-1:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
            {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.100
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.101
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com
5.5. OpenStackDataPlaneNodeSet CR spec properties
The following sections detail the OpenStackDataPlaneNodeSet
CR spec
properties you can configure.
5.5.1. nodeTemplate
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet
. You can override these common attributes in the definition for each individual node.
Field | Description
---|---
ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret
managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane
networks | Network definitions for the OpenStackDataPlaneNodeSet.
ansible | Ansible configuration options. For more information, see ansible properties.
extraMounts | The files to mount into an Ansible Execution Pod.
userData | UserData configuration for the node set.
networkData | NetworkData configuration for the node set.
5.5.2. nodes
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet
. Overrides the common attributes defined in the nodeTemplate
.
Field | Description
---|---
ansible | Ansible configuration options. For more information, see ansible properties.
extraMounts | The files to mount into an Ansible Execution Pod.
hostName | The node name.
managementNetwork | Name of the network to use for management (SSH/Ansible).
networkData | NetworkData configuration for the node.
networks | Instance networks.
userData | Node-specific user data.
5.5.3. ansible
Defines the group of Ansible configuration options.
Field | Description
---|---
ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin
ansibleHost | SSH host for the Ansible connection.
ansiblePort | SSH port for the Ansible connection.
ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each service. Note: Node-specific ansibleVars values override the values set in the nodeTemplate section.
ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars with a duplicate key take precedence.
5.5.4. ansibleVarsFrom
Defines the list of sources to populate Ansible variables from.
Field | Description
---|---
prefix | An optional identifier to prepend to each key in the ConfigMap or Secret.
configMapRef | The ConfigMap CR to select the Ansible variables from.
secretRef | The Secret CR to select the Ansible variables from.
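For example, in the registration example earlier in this chapter, the subscription_manager_ prefix combines with the username and password keys of the subscription-manager Secret, so the values become available as the Ansible variables subscription_manager_username and subscription_manager_password:

ansibleVarsFrom:
  - prefix: subscription_manager_
    secretRef:
      name: subscription-manager
# Secret key "username" -> Ansible variable "subscription_manager_username"
# Secret key "password" -> Ansible variable "subscription_manager_password"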
5.6. Deploying the data plane
You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment completes successfully, it does not automatically run Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.
Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy 1
  namespace: openstack
1 The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

spec:
  nodeSets:
    - openstack-data-plane
    - <nodeSet_name>
    - ...
    - <nodeSet_name>
- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the openstack_data_plane_deploy.yaml deployment file.
- Deploy the data plane:

$ oc create -f openstack_data_plane_deploy.yaml -n openstack
You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10
If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Verify that the data plane is deployed:

$ oc get openstackdataplanedeployment -n openstack
NAME                STATUS   MESSAGE
data-plane-deploy   True     Setup Complete

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready
For information about the meaning of the returned status, see Data plane conditions and states.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.
Map the Compute nodes to the Compute cell that they are connected to:

$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
If you did not create additional cells, this command maps the Compute nodes to cell1.

Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
5.7. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, the Ready condition is False until an OpenStackDataPlaneDeployment has started and finished successfully. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until that deployment succeeds, at which point it is set back to True.
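For example, you can block until a node set reports the Ready condition, in the same way that the procedures in this chapter wait for SetupReady; the timeout value shown is arbitrary:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=Ready --timeout=30m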
Conditions for an OpenStackDataPlaneNodeSet CR:

Condition | Description
---|---
Ready | "True": The node set deployment is complete and successful. "False": The deployment is not yet complete or has failed.
SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.
DeploymentReady | "True": The NodeSet has been successfully deployed.
InputReady | "True": The required inputs are available and ready.
DNSDataReady | "True": DNSData resources are ready.
IPReservationReady | "True": The IPSet resources are ready.
NodeSetBaremetalProvisionReady | "True": Bare-metal nodes are provisioned and ready.
Conditions for an OpenStackDataPlaneDeployment CR:

Condition | Description
---|---
Ready | "True": The deployment is complete and successful. "False": The deployment is in progress or has failed.
DeploymentReady | "True": The data plane is successfully deployed.
InputReady | "True": The required inputs are available and ready.
<NodeSet> Deployment Ready | "True": The deployment has succeeded for the named NodeSet.
<NodeSet> <Service> Deployment Ready | "True": The deployment has succeeded for the named NodeSet and service.
Conditions for an OpenStackDataPlaneService CR:

Condition | Description
---|---
Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created.
5.8. Troubleshooting data plane creation and deployment
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
5.8.1. Checking the job condition message for a service
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:
$ oc get openstackdataplanedeployment
The following example output shows two deployments currently in progress:

$ oc get openstackdataplanedeployment
NAME                NODESETS                     STATUS   MESSAGE
data-plane-deploy   ["openstack-data-plane-1"]   False    Deployment in progress
data-plane-deploy   ["openstack-data-plane-2"]   False    Deployment in progress
Determine the name and status of all services and their job condition:
$ oc get openstackansibleee
The following example output shows all services and their job condition for all current deployments:

$ oc get openstackansibleee
NAME                             NETWORKATTACHMENTS   STATUS   MESSAGE
bootstrap-openstack-edpm         ["ctlplane"]         True     Job complete
download-cache-openstack-edpm    ["ctlplane"]         False    Job is running
repo-setup-openstack-edpm        ["ctlplane"]         True     Job complete
validate-network-another-osdpd   ["ctlplane"]         False    Job is running
For information on the job condition messages, see Job condition messages.
Filter for the name and service for a specific deployment:
$ oc get openstackansibleee -l \
  openstackdataplanedeployment=<deployment_name>

Replace <deployment_name> with the name of the deployment to use to filter the services list. The following example filters the list to only show services and their job condition for the data-plane-deploy deployment:

$ oc get openstackansibleee -l \
  openstackdataplanedeployment=data-plane-deploy
NAME                            NETWORKATTACHMENTS   STATUS   MESSAGE
bootstrap-openstack-edpm        ["ctlplane"]         True     Job complete
download-cache-openstack-edpm   ["ctlplane"]         False    Job is running
repo-setup-openstack-edpm       ["ctlplane"]         True     Job complete
5.8.1.1. Job condition messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE
field of the oc get openstackansibleee
command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>
. For example, to view the logs for the repo-setup-openstack-edpm
service, use the command oc logs job/repo-setup-openstack-edpm
.
5.8.2. Checking the logs for a node set
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve pods with the OpenStackAnsibleEE label:

$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s
SSH into the pod you want to check:
Pod that is running:
$ oc rsh validate-network-edpm-compute-6g7n9
Pod that is not running:
$ oc debug configure-network-edpm-compute-j6r4l
List the directories in the /runner/artifacts mount:

$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute
View the stdout for the required artifact:

$ cat /runner/artifacts/configure-network-edpm-compute/stdout