Deploying Red Hat OpenStack Services on OpenShift
Deploying a Red Hat OpenStack Services on OpenShift environment on a Red Hat OpenShift Container Platform cluster
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Select Red Hat OpenStack Services on OpenShift as the Project.
- Select Bug as the Issue Type.
- Click Next.
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Select documentation as the Component.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Installing and preparing the OpenStack Operator
You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP OperatorHub. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.
For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat Knowledgebase article How RHOSO versions map to OpenStack Operators and OpenStackVersion CRs.
1.1. Prerequisites
- An operational RHOCP cluster, version 4.18. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- For the minimum RHOCP hardware requirements for hosting your RHOSO control plane, see Minimum RHOCP hardware requirements.
- For the minimum RHOCP network requirements, see RHOCP network requirements.
- For a list of the Operators that must be installed before you install the openstack-operator, see RHOCP software requirements.
- The oc command line tool is installed on your workstation.
- You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
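You can confirm the last two prerequisites from your workstation. A minimal check, assuming you are already logged in with oc:

$ oc version --client
$ oc auth can-i '*' '*' --all-namespaces

The second command prints "yes" when your account has cluster-admin privileges.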
1.2. Installing the OpenStack Operator by using the web console
You can use the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.
Procedure
- Log in to the RHOCP web console as a user with cluster-admin permissions.
- Select Operators → OperatorHub.
- In the Filter by keyword field, type OpenStack.
- Click the OpenStack Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
- On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
- On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
- Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
- Click Create OpenStack to open the Create OpenStack page.
- On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
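If you prefer to confirm the web console installation from a terminal, the following oc commands (assuming the default openstack-operators namespace) show the Operator CSV phase and the initialization resource:

$ oc get csv -n openstack-operators
$ oc get openstack openstack -n openstack-operators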
1.3. Installing the OpenStack Operator by using the CLI
You can use the Red Hat OpenShift Container Platform (RHOCP) CLI (oc) to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub.
To install the OpenStack Operator by using the CLI, you create the openstack-operators namespace for the Red Hat OpenStack Platform (RHOSP) service Operators. You then create the OperatorGroup and Subscription custom resources (CRs) within the namespace. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.
Procedure
- Create the openstack-operators namespace for the RHOSP operators:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openstack-operators
spec:
  finalizers:
  - kubernetes
EOF

- Create the OperatorGroup CR in the openstack-operators namespace:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openstack
  namespace: openstack-operators
EOF

- Create the Subscription CR that subscribes to openstack-operator:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openstack-operator
  namespace: openstack-operators
spec:
  name: openstack-operator
  channel: stable-v1.0
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
EOF

- Wait for the install plan to be created and retrieve its name:

$ oc get installplan -n openstack-operators -o json | jq -r '.items[] | select(.spec.approval=="Manual" and .spec.approved==false) | .metadata.name' | head -n1

- Approve the install plan, replacing <install_plan_name> with the name retrieved in the previous step:

$ oc patch installplan <install_plan_name> -n openstack-operators --type merge -p '{"spec":{"approved":true}}'

- Verify that the OpenStack Operator is installed:

$ oc wait csv -n openstack-operators \
  -l operators.coreos.com/openstack-operator.openstack-operators="" \
  --for jsonpath='{.status.phase}'=Succeeded

- Create an instance of the openstack-operator:

$ cat << EOF | oc apply -f -
apiVersion: operator.openstack.org/v1beta1
kind: OpenStack
metadata:
  name: openstack
  namespace: openstack-operators
EOF

- Confirm that the OpenStack Operator is deployed:

$ oc wait openstack/openstack -n openstack-operators --for condition=Ready --timeout=500s
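Because installPlanApproval is set to Manual, future Operator updates also wait for approval. You can check which cluster service version the subscription currently resolves to; .status.currentCSV is a standard OLM Subscription field:

$ oc get subscription openstack-operator -n openstack-operators -o jsonpath='{.status.currentCSV}{"\n"}'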
Additional resources
Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift
You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.
2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment
Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.
You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:
- The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
- The cinder-api service has high network usage due to resource listing requests.
- The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
- The cinder-backup service has high memory, network, and CPU requirements.
Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
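A minimal sketch of the nodeSelector approach follows; the node name worker-0 and the label type=cinder-api are illustrative placeholders:

$ oc label node worker-0 type=cinder-api

You can then reference the label for the relevant service in your OpenStackControlPlane CR:

spec:
  cinder:
    template:
      cinderAPI:
        nodeSelector:
          type: cinder-api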
Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.
2.2. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
Procedure
- Create the openstack project for the deployed RHOSO environment:

$ oc new-project openstack

- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

$ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
{
  "kubernetes.io/metadata.name": "openstack",
  "pod-security.kubernetes.io/enforce": "privileged",
  "security.openshift.io/scc.podSecurityLabelSync": "false"
}

If the security context constraint (SCC) is not "privileged", use the following commands to change it:

$ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

- Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

$ oc project openstack
2.3. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.
For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.
You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.
Prerequisites
- You have installed python3-cryptography.
Procedure
- Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml.
- Add the following initial configuration to openstack_service_secret.yaml:

apiVersion: v1
data:
  AdminPassword: <base64_password>
  AodhPassword: <base64_password>
  BarbicanPassword: <base64_password>
  BarbicanSimpleCryptoKEK: <base64_fernet_key>
  CeilometerPassword: <base64_password>
  CinderPassword: <base64_password>
  DbRootPassword: <base64_password>
  DesignatePassword: <base64_password>
  GlancePassword: <base64_password>
  HeatAuthEncryptionKey: <base64_password>
  HeatPassword: <base64_password>
  IronicInspectorPassword: <base64_password>
  IronicPassword: <base64_password>
  ManilaPassword: <base64_password>
  MetadataSecret: <base64_password>
  NeutronPassword: <base64_password>
  NovaPassword: <base64_password>
  OctaviaPassword: <base64_password>
  PlacementPassword: <base64_password>
  SwiftPassword: <base64_password>
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque

- Replace <base64_password> with a 32-character key that is base64 encoded.

Note
The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

You can use the following command to manually generate a base64 encoded password:

$ echo -n <password> | base64

Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

- Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

$(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)

- Create the Secret CR in the cluster:

$ oc create -f openstack_service_secret.yaml -n openstack

- Verify that the Secret CR is created:

$ oc describe secret osp-secret -n openstack
2.3.1. Example Secret CR for secure access to the RHOSO service pods
You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:
$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
name: osp-secret
namespace: openstack
type: Opaque
EOF
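After you generate the file, create the Secret CR in the cluster and verify it, as shown in the procedure in the previous section:

$ oc create -f openstack_service_secret.yaml -n openstack
$ oc describe secret osp-secret -n openstack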
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.
If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane. For information about adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.
3.1. Networks for Red Hat OpenStack Services on OpenShift
Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.
- Control plane network
- Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- Designate network
- Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
- Designateext network
- Used to provide external access to the DNS service resolver and the DNS servers.
- External network
An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
Note
When an external network is used for workloads, an OVN gateway is required in some use cases. For more information about use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.
- Internal API network
- Used for internal communication between RHOSO components.
- Octavia network
- Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
- Storage network
- Used for block storage, RBD, NFS, FC, and iSCSI.
- Storage Management network
An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.
Note
For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.
- Tenant (project) network
- Used for data communication between virtual machine instances within the cloud deployment.
Figure 3.1. Physical networks for RHOSO
The following table details the default networks used in a RHOSO deployment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
3.2. Preparing RHOCP for RHOSO networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
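For example, a minimal nmstate interface sketch for such a bond, assuming two physical interfaces named eno1 and eno2 (adjust the names and bonding mode for your hardware); this snippet belongs under the desiredState.interfaces list of a NodeNetworkConfigurationPolicy:

- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: active-backup
    port:
    - eno1
    - eno2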
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
3.2.1. Preparing RHOCP with isolated network interfaces
You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:

$ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

- Discover the network configuration:

$ oc get nns/<worker_node> -o yaml | more

Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Networks for Red Hat OpenStack Services on OpenShift.

In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-enp6s0-worker-1
spec:
  desiredState:
    interfaces:
    - description: internalapi vlan interface
      ipv4:
        address:
        - ip: 172.17.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: internalapi
      state: up
      type: vlan
      vlan:
        base-iface: enp6s0
        id: 20
        reorder-headers: true
    - description: storage vlan interface
      ipv4:
        address:
        - ip: 172.18.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: storage
      state: up
      type: vlan
      vlan:
        base-iface: enp6s0
        id: 21
        reorder-headers: true
    - description: tenant vlan interface
      ipv4:
        address:
        - ip: 172.19.0.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      name: tenant
      state: up
      type: vlan
      vlan:
        base-iface: enp6s0
        id: 22
        reorder-headers: true
    - description: Configuring enp6s0
      ipv4:
        address:
        - ip: 192.168.122.10
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
      mtu: 1500
      name: enp6s0
      state: up
      type: ethernet
    - description: octavia vlan interface
      name: octavia
      state: up
      type: vlan
      vlan:
        base-iface: enp6s0
        id: 24
        reorder-headers: true
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp6s0.24
      description: Configuring bridge octbr
      mtu: 1500
      name: octbr
      state: up
      type: linux-bridge
    - description: designate vlan interface
      ipv4:
        address:
        - ip: 172.26.0.10
          prefix-length: "24"
        dhcp: false
        enabled: true
      ipv6:
        enabled: false
      mtu: 1500
      name: designate
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: "25"
        reorder-headers: true
    - description: designate external vlan interface
      ipv4:
        address:
        - ip: 172.34.0.10
          prefix-length: "24"
        dhcp: false
        enabled: true
      ipv6:
        enabled: false
      mtu: 1500
      name: designateext
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: "26"
        reorder-headers: true
  nodeSelector:
    kubernetes.io/hostname: worker-1
    node-role.kubernetes.io/worker: ""
- Create the nncp CR in the cluster:

$ oc apply -f openstack-nncp.yaml

- Verify that the nncp CR is created:

$ oc get nncp -w
NAME                  STATUS        REASON
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
osp-enp6s0-worker-1   Available     SuccessfullyConfigured
3.2.2. Attaching service pods to the isolated networks
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
If you frequently recreate pods in your environment, use the Whereabouts reconciler to manage dynamic IP address assignments for the pods. For more information, see Creating a whereabouts-reconciler daemon set in the RHOCP Multiple networks guide.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks:
  - internalapi, storage, ctlplane, and tenant networks of type macvlan.
  - octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
  - designate network used internally by the DNS service (designate) to manage the DNS servers.
  - designateext network used to provide external access to the DNS service resolver and the DNS servers.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "internalapi",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.30",
        "range_end": "172.17.0.70"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ctlplane
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "ctlplane",
      "type": "macvlan",
      "master": "enp6s0",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.122.0/24",
        "range_start": "192.168.122.30",
        "range_end": "192.168.122.70"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "storage",
      "type": "macvlan",
      "master": "storage",
      "ipam": {
        "type": "whereabouts",
        "range": "172.18.0.0/24",
        "range_start": "172.18.0.30",
        "range_end": "172.18.0.70"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tenant",
      "type": "macvlan",
      "master": "tenant",
      "ipam": {
        "type": "whereabouts",
        "range": "172.19.0.0/24",
        "range_start": "172.19.0.30",
        "range_end": "172.19.0.70"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  labels:
    osp/net: octavia
  name: octavia
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "octavia",
      "type": "bridge",
      "bridge": "octbr",
      "ipam": {
        "type": "whereabouts",
        "range": "172.23.0.0/24",
        "range_start": "172.23.0.30",
        "range_end": "172.23.0.70",
        "routes": [
          {
            "dst": "172.24.0.0/16",
            "gw": "172.23.0.150"
          }
        ]
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: designate
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "designate",
      "type": "macvlan",
      "master": "designate",
      "ipam": {
        "type": "whereabouts",
        "range": "172.26.0.0/16",
        "range_start": "172.26.0.30",
        "range_end": "172.26.0.70"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: designateext
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "designateext",
      "type": "macvlan",
      "master": "designateext",
      "ipam": {
        "type": "whereabouts",
        "range": "172.34.0.0/16",
        "range_start": "172.34.0.30",
        "range_end": "172.34.0.70"
      }
    }
- metadata.namespace: The namespace where the services are deployed.
- "master": The node interface name associated with the network, as defined in the nncp CR.
- "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
- "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
- Create the NetworkAttachmentDefinition CR in the cluster:

$ oc apply -f openstack-net-attach-def.yaml

- Verify that the NetworkAttachmentDefinition CR is created:

$ oc get net-attach-def -n openstack
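Service pods are attached to these networks automatically through the networkAttachments fields that you set later in the OpenStackControlPlane CR. For a quick manual connectivity test, you can attach a short-lived pod to one of the networks yourself by using the standard Multus annotation; the pod name and image here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: net-attach-test
  namespace: openstack
  annotations:
    k8s.v1.cni.cncf.io/networks: internalapi
spec:
  containers:
  - name: test
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "3600"]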
3.2.3. Preparing RHOCP for RHOSO network VIPs
You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: ctlplane
spec:
  addresses:
  - 192.168.122.80-192.168.122.90
  autoAssign: true
  avoidBuggyIPs: false
  serviceAllocation:
    namespaces:
    - openstack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.90
  autoAssign: true
  avoidBuggyIPs: false
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: designateext
spec:
  addresses:
  - 172.34.0.80-172.34.0.120
  autoAssign: true
  avoidBuggyIPs: false

- spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.
- spec.serviceAllocation: Specify the namespaces that can consume IP addresses from the IPAddressPool range. This is an optional field that you can configure to prevent non-RHOSO services hosted on your RHOCP cluster from consuming IP addresses from the IPAddressPool range.
For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

- Create the IPAddressPool CR in the cluster:

$ oc apply -f openstack-ipaddresspools.yaml

- Verify that the IPAddressPool CR is created:

$ oc describe -n metallb-system IPAddressPool
Create a
L2AdvertisementCR file on your workstation, for example,openstack-l2advertisement.yaml. In the
L2AdvertisementCR file, configureL2AdvertisementCRs to define which node advertises a service to the local network. Create oneL2Advertisementresource for each network.In the following example, each
L2AdvertisementCR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: ctlplane namespace: metallb-system spec: ipAddressPools: - ctlplane interfaces: - enp6s0 nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: "" --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: internalapi namespace: metallb-system spec: ipAddressPools: - internalapi interfaces: - internalapi nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: "" --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: designateext namespace: metallb-system spec: ipAddressPools: - designateext interfaces: - designateext nodeSelectors: - matchLabels: node-role.kubernetes.io/worker: ""-
spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.
For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with an L2 advertisement and label in the RHOCP Networking guide.

- Create the L2Advertisement CRs in the cluster:

$ oc apply -f openstack-l2advertisement.yaml

- Verify that the L2Advertisement CRs are created:

$ oc get -n metallb-system L2Advertisement
NAME           IPADDRESSPOOLS       IPADDRESSPOOL SELECTORS   INTERFACES
ctlplane       ["ctlplane"]                                   ["enp6s0"]
designateext   ["designateext"]                               ["designateext"]
internalapi    ["internalapi"]                                ["internalapi"]
storage        ["storage"]                                    ["storage"]
tenant         ["tenant"]                                     ["tenant"]

- If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface. Check the network back end used by your cluster:

$ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
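To confirm that the patch took effect, read the field back; the same path that is used in the patch should now return Global:

$ oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}{"\n"}'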
3.3. Creating the data plane network
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment.
Use the following commands to view the NetConfig CRD definition and specification schema:
$ oc describe crd netconfig
$ oc explain netconfig.spec
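If the NetConfig CRD publishes a structural schema, oc explain can also drill into nested fields, for example:

$ oc explain netconfig.spec.networks.subnets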
Procedure
- Create a file named openstack_netconfig.yaml on your workstation.
- Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: openstacknetconfig
  namespace: openstack
- In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.

Note
If you are using pre-provisioned data plane nodes, then the control plane network and IP address must match the pre-provisioned data plane nodes. If the ctlplane network uses tagged VLANs, then the VLAN ID must also match the VLAN ID on the pre-provisioned data plane node.

The following example creates isolated networks for the data plane:

spec:
  networks:
  - name: ctlplane
    dnsDomain: ctlplane.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 192.168.122.120
        start: 192.168.122.100
      - end: 192.168.122.200
        start: 192.168.122.150
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      excludeAddresses:
      - 172.17.0.10
      - 172.17.0.12
      cidr: 172.17.0.0/24
      vlan: 20
  - name: external
    dnsDomain: external.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 10.0.0.250
        start: 10.0.0.100
      cidr: 10.0.0.0/24
      gateway: 10.0.0.1
  - name: storage
    dnsDomain: storage.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.18.0.250
        start: 172.18.0.100
      cidr: 172.18.0.0/24
      vlan: 21
  - name: tenant
    dnsDomain: tenant.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.19.0.250
        start: 172.19.0.100
      cidr: 172.19.0.0/24
      vlan: 22
- spec.networks.name: The name of the network, for example, ctlplane.
- spec.networks.subnets: The IPv4 subnet specification.
- spec.networks.subnets.name: The name of the subnet, for example, subnet1.
- spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
- spec.networks.subnets.excludeAddresses: Optional: List of IP addresses from the allocation range that must not be used by data plane nodes.
- spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.
- Save the openstack_netconfig.yaml definition file.
- Create the data plane network:

$ oc create -f openstack_netconfig.yaml -n openstack

- To verify that the data plane network is created, view the openstacknetconfig resource:

$ oc get netconfig/openstacknetconfig -n openstack

If you see errors, check the underlying network-attach-definition and node network configuration policies:

$ oc get network-attachment-definitions -n openstack
$ oc get nncp
3.4. Additional resources
Chapter 4. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.
4.1. Prerequisites
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the OpenStack Operator.
- The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
- The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

$ oc get networkpolicy -n openstack

This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.

- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
4.2. Creating the control plane
You must define an OpenStackControlPlane custom resource (CR) to create the control plane and enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.
The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:
$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
- Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack

- Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret

- Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

spec:
  secret: osp-secret
  storageClass: <RHOCP_storage_class>

Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
- Add the global RabbitMQ settings messagingBus and notificationsBus to specify the default RabbitMQ cluster for RHOSO services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  messagingBus:
    cluster: <rabbitmq-cluster>
  notificationsBus:
    cluster: <rabbitmq-cluster>

Replace <rabbitmq-cluster> with the default RabbitMQ cluster that all the RHOSO services use, in this example, rabbitmq. You can customize the RabbitMQ interface for OpenStack services. For more information, see Understand the RabbitMQ interface for OpenStack services in the Monitoring high availability services guide.
- Add the following service configurations:

Note
- The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
- You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
Block Storage service (cinder):
cinder:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    secret: osp-secret
    cinderAPI:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    cinderScheduler:
      replicas: 1
    cinderBackup:
      networkAttachments:
      - storage
      replicas: 0
    cinderVolumes:
      volume1:
        networkAttachments:
        - storage
        replicas: 0

- cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
- cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the Block Storage volume service component in Configuring persistent storage.

Compute service (nova):
nova:
  apiOverride:
    route: {}
  template:
    apiServiceTemplate:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    metadataServiceTemplate:
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    schedulerServiceTemplate:
      replicas: 3
    cellTemplates:
      cell0:
        cellDatabaseAccount: nova-cell0
        cellDatabaseInstance: openstack
        messagingBus:
          cluster: rabbitmq
        hasAPIAccess: true
      cell1:
        cellDatabaseAccount: nova-cell1
        cellDatabaseInstance: openstack-cell1
        messagingBus:
          cluster: rabbitmq-cell1
        noVNCProxyServiceTemplate:
          enabled: true
          networkAttachments:
          - ctlplane
        hasAPIAccess: true
    secret: osp-secret

Note
A full set of Compute services (nova) are deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

DNS service for the data plane:

dns:
  template:
    options:
    - key: server
      values:
      - <IP address for DNS server reachable from dnsmasq pod>
    override:
      service:
        metadata:
          annotations:
            metallb.universe.tf/address-pool: ctlplane
            metallb.universe.tf/allow-shared-ip: ctlplane
            metallb.universe.tf/loadBalancerIPs: 192.168.122.80
        spec:
          type: LoadBalancer
    replicas: 2
- options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there is one key-value pair defined because there is only one DNS server configured to forward requests to.
- key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:
  - server
  - rev-server
  - srv-host
  - txt-record
  - ptr-record
  - rebind-domain-ok
  - naptr-record
  - cname
  - host-record
  - caa-record
  - dns-rr
  - auth-zone
  - synth-domain
  - no-negcache
  - local
- values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

Note
This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
Identity service (keystone):

keystone:
  apiOverride:
    route: {}
  template:
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    secret: osp-secret
    replicas: 3

Image service (glance):

glance:
  apiOverrides:
    default:
      route: {}
  template:
    databaseInstance: openstack
    storage:
      storageRequest: 10G
    secret: osp-secret
    keystoneEndpoint: default
    glanceAPIs:
      default:
        replicas: 0 # Configure back end; set to 3 when deploying service
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage

- glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.

Key Management service (barbican):

barbican:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    secret: osp-secret
    barbicanAPI:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    barbicanWorker:
      replicas: 3
    barbicanKeystoneListener:
      replicas: 1

Networking service (neutron):

neutron:
  apiOverride:
    route: {}
  template:
    replicas: 3
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    secret: osp-secret
    networkAttachments:
    - internalapi

Object Storage service (swift):

swift:
  enabled: true
  proxyOverride:
    route: {}
  template:
    swiftProxy:
      networkAttachments:
      - storage
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      replicas: 2
    secret: osp-secret
    swiftRing:
      ringReplicas: 3
    swiftStorage:
      networkAttachments:
      - storage
      replicas: 3
      storageRequest: 10Gi

Optimize service (watcher):

watcher:
  enabled: true

OVN:

ovn:
  template:
    ovnDBCluster:
      ovndbcluster-nb:
        replicas: 3
        dbType: NB
        storageRequest: 10G
        networkAttachment: internalapi
      ovndbcluster-sb:
        replicas: 3
        dbType: SB
        storageRequest: 10G
        networkAttachment: internalapi
    ovnNorthd: {}

Placement service (placement):

placement:
  apiOverride:
    route: {}
  template:
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    replicas: 3
    secret: osp-secret

Telemetry service (ceilometer, prometheus):

telemetry:
  enabled: true
  template:
    metricStorage:
      enabled: true
      dashboardsEnabled: true
      dataplaneNetwork: ctlplane
      networkAttachments:
      - ctlplane
      monitoringStack:
        alertingEnabled: true
        scrapeInterval: 30s
        storage:
          strategy: persistent
          retention: 24h
          persistent:
            pvcStorageRequest: 20G
    autoscaling:
      enabled: false
      aodh:
        databaseAccount: aodh
        databaseInstance: openstack
        passwordSelector:
          aodhService: AodhPassword
        serviceUser: aodh
        secret: osp-secret
      heatInstance: heat
    ceilometer:
      enabled: true
      secret: osp-secret
    logging:
      enabled: false

- telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplane node_exporter endpoints.
- telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the dataplane nodes.
- telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
- Add the following service configurations to implement high availability (HA):
A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

galera:
  templates:
    openstack:
      storageRequest: 5000M
      secret: osp-secret
      replicas: 3
    openstack-cell1:
      storageRequest: 5000M
      secret: osp-secret
      replicas: 3

A single memcached cluster that contains three memcached servers:

memcached:
  templates:
    memcached:
      replicas: 3

A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

rabbitmq:
  templates:
    rabbitmq:
      persistence:
        storage: <rabbitmq_cluster_storage>
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.85
          spec:
            type: LoadBalancer
    rabbitmq-cell1:
      persistence:
        storage: <rabbitmq_cluster_storage>
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.86
          spec:
            type: LoadBalancer

Replace <rabbitmq_cluster_storage> with sufficient storage for each RabbitMQ cluster, for example, 10Gi.

Note
You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.
By default, RabbitMQ uses Quorum queues to provide increased data safety and high availability at the expense of a slight increase in latency.

Warning
You must not configure the RabbitMQ clusters of an existing RHOSO deployment to use Quorum queues. If you do, your existing RHOSO services will not start or work properly.

The RabbitMQ Quorum queue is a durable replicated queue based on the Raft consensus algorithm.

Note
The RabbitMQ Quorum queues must be saved to disk to ensure their durability. Quorum queues can occupy significant disk space. Depending on the workload and settings, the time taken to free up space after messages are consumed or expire can vary substantially. Ensure that sufficient storage is assigned to each RabbitMQ cluster.

- Improved RabbitMQ failover behavior
The RabbitMQ service can be configured to provide an improved failover behavior that bypasses the default availability checks of OpenShift, which can wait up to five minutes to declare a service dead. By implementing this improved failover behavior, each OpenStack service checks to see if a RabbitMQ pod is alive and moves to the next pod if it is not.

Note
To implement this improved failover behavior for a RabbitMQ cluster, you must reserve three free IP addresses from the default RHOSO MetalLB IPAddressPool range already used for RabbitMQ.

For example, if you want to improve the failover behavior of the rabbitmq-cell1 RabbitMQ cluster, add the following lines to the end of the rabbitmq.templates.rabbitmq-cell1 section of the OpenStackControlPlane CR:

podOverride:
  services:
  - metadata:
      annotations:
        metallb.universe.tf/address-pool: "internalapi"
        metallb.universe.tf/loadBalancerIPs: "172.17.0.87"
    spec:
      type: LoadBalancer
  - metadata:
      annotations:
        metallb.universe.tf/address-pool: "internalapi"
        metallb.universe.tf/loadBalancerIPs: "172.17.0.88"
    spec:
      type: LoadBalancer
  - metadata:
      annotations:
        metallb.universe.tf/address-pool: "internalapi"
        metallb.universe.tf/loadBalancerIPs: "172.17.0.89"
    spec:
      type: LoadBalancer
- Create the control plane:

$ oc create -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip
Append the -w option to the end of the get command to track deployment progress.

Note
Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

$ oc rsh -n openstack openstackclient

- Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

- Confirm that the internal service endpoints are registered with each service:

$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
+--------------+-----------+-------------------------------------------------------------------------+
| Service Name | Interface | URL                                                                     |
+--------------+-----------+-------------------------------------------------------------------------+
| glance       | internal  | https://glance-internal.openstack.svc                                   |
| glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org  |
+--------------+-----------+-------------------------------------------------------------------------+

- Exit the OpenStackClient pod:

$ exit
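You can extend this verification to all deployed services from inside the OpenStackClient pod; openstack service list and openstack endpoint list are standard OpenStack CLI commands:

$ openstack service list
$ openstack endpoint list --interface internal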
4.3. Example OpenStackControlPlane CR
The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
spec:
messagingBus:
cluster: rabbitmq
notificationsBus:
cluster: rabbitmq
secret: osp-secret
storageClass: your-RHOCP-storage-class
cinder:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
cinderAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
cinderScheduler:
replicas: 1
cinderBackup:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
cinderVolumes:
volume1:
networkAttachments:
- storage
replicas: 0 # backend needs to be configured to activate the service
nova:
apiOverride:
route: {}
template:
apiServiceTemplate:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
metadataServiceTemplate:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
schedulerServiceTemplate:
replicas: 3
cellTemplates:
cell0:
cellDatabaseAccount: nova-cell0
cellDatabaseInstance: openstack
messagingBus:
cluster: rabbitmq
hasAPIAccess: true
cell1:
cellDatabaseAccount: nova-cell1
cellDatabaseInstance: openstack-cell1
messagingBus:
cluster: rabbitmq-cell1
noVNCProxyServiceTemplate:
enabled: true
networkAttachments:
- ctlplane
hasAPIAccess: true
secret: osp-secret
dns:
template:
options:
- key: server
values:
- 192.168.122.1
- key: server
values:
- 192.168.122.2
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: ctlplane
metallb.universe.tf/allow-shared-ip: ctlplane
metallb.universe.tf/loadBalancerIPs: 192.168.122.80
spec:
type: LoadBalancer
replicas: 2
galera:
templates:
openstack:
storageRequest: 5000M
secret: osp-secret
replicas: 3
openstack-cell1:
storageRequest: 5000M
secret: osp-secret
replicas: 3
keystone:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
replicas: 3
glance:
apiOverrides:
default:
route: {}
template:
databaseInstance: openstack
storage:
storageRequest: 10G
secret: osp-secret
keystoneEndpoint: default
glanceAPIs:
default:
replicas: 0 # Configure back end; set to 3 when deploying service
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
networkAttachments:
- storage
barbican:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
barbicanAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
barbicanWorker:
replicas: 3
barbicanKeystoneListener:
replicas: 1
memcached:
templates:
memcached:
replicas: 3
neutron:
apiOverride:
route: {}
template:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
networkAttachments:
- internalapi
swift:
enabled: true
proxyOverride:
route: {}
template:
swiftProxy:
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
replicas: 2
swiftRing:
ringReplicas: 3
swiftStorage:
networkAttachments:
- storage
replicas: 3
storageRequest: 100Gi
ovn:
template:
ovnDBCluster:
ovndbcluster-nb:
replicas: 3
dbType: NB
storageRequest: 10G
networkAttachment: internalapi
ovndbcluster-sb:
replicas: 3
dbType: SB
storageRequest: 10G
networkAttachment: internalapi
ovnNorthd: {}
ovnController:
networkAttachment: tenant
nicMappings:
my-network: nic1
placement:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
replicas: 3
secret: osp-secret
rabbitmq:
templates:
rabbitmq:
persistence:
storage: 10Gi
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.85
spec:
type: LoadBalancer
rabbitmq-cell1:
persistence:
storage: 10Gi
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.86
spec:
type: LoadBalancer
telemetry:
enabled: true
template:
metricStorage:
enabled: true
dashboardsEnabled: true
dataplaneNetwork: ctlplane
networkAttachments:
- ctlplane
monitoringStack:
alertingEnabled: true
scrapeInterval: 30s
storage:
strategy: persistent
retention: 24h
persistent:
pvcStorageRequest: 20G
autoscaling:
enabled: false
aodh:
databaseAccount: aodh
databaseInstance: openstack
passwordSelector:
aodhService: AodhPassword
serviceUser: aodh
secret: osp-secret
heatInstance: heat
ceilometer:
enabled: true
secret: osp-secret
logging:
enabled: false
- spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
- spec.cinder: Service-specific parameters for the Block Storage service (cinder).
- spec.cinder.template.cinderBackup: The Block Storage service back end. For more information about configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information about configuring storage services, see the Configuring persistent storage guide.
- spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

  Note
  If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

- spec.nova: Service-specific parameters for the Compute service (nova).
- spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
- nicMappings: Pairs the physical network your gateway is on with the NIC that connects to the gateway network. This physical network is set in the neutron network provider:*name field. You can optionally add more <network_name>: <nic_name> pairs as required.
- metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
- metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
- spec.rabbitmq: The RabbitMQ instances exposed to an isolated network, with distinct IP addresses defined in the loadBalancerIPs annotation.

  Note
  You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.

- rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
4.4. Removing a service from the control plane
You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.
Remove a service with caution: removing a service is not the same as stopping service pods, and it is irreversible. Disabling a service removes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.
Procedure
Open the OpenStackControlPlane CR file on your workstation. Locate the service that you want to remove from the control plane and disable it:

cinder:
  enabled: false
  apiOverride:
    route: {}
  ...

Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP removes the resources related to the disabled service. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

Tip
Append the -w option to the end of the get command to track deployment progress.

Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

Check that the service is removed:

$ oc get cinder -n openstack

This command returns the following message when the service is successfully removed:

No resources found in openstack namespace.

Check that the API endpoints for the service are removed from the Identity service (keystone):

$ oc rsh -n openstack openstackclient
$ openstack endpoint list --service volumev3

This command returns the following message when the API endpoints for the service are successfully removed:

No service with a type, name or ID of 'volumev3' exists.
4.5. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- Dynamic provisioning
- Configuring the Block Storage backup service
- Configuring the Image service (glance)
Chapter 5. Creating the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 or 9.6 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.
You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:
- Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
- Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.
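Both node types are defined with the same OpenStackDataPlaneNodeSet CRD; the preProvisioned field selects the behavior, as the full examples later in this chapter show. A minimal orientation sketch, with only the relevant field:

spec:
  # true: you used your own tooling to install the operating system on the nodes
  # false: the nodes are provisioned by using the CBO during data plane creation
  preProvisioned: true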
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and the layout of the data plane.
- Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7, 8, and 9. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.
5.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.
5.2. Creating the data plane secrets
You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.
To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:
- An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
- An SSH key to enable migration of instances between Compute nodes.
Prerequisites
- Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
Procedure
For unprovisioned nodes, create the SSH key pair for Ansible:

$ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096

- Replace <key_file_name> with the name to use for the key pair.
Create the Secret CR for Ansible and apply it to the cluster:

$ oc create secret generic dataplane-ansible-ssh-private-key-secret \
--save-config \
--dry-run=client \
--from-file=ssh-privatekey=<key_file_name> \
--from-file=ssh-publickey=<key_file_name>.pub \
[--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
-o yaml | oc apply -f -

- Replace <key_file_name> with the name and location of your SSH key pair file.
- Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
If you are creating Compute nodes, create a secret for migration.
Create the SSH key pair for instance migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''

Create the Secret CR for migration and apply it to the cluster:

$ oc create secret generic nova-migration-ssh-key \
--save-config \
--from-file=ssh-privatekey=nova-migration-ssh-key \
--from-file=ssh-publickey=nova-migration-ssh-key.pub \
-n openstack \
-o yaml | oc apply -f -
For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

$ oc create secret generic subscription-manager \
--from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'

- Replace <subscription_manager_username> with the username you set for subscription-manager.
- Replace <subscription_manager_password> with the password you set for subscription-manager.
Create a Secret CR that contains the Red Hat registry credentials:

$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'

Replace <username> and <password> with your Red Hat registry username and password credentials.

For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
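If you prefer to manage this Secret declaratively instead of with oc create secret, an equivalent sketch as a manifest; the stringData key must match the Ansible variable name that the node set later consumes through ansibleVarsFrom:

apiVersion: v1
kind: Secret
metadata:
  name: redhat-registry
  namespace: openstack
type: Opaque
stringData:
  # key name is the Ansible variable exposed to the data plane nodes
  edpm_container_registry_logins: '{"registry.redhat.io": {"<username>": "<password>"}}'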
If you are creating Compute nodes, create a secret for libvirt.
Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: <base64_password>

Replace <base64_password> with a base64-encoded string with a maximum length of 63 characters. You can use the following command to generate a base64-encoded password:

$ echo -n <password> | base64

Tip
If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
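For example, a sketch of the same libvirt Secret that uses stringData with a plain-text value; the placeholder password is illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
stringData:
  LibvirtPassword: <password>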
Create the Secret CR:

$ oc apply -f secret_libvirt.yaml -n openstack

Verify that the Secret CRs are created:

$ oc describe secret dataplane-ansible-ssh-private-key-secret
$ oc describe secret nova-migration-ssh-key
$ oc describe secret subscription-manager
$ oc describe secret redhat-registry
$ oc describe secret libvirt-secret
5.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
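For instance, a minimal sketch of this precedence, where the override value is illustrative: every node in the set inherits ansibleUser from nodeTemplate, while edpm-compute-0 sets its own value:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin        # common value inherited by all nodes in the set
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    ansible:
      ansibleUser: cloud-operator   # illustrative node-specific override; wins for this node only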
For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.
Prerequisites
- The control plane network and IP address are configured for the pre-provisioned data plane nodes. For more information, see Creating the data plane network.
Procedure
Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"

- metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), and start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.

  Important
  The OpenStackDataPlaneNodeSet CR name you provide should be a maximum of 28 characters. Although 63 characters are allotted for the CR name, this character count includes system identifiers that are appended to the name after it is saved. Additional characters above the 63 character limit are truncated when the CR is saved, and this causes errors in system processes.

- spec.env: An optional list of environment variables to pass to the pod.
Connect the data plane to the control plane network:
spec:
  ...
  networkAttachments:
  - ctlplane

Specify that the nodes in this set are pre-provisioned:

preProvisioned: true

Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs, as shown in the sketch after this step. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
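A minimal PVC sketch that satisfies these constraints; the claim name, storage class, and size are illustrative assumptions that you adjust for your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc               # hypothetical name; use it as <pvc_name> in the next step
  namespace: openstack
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: your-RHOCP-storage-class   # assumption: a non-NFS storage class
  resources:
    requests:
      storage: 5Gi                     # illustrative size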
Enable persistent logging for the data plane nodes:

nodeTemplate:
  ...
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"

- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:
nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to the Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ...
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
    - secretRef:
        name: subscription-manager
    - secretRef:
        name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
        - {name: "*", state: disabled}
        - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
        - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
      edpm_bootstrap_release_version_package: []

- ansibleUser: The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The following example applies a network configuration to the data plane nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      edpm_network_config_os_net_config_mappings:
        edpm-compute-0:
          nic1: 52:54:04:60:55:22
          nic2: 53:13:10:33:24:61
      neutron_physical_bridge_name: br-ex
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
      edpm_network_config_template: |
        ---
        {% set mtu_list = [ctlplane_mtu] %}
        {% for network in nodeset_networks %}
        {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
        {%- endfor %}
        {% set min_viable_mtu = mtu_list | max %}
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          mtu: {{ min_viable_mtu }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: linux_bond
            name: bond0
            mtu: {{ min_viable_mtu }}
            bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              primary: true
            - type: interface
              name: nic2
              mtu: {{ min_viable_mtu }}
        {% for network in nodeset_networks %}
        - type: vlan
          mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
          vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
          addresses:
          - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
        {% endfor %}

- nic1 and nic2: The MAC addresses assigned to the NICs to use for network configuration on the Compute node.
- neutron_physical_bridge_name: The name of the OVS bridge to set up on the Compute node.
- edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider for now. In a future release, after the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
- edpm_network_config_update: When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later update an edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

  Important
  After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service as a member of the servicesOverride list.

- dns_servers: Autogenerated from IPAM and DNS; no user input is required.
- domain: Autogenerated from IPAM and DNS; no user input is required.
- routes: Autogenerated from IPAM and DNS; no user input is required.
Add the common configuration for the set of nodes in this group under the
nodeTemplatesection. Each node in thisOpenStackDataPlaneNodeSetinherits this configuration. For information about the properties you can use to configure common node attributes, seeOpenStackDataPlaneNodeSetCRspecproperties. Add the time synchronization configuration to align the node time with a central source. Multiple NTP servers can be defined in this configuration. Time synchronization is provided by the
chronyservice. The following example creates a time synchronization configuration for all nodes in the nodeset use:nodeTemplate: ... ansible: ... ansibleVars: timesync_ntp_servers: - hostname: <ntp_server> iburst: <burst_value>-
Replace
<ntp_server>with the hostname or IP address of the NTP server. Replace
<burst_value>with eithertrueorfalse. This configures fast initial synchronization with the NTP server. The default isfalse.NoteConfigure time synchronization at the node level by adding it in the
ansibleVarssection of the individual node.
-
Replace
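As the Note describes, the same variable can be set per node. A hedged sketch, with an illustrative server hostname, that overrides the NTP servers for a single node while the rest of the node set keeps the nodeTemplate values:

nodes:
  edpm-compute-0:
    ...
    ansible:
      ansibleVars:
        timesync_ntp_servers:
        - hostname: ntp-local.example.com   # illustrative node-specific NTP server
          iburst: true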
Define each node in this node set:
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.100
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.100
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.100
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-0.example.com
  edpm-compute-1:
    hostName: edpm-compute-1
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.101
    - name: internalapi
      subnetName: subnet1
      fixedIP: 172.17.0.101
    - name: storage
      subnetName: subnet1
      fixedIP: 172.18.0.101
    - name: tenant
      subnetName: subnet1
      fixedIP: 172.19.0.101
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-1.example.com
- edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- networks: Defines the IPAM and the DNS records for the node.
- fixedIP: Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
- networkData: The Secret that contains network configuration that is node-specific.
- userData.name: The Secret that contains user data that is node-specific.
- ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0. Set it to the same IP address that is configured on the pre-provisioned node.
- ansibleVars: Node-specific Ansible variables that customize the node.

  Note
  - Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
  - You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
  - Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

  For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
Save the openstack_preprovisioned_node_set.yaml definition file.

Create the data plane resources:
$ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack
Verification
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

For information about the data plane conditions and states, see Data plane conditions and states.

Verify that the Secret resource was created for the node set:

$ oc get secret | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

Verify that the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           46m
ceph-client         46m
ceph-hci-pre        46m
configure-network   46m
configure-os        46m
...
5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-data-plane
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: true
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
timesync_ntp_servers:
- hostname: ntp.example.com
iburst: true
- hostname: ntp2.example.com
iburst: false
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:04:60:55:22
nic2: 53:13:10:33:24:61
neutron_physical_bridge_name: br-ex
edpm_network_config_nmstate: true
edpm_network_config_update: false
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: linux_bond
name: bond0
mtu: {{ min_viable_mtu }}
bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
primary: true
- type: interface
name: nic2
mtu: {{ min_viable_mtu }}
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-compute-0:
hostName: edpm-compute-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.100
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.100
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.100
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-0.example.com
edpm-compute-1:
hostName: edpm-compute-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
fixedIP: 172.17.0.101
- name: storage
subnetName: subnet1
fixedIP: 172.18.0.101
- name: tenant
subnetName: subnet1
fixedIP: 172.19.0.101
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-1.example.com
5.4. Creating a data plane with unprovisioned nodes
To create a data plane with unprovisioned nodes, you must perform the following tasks:
- Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
- Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.
5.4.1. Prerequisites
- Your RHOCP cluster supports provisioning bare-metal nodes. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
- Your Cluster Baremetal Operator (CBO) is configured for provisioning. For more information, see Provisioning [metal3.io/v1alpha1] in the RHOCP API Reference.
5.4.2. Creating the BareMetalHost CRs for unprovisioned nodes
You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing.
Procedure
The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'

If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'

Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
data:
  username: <base64_username>
  password: <base64_password>

Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

$ echo -n <string> | base64

Tip
If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
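For example, a sketch of the same BMC Secret that uses stringData with plain-text values; the placeholders are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
stringData:
  username: <username>
  password: <password>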
Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
  labels:
    app: openstack
    workload: compute
spec:
  bmc:
    address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
    credentialsName: edpm-compute-0-bmc-secret
  bootMACAddress: 00:c7:e4:a7:e7:f3
  bootMode: UEFI
  online: false
  [preprovisioningNetworkDataName: <network_config_secret_name>]

- metadata.labels: Key-value pairs, such as app, workload, and nodeName, that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
- spec.bmc.address: The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
- spec.bmc.credentialsName: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
- preprovisioningNetworkDataName: An optional field that specifies the name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.
Create the BareMetalHost resources:

$ oc create -f bmh_nodes.yaml

Verify that the BareMetalHost resources have been created and are in the Available state:

$ oc wait --for=jsonpath='{.status.provisioning.state}'=available bmh/edpm-compute-baremetal-00 --timeout=<timeout_value>

- Replace <timeout_value> with a value in minutes that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. Use a value that is appropriate to the size of your deployment, and give large deployments more time to complete deployment tasks.
5.4.3. Creating an OpenStackDataPlaneNodeSet CR with unprovisioned nodes
You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.
You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
To set a root password for the data plane nodes during provisioning, use the passwordSecret field in the OpenStackDataPlaneNodeSet CR. For more information, see How to set a root password for the Dataplane Node on Red Hat OpenStack Services on OpenShift.
For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.
Prerequisites
- A BareMetalHost CR is created for each unprovisioned node that you want to include in each node set. For more information, see Creating the BareMetalHost CRs for unprovisioned nodes.
Procedure
Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  tlsEnabled: true
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"

- metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), and start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.

  Important
  The OpenStackDataPlaneNodeSet CR name you provide should be a maximum of 28 characters. Although 63 characters are allotted for the CR name, this character count includes system identifiers that are appended to the name after it is saved. Additional characters above the 63 character limit are truncated when the CR is saved, and this causes errors in system processes.

- spec.env: An optional list of environment variables to pass to the pod.
Connect the data plane to the control plane network:
spec:
  ...
  networkAttachments:
  - ctlplane

Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

preProvisioned: false

Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

baremetalSetTemplate:
  deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
  bmhNamespace: <bmh_namespace>
  cloudUserName: <ansible_ssh_user>
  bmhLabelSelector:
    app: <bmh_label>
  ctlplaneInterface: <interface>

- Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
- Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
- Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
- Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

baremetalSetTemplate:
  ...
  provisionServerName: my-os-provision-server

Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

nodeTemplate:
  ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO, and ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.

Enable persistent logging for the data plane nodes:
nodeTemplate:
  ...
  extraMounts:
  - extraVolType: Logs
    volumes:
    - name: ansible-logs
      persistentVolumeClaim:
        claimName: <pvc_name>
    mounts:
    - name: ansible-logs
      mountPath: "/runner/artifacts"

- Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
Specify the management network:
nodeTemplate:
  ...
  managementNetwork: ctlplane

Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to the Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
    ansiblePort: 22
    ansibleVarsFrom:
    - secretRef:
        name: subscription-manager
    - secretRef:
        name: redhat-registry
    ansibleVars:
      rhc_release: 9.4
      rhc_repositories:
        - {name: "*", state: disabled}
        - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
        - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
        - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
        - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
      edpm_bootstrap_release_version_package: []

- ansibleUser: The user associated with the secret you created in Creating the data plane secrets.
- ansibleVars: The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/. For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
Add the network configuration template to apply to your data plane nodes. The following example applies a network configuration to the data plane nodes:

nodeTemplate:
  ...
  ansible:
    ...
    ansibleVars:
      ...
      edpm_network_config_os_net_config_mappings:
        edpm-compute-0:
          nic1: 52:54:04:60:55:22
          nic2: 53:13:10:33:24:61
      neutron_physical_bridge_name: br-ex
      edpm_network_config_nmstate: true
      edpm_network_config_update: false
      edpm_network_config_template: |
        ---
        {% set mtu_list = [ctlplane_mtu] %}
        {% for network in nodeset_networks %}
        {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
        {%- endfor %}
        {% set min_viable_mtu = mtu_list | max %}
        network_config:
        - type: ovs_bridge
          name: {{ neutron_physical_bridge_name }}
          mtu: {{ min_viable_mtu }}
          use_dhcp: false
          dns_servers: {{ ctlplane_dns_nameservers }}
          domain: {{ dns_search_domains }}
          addresses:
          - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
          routes: {{ ctlplane_host_routes }}
          members:
          - type: linux_bond
            name: bond0
            mtu: {{ min_viable_mtu }}
            bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              primary: true
            - type: interface
              name: nic2
              mtu: {{ min_viable_mtu }}
        {% for network in nodeset_networks %}
        - type: vlan
          mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
          vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
          addresses:
          - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
          routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
        {% endfor %}

- nic1 and nic2: The MAC addresses assigned to the NICs to use for network configuration on the Compute node.
- neutron_physical_bridge_name: The name of the OVS bridge to set up on the Compute node.
- edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider for now. In a future release, after the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
- edpm_network_config_update: When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later update an edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

  Important
  After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service as a member of the servicesOverride list.

- dns_servers: Autogenerated from IPAM and DNS; no user input is required.
- domain: Autogenerated from IPAM and DNS; no user input is required.
- routes: Autogenerated from IPAM and DNS; no user input is required.
- Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties that you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
- Add the time synchronization configuration to align the node time with a central source. You can define multiple NTP servers in this configuration. Time synchronization is provided by the chrony service. The following example creates a time synchronization configuration for all nodes in the node set:

  nodeTemplate:
    ...
    ansible:
      ...
      ansibleVars:
        timesync_ntp_servers:
        - hostname: <ntp_server>
          iburst: <burst_value>

  - Replace <ntp_server> with the hostname or IP address of the NTP server.
  - Replace <burst_value> with either true or false. This configures fast initial synchronization with the NTP server. The default is false.

  Note
  To configure time synchronization at the node level, add it in the ansibleVars section of the individual node.
Define each node in this node set:
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.100
    - name: internalapi
      subnetName: subnet1
    - name: storage
      subnetName: subnet1
    - name: tenant
      subnetName: subnet1
    networkData:
      name: edpm-compute-0-network-data
      namespace: openstack
    userData:
      name: edpm-compute-0-user-data
      namespace: openstack
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-0.example.com
    bmhLabelSelector:
      nodeName: edpm-compute-0
  edpm-compute-1:
    hostName: edpm-compute-1
    networks:
    - name: ctlplane
      subnetName: subnet1
      defaultRoute: true
      fixedIP: 192.168.122.101
    - name: internalapi
      subnetName: subnet1
    - name: storage
      subnetName: subnet1
    - name: tenant
      subnetName: subnet1
    networkData:
      name: edpm-compute-1-network-data
      namespace: openstack
    userData:
      name: edpm-compute-1-user-data
      namespace: openstack
    ansible:
      ansibleHost: 192.168.122.101
      ansibleUser: cloud-admin
      ansibleVars:
        fqdn_internal_api: edpm-compute-1.example.com
    bmhLabelSelector:
      nodeName: edpm-compute-1
- edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
- networks: Defines the IPAM and the DNS records for the node.
- fixedIP: Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
- networkData: The Secret that contains network configuration that is node-specific.
- userData.name: The Secret that contains user data that is node-specific.
- ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
- ansibleVars: Node-specific Ansible variables that customize the node.
- bmhLabelSelector: Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.

  Note
  - Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
  - You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
  - Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

  For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
Save the openstack_unprovisioned_node_set.yaml definition file.

Create the data plane resources:
$ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
Verification
Verify that the data plane resources have been created by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

For information about the data plane conditions and states, see Data plane conditions and states.

Verify that the Secret resource was created for the node set:

$ oc get secret -n openstack | grep openstack-data-plane
dataplanenodeset-openstack-data-plane   Opaque   1   3m50s

Verify that the nodes have transitioned to the provisioned state:

$ oc get bmh
NAME             STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0   provisioned   openstack-data-plane   true             3d21h

Note
A provisioned node displays provisioned in the STATE column and true in the ONLINE column. If the node displays provisioned in the STATE column but false in the ONLINE column, the node is provisioned but it has experienced an error condition, such as loss of network connectivity, use of incorrect credentials, or an inaccessible Baseboard Management Controller (BMC). Consult the Bare Metal Provisioning service (ironic) logs to determine whether a cause for the error condition can be found. If the error condition persists, contact Red Hat Support.

Verify that the services were created:

$ oc get openstackdataplaneservice -n openstack
NAME                AGE
bootstrap           8m40s
ceph-client         8m40s
ceph-hci-pre        8m40s
configure-network   8m40s
configure-os        8m40s
...
5.4.4. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
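As a quick client-side check of a candidate name against these constraints, you can test it with a regular expression; this is a sketch, not a substitute for API validation:

$ echo "openstack-data-plane" | grep -Eq '^[a-z0-9]([a-z0-9.-]{0,51}[a-z0-9])?$' && echo valid || echo invalid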
The following variables are autogenerated from IPAM and DNS and are not provided by the user:
- ctlplane_dns_nameservers
- dns_search_domains
- ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-data-plane
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
preProvisioned: false
baremetalSetTemplate:
deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
bmhNamespace: openstack
cloudUserName: cloud-admin
bmhLabelSelector:
app: openstack
ctlplaneInterface: enp1s0
nodeTemplate:
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
extraMounts:
- extraVolType: Logs
volumes:
- name: ansible-logs
persistentVolumeClaim:
claimName: <pvc_name>
mounts:
- name: ansible-logs
mountPath: "/runner/artifacts"
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin
ansiblePort: 22
ansibleVarsFrom:
- secretRef:
name: subscription-manager
- secretRef:
name: redhat-registry
ansibleVars:
timesync_ntp_servers:
- hostname: ntp.example.com
iburst: true
- hostname: ntp2.example.com
iburst: false
rhc_release: 9.4
rhc_repositories:
- {name: "*", state: disabled}
- {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
- {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
- {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
- {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
edpm_bootstrap_release_version_package: []
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:04:60:55:22
          edpm-compute-1:
            # Each node must map the MAC address of its own nic1 interface
            nic1: 52:54:04:60:55:23
neutron_physical_bridge_name: br-ex
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
nodes:
edpm-compute-0:
hostName: edpm-compute-0
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.100
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
ansible:
ansibleHost: 192.168.122.100
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-0.example.com
bmhLabelSelector:
nodeName: edpm-compute-0
edpm-compute-1:
hostName: edpm-compute-1
networks:
- name: ctlplane
subnetName: subnet1
defaultRoute: true
fixedIP: 192.168.122.101
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
ansible:
ansibleHost: 192.168.122.101
ansibleUser: cloud-admin
ansibleVars:
fqdn_internal_api: edpm-compute-1.example.com
bmhLabelSelector:
nodeName: edpm-compute-1
5.4.5. Configuring network interface bonding on the control plane network Copy linkLink copied to clipboard!
Configure network interface bonding, also known as NIC teaming, on the control plane network to provide redundancy and increased throughput for data plane nodes.
Prerequisites
- Ensure that your network environment supports network interface bonding before you implement this configuration. Perform network environment configuration, such as the implementation and use of the Link Aggregation Control Protocol (LACP) on your switches, before you configure bonding on your data plane nodes. For more information about the appropriate network environment configuration, see the vendor documentation for the switches and routers used in your network.
Procedure
- Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
- Locate the baremetalSetTemplate section of the definition file.
- Configure network interface bonding by using the ctlplaneInterface and ctlplaneBond attributes. The following is an example of this configuration:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
spec:
  baremetalSetTemplate:
    ctlplaneInterface: bond0
    ctlplaneBond:
      bondInterfaces:
        - eno1
        - eno2
      bondMode: "802.3ad"
      bondOptions:
        bond-miimon: "100"
        bond-xmit-hash-policy: "layer3+4"

where:
- ctlplaneInterface
- The control plane network interface that the data plane node set uses. Set this to the bond interface name.
- ctlplaneBond
- The options that make up the network interface bonding configuration.
- bondInterfaces
- The list of physical interfaces to bond. A minimum of two interfaces is required.
- bondMode
The mode used to handle traffic across multiple interfaces in the bond. The following modes are supported (a sketch of an active-backup configuration follows this list):
- active-backup: Traffic is on only one interface at a time in the bond. If one interface fails, another takes over. This is the default mode.
- 802.3ad: Traffic is routed by using IEEE 802.3ad Dynamic Link Aggregation. The physical switch must be configured for an LACP port channel.
- balance-rr: Traffic is load balanced by using a round-robin policy.
- balance-xor: Traffic is balanced by using an XOR policy.
- bondOptions
- Additional bonding options configured as key-value pairs.
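For environments where the switches are not configured for LACP, an active-backup bond is a common alternative. The following fragment is a minimal sketch of that variant; the interface names are illustrative:

spec:
  baremetalSetTemplate:
    ctlplaneInterface: bond0
    ctlplaneBond:
      bondInterfaces:
        - eno1
        - eno2
      # active-backup requires no switch-side link aggregation configuration
      bondMode: "active-backup"
      bondOptions:
        bond-miimon: "100"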
- Save the OpenStackDataPlaneNodeSet CR definition file.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:

$ oc apply -f openstack_data_plane.yaml

Verify that the data plane resource has been updated by confirming that the status is SetupReady:

$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

When the status is SetupReady, the command returns a condition met message, otherwise it returns a timeout error.

For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Verify that the nodes have transitioned to the provisioned state:

$ oc get bmh -n openstack
NAME             STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0   provisioned   openstack-data-plane   true             3d21h

Verify node connectivity by viewing the Ansible logs:

$ oc logs -l app=openstackansibleee --tail=-1 -f

If a node encounters SSH or networking issues, the Ansible log displays unreachable or connection refused for the node network address.
5.4.6. How to prevent asymmetric routing Copy linkLink copied to clipboard!
If the Red Hat OpenShift Container Platform (RHOCP) cluster nodes have an interface with an IP address in the same IP subnet as the one the data plane nodes use during provisioning, the return traffic takes an asymmetric path. Therefore, if you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Filtering (RPF), the reverse path filtering logic drops the traffic. Implement one of the following methods to prevent traffic from being dropped because of the RPF filter:
- Use a dedicated NIC on the network where RHOCP binds the provisioning service, that is, the RHOCP machine network or a dedicated RHOCP provisioning network.
- Use a dedicated NIC on a network that is reachable through routing on the RHOCP master nodes. For information about how to add routes to your RHOCP networks, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
- Use a shared NIC for provisioning and the RHOSO ctlplane interface. You can use one of the following methods to configure a shared NIC:
  - Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range that you use for the ctlplane network and the other in the address range that you use for provisioning.
  - Configure a DHCP server to allocate an address range for provisioning that is different from the ctlplane address range.
  - If a DHCP server is not available, configure the preprovisioningNetworkData field on the BareMetalHost CRs. For information about how to configure the preprovisioningNetworkData field, see Configuring preprovisioningNetworkData on the BareMetalHost CRs.
- If your environment has RHOCP master and worker nodes that are not connected to the network used by the EDPM nodes, you can set the nodeSelector field on the OpenStackProvisionServer CR to place it on a worker node that does not have an interface with an IP address in the same IP subnet as the one the nodes use during provisioning.
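To determine whether reverse path filtering is active on a node, you can inspect the kernel setting directly. This is a minimal check; a value of 0 means disabled, 1 means strict filtering, and 2 means loose filtering (the output shown is illustrative):

$ sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 1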
5.4.7. Configuring preprovisioningNetworkData on the BareMetalHost CRs Copy linkLink copied to clipboard!
If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Filtering (RPF), the reverse path filtering logic drops traffic. To prevent traffic from being dropped because of the RPF filter, you can configure the preprovisioningNetworkData field on the BareMetalHost CRs.
Procedure
Create a Secret CR with preprovisioningNetworkData in nmstate format for each BareMetalHost CR:

apiVersion: v1
kind: Secret
metadata:
  name: leaf0-0-preprovision-network-data
  namespace: openstack
stringData:
  nmstate: |
    interfaces:
    - name: enp5s0
      type: ethernet
      state: up
      ipv4:
        enabled: true
        address:
        - ip: 192.168.130.100
          prefix-length: 24
    dns-resolver:
      config:
        server:
        - 192.168.122.1
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.130.1
        next-hop-interface: enp5s0
type: Opaque

Create the Secret resources:

$ oc create -f secret_leaf0-0.yaml
Open the
BareMetalHostCR file, for example,bmh_nodes.yaml. Add the
preprovisioningNetworkDataNamefield to eachBareMetalHostCR defined for each node in thebmh_nodes.yamlfile:apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: inspect.metal3.io: disabled labels: app: openstack nodeset: leaf0 name: leaf0-0 namespace: openstack spec: architecture: x86_64 automatedCleaningMode: metadata bmc: address: redfish-virtualmedia+http://sushy.utility:8000/redfish/v1/Systems/df2bf92f-3e2c-47e1-b1fa-0d2e06bd1b1d credentialsName: bmc-secret bootMACAddress: 52:54:04:15:a8:d9 bootMode: UEFI online: false preprovisioningNetworkDataName: leaf0-0-preprovision-network-data rootDeviceHints: deviceName: /dev/sdaUpdate the
BareMetalHostCRs:$ oc apply -f bmh_nodes.yaml
5.5. OpenStackDataPlaneNodeSet CR spec properties Copy linkLink copied to clipboard!
The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.
5.5.1. nodeTemplate Copy linkLink copied to clipboard!
Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.
| Field | Description |
|---|---|
| ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret |
| managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane |
| networks | Network definitions for the OpenStackDataPlaneNodeSet. |
| ansible | Ansible configuration options. For more information, see ansible. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| userData | UserData configuration for the OpenStackDataPlaneNodeSet. |
| networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet. |
5.5.2. nodes Copy linkLink copied to clipboard!
Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.
| Field | Description |
|---|---|
| ansible | Ansible configuration options. For more information, see ansible. |
| extraMounts | The files to mount into an Ansible Execution Pod. |
| hostName | The node name. |
| managementNetwork | Name of the network to use for management (SSH/Ansible). |
| networkData | NetworkData configuration for the node. |
| networks | Instance networks. |
| userData | Node-specific user data. |
5.5.3. ansible Copy linkLink copied to clipboard!
Defines the group of Ansible configuration options.
| Field | Description |
|---|---|
| ansibleUser | The user associated with the secret you created in Creating the data plane secrets. Default: cloud-admin |
| ansibleHost | SSH host for the Ansible connection. |
| ansiblePort | SSH port for the Ansible connection. |
| ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each data plane service. Note: Where an Ansible variable is configured for both a specific node and in the nodeTemplate section, the node-specific value takes precedence. |
| ansibleVarsFrom | A list of sources to populate Ansible variables from. Values defined by an ansibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom. |
5.5.4. ansibleVarsFrom Copy linkLink copied to clipboard!
Defines the list of sources to populate Ansible variables from.
| Field | Description |
|---|---|
| prefix | An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. |
| configMapRef | The ConfigMap to select from. |
| secretRef | The Secret to select from. |
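For example, the following fragment is a sketch of how ansibleVarsFrom can populate Ansible variables from a ConfigMap and prefix each key; the ConfigMap name and its keys are illustrative:

nodeTemplate:
  ansible:
    ansibleVarsFrom:
      - prefix: edpm_
        configMapRef:
          # Hypothetical ConfigMap whose key-value pairs become Ansible variables,
          # each key prepended with "edpm_"
          name: network-overrides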
5.6. Deploying the data plane Copy linkLink copied to clipboard!
You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.
When the OpenStackDataPlaneDeployment completes successfully, it does not run Ansible again, even if you change the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR, as shown in the sketch that follows. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one to allow the new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
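For example, to trigger a second Ansible run against the same node set, you create a new CR with a unique name. The following is a minimal sketch; the names are illustrative:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  # A new, unique name triggers a fresh Ansible execution
  name: data-plane-deploy-2
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane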
Procedure
Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack

- metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
- Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

spec:
  nodeSets:
    - openstack-data-plane
    - <nodeSet_name>
    - ...
    - <nodeSet_name>

- Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the openstack_data_plane_deploy.yaml deployment file.
- Deploy the data plane:
$ oc create -f openstack_data_plane_deploy.yaml -n openstack

You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10

If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

Verify that the data plane is deployed:

$ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
$ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>

Replace <timeout_value> with the time, in minutes, that you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, use the value 60m. If the Ready condition for oc wait openstackdataplanedeployment or oc wait openstackdataplanenodeset is not met in this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment. Give larger deployments more time to complete deployment tasks.

For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Map the Compute nodes to the Compute cell that they are connected to:

$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to cell1.

Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list

If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.

Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

$ hostname -f

If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.
5.7. Data plane conditions and states Copy linkLink copied to clipboard!
Each data plane resource has a series of conditions within its status subresource that indicate the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until that deployment succeeds, at which point the Ready condition is set to True again.
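To inspect the conditions on a live resource, you can query the status subresource directly. The following is a minimal sketch that uses the node set name from the earlier examples:

$ oc get openstackdataplanenodeset openstack-data-plane -n openstack \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'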
| Condition | Description |
|---|---|
| Ready | "True": The OpenStackDataPlaneNodeSet has been deployed successfully. "False": A deployment is in progress or has failed, or another condition is failing. |
| SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed. |
| DeploymentReady | "True": The NodeSet has been successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| NodeSetDNSDataReady | "True": DNSData resources are ready. |
| NodeSetIPReservationReady | "True": The IPSet resources are ready. |
| NodeSetBareMetalProvisionReady | "True": Bare-metal nodes are provisioned and ready. |
| Condition | Description |
|---|---|
| Ready | "True": The data plane is successfully deployed. "False": The deployment is in progress or has failed, or another condition is failing. |
| DeploymentReady | "True": The data plane is successfully deployed. |
| InputReady | "True": The required inputs are available and ready. |
| <NodeSet> Deployment Ready | "True": The deployment has succeeded for the named NodeSet, indicating that all services for the NodeSet have succeeded. |
| <NodeSet> <Service> Deployment Ready | "True": The deployment has succeeded for the named NodeSet and Service. |
| Condition | Description |
|---|---|
| Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created. |
5.8. Troubleshooting data plane creation and deployment Copy linkLink copied to clipboard!
To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.
5.8.1. Checking the job condition message for a service Copy linkLink copied to clipboard!
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:
$ oc get openstackdataplanedeployment

The following example output shows a deployment currently in progress:

$ oc get openstackdataplanedeployment
NAME           NODESETS                  STATUS   MESSAGE
edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
The Kubernetes jobs are labelled with the name of the
OpenStackDataPlaneDeployment. You can list jobs for eachOpenStackDataPlaneDeploymentby using the label:$ oc get job -l openstackdataplanedeployment=edpm-compute NAME STATUS COMPLETIONS DURATION AGE bootstrap-edpm-compute-openstack-edpm-ipam Complete 1/1 78s 25h configure-network-edpm-compute-openstack-edpm-ipam Complete 1/1 37s 25h configure-os-edpm-compute-openstack-edpm-ipam Complete 1/1 66s 25h download-cache-edpm-compute-openstack-edpm-ipam Complete 1/1 64s 25h install-certs-edpm-compute-openstack-edpm-ipam Complete 1/1 46s 25h install-os-edpm-compute-openstack-edpm-ipam Complete 1/1 57s 25h libvirt-edpm-compute-openstack-edpm-ipam Complete 1/1 2m37s 25h neutron-metadata-edpm-compute-openstack-edpm-ipam Complete 1/1 61s 25h nova-edpm-compute-openstack-edpm-ipam Complete 1/1 3m20s 25h ovn-edpm-compute-openstack-edpm-ipam Complete 1/1 78s 25h run-os-edpm-compute-openstack-edpm-ipam Complete 1/1 33s 25h ssh-known-hosts-edpm-compute Complete 1/1 19s 25h telemetry-edpm-compute-openstack-edpm-ipam Complete 1/1 2m5s 25h validate-network-edpm-compute-openstack-edpm-ipam Complete 1/1 16s 25hYou can check logs by using
oc logs -f job/<job-name>, for example, if you want to check the logs from the configure-network job:$ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2 PLAY RECAP ********************************************************************* edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
5.8.1.1. Job condition messages Copy linkLink copied to clipboard!
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:
- Job not started: The job has not started.
- Job not found: The job could not be found.
- Job is running: The job is currently running.
- Job complete: The job execution is complete.
- Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
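To print the condition messages for a job directly, you can query the job's status subresource. The following is a sketch; substitute your job name:

$ oc get job <job_name> -n openstack \
    -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.message}{"\n"}{end}'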
5.8.2. Checking the logs for a node set Copy linkLink copied to clipboard!
You can access the logs for a node set to check for deployment issues.
Procedure
Retrieve the pods with the OpenStackAnsibleEE label:

$ oc get pods -l app=openstackansibleee
configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
validate-network-edpm-compute-6g7n9    1/1   Running             0   13s

Access the pod that you want to check:

Pod that is running:

$ oc rsh validate-network-edpm-compute-6g7n9

Pod that is not running:

$ oc debug configure-network-edpm-compute-j6r4l
List the directories in the /runner/artifacts mount:

$ ls /runner/artifacts
configure-network-edpm-compute
validate-network-edpm-compute

View the stdout for the required artifact:

$ cat /runner/artifacts/configure-network-edpm-compute/stdout