Deploying multiple RHOSO environments on a single RHOCP cluster
Deploying multiple Red Hat OpenStack Services on OpenShift environments on a single Red Hat OpenShift Container Platform cluster
Abstract
This guide describes how to deploy and manage multiple independent Red Hat OpenStack Services on OpenShift (RHOSO) environments on a single Red Hat OpenShift Container Platform (RHOCP) cluster by using namespace separation.
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Creating multiple RHOSO environments with namespace separation
You can deploy multiple independent Red Hat OpenStack Services on OpenShift (RHOSO) environments on a single Red Hat OpenShift Container Platform (RHOCP) cluster by using namespace separation. To deploy each RHOSO environment, you create multiple isolated namespaces and the isolated networks for each namespace, then use the procedures in Deploying Red Hat OpenStack Services on OpenShift to create the control plane and the data plane in each namespace.
- Support
- Red Hat supports up to five RHOSO environments on a single cluster with namespace separation, for example, environments for development, testing, staging, and production.
- Limitations
- Telemetry visualization is not available in more than one namespace.
- Multiple zones are not supported; therefore, validated architectures such as distributed Compute nodes (DCN) and distributed zones are not supported.
- Operator and OpenStack versions
- All RHOSO deployments that are hosted on the same RHOCP cluster by using namespace separation run on the same version of RHOSO, because the OpenStack Operator custom resource definitions (CRDs) are global to the RHOCP cluster. However, each RHOSO deployment can run different versions of the services deployed by RHOSO, because the service containers are managed by the OpenStackVersion custom resource (CR) for each namespace.
- Updating multiple deployments on a single cluster
When a new version of Red Hat OpenStack Services on OpenShift (RHOSO) is released, you must update all the control planes that are hosted on the Red Hat OpenShift Container Platform (RHOCP) cluster to the new version by performing the minor update procedure on all namespaces.
Before performing a minor update, you must ensure that all of your deployed environments are on the same version. Avoid performing the minor update on all environments at the same time, so that a problem with the update does not affect every environment at once. Perform and test the minor update on your environments in the following order, to verify that the update performs as expected before you roll it out to your production environment. A version check example follows the list.
- Development environment
- Testing environment
- Staging environment
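For example, you can confirm that every environment reports the same deployed version by comparing the OpenStackVersion CR in each namespace before you start the first minor update. A minimal check, assuming two namespaces named osp-ns-1 and osp-ns-2:

$ oc get openstackversion -n osp-ns-1
$ oc get openstackversion -n osp-ns-2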
To create multiple independent RHOSO environments using namespace separation, you must complete the following tasks:
- Plan the networking for each isolated RHOSO environment.
- Create the namespaces for each RHOSO environment.
- Create a Secret custom resource (CR) for each namespace to provide secure access to the RHOSO service pods in that namespace.
- Configure the NodeNetworkConfigurationPolicy CR for each namespace.
- Create NetworkAttachmentDefinition (net-attach-def) CRs for each namespace.
- Create IPAddressPool and L2Advertisement CRs for each namespace.
- Create NetConfig CRs for each namespace.
- Create OpenStackControlPlane CRs for each namespace.
- Create OpenStackDataPlaneNodeSet and OpenStackDataPlaneDeployment CRs for each namespace. For more information, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.
1.1. Prerequisites
- An operational RHOCP cluster, version 4.18, with sufficient resources to accommodate the additional hosted control planes and the additional resource consumption. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
- The oc command line tool is installed on your workstation.
- You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
- The OpenStack Operator (openstack-operator) is installed on the RHOCP cluster. For more information, see Installing and preparing the OpenStack Operator.
- The RHOCP cluster is prepared for the multiple RHOSO environments:
- Optional: You have configured node selectors and labels on nodes to dedicate nodes to specific control plane and data plane pods for each RHOSO cloud namespace. For more information, see Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment.

  Note: If you have not separated the pods for each namespace by using node selectors and labels, then the control plane and data plane pods for each namespace might be scheduled on the same RHOCP worker nodes.
- You have created the storage class for the RHOCP cluster. All namespaces use this storage class and its underlying persistent volumes (PVs) by default. You can create separate storage classes and PVs for each namespace if required. For more information, see Creating a storage class.
- You are using dedicated bare-metal resources for each RHOSO environment.
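If you choose to dedicate nodes to a specific RHOSO namespace, the following is a minimal sketch of the labeling step, assuming a hypothetical label key osp/cloud and worker nodes named worker-1 and worker-2:

$ oc label node worker-1 osp/cloud=osp-ns-1
$ oc label node worker-2 osp/cloud=osp-ns-1

You can then reference this label in the node selectors for the control plane and data plane pods of that namespace.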
1.2. Planning the isolated networks for each namespace
To plan the isolated RHOSO networks for each namespace, you must complete the following tasks:
- Review the minimum network requirements for a RHOSO environment. For more information about the RHOSO network requirements, see RHOCP network requirements and Planning your networks in Planning your deployment.
- Plan the network topology to support multiple RHOSO environments on the same RHOCP cluster, and plan the RHOSO networks for each namespace.
- Ensure that RHOSO-dedicated NICs are present on all RHOCP worker nodes, one for each namespace, to reinforce segregation between the RHOSO environments. These interfaces use VLANs for further network isolation.
- Ensure that there is connectivity between the namespace interfaces and the Compute nodes for that namespace.
- Ensure that VLAN IDs and subnets for each namespace do not overlap.
- Plan the networks for each RHOSO environment for each namespace. Each namespace must have unique network IP addresses for all networks. Do not overlap IP addresses and ranges between the namespaces. For more information about the required network values, see the "Default RHOSO networks" table in Default Red Hat OpenStack Services on OpenShift networks in Deploying Red Hat OpenStack Services on OpenShift.
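For example, the following is a possible per-namespace plan for two environments, assuming the default VLAN IDs and subnets for the first namespace; the values for the second namespace are illustrative only and must not overlap with the first:

| Network | osp-ns-1 VLAN | osp-ns-1 CIDR | osp-ns-2 VLAN | osp-ns-2 CIDR |
|---|---|---|---|---|
| ctlplane | n/a | 192.168.122.0/24 | n/a | 192.168.123.0/24 |
| internalapi | 20 | 172.17.0.0/24 | 30 | 172.27.0.0/24 |
| storage | 21 | 172.18.0.0/24 | 31 | 172.28.0.0/24 |
| tenant | 22 | 172.19.0.0/24 | 32 | 172.29.0.0/24 |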
Chapter 2. Creating the namespaces and the Secret CRs for each namespace
To deploy multiple Red Hat OpenStack Services on OpenShift (RHOSO) environments on a Red Hat OpenShift Container Platform (RHOCP) cluster, create a namespace for each RHOSO environment and provide secure access to the RHOSO service pods for each RHOSO environment.
2.1. Creating the openstack namespace
You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment. Repeat this procedure for each RHOSO environment that you deploy, replacing openstack with the name that you planned for that environment's namespace, for example, osp-ns-1.
Prerequisites
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Procedure
- Create the openstack project for the deployed RHOSO environment:

  $ oc new-project openstack

- Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

  $ oc get namespace openstack -o jsonpath='{.metadata.labels}' | jq
  {
    "kubernetes.io/metadata.name": "openstack",
    "pod-security.kubernetes.io/enforce": "privileged",
    "security.openshift.io/scc.podSecurityLabelSync": "false"
  }

  If the security context constraint (SCC) is not "privileged", use the following commands to change it:

  $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
  $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

- Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

  $ oc project openstack
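When you deploy multiple environments, repeat the same steps with the namespace name that you planned for each environment. A minimal sketch for an additional namespace, assuming the name osp-ns-1:

$ oc new-project osp-ns-1
$ oc label ns osp-ns-1 security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns osp-ns-1 pod-security.kubernetes.io/enforce=privileged --overwrite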
2.2. Providing secure access to the Red Hat OpenStack Services on OpenShift services
You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.
For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.
Important: You cannot change a service password after the control plane is deployed. If a service password is changed in osp-secret after the control plane is deployed, the service is reconfigured to use the new password, but the password is not updated in the Identity service (keystone). This results in a service outage.
Prerequisites
- You have installed python3-cryptography.
Procedure
- Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml.
- Add the following initial configuration to openstack_service_secret.yaml:

  apiVersion: v1
  data:
    AdminPassword: <base64_password>
    AodhPassword: <base64_password>
    BarbicanPassword: <base64_password>
    BarbicanSimpleCryptoKEK: <base64_fernet_key>
    CeilometerPassword: <base64_password>
    CinderPassword: <base64_password>
    DbRootPassword: <base64_password>
    DesignatePassword: <base64_password>
    GlancePassword: <base64_password>
    HeatAuthEncryptionKey: <base64_password>
    HeatPassword: <base64_password>
    IronicInspectorPassword: <base64_password>
    IronicPassword: <base64_password>
    ManilaPassword: <base64_password>
    MetadataSecret: <base64_password>
    NeutronPassword: <base64_password>
    NovaPassword: <base64_password>
    OctaviaPassword: <base64_password>
    PlacementPassword: <base64_password>
    SwiftPassword: <base64_password>
  kind: Secret
  metadata:
    name: osp-secret
    namespace: openstack
  type: Opaque

- Replace <base64_password> with a 32-character key that is base64 encoded.

  Note: The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

  You can use the following command to manually generate a base64 encoded password:

  $ echo -n <password> | base64

  Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

  $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

- Replace <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

  $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)

- Create the Secret CR in the cluster:

  $ oc create -f openstack_service_secret.yaml -n openstack

- Verify that the Secret CR is created:

  $ oc describe secret osp-secret -n openstack
2.2.1. Example Secret CR for secure access to the RHOSO service pods
You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.
If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:
$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF
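If you deploy multiple environments, generate a separate Secret for each one so that every namespace has its own credentials. A minimal sketch, assuming you regenerate the file with metadata.namespace set to osp-ns-1 before creating it in that namespace:

$ oc create -f openstack_service_secret.yaml -n osp-ns-1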
Chapter 3. Preparing RHOCP networks for multiple RHOSO environments
To prepare the Red Hat OpenShift Container Platform (RHOCP) cluster for your multiple Red Hat OpenStack Services on OpenShift (RHOSO) environments, you must configure the RHOCP networks on your RHOCP cluster.
3.1. Default Red Hat OpenStack Services on OpenShift networks
The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
- Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
- External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:
- To provide virtual machine instances with Internet access.
- To create flat provider networks that are separate from the control plane.
- To configure VLAN provider networks on a separate bridge from the control plane.
- To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
- Internal API network: used for internal communication between RHOSO components.
- Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
- Octavia controller network: (optional) used to connect Load-balancing service (octavia) controllers running in the control plane.
- Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

  Note: For more information about Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.
The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.
By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.
| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
|---|---|---|---|---|---|
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
The following table specifies the networks that establish connectivity to the fabric through eth2 and eth3, with different IP addresses per zone and rack, and a global bgpmainnet network that is used as the source for the traffic:

| Network name | Zone 0 | Zone 1 | Zone 2 |
|---|---|---|---|
| BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |
3.2. Preparing RHOCP for RHOSO networks for each namespace
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.
The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
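The following is a minimal sketch of such a policy, assuming a hypothetical node worker-1 whose physical control plane NIC is eno1, creating a bond named bond0 that the other network manifests can reference:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-bond-worker-1
spec:
  desiredState:
    interfaces:
    - name: bond0
      description: consistent alternative name for the control plane NIC
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        port:
        - eno1
  nodeSelector:
    kubernetes.io/hostname: worker-1

Create a similar policy for each worker node, listing that node's local NIC under port, so that every node exposes the same bond0 interface name.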
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
3.2.1. Preparing RHOCP with isolated network interfaces for each namespace
The NodeNetworkConfigurationPolicy custom resource (CR) is a cluster-scoped network configuration resource; therefore, it is not allocated to a namespace when created. Create a NodeNetworkConfigurationPolicy (nncp) CR for each namespace to configure the interfaces for the isolated networks for that namespace in the RHOCP cluster. The following procedure creates an nncp CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the nncp ranges you planned for that namespace.
Because NodeNetworkConfigurationPolicy CRs are cluster-scoped resources, you must ensure that their names do not conflict with one another.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:

  $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

- Discover the network configuration:

  $ oc get nns/<worker_node> -o yaml | more

  Replace <worker_node> with the name of a worker node retrieved in the previous step, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interfaces for each isolated network for each namespace on each worker node in the RHOCP cluster.

  In the following example, the nncp CR named osp-enp6s0-worker-1 configures the enp6s0 interface on worker node worker-1 for a single namespace, osp-ns-1, by using VLAN interfaces with IPv4 addresses for network isolation.

  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: osp-enp6s0-worker-1
  spec:
    desiredState:
      interfaces:
      - description: internalapi vlan interface osp-ns-1
        ipv4:
          address:
          - ip: 172.17.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: internalapi-1
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 20
          reorder-headers: true
      - description: storage vlan interface osp-ns-1
        ipv4:
          address:
          - ip: 172.18.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: storage-1
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 21
          reorder-headers: true
      - description: tenant vlan interface osp-ns-1
        ipv4:
          address:
          - ip: 172.19.0.10
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: tenant-1
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 22
          reorder-headers: true
      - description: ctlplane interface osp-ns-1
        mtu: 1500
        name: enp6s0
        state: up
        type: ethernet
      - bridge:
          options:
            stp:
              enabled: false
          port:
          - name: enp6s0
            vlan: {}
        description: linux-bridge over ctlplane interface osp-ns-1
        ipv4:
          address:
          - ip: 192.168.122.11
            prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        mtu: 1500
        name: ospbr-1
        state: up
        type: linux-bridge
      - description: octavia vlan interface osp-ns-1
        name: enp6s0.24
        state: up
        type: vlan
        vlan:
          base-iface: enp6s0
          id: 24
      - bridge:
          options:
            stp:
              enabled: false
          port:
          - name: enp6s0.24
        description: Configuring bridge octbr-1 osp-ns-1
        mtu: 1500
        name: octbr-1
        state: up
        type: linux-bridge
    nodeSelector:
      kubernetes.io/hostname: worker-1
      node-role.kubernetes.io/worker: ""

- Create the nncp CR in the cluster:

  $ oc apply -f openstack-nncp.yaml

- Verify that the nncp CR is created:

  $ oc get nncp -w
  NAME                  STATUS        REASON
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Available     SuccessfullyConfigured
3.2.2. Attaching service pods to the isolated networks for each namespace
The NetworkAttachmentDefinition custom resource (CR) is a namespace-scoped resource. Create a NetworkAttachmentDefinition (net-attach-def) CR for each isolated network to attach the service pods to the networks. The following procedure creates a net-attach-def CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the ipam ranges you planned for that namespace.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the internalapi, storage, ctlplane, and tenant networks of type macvlan, and a NetworkAttachmentDefinition resource for octavia, the load-balancing management network, of type bridge:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: internalapi
    namespace: osp-ns-1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "internalapi",
        "type": "macvlan",
        "master": "internalapi-1",
        "ipam": {
          "type": "whereabouts",
          "range": "172.17.0.0/24",
          "range_start": "172.17.0.30",
          "range_end": "172.17.0.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: ctlplane
    namespace: osp-ns-1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "ctlplane",
        "type": "macvlan",
        "master": "ospbr-1",
        "ipam": {
          "type": "whereabouts",
          "range": "192.168.122.0/24",
          "range_start": "192.168.122.30",
          "range_end": "192.168.122.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: storage
    namespace: osp-ns-1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "storage",
        "type": "macvlan",
        "master": "storage-1",
        "ipam": {
          "type": "whereabouts",
          "range": "172.18.0.0/24",
          "range_start": "172.18.0.30",
          "range_end": "172.18.0.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: tenant
    namespace: osp-ns-1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "tenant",
        "type": "macvlan",
        "master": "tenant-1",
        "ipam": {
          "type": "whereabouts",
          "range": "172.19.0.0/24",
          "range_start": "172.19.0.30",
          "range_end": "172.19.0.70"
        }
      }
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    labels:
      osp/net: octavia
    name: octavia
    namespace: osp-ns-1
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "octavia",
        "type": "bridge",
        "bridge": "octbr-1",
        "ipam": {
          "type": "whereabouts",
          "range": "172.23.0.0/24",
          "range_start": "172.23.0.30",
          "range_end": "172.23.0.70",
          "routes": [
            {
              "dst": "172.24.0.0/16",
              "gw": "172.23.0.150"
            }
          ]
        }
      }

  - metadata.namespace: The namespace where the services are deployed.
  - "master": The node interface name associated with the network, as defined in the nncp CR.
  - "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
  - "range_start" - "range_end": The IP address pool range, which must not overlap with the MetalLB IPAddressPool range or the NetConfig allocationRange.
  - The octavia network attachment is required to connect pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.

- Create the NetworkAttachmentDefinition CRs in the cluster:

  $ oc apply -f openstack-net-attach-def.yaml

- Verify that the NetworkAttachmentDefinition CRs are created:

  $ oc get net-attach-def -n osp-ns-1
3.2.3. Preparing RHOCP for RHOSO network VIPs for each namespace
The IPAddressPool and L2Advertisement custom resources (CRs) are namespace-scoped resources that must be created within the metallb-system namespace for each of the RHOSO environments. You must create an IPAddressPool CR to configure which IPs can be used as Virtual IPs (VIPs), and an L2Advertisement CR to define how the VIPs are announced. In Layer 2 mode, one node assumes the responsibility of advertising a service to the local network. The following procedure creates an IPAddressPool CR file and an L2Advertisement CR file for a single namespace. Repeat the procedure for each of the namespaces you created, using the MetalLB IPAddressPool ranges you planned for that namespace.
IPAddressPool and L2Advertisement CRs are MetalLB resources that must exist in the metallb-system namespace. Because the resources for every RHOSO environment share the same metallb-system namespace, you must ensure that their names do not conflict with one another.
Procedure
- Create an IPAddressPool CR file on your workstation, for example, ipaddresspools_osp_ns_1.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on each isolated network to specify the IP address ranges over which MetalLB has authority:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: internalapi-1
    namespace: metallb-system
  spec:
    addresses:
    - 172.17.0.80-172.17.0.90
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: ctlplane-1
    namespace: metallb-system
  spec:
    addresses:
    - 192.168.122.80-192.168.122.90
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: storage-1
    namespace: metallb-system
  spec:
    addresses:
    - 172.18.0.80-172.18.0.90
    autoAssign: true
    avoidBuggyIPs: false
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: tenant-1
    namespace: metallb-system
  spec:
    addresses:
    - 172.19.0.80-172.19.0.90
    autoAssign: true
    avoidBuggyIPs: false

  - spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

  For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

- Create the IPAddressPool CRs in the cluster:

  $ oc apply -f ipaddresspools_osp_ns_1.yaml

- Verify that the IPAddressPool CRs are created:

  $ oc describe -n metallb-system IPAddressPool

- Create an L2Advertisement CR file on your workstation, for example, l2advertisement_osp_ns_1.yaml.
- In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

  In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: internalapi-1
    namespace: metallb-system
  spec:
    ipAddressPools:
    - internalapi-1
    interfaces:
    - internalapi-1
    nodeSelectors:
    - matchLabels:
        node-role.kubernetes.io/worker: ""
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: ctlplane-1
    namespace: metallb-system
  spec:
    ipAddressPools:
    - ctlplane-1
    interfaces:
    - ospbr-1
    nodeSelectors:
    - matchLabels:
        node-role.kubernetes.io/worker: ""
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: storage-1
    namespace: metallb-system
  spec:
    ipAddressPools:
    - storage-1
    interfaces:
    - storage-1
    nodeSelectors:
    - matchLabels:
        node-role.kubernetes.io/worker: ""
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: tenant-1
    namespace: metallb-system
  spec:
    ipAddressPools:
    - tenant-1
    interfaces:
    - tenant-1
    nodeSelectors:
    - matchLabels:
        node-role.kubernetes.io/worker: ""

  - spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

  For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

- Create the L2Advertisement CRs in the cluster:

  $ oc apply -f l2advertisement_osp_ns_1.yaml

- Verify that the L2Advertisement CRs are created:

  $ oc get -n metallb-system L2Advertisement
  NAME            IPADDRESSPOOLS      IPADDRESSPOOL SELECTORS   INTERFACES
  ctlplane-1      ["ctlplane-1"]                                ["ospbr-1"]
  internalapi-1   ["internalapi-1"]                             ["internalapi-1"]
  storage-1       ["storage-1"]                                 ["storage-1"]
  tenant-1        ["tenant-1"]                                  ["tenant-1"]

- If your cluster has OVNKubernetes as the network back end, you must enable global forwarding so that MetalLB can work on a secondary network interface.

  Check the network back end used by your cluster:

  $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'

  If the back end is OVNKubernetes, run the following command to enable global IP forwarding:

  $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
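After you create the control plane services later in this guide, you can confirm that MetalLB assigned VIPs from the expected pools by listing the LoadBalancer services in the namespace, for example:

$ oc get svc -n osp-ns-1 | grep LoadBalancer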
3.3. Creating the data plane network for each namespace
To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment. The following procedure creates a NetConfig CR file for the osp-ns-1 namespace. Repeat the procedure for each of the namespaces you created, using the allocationRange values you planned for that namespace.
Procedure
- Create a file named netconfig_osp_ns_1.yaml on your workstation.
- Add the following configuration to netconfig_osp_ns_1.yaml to create the NetConfig CR:

  apiVersion: network.openstack.org/v1beta1
  kind: NetConfig
  metadata:
    name: openstacknetconfig
    namespace: osp-ns-1

- In the netconfig_osp_ns_1.yaml file, define the topology for each data plane network. The following example creates isolated networks for the data plane:

  spec:
    networks:
    - name: ctlplane
      dnsDomain: ctlplane-1.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 192.168.122.120
          start: 192.168.122.100
        - end: 192.168.122.200
          start: 192.168.122.150
        cidr: 192.168.122.0/24
        gateway: 192.168.122.1
    - name: internalapi
      dnsDomain: internalapi-1.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.17.0.250
          start: 172.17.0.100
        excludeAddresses:
        - 172.17.0.10
        - 172.17.0.12
        cidr: 172.17.0.0/24
        vlan: 20
    - name: external
      dnsDomain: external-1.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 10.0.0.250
          start: 10.0.0.100
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
    - name: storage
      dnsDomain: storage-1.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.18.0.250
          start: 172.18.0.100
        cidr: 172.18.0.0/24
        vlan: 21
    - name: tenant
      dnsDomain: tenant-1.example.com
      subnets:
      - name: subnet1
        allocationRanges:
        - end: 172.19.0.250
          start: 172.19.0.100
        cidr: 172.19.0.0/24
        vlan: 22

  - spec.networks.name: The name of the network, for example, ctlplane.
  - spec.networks.subnets: The IPv4 subnet specification.
  - spec.networks.subnets.name: The name of the subnet, for example, subnet1.
  - spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range or the net-attach-def ipam range.
  - spec.networks.subnets.excludeAddresses: An optional list of IP addresses from the allocation range that must not be used by data plane nodes.
  - spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.

- Save the netconfig_osp_ns_1.yaml definition file.
- Create the data plane network:

  $ oc create -f netconfig_osp_ns_1.yaml -n osp-ns-1

- To verify that the data plane network is created, view the openstacknetconfig resource:

  $ oc get netconfig/openstacknetconfig -n osp-ns-1

  If you see errors, check the underlying network-attachment-definition and node network configuration policies:

  $ oc get network-attachment-definitions -n osp-ns-1
  $ oc get nncp
Chapter 4. Creating a RHOSO environment for a namespace
You create the control plane and the data plane for each namespace by following the guidance in Deploying Red Hat OpenStack Services on OpenShift, and by updating the network configuration for each namespace.
4.1. Creating a control plane for a namespace
You create an OpenStackControlPlane custom resource (CR) for each namespace and configure the load balancers with the networks for that namespace.
Procedure
- Create an OpenStackControlPlane CR for the osp-ns-1 namespace by following the Creating the control plane procedure in Deploying Red Hat OpenStack Services on OpenShift.
- Locate the load balancer configuration for each service and update it with the networks for that namespace:

  <service>:
    ...
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi-1
              metallb.universe.tf/allow-shared-ip: internalapi-1
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer

  - metallb.universe.tf/address-pool: The IPAddressPool from which the IP address is assigned to load balance the service.
  - metallb.universe.tf/allow-shared-ip: The IPAddressPool from which a shared IP address is used for load balancing.

    Note: You cannot share a VIP with services that use the same port, such as the multiple instances of rabbitmq.

  - metallb.universe.tf/loadBalancerIPs: Assigns a specific VIP from the IPAddressPool range for use when load balancing the service.

- Update the NIC mappings for the ovn service to ensure that the OVNController resources for the namespace are segregated from the other namespaces, by configuring the nicMappings for the datacentre network to correspond to the dedicated interface for the namespace:

  ...
  ovn:
    enabled: true
    template:
      ovnController:
        nicMappings:
          datacentre: ospbr-1
      ovnDBCluster:
        ovndbcluster-nb:

- Update the control plane:

  $ oc apply -f osp_ns_1_control_plane.yaml -n osp-ns-1

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n osp-ns-1
  NAME                 STATUS    MESSAGE
  control-plane-ns-1   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Verify that the service pods are running on the correct worker nodes.
4.2. Creating a data plane for a namespace
You create an OpenStackDataPlaneNodeSet custom resource (CR) and an OpenStackDataPlaneDeployment CR for each namespace.
Procedure
- Create an OpenStackDataPlaneNodeSet CR for the osp-ns-1 namespace by following the Creating the data plane procedures in Deploying Red Hat OpenStack Services on OpenShift.
- Locate the nodes section and update the hostName and the ansibleVars.fqdn_internal_api fields with unique names for each node that indicate the namespace the node belongs to:

  nodes:
    edpm-compute-0:
      hostName: edpm-1-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-1-compute-0.example.com

- Update the IP addresses of the networks defined for each node that is defined for the namespace:

  nodes:
    edpm-compute-0:
      ...
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
      ...

  - fixedIP: Set to the IP address of the network for the osp-ns-1 namespace.
- Save the updated OpenStackDataPlaneNodeSet CR definition file.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:

  $ oc apply -f osp_ns_1_data_plane.yaml -n osp-ns-1

- Verify that the data plane resource has been updated by confirming that the status is SetupReady:

  $ oc wait openstackdataplanenodeset data-plane-ns-1 -n osp-ns-1 --for condition=SetupReady --timeout=10m

  When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

- Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: <node_set_deployment_name>
Replace
<node_set_deployment_name>with the name of theOpenStackDataPlaneDeploymentCR. The name must be unique, must consist of lower case alphanumeric characters,-(hyphen) or.(period), and must start and end with an alphanumeric character.
TipGive the definition file and the
OpenStackDataPlaneDeploymentCR unique and descriptive names that indicate the purpose of the modified node set.-
Replace
Add the
OpenStackDataPlaneNodeSetCR that you modified:spec: nodeSets: - <nodeSet_name>-
Save the
OpenStackDataPlaneDeploymentCR deployment file. Deploy the modified
OpenStackDataPlaneNodeSetCR:$ oc create -f osp_ns_1_data_plane_deploy.yaml -n openstackYou can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w $ oc logs -l app=openstackansibleee -f --max-log-requests 10If the
oc logscommand returns an error similar to the following error, increase the--max-log-requestsvalue:error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limitVerify that the modified
OpenStackDataPlaneNodeSetCR is deployed:$ oc get openstackdataplanedeployment -n openstack NAME STATUS MESSAGE data-plane-ns-1 True Setup Complete $ oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE data-plane-ns-1 True NodeSet ReadyIf the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
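When all of your environments are deployed, you can review the control planes and node sets across every namespace from one place:

$ oc get openstackcontrolplane -A
$ oc get openstackdataplanenodeset -A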