Chapter 3. Migrating databases to the control plane
To begin creating the control plane, enable back-end services and import the databases from your original Red Hat OpenStack Platform 17.1 deployment.
3.1. Retrieving topology-specific service configuration
Before you migrate your databases to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane, retrieve the topology-specific service configuration from your Red Hat OpenStack Platform (RHOSP) environment. You need this configuration for the following reasons:
- To check your current database for inaccuracies
- To ensure that you have the data you need before the migration
- To compare your RHOSP database with the adopted RHOSO database
Prerequisites
Define the following shell variables. Replace the example values with values that are correct for your environment:
Note: If you use IPv6, define the SOURCE_MARIADB_IP value without brackets. For example, SOURCE_MARIADB_IP=fd00:bbbb::2.

$ PASSWORD_FILE="$HOME/overcloud-passwords.yaml"
$ MARIADB_IMAGE=registry.redhat.io/rhoso/openstack-mariadb-rhel9:18.0
$ declare -A TRIPLEO_PASSWORDS
$ CELLS="default cell1 cell2"
$ for CELL in $(echo $CELLS); do
>   TRIPLEO_PASSWORDS[$CELL]="$PASSWORD_FILE"
> done
$ declare -A SOURCE_DB_ROOT_PASSWORD
$ for CELL in $(echo $CELLS); do
>   SOURCE_DB_ROOT_PASSWORD[$CELL]=$(cat ${TRIPLEO_PASSWORDS[$CELL]} | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
> done

Define the following shell variables. Replace the example values with values that are correct for your environment:
$ MARIADB_CLIENT_ANNOTATIONS='--annotations=k8s.v1.cni.cncf.io/networks=internalapi'
$ MARIADB_RUN_OVERRIDES="$MARIADB_CLIENT_ANNOTATIONS"

Note: For environments that are enabled with border gateway protocol (BGP), the network annotation must include a default route to enable proper routing. Use the following instead:

$ MARIADB_CLIENT_ANNOTATIONS='--annotations=k8s.v1.cni.cncf.io/networks=[{"name":"internalapi","namespace":"openstack","default-route":["<172.17.0.1>"]}]'
$ MARIADB_RUN_OVERRIDES="$MARIADB_CLIENT_ANNOTATIONS"

where:
- <172.17.0.1>: Replace with the gateway IP address of your internalapi network.
$ CONTROLLER1_SSH="ssh -i <path to SSH key> root@<node IP>"
$ declare -A SOURCE_MARIADB_IP
$ SOURCE_MARIADB_IP[default]=<galera cluster VIP>
$ SOURCE_MARIADB_IP[cell1]=<galera cell1 cluster VIP>
$ SOURCE_MARIADB_IP[cell2]=<galera cell2 cluster VIP>
# ...

- Provide CONTROLLER1_SSH with SSH connection details for any non-cell Controller node of the source director cloud.
- For each cell that is defined in CELLS, replace SOURCE_MARIADB_IP[*]= ... with a record that maps the cell name to the VIP address of its MariaDB Galera cluster in the source director cloud.

To get the values for SOURCE_MARIADB_IP, query the puppet-generated configurations on a Controller and CellController node:

$ sudo grep -rI 'listen mysql' -A10 /var/lib/config-data/puppet-generated/ | grep bind

Note: The source cloud always uses the same password for cell databases. For that reason, the same passwords file is used for all cell stacks. However, a split-stack topology allows a different passwords file for each stack.
Procedure
If your source RHOSP environment uses border gateway protocol (BGP) for Layer 3 networking, create a BGPConfiguration custom resource to enable BGP routing:

$ cat << EOF > bgp.yaml
apiVersion: network.openstack.org/v1beta1
kind: BGPConfiguration
metadata:
  name: bgpconfiguration
  namespace: openstack
spec: {}
EOF
$ oc apply -f bgp.yaml

The BGPConfiguration resource enables BGP route advertisement between the Red Hat OpenShift Container Platform (RHOCP) cluster and the source cloud, which is necessary for the mariadb-client pod to reach the source MariaDB cluster.

Create a persistent mariadb-client pod for database operations:

$ oc delete pod mariadb-client || true
$ oc run mariadb-client ${MARIADB_RUN_OVERRIDES} -q --image ${MARIADB_IMAGE} --restart=Never -- /usr/bin/sleep infinity

This command creates a long-running pod that is used for all subsequent database operations, which avoids creating a temporary pod for each command.
Wait for the mariadb-client pod to be able to reach the source MariaDB:

$ oc rsh mariadb-client mysql -rsh "${SOURCE_MARIADB_IP[default]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[default]}" -e 'select 1;'

Note: For BGP-enabled environments, this command might take a few moments to succeed while BGP routes are advertised and propagated through the network. The mariadb-client pod must receive the route to the source MariaDB IP address through BGP before it can establish a connection. If the command fails, wait a few seconds and retry. The connection should succeed once the BGP route advertisement is complete. For IPv6 environments, this command might also take a few moments to succeed while the IPv6 network stack completes its setup; if the command fails, wait a few seconds and retry. For standard deployments, such as non-BGP or IPv4 deployments, this command should succeed immediately.
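If you script this step, a simple retry loop can wait for connectivity instead of retrying by hand. The following is a minimal sketch that only uses commands and variables already defined in this procedure; the 30-attempt limit and 5-second delay are arbitrary example values:

$ for attempt in $(seq 1 30); do
>   # Succeeds as soon as the route to the source MariaDB is available
>   if oc rsh mariadb-client mysql -rsh "${SOURCE_MARIADB_IP[default]}" \
>        -uroot -p"${SOURCE_DB_ROOT_PASSWORD[default]}" -e 'select 1;' >/dev/null 2>&1; then
>     echo "Source MariaDB is reachable"; break
>   fi
>   echo "Waiting for connectivity (attempt $attempt)"; sleep 5
> done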
Export the shell variables for the following outputs and test the connection to the RHOSP database:

$ unset PULL_OPENSTACK_CONFIGURATION_DATABASES
$ declare -xA PULL_OPENSTACK_CONFIGURATION_DATABASES
$ for CELL in $(echo $CELLS); do
>   PULL_OPENSTACK_CONFIGURATION_DATABASES[$CELL]=$(oc rsh mariadb-client \
>     mysql -rsh "${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" -e 'SHOW databases;')
> done

If the connection is successful, the command produces no output.
Note: The nova, nova_api, and nova_cell0 databases are included in the same database host for the main overcloud Orchestration service (heat) stack. Additional cells use the nova database of their local Galera clusters.

Run mysqlcheck on the RHOSP database to check for inaccuracies:

$ unset PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK
$ declare -xA PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK
$ run_mysqlcheck() {
>   PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK=$(oc rsh mariadb-client \
>     mysqlcheck --all-databases -h ${SOURCE_MARIADB_IP[$CELL]} -u root -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" | grep -v OK)
> }
$ for CELL in $(echo $CELLS); do
>   run_mysqlcheck $CELL
> done
$ if [ "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" != "" ]; then
>   for CELL in $(echo $CELLS); do
>     MYSQL_UPGRADE=$(oc rsh mariadb-client \
>       mysql_upgrade --skip-version-check -v -h ${SOURCE_MARIADB_IP[$CELL]} -u root -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}")
>     # rerun mysqlcheck to check if the problem is resolved
>     run_mysqlcheck
>   done
> fi

Get the Compute service (nova) cell mappings:

$ export PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS=$(oc rsh mariadb-client \
  mysql -rsh "${SOURCE_MARIADB_IP[default]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[default]}" nova_api -e \
  'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')

Get the hostnames of the registered Compute services:
$ unset PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES
$ declare -xA PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES
$ for CELL in $(echo $CELLS); do
>   PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[$CELL]=$(oc rsh mariadb-client \
>     mysql -rsh "${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" -e \
>     "select host from nova.services where services.binary='nova-compute' and deleted=0;")
> done

Get the list of the mapped Compute service cells:

$ export PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS=$($CONTROLLER1_SSH sudo podman exec -it nova_conductor nova-manage cell_v2 list_cells)

Store the exported variables for future use:
$ unset SRIOV_AGENTS
$ declare -xA SRIOV_AGENTS
$ for CELL in $(echo $CELLS); do
>   RCELL=$CELL
>   [ "$CELL" = "$DEFAULT_CELL_NAME" ] && RCELL=default
>   cat > ~/.source_cloud_exported_variables_$CELL << EOF
> unset PULL_OPENSTACK_CONFIGURATION_DATABASES
> unset PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK
> unset PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES
> declare -xA PULL_OPENSTACK_CONFIGURATION_DATABASES
> declare -xA PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK
> declare -xA PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES
> PULL_OPENSTACK_CONFIGURATION_DATABASES[$CELL]="$(oc rsh mariadb-client \
>   mysql -rsh ${SOURCE_MARIADB_IP[$RCELL]} -uroot -p${SOURCE_DB_ROOT_PASSWORD[$RCELL]} -e 'SHOW databases;')"
> PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK[$CELL]="$(oc rsh mariadb-client \
>   mysqlcheck --all-databases -h ${SOURCE_MARIADB_IP[$RCELL]} -u root -p${SOURCE_DB_ROOT_PASSWORD[$RCELL]} | grep -v OK)"
> PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[$CELL]="$(oc rsh mariadb-client \
>   mysql -rsh ${SOURCE_MARIADB_IP[$RCELL]} -uroot -p${SOURCE_DB_ROOT_PASSWORD[$RCELL]} -e \
>   "select host from nova.services where services.binary='nova-compute' and deleted=0;")"
> if [ "$RCELL" = "default" ]; then
>   PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS="$(oc rsh mariadb-client \
>     mysql -rsh ${SOURCE_MARIADB_IP[$RCELL]} -uroot -p${SOURCE_DB_ROOT_PASSWORD[$RCELL]} nova_api -e \
>     'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')"
>   PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS="$($CONTROLLER1_SSH sudo podman exec -it nova_conductor nova-manage cell_v2 list_cells)"
> fi
> EOF
> done
$ chmod 0600 ~/.source_cloud_exported_variables*

- declare -xA SRIOV_AGENTS gets the neutron-sriov-nic-agent configuration to use for the data plane adoption if neutron-sriov-nic-agent agents are running in your RHOSP deployment.
Clean up the mariadb-client pod:

$ oc delete pod mariadb-client

The mariadb-client pod is no longer needed after all the data is exported and stored.
Next steps
This configuration and the exported values are required later, during the data plane adoption post-checks. After the RHOSP control plane services are shut down, if any of the exported values are lost, re-running the export command fails because the control plane services are no longer running on the source cloud, and the data cannot be retrieved. To avoid data loss, preserve the exported values in an environment file before shutting down the control plane services.
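For example, you might copy the exported variable files to a location that survives the control plane shutdown. This is a minimal sketch, and the backup directory is an arbitrary example path:

$ mkdir -p ~/adoption-backup
$ cp ~/.source_cloud_exported_variables_* ~/adoption-backup/
$ # later, reload the values in a new shell session:
$ for f in ~/adoption-backup/.source_cloud_exported_variables_*; do . "$f"; done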
3.2. Deploying back-end services
Create the OpenStackControlPlane custom resource (CR) with the basic back-end services deployed, and disable all the Red Hat OpenStack Platform (RHOSP) services. This CR is the foundation of the control plane.
Prerequisites
- The cloud that you want to adopt is running, and it is on RHOSP 17.1.4 or later.
- All control plane and data plane hosts of the source cloud are running, and continue to run throughout the adoption procedure.
- The openstack-operator is deployed, but OpenStackControlPlane is not deployed.
- Install the OpenStack Operators. For more information, see Installing and preparing the OpenStack Operator in Deploying Red Hat OpenStack Services on OpenShift.
- If you enabled TLS everywhere (TLS-e) on the RHOSP environment, you must copy the tls root CA from the RHOSP environment to the rootca-internal issuer.
- There are free PVs available for Galera and RabbitMQ.
Set the desired admin password for the control plane deployment. This can be the admin password from your original deployment or a different password:

ADMIN_PASSWORD=SomePassword

To use the existing RHOSP deployment password:

declare -A TRIPLEO_PASSWORDS
TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"
ADMIN_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' AdminPassword:' | awk -F ': ' '{ print $2; }')

Set the service password variables to match the original deployment. Database passwords can differ in the control plane environment, but you must synchronize the service account passwords. For example, in developer environments with director Standalone, you can extract the passwords as follows:

AODH_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' AodhPassword:' | awk -F ': ' '{ print $2; }')
BARBICAN_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' BarbicanPassword:' | awk -F ': ' '{ print $2; }')
CEILOMETER_METERING_SECRET=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' CeilometerMeteringSecret:' | awk -F ': ' '{ print $2; }')
CEILOMETER_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' CeilometerPassword:' | awk -F ': ' '{ print $2; }')
CINDER_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' CinderPassword:' | awk -F ': ' '{ print $2; }')
GLANCE_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' GlancePassword:' | awk -F ': ' '{ print $2; }')
HEAT_AUTH_ENCRYPTION_KEY=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' HeatAuthEncryptionKey:' | awk -F ': ' '{ print $2; }')
HEAT_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' HeatPassword:' | awk -F ': ' '{ print $2; }')
HEAT_STACK_DOMAIN_ADMIN_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' HeatStackDomainAdminPassword:' | awk -F ': ' '{ print $2; }')
IRONIC_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' IronicPassword:' | awk -F ': ' '{ print $2; }')
MANILA_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' ManilaPassword:' | awk -F ': ' '{ print $2; }')
NEUTRON_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' NeutronPassword:' | awk -F ': ' '{ print $2; }')
NOVA_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' NovaPassword:' | awk -F ': ' '{ print $2; }')
OCTAVIA_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' OctaviaPassword:' | awk -F ': ' '{ print $2; }')
PLACEMENT_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' PlacementPassword:' | awk -F ': ' '{ print $2; }')
SWIFT_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' SwiftPassword:' | awk -F ': ' '{ print $2; }')
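Because every variable above uses the same grep and awk pattern, you can optionally consolidate the extraction into a small helper function. This is a sketch rather than part of the official procedure, and the function name is an arbitrary example:

tripleo_password() {
  # Print the value of a key such as 'AodhPassword' from the passwords file
  grep " ${1}:" "${TRIPLEO_PASSWORDS[default]}" | awk -F ': ' '{ print $2; }'
}
AODH_PASSWORD=$(tripleo_password AodhPassword)
BARBICAN_PASSWORD=$(tripleo_password BarbicanPassword)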
Procedure
Ensure that you are using the Red Hat OpenShift Container Platform (RHOCP) namespace where you want the control plane to be deployed:

$ oc project openstack

Create the RHOSP secret. For more information, see Providing secure access to the Red Hat OpenStack Services on OpenShift services in Deploying Red Hat OpenStack Services on OpenShift.
If the $ADMIN_PASSWORD is different than the password you set in osp-secret, amend the AdminPassword key in the osp-secret:

$ oc set data secret/osp-secret "AdminPassword=$ADMIN_PASSWORD"

Set service account passwords in osp-secret to match the service account passwords from the original deployment:

$ oc set data secret/osp-secret "AodhPassword=$AODH_PASSWORD"
$ oc set data secret/osp-secret "BarbicanPassword=$BARBICAN_PASSWORD"
$ oc set data secret/osp-secret "CeilometerPassword=$CEILOMETER_PASSWORD"
$ oc set data secret/osp-secret "CinderPassword=$CINDER_PASSWORD"
$ oc set data secret/osp-secret "GlancePassword=$GLANCE_PASSWORD"
$ oc set data secret/osp-secret "HeatAuthEncryptionKey=$HEAT_AUTH_ENCRYPTION_KEY"
$ oc set data secret/osp-secret "HeatPassword=$HEAT_PASSWORD"
$ oc set data secret/osp-secret "HeatStackDomainAdminPassword=$HEAT_STACK_DOMAIN_ADMIN_PASSWORD"
$ oc set data secret/osp-secret "IronicPassword=$IRONIC_PASSWORD"
$ oc set data secret/osp-secret "IronicInspectorPassword=$IRONIC_PASSWORD"
$ oc set data secret/osp-secret "ManilaPassword=$MANILA_PASSWORD"
$ oc set data secret/osp-secret "MetadataSecret=$METADATA_SECRET"
$ oc set data secret/osp-secret "NeutronPassword=$NEUTRON_PASSWORD"
$ oc set data secret/osp-secret "NovaPassword=$NOVA_PASSWORD"
$ oc set data secret/osp-secret "OctaviaPassword=$OCTAVIA_PASSWORD"
$ oc set data secret/osp-secret "PlacementPassword=$PLACEMENT_PASSWORD"
$ oc set data secret/osp-secret "SwiftPassword=$SWIFT_PASSWORD"
Deploy the OpenStackControlPlane CR. Ensure that you only enable the DNS, Galera, Memcached, and RabbitMQ services. All other services must be disabled:

$ oc apply -f - <<EOF
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: <storage_class>
  barbican:
    enabled: false
    template:
      barbicanAPI: {}
      barbicanWorker: {}
      barbicanKeystoneListener: {}
  cinder:
    enabled: false
    template:
      cinderAPI: {}
      cinderScheduler: {}
      cinderBackup: {}
      cinderVolumes: {}
  dns:
    template:
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP>
          spec:
            type: LoadBalancer
      options:
      - key: server
        values:
        - <DNS_server_IP>
      replicas: 1
  glance:
    enabled: false
    template:
      glanceAPIs: {}
  heat:
    enabled: false
    template: {}
  horizon:
    enabled: false
    template: {}
  ironic:
    enabled: false
    template:
      ironicConductors: []
  keystone:
    enabled: false
    template: {}
  manila:
    enabled: false
    template:
      manilaAPI: {}
      manilaScheduler: {}
      manilaShares: {}
  galera:
    enabled: true
    templates:
      openstack:
        secret: osp-secret
        replicas: 3
        storageRequest: 5G
      openstack-cell1:
        secret: osp-secret
        replicas: 3
        storageRequest: 5G
      openstack-cell2:
        secret: osp-secret
        replicas: 1
        storageRequest: 5G
      openstack-cell3:
        secret: osp-secret
        replicas: 1
        storageRequest: 5G
  memcached:
    enabled: true
    templates:
      memcached:
        replicas: 3
  neutron:
    enabled: false
    template: {}
  nova:
    enabled: false
    template: {}
  ovn:
    enabled: false
    template:
      ovnController:
        networkAttachment: tenant
      ovnNorthd:
        replicas: 0
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          networkAttachment: internalapi
  placement:
    enabled: false
    template: {}
  rabbitmq:
    templates:
      rabbitmq:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP>
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        persistence:
          storage: 10Gi
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP>
            spec:
              type: LoadBalancer
      rabbitmq-cell2:
        persistence:
          storage: 10Gi
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP>
            spec:
              type: LoadBalancer
      rabbitmq-cell3:
        persistence:
          storage: 10Gi
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP>
            spec:
              type: LoadBalancer
  telemetry:
    enabled: false
  tls:
    podLevel:
      enabled: false
    ingress:
      enabled: false
  swift:
    enabled: false
    template:
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 0
      swiftProxy:
        replicas: 2
EOF

where:
- <DNS_server_IP>: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

  Note: This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.

- <storage_class>: Specifies an existing storage class in your RHOCP cluster.

- <loadBalancer_IP>: Specifies the LoadBalancer IP address. If you use IPv6, change the load balancer IPs to the IPs in your environment, for example:

  ...
  metallb.universe.tf/allow-shared-ip: ctlplane
  metallb.universe.tf/loadBalancerIPs: fd00:aaaa::80
  ...
  metallb.universe.tf/address-pool: internalapi
  metallb.universe.tf/loadBalancerIPs: fd00:bbbb::85
  ...
  metallb.universe.tf/address-pool: internalapi
  metallb.universe.tf/loadBalancerIPs: fd00:bbbb::86
- The galera openstack-cell<X> templates, together with the corresponding rabbitmq-cell<X> templates, provide the required infrastructure database and messaging services for the Compute cells, for example, cell1, cell2, and cell3. Adjust the values of fields such as replicas, storage, or storageRequest for each Compute cell as needed.

- spec.tls specifies whether TLS-e is enabled. If you enabled TLS-e in your RHOSP environment, set tls to the following:

  spec:
    ...
    tls:
      podLevel:
        enabled: true
        internal:
          ca:
            customIssuer: rootca-internal
        libvirt:
          ca:
            customIssuer: rootca-internal
        ovn:
          ca:
            customIssuer: rootca-internal
      ingress:
        ca:
          customIssuer: rootca-internal
        enabled: true
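Before you apply the CR, you can optionally confirm that the prerequisites are in place. This is a minimal sketch, assuming cert-manager is installed and that the rootca-internal issuer lives in the openstack namespace; it is not part of the official procedure:

$ oc get pv | grep Available                  # free PVs for Galera and RabbitMQ
$ oc get issuer rootca-internal -n openstack  # only needed for TLS-e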
Verification
Verify that the Galera and RabbitMQ status is Running for all defined cells:

$ RENAMED_CELLS="cell1 cell2 cell3"
$ oc get pod openstack-galera-0 -o jsonpath='{.status.phase}{"\n"}'
$ oc get pod rabbitmq-server-0 -o jsonpath='{.status.phase}{"\n"}'
$ for CELL in $(echo $RENAMED_CELLS); do
>   oc get pod openstack-$CELL-galera-0 -o jsonpath='{.status.phase}{"\n"}'
>   oc get pod rabbitmq-$CELL-server-0 -o jsonpath='{.status.phase}{"\n"}'
> done

The given cell names are referred to later by the RENAMED_CELLS environment variable. During the database migration procedure, the Nova cells are renamed; RENAMED_CELLS represents the new cell names that are used in the RHOSO deployment.

Ensure that the statuses of all the RabbitMQ and Galera CRs are Setup complete:

$ oc get Rabbitmqs,Galera
NAME                                             STATUS   MESSAGE
rabbitmq.rabbitmq.openstack.org/rabbitmq         True     Setup complete
rabbitmq.rabbitmq.openstack.org/rabbitmq-cell1   True     Setup complete

NAME                                           READY   MESSAGE
galera.mariadb.openstack.org/openstack         True    Setup complete
galera.mariadb.openstack.org/openstack-cell1   True    Setup complete

Verify that the OpenStackControlPlane CR is waiting for deployment of the openstackclient pod:

$ oc get OpenStackControlPlane openstack
NAME        STATUS    MESSAGE
openstack   Unknown   OpenStackControlPlane Client not started
3.3. Configuring a Red Hat Ceph Storage back end
If your Red Hat OpenStack Platform (RHOSP) 17.1 deployment uses a Red Hat Ceph Storage back end for any service, such as Image Service (glance), Block Storage service (cinder), Compute service (nova), or Shared File Systems service (manila), you must configure the custom resources (CRs) to use the same back end in the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment.
To run ceph commands, you must use SSH to connect to a Red Hat Ceph Storage node and run sudo cephadm shell. This generates a Ceph orchestrator container that enables you to run administrative commands against the Red Hat Ceph Storage cluster. If you deployed the Red Hat Ceph Storage cluster by using director, you can launch the cephadm shell from an RHOSP Controller node.
Prerequisites
- The OpenStackControlPlane CR is created.
- If your RHOSP 17.1 deployment uses the Shared File Systems service, the openstack keyring is updated. Modify the openstack user so that you can use it across all RHOSP services:

  ceph auth caps client.openstack \
    mgr 'allow *' \
    mon 'allow r, profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data'

  Using the same user across all services makes it simpler to create a common Red Hat Ceph Storage secret that includes the keyring and ceph.conf file and propagate the secret to all the services that need it.

- The following shell variables are defined. Replace the following example values with values that are correct for your environment:

  CEPH_SSH="ssh -i <path to SSH key> root@<node IP>"
  CEPH_KEY=$($CEPH_SSH "cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0")
  CEPH_CONF=$($CEPH_SSH "cat /etc/ceph/ceph.conf | base64 -w 0")
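You can optionally confirm that the variables contain valid base64-encoded content before you create the secret. This quick check is a sketch and not part of the official procedure:

$ echo "$CEPH_KEY" | base64 -d | head -n1            # expect: [client.openstack]
$ echo "$CEPH_CONF" | base64 -d | grep -E 'fsid|mon_host'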
Procedure
Create the ceph-conf-files secret that includes the Red Hat Ceph Storage configuration:

$ oc apply -f - <<EOF
apiVersion: v1
data:
  ceph.client.openstack.keyring: $CEPH_KEY
  ceph.conf: $CEPH_CONF
kind: Secret
metadata:
  name: ceph-conf-files
type: Opaque
EOF

The content of the secret should be similar to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-conf-files
stringData:
  ceph.client.openstack.keyring: |
    [client.openstack]
    key = <secret key>
    caps mgr = "allow *"
    caps mon = "allow r, profile rbd"
    caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data"
  ceph.conf: |
    [global]
    fsid = 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
    mon_host = 10.1.1.2,10.1.1.3,10.1.1.4

where:

- mon_host specifies the addresses of the cluster's monitors. If you use IPv6, use brackets for the mon_host value. For example:

  mon_host = [v2:[fd00:cccc::100]:3300/0,v1:[fd00:cccc::100]:6789/0]

Note: For Distributed Compute Node (DCN) deployments with multiple Red Hat Ceph Storage clusters, create one secret per site. Each secret contains only the keys that the respective site requires. For more information on the rationale and key distribution pattern, see Red Hat Ceph Storage migration for Distributed Compute Node deployments.
The Red Hat Ceph Storage configuration files for all clusters are available on the RHOSP Controller node at either /var/lib/tripleo-config/ceph/ or /etc/ceph. Copy them locally and create the per-site secrets:

$ CEPH_SSH="ssh root@<controller>"
$ CEPH_DIR="/var/lib/tripleo-config/ceph"
$ TMPDIR=$(mktemp -d)
$ $CEPH_SSH "cat ${CEPH_DIR}/central.conf" > ${TMPDIR}/central.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/central.client.openstack.keyring" > ${TMPDIR}/central.client.openstack.keyring
$ $CEPH_SSH "cat ${CEPH_DIR}/dcn1.conf" > ${TMPDIR}/dcn1.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/dcn1.client.openstack.keyring" > ${TMPDIR}/dcn1.client.openstack.keyring
$ $CEPH_SSH "cat ${CEPH_DIR}/dcn2.conf" > ${TMPDIR}/dcn2.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/dcn2.client.openstack.keyring" > ${TMPDIR}/dcn2.client.openstack.keyring

# Central site secret: contains all clusters
$ oc create secret generic ceph-conf-central \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn1.conf \
  --from-file=${TMPDIR}/dcn1.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn2.conf \
  --from-file=${TMPDIR}/dcn2.client.openstack.keyring \
  -n openstack

# DCN1 edge site secret: central + local only
$ oc create secret generic ceph-conf-dcn1 \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn1.conf \
  --from-file=${TMPDIR}/dcn1.client.openstack.keyring \
  -n openstack

# DCN2 edge site secret: central + local only
$ oc create secret generic ceph-conf-dcn2 \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn2.conf \
  --from-file=${TMPDIR}/dcn2.client.openstack.keyring \
  -n openstack

$ rm -rf ${TMPDIR}

Repeat for each additional edge site. Each edge site secret must include the central cluster files and only the files for that edge site's local cluster.
When configuring extraMounts on the OpenStackControlPlane, use propagation labels matching the service instance names (for example, central, dcn1, dcn2) so that each pod mounts only its site-specific secret.
In your OpenStackControlPlane CR, inject the Red Hat Ceph Storage configuration into the RHOSP service pods by using extraMounts. For a single-cluster deployment, propagate one secret to all services:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - CinderBackup
            - GlanceAPI
            - ManilaShare
          extraVolType: Ceph
          volumes:
            - name: ceph
              projected:
                sources:
                  - secret:
                      name: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true
'

For a DCN deployment with per-site secrets, use propagation labels matching each service instance name so that each pod receives only the keys for its site:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - extraVolType: Ceph
          propagation:
            - central
            - CinderBackup
            - ManilaShare
          volumes:
            - name: ceph-central
              projected:
                sources:
                  - secret:
                      name: ceph-conf-central
          mounts:
            - name: ceph-central
              mountPath: "/etc/ceph"
              readOnly: true
        - extraVolType: Ceph
          propagation:
            - dcn1
          volumes:
            - name: ceph-dcn1
              projected:
                sources:
                  - secret:
                      name: ceph-conf-dcn1
          mounts:
            - name: ceph-dcn1
              mountPath: "/etc/ceph"
              readOnly: true
        - extraVolType: Ceph
          propagation:
            - dcn2
          volumes:
            - name: ceph-dcn2
              projected:
                sources:
                  - secret:
                      name: ceph-conf-dcn2
          mounts:
            - name: ceph-dcn2
              mountPath: "/etc/ceph"
              readOnly: true
'
centralmatches the Image service and Block Storage service pod instances namedcentral. TheCinderBackupandManilaSharelabels are service-type propagation and apply to all Block Storage service backup and Shared File Systems service pods, which run only at the central site. Replacecentral,dcn1, anddcn2with the instance names used in your deployment.
3.4. Stopping Red Hat OpenStack Platform services
Before you start the Red Hat OpenStack Services on OpenShift (RHOSO) adoption, you must stop the Red Hat OpenStack Platform (RHOSP) services to avoid inconsistencies in the data that you migrate for the data plane adoption. Inconsistencies are caused by resource changes after the database is copied to the new deployment.
You should not stop the infrastructure management services yet, such as:
- Database
- RabbitMQ
- HAProxy Load Balancer
- Ceph-nfs
- Compute service
- Containerized modular libvirt daemons
- Object Storage service (swift) back-end services
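You can optionally spot-check that these services remain up while you stop the control plane services. This is a minimal sketch, assuming the CONTROLLER1_SSH variable from the prerequisites below; the Pacemaker resource names are typical for director deployments and might differ in yours:

$ $CONTROLLER1_SSH sudo pcs status | grep -E 'galera|rabbitmq|haproxy'
$ $CONTROLLER1_SSH sudo systemctl list-units 'tripleo_swift*' --no-legend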
Prerequisites
Ensure that there are no long-running tasks that require the services that you plan to stop, such as instance live migrations, volume migrations, volume creation, backup and restore, attaching, detaching, and other similar operations:

$ openstack server list --all-projects -c ID -c Status | grep -E '\| .+ing \|'
$ openstack volume list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack volume backup list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack share list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack image list -c ID -c Status | grep -E '\| .+ing \|'

Collect the services topology-specific configuration. For more information, see Retrieving topology-specific service configuration.
Define the following shell variables. The values are examples and refer to a single node standalone director deployment. Replace these example values with values that are correct for your environment, and specify the IP addresses of all Controller nodes:

CONTROLLER1_SSH="ssh -i <path to SSH key> root@<controller-1 IP>"
CONTROLLER2_SSH="ssh -i <path to SSH key> root@<controller-2 IP>"
CONTROLLER3_SSH="ssh -i <path to SSH key> root@<controller-3 IP>"
# ...

- <path to SSH key> defines the path to your SSH key.
- <controller-<X> IP> defines the IP address of each Controller node.
Procedure
If your deployment enables CephFS through NFS as a back end for the Shared File Systems service (manila), remove the following Pacemaker ordering and co-location constraints that govern the Virtual IP address of the ceph-nfs service and the manila-share service:

# check the co-location and ordering constraints concerning "manila-share"
sudo pcs constraint list --full

# remove these constraints
sudo pcs constraint remove colocation-openstack-manila-share-ceph-nfs-INFINITY
sudo pcs constraint remove order-ceph-nfs-openstack-manila-share-Optional

Disable RHOSP control plane services:
# Update the services list to be stopped
ServicesToStop=("tripleo_aodh_api.service"
                "tripleo_aodh_api_cron.service"
                "tripleo_aodh_evaluator.service"
                "tripleo_aodh_listener.service"
                "tripleo_aodh_notifier.service"
                "tripleo_ceilometer_agent_central.service"
                "tripleo_ceilometer_agent_notification.service"
                "tripleo_designate_api.service"
                "tripleo_designate_backend_bind9.service"
                "tripleo_designate_central.service"
                "tripleo_designate_mdns.service"
                "tripleo_designate_producer.service"
                "tripleo_designate_worker.service"
                "tripleo_octavia_api.service"
                "tripleo_octavia_health_manager.service"
                "tripleo_octavia_rsyslog.service"
                "tripleo_octavia_driver_agent.service"
                "tripleo_octavia_housekeeping.service"
                "tripleo_octavia_worker.service"
                "tripleo_horizon.service"
                "tripleo_keystone.service"
                "tripleo_barbican_api.service"
                "tripleo_barbican_worker.service"
                "tripleo_barbican_keystone_listener.service"
                "tripleo_cinder_api.service"
                "tripleo_cinder_api_cron.service"
                "tripleo_cinder_scheduler.service"
                "tripleo_cinder_volume.service"
                "tripleo_cinder_backup.service"
                "tripleo_collectd.service"
                "tripleo_glance_api.service"
                "tripleo_gnocchi_api.service"
                "tripleo_gnocchi_metricd.service"
                "tripleo_gnocchi_statsd.service"
                "tripleo_manila_api.service"
                "tripleo_manila_api_cron.service"
                "tripleo_manila_scheduler.service"
                "tripleo_neutron_api.service"
                "tripleo_placement_api.service"
                "tripleo_nova_api_cron.service"
                "tripleo_nova_api.service"
                "tripleo_nova_conductor.service"
                "tripleo_nova_metadata.service"
                "tripleo_nova_scheduler.service"
                "tripleo_nova_vnc_proxy.service"
                "tripleo_aodh_api.service"
                "tripleo_aodh_api_cron.service"
                "tripleo_aodh_evaluator.service"
                "tripleo_aodh_listener.service"
                "tripleo_aodh_notifier.service"
                "tripleo_ceilometer_agent_central.service"
                "tripleo_ceilometer_agent_compute.service"
                "tripleo_ceilometer_agent_ipmi.service"
                "tripleo_ceilometer_agent_notification.service"
                "tripleo_ovn_cluster_northd.service"
                "tripleo_ironic_neutron_agent.service"
                "tripleo_ironic_api.service"
                "tripleo_ironic_inspector.service"
                "tripleo_ironic_conductor.service"
                "tripleo_ironic_inspector_dnsmasq.service"
                "tripleo_ironic_pxe_http.service"
                "tripleo_ironic_pxe_tftp.service"
                "tripleo_unbound.service")

PacemakerResourcesToStop=("openstack-cinder-volume"
                          "openstack-cinder-backup"
                          "openstack-manila-share")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      echo "Stopping the $service in controller $i"
      if ${!SSH_CMD} sudo systemctl is-active $service; then
        ${!SSH_CMD} sudo systemctl stop $service
      fi
    fi
  done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
        echo "ERROR: Service $service still running on controller $i"
      else
        echo "OK: Service $service is not running on controller $i"
      fi
    fi
  done
done

echo "Stopping pacemaker OpenStack services"
for i in {1..3}; do
  SSH_CMD=CONTROLLER${i}_SSH
  if [ ! -z "${!SSH_CMD}" ]; then
    echo "Using controller $i to run pacemaker commands"
    for resource in ${PacemakerResourcesToStop[*]}; do
      if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
        echo "Stopping $resource"
        ${!SSH_CMD} sudo pcs resource disable $resource
      else
        echo "Service $resource not present"
      fi
    done
    break
  fi
done

echo "Checking pacemaker OpenStack services"
for i in {1..3}; do
  SSH_CMD=CONTROLLER${i}_SSH
  if [ ! -z "${!SSH_CMD}" ]; then
    echo "Using controller $i to run pacemaker commands"
    for resource in ${PacemakerResourcesToStop[*]}; do
      if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
        if ! ${!SSH_CMD} sudo pcs resource status $resource | grep Started; then
          echo "OK: Service $resource is stopped"
        else
          echo "ERROR: Service $resource is started"
        fi
      fi
    done
    break
  fi
done

If the status of each service is OK, then the services stopped successfully.

For Distributed Compute Node (DCN) deployments where Image service, Block Storage service, and Red Hat Ceph Storage services run on edge Compute nodes, stop the Image service, Block Storage service, and etcd services on all edge Compute nodes with the DistributedComputeHCI role:
Note: The DistributedComputeHCI role runs the GlanceApiEdge, CinderVolumeEdge, and Etcd services. A minimum of three nodes per site use this role. Skip this step if your DCN deployment does not run these services on edge Compute nodes. The examples in this procedure use hyper-converged (HCI) roles. If your deployment does not use HCI, the same services apply to the DistributedCompute role, which runs the same GlanceApiEdge, CinderVolumeEdge, and Etcd services but without Ceph OSD, Ceph Monitor, or Ceph Manager.

Define shell variables for your DistributedComputeHCI edge Compute nodes:

# DCN1 edge site DistributedComputeHCI nodes
DCN1_HCI0_SSH="ssh -i <path to SSH key> root@<dcn1-hci-0 IP>"
DCN1_HCI1_SSH="ssh -i <path to SSH key> root@<dcn1-hci-1 IP>"
DCN1_HCI2_SSH="ssh -i <path to SSH key> root@<dcn1-hci-2 IP>"

# DCN2 edge site DistributedComputeHCI nodes
DCN2_HCI0_SSH="ssh -i <path to SSH key> root@<dcn2-hci-0 IP>"
DCN2_HCI1_SSH="ssh -i <path to SSH key> root@<dcn2-hci-1 IP>"
DCN2_HCI2_SSH="ssh -i <path to SSH key> root@<dcn2-hci-2 IP>"

where:

- <path to SSH key>: Specifies the path to your SSH key for each DistributedComputeHCI edge Compute node on each DCN edge site.
- <dcn1-hci-0 IP>, <dcn1-hci-1 IP>, <dcn1-hci-2 IP>: Specifies the IP address of each DistributedComputeHCI edge Compute node within the DCN1 edge site.
- <dcn2-hci-0 IP>, <dcn2-hci-1 IP>, <dcn2-hci-2 IP>: Specifies the IP address of each DistributedComputeHCI edge Compute node within the DCN2 edge site.

Stop the storage services on all DistributedComputeHCI nodes:

# Services to stop on DistributedComputeHCI edge compute nodes
DCN_HCI_SERVICES=("tripleo_glance_api_internal.service"
                  "tripleo_cinder_volume.service"
                  "tripleo_etcd.service")

# List of all DistributedComputeHCI node SSH commands
DCN_HCI_NODES=("$DCN1_HCI0_SSH" "$DCN1_HCI1_SSH" "$DCN1_HCI2_SSH"
               "$DCN2_HCI0_SSH" "$DCN2_HCI1_SSH" "$DCN2_HCI2_SSH")

echo "Stopping storage services on DistributedComputeHCI nodes"
for node_ssh in "${DCN_HCI_NODES[@]}"; do
  [ -z "$node_ssh" ] && continue
  echo "Processing node: $node_ssh"
  for service in "${DCN_HCI_SERVICES[@]}"; do
    if $node_ssh sudo systemctl is-active $service 2>/dev/null; then
      echo "Stopping $service"
      $node_ssh sudo systemctl stop $service
    fi
  done
done

echo "Checking storage services on DistributedComputeHCI nodes"
for node_ssh in "${DCN_HCI_NODES[@]}"; do
  [ -z "$node_ssh" ] && continue
  for service in "${DCN_HCI_SERVICES[@]}"; do
    if ! $node_ssh systemctl show $service 2>/dev/null | grep ActiveState=inactive >/dev/null; then
      echo "ERROR: Service $service still running on $node_ssh"
    else
      echo "OK: Service $service is not running on $node_ssh"
    fi
  done
done

Note:
- On edge sites, the Image service runs with the service name tripleo_glance_api_internal.service, which is different from the tripleo_glance_api.service on the central controller.
- The Block Storage service volume service (tripleo_cinder_volume.service) uses the same service name on both edge sites and the central controller.
- The etcd service (tripleo_etcd.service) is used as a distributed lock manager (DLM) for the Block Storage service volume service running in active/active mode on edge sites.

If your DCN deployment includes DistributedComputeHCIScaleOut nodes, stop the HAProxy service on those nodes:

Note: The DistributedComputeHCIScaleOut role is used to scale compute and storage capacity beyond the initial three DistributedComputeHCI nodes at each site. These nodes run HAProxyEdge, which proxies Image service requests to the GlanceApiEdge instances on DistributedComputeHCI nodes. Skip this step if your DCN deployment does not include DistributedComputeHCIScaleOut nodes. For non-HCI deployments, the equivalent role is DistributedComputeScaleOut, which runs the same HAProxyEdge service.

Define shell variables for your DistributedComputeHCIScaleOut edge Compute nodes:

# DCN1 edge site DistributedComputeHCIScaleOut nodes
DCN1_SCALEOUT0_SSH="ssh -i <path to SSH key> root@<dcn1-scaleout-0 IP>"
DCN1_SCALEOUT1_SSH="ssh -i <path to SSH key> root@<dcn1-scaleout-1 IP>"

# DCN2 edge site DistributedComputeHCIScaleOut nodes
DCN2_SCALEOUT0_SSH="ssh -i <path to SSH key> root@<dcn2-scaleout-0 IP>"
DCN2_SCALEOUT1_SSH="ssh -i <path to SSH key> root@<dcn2-scaleout-1 IP>"

where:

- <dcn1-scaleout-0 IP>, <dcn1-scaleout-1 IP>: Specifies the IP address of each DistributedComputeHCIScaleOut node within the DCN1 edge site.
- <dcn2-scaleout-0 IP>, <dcn2-scaleout-1 IP>: Specifies the IP address of each DistributedComputeHCIScaleOut node within the DCN2 edge site.

Stop the services on all DistributedComputeHCIScaleOut nodes:

# Services to stop on DistributedComputeHCIScaleOut edge compute nodes
DCN_SCALEOUT_SERVICES=("tripleo_haproxy_edge.service")

# List of all DistributedComputeHCIScaleOut node SSH commands
DCN_SCALEOUT_NODES=("$DCN1_SCALEOUT0_SSH" "$DCN1_SCALEOUT1_SSH"
                    "$DCN2_SCALEOUT0_SSH" "$DCN2_SCALEOUT1_SSH")

echo "Stopping services on DistributedComputeHCIScaleOut nodes"
for node_ssh in "${DCN_SCALEOUT_NODES[@]}"; do
  [ -z "$node_ssh" ] && continue
  echo "Processing node: $node_ssh"
  for service in "${DCN_SCALEOUT_SERVICES[@]}"; do
    if $node_ssh sudo systemctl is-active $service 2>/dev/null; then
      echo "Stopping $service"
      $node_ssh sudo systemctl stop $service
    fi
  done
done

echo "Checking services on DistributedComputeHCIScaleOut nodes"
for node_ssh in "${DCN_SCALEOUT_NODES[@]}"; do
  [ -z "$node_ssh" ] && continue
  for service in "${DCN_SCALEOUT_SERVICES[@]}"; do
    if ! $node_ssh systemctl show $service 2>/dev/null | grep ActiveState=inactive >/dev/null; then
      echo "ERROR: Service $service still running on $node_ssh"
    else
      echo "OK: Service $service is not running on $node_ssh"
    fi
  done
done

Note: The HAProxy edge service (tripleo_haproxy_edge.service) provided a local Image service endpoint on DistributedComputeHCIScaleOut nodes, proxying requests to the GlanceApiEdge instances on DistributedComputeHCI nodes. During adoption, Red Hat OpenShift Container Platform (RHOCP) Kubernetes service endpoints backed by MetalLB replace HAProxy.
3.5. Migrating databases to MariaDB instances
Migrate your databases from the original Red Hat OpenStack Platform (RHOSP) deployment to the MariaDB instances in the Red Hat OpenShift Container Platform (RHOCP) cluster.
Prerequisites
- Ensure that the control plane MariaDB and RabbitMQ are running, and that no other control plane services are running.
- Retrieve the topology-specific service configuration. For more information, see Retrieving topology-specific service configuration.
- Stop the RHOSP services. For more information, see Stopping Red Hat OpenStack Platform services.
- Ensure that there is network routability between the original MariaDB and the MariaDB for the control plane.
Define the following shell variables. Replace the following example values with values that are correct for your environment:
$ STORAGE_CLASS=local-storage
$ MARIADB_IMAGE=registry.redhat.io/rhoso/openstack-mariadb-rhel9:18.0
$ CELLS="default cell1 cell2"
$ DEFAULT_CELL_NAME="cell3"
$ RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"
$ CHARACTER_SET=utf8
$ COLLATION=utf8_general_ci
$ declare -A PODIFIED_DB_ROOT_PASSWORD
$ for CELL in $(echo "super $RENAMED_CELLS"); do
>   PODIFIED_DB_ROOT_PASSWORD[$CELL]=$(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d)
> done
$ declare -A PODIFIED_MARIADB_IP
$ for CELL in $(echo "super $RENAMED_CELLS"); do
>   if [ "$CELL" = "super" ]; then
>     PODIFIED_MARIADB_IP[$CELL]=$(oc get svc --selector "mariadb/name=openstack" -ojsonpath='{.items[0].spec.clusterIP}')
>   else
>     PODIFIED_MARIADB_IP[$CELL]=$(oc get svc --selector "mariadb/name=openstack-$CELL" -ojsonpath='{.items[0].spec.clusterIP}')
>   fi
> done
$ declare -A TRIPLEO_PASSWORDS
$ for CELL in $(echo $CELLS); do
>   if [ "$CELL" = "default" ]; then
>     TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"
>   else
>     # in a split-stack source cloud, it should take a stack-specific passwords file instead
>     TRIPLEO_PASSWORDS[$CELL]="$HOME/overcloud-passwords.yaml"
>   fi
> done
$ declare -A SOURCE_DB_ROOT_PASSWORD
$ for CELL in $(echo $CELLS); do
>   SOURCE_DB_ROOT_PASSWORD[$CELL]=$(cat ${TRIPLEO_PASSWORDS[$CELL]} | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
> done
$ declare -A SOURCE_MARIADB_IP
$ SOURCE_MARIADB_IP[default]=<galera cluster VIP>
$ SOURCE_MARIADB_IP[cell1]=<galera cell1 cluster VIP>
$ SOURCE_MARIADB_IP[cell2]=<galera cell2 cluster VIP>
# ...
$ declare -A SOURCE_GALERA_MEMBERS_DEFAULT
$ SOURCE_GALERA_MEMBERS_DEFAULT=(
>   ["standalone.localdomain"]=172.17.0.100
>   # [...]=...
> )
$ declare -A SOURCE_GALERA_MEMBERS_CELL1
$ SOURCE_GALERA_MEMBERS_CELL1=(
>   # ...
> )
$ declare -A SOURCE_GALERA_MEMBERS_CELL2
$ SOURCE_GALERA_MEMBERS_CELL2=(
>   # ...
> )

- CELLS and RENAMED_CELLS represent changes that are made after you import the databases. The default cell takes a new name from DEFAULT_CELL_NAME. In a multi-cell adoption scenario, the default cell might retain its original name as well.
- CHARACTER_SET and COLLATION must match the source database. If they do not match, foreign key relationships break for any tables that are created in the future as part of the database sync.
- SOURCE_MARIADB_IP[X]= ... includes the data for each cell that is defined in CELLS. Provide records for the cell names and VIP addresses of the MariaDB Galera clusters.
- <galera cell1 cluster VIP> defines the VIP of your Galera cell1 cluster.
- <galera cell2 cluster VIP> defines the VIP of your Galera cell2 cluster, and so on.
- SOURCE_GALERA_MEMBERS_CELL<X> defines the names of the MariaDB Galera cluster members and their IP addresses for each cell defined in CELLS. Replace ["standalone.localdomain"]=172.17.0.100 with your real hosts data.

Note: A standalone director environment only creates a default cell, which should be the only CELLS value in this case. The DEFAULT_CELL_NAME value should be cell1.

Note: super is the top-scope Nova API upcall database instance; a super conductor connects to that database. In the following examples, the upcall and cells databases use the same password that is defined in osp-secret. The old passwords are only needed to prepare the data exports.
To get the values for SOURCE_MARIADB_IP, query the puppet-generated configurations on the Controller and CellController nodes:

$ sudo grep -rI 'listen mysql' -A10 /var/lib/config-data/puppet-generated/ | grep bind

To get the values for SOURCE_GALERA_MEMBERS_*, query the puppet-generated configurations on the Controller and CellController nodes:

$ sudo grep -rI 'listen mysql' -A10 /var/lib/config-data/puppet-generated/ | grep server

Note: The source cloud always uses the same password for cell databases. For that reason, the same passwords file is used for all cell stacks. However, a split-stack topology allows a different passwords file for each stack.
Prepare the MariaDB adoption helper pod:
Create a temporary volume claim and a pod for the database data copy. Edit the volume claim storage request if necessary, to give it enough space for the overcloud databases:

$ oc apply -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $MARIADB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: mariadb-data
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: mariadb-data
    persistentVolumeClaim:
      claimName: mariadb-data
EOF

Wait for the pod to be ready:

$ oc wait --for condition=Ready pod/mariadb-copy-data --timeout=30s
Procedure
Check that the source Galera database clusters in each cell have their members online and synced:

for CELL in $(echo $CELLS); do
  MEMBERS=SOURCE_GALERA_MEMBERS_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')[@]
  for i in "${!MEMBERS}"; do
    echo "Checking for the database node $i WSREP status Synced"
    oc rsh mariadb-copy-data mysql \
      -h "$i" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
      -e "show global status like 'wsrep_local_state_comment'" | \
      grep -qE "\bSynced\b"
  done
done

Note: Each additional Compute service (nova) v2 cell runs a dedicated Galera database cluster, so the command checks each cell.
Get the count of source databases with the NOK (not-OK) status:

for CELL in $(echo $CELLS); do
  oc rsh mariadb-copy-data mysql -h "${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" -e "SHOW databases;"
done

Check that mysqlcheck had no errors:

for CELL in $(echo $CELLS); do
  set +u
  . ~/.source_cloud_exported_variables_$CELL
  set -u
  test -z "${PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK[$CELL]}" || [ "${PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK[$CELL]}" = " " ] && echo "OK" || echo "CHECK FAILED"
done

Test the connection to the control plane upcall and cells databases:

for CELL in $(echo "super $RENAMED_CELLS"); do
  oc rsh mariadb-copy-data mysql -rsh "${PODIFIED_MARIADB_IP[$CELL]}" -uroot -p"${PODIFIED_DB_ROOT_PASSWORD[$CELL]}" -e 'SHOW databases;'
done

Note: You must transition the Compute services that you import later into a superconductor architecture by deleting the old service records in the cell databases, starting with cell1. New records are registered with different hostnames that are provided by the Compute service operator. All Compute services, except the Compute agent, have no internal state, and you can safely delete their service records. You also need to rename the former default cell to DEFAULT_CELL_NAME.

Create a dump of the original databases:

for CELL in $(echo $CELLS); do
  oc rsh mariadb-copy-data << EOF
  mysql -h"${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
    -N -e "show databases" | grep -E -v "schema|mysql|gnocchi|aodh" | \
    while read dbname; do
      echo "Dumping $CELL cell \${dbname}";
      mysqldump -h"${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
        --single-transaction --complete-insert --skip-lock-tables --lock-tables=0 \
        "\${dbname}" > /backup/"${CELL}.\${dbname}".sql;
    done
EOF
done

Restore the databases from the .sql files into the control plane MariaDB:

for CELL in $(echo $CELLS); do
  RCELL=$CELL
  [ "$CELL" = "default" ] && RCELL=$DEFAULT_CELL_NAME
  oc rsh mariadb-copy-data << EOF
  declare -A db_name_map
  db_name_map['nova']="nova_$RCELL"
  db_name_map['ovs_neutron']='neutron'
  db_name_map['ironic-inspector']='ironic_inspector'

  declare -A db_cell_map
  db_cell_map['nova']="nova_$DEFAULT_CELL_NAME"
  db_cell_map["nova_$RCELL"]="nova_$RCELL"

  declare -A db_server_map
  db_server_map['default']=${PODIFIED_MARIADB_IP['super']}
  db_server_map["nova"]=${PODIFIED_MARIADB_IP[$DEFAULT_CELL_NAME]}
  db_server_map["nova_$RCELL"]=${PODIFIED_MARIADB_IP[$RCELL]}

  declare -A db_server_password_map
  db_server_password_map['default']=${PODIFIED_DB_ROOT_PASSWORD['super']}
  db_server_password_map["nova"]=${PODIFIED_DB_ROOT_PASSWORD[$DEFAULT_CELL_NAME]}
  db_server_password_map["nova_$RCELL"]=${PODIFIED_DB_ROOT_PASSWORD[$RCELL]}

  cd /backup
  for db_file in \$(ls ${CELL}.*.sql); do
    db_name=\$(echo \${db_file} | awk -F'.' '{ print \$2; }')
    [[ "$CELL" != "default" && ! -v "db_cell_map[\${db_name}]" ]] && continue
    if [[ "$CELL" == "default" && -v "db_cell_map[\${db_name}]" ]] ; then
      target=$DEFAULT_CELL_NAME
    elif [[ "$CELL" == "default" && ! -v "db_cell_map[\${db_name}]" ]] ; then
      target=super
    else
      target=$RCELL
    fi
    renamed_db_file="\${target}_new.\${db_name}.sql"
    mv -f \${db_file} \${renamed_db_file}
    if [[ -v "db_name_map[\${db_name}]" ]]; then
      echo "renaming $CELL cell \${db_name} to \$target \${db_name_map[\${db_name}]}"
      db_name=\${db_name_map[\${db_name}]}
    fi
    db_server=\${db_server_map["default"]}
    if [[ -v "db_server_map[\${db_name}]" ]]; then
      db_server=\${db_server_map[\${db_name}]}
    fi
    db_password=\${db_server_password_map['default']}
    if [[ -v "db_server_password_map[\${db_name}]" ]]; then
      db_password=\${db_server_password_map[\${db_name}]}
    fi
    echo "creating $CELL cell \${db_name} in \$target \${db_server}"
    mysql -h"\${db_server}" -uroot "-p\${db_password}" -e \
      "CREATE DATABASE IF NOT EXISTS \${db_name} DEFAULT \
      CHARACTER SET ${CHARACTER_SET} DEFAULT COLLATE ${COLLATION};"
    echo "importing $CELL cell \${db_name} into \$target \${db_server} from \${renamed_db_file}"
    mysql -h "\${db_server}" -uroot "-p\${db_password}" "\${db_name}" < "\${renamed_db_file}"
  done

  if [ "$CELL" = "default" ] ; then
    mysql -h "\${db_server_map['default']}" -uroot -p"\${db_server_password_map['default']}" -e \
      "update nova_api.cell_mappings set name='$DEFAULT_CELL_NAME' where name='default';"
  fi
  mysql -h "\${db_server_map["nova_$RCELL"]}" -uroot -p"\${db_server_password_map["nova_$RCELL"]}" -e \
    "delete from nova_${RCELL}.services where host not like '%nova_${RCELL}-%' and services.binary != 'nova-compute';"
EOF
done
- db_name_map defines which common databases to rename when importing them.
- db_cell_map defines which cells databases to import, and how to rename them, if needed.
- db_cell_map["nova_$RCELL"]="nova_$RCELL" omits importing the special cell0 databases of the cells, because their contents cannot be consolidated during adoption.
- db_server_map defines which databases to import into which servers, usually dedicated for cells.
- db_server_password_map defines the root passwords map for the database servers. You can only use the same password for now.
- renamed_db_file="\${target}_new.\${db_name}.sql" assigns which databases to import into which hosts when extracting databases from the default cell.
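Before you run the full verification below, you can spot-check the cell rename with a single query. This sketch only uses commands and variables already defined in this procedure; expect the renamed cell names in the output:

$ oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" \
  nova_api -e "select name from cell_mappings;"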
Verification
Compare the following outputs with the topology-specific service configuration. For more information, see Retrieving topology-specific service configuration.
Check that the databases are imported correctly:

$ set +u
$ . ~/.source_cloud_exported_variables_default
$ set -u
$ dbs=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" -e 'SHOW databases;')
$ echo $dbs | grep -Eq '\bkeystone\b' && echo "OK" || echo "CHECK FAILED"
$ echo $dbs | grep -Eq '\bneutron\b' && echo "OK" || echo "CHECK FAILED"
$ echo "${PULL_OPENSTACK_CONFIGURATION_DATABASES[@]}" | grep -Eq '\bovs_neutron\b' && echo "OK" || echo "CHECK FAILED"
$ novadb_mapped_cells=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" \
>   nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
$ uuidf='\S{8,}-\S{4,}-\S{4,}-\S{4,}-\S{12,}'
$ default=$(printf "%s\n" "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS" | sed -rn "s/^($uuidf)\s+default\b.*$/\1/p")
$ difference=$(diff -ZNua \
>   <(printf "%s\n" "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS") \
>   <(printf "%s\n" "$novadb_mapped_cells")) || true
$ if [ "$DEFAULT_CELL_NAME" != "default" ]; then
>   printf "%s\n" "$difference" | grep -qE "^\-$default\s+default\b" && echo "OK" || echo "CHECK FAILED"
>   printf "%s\n" "$difference" | grep -qE "^\+$default\s+$DEFAULT_CELL_NAME\b" && echo "OK" || echo "CHECK FAILED"
>   [ $(grep -E "^[-\+]$uuidf" <<<"$difference" | wc -l) -eq 2 ] && echo "OK" || echo "CHECK FAILED"
> else
>   [ "x$difference" = "x" ] && echo "OK" || echo "CHECK FAILED"
> fi
$ for CELL in $(echo $RENAMED_CELLS); do
>   RCELL=$CELL
>   [ "$CELL" = "$DEFAULT_CELL_NAME" ] && RCELL=default
>   set +u
>   . ~/.source_cloud_exported_variables_$RCELL
>   set -u
>   c1dbs=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} -e 'SHOW databases;')
>   echo $c1dbs | grep -Eq "\bnova_${CELL}\b" && echo "OK" || echo "CHECK FAILED"
>   novadb_svc_records=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} \
>     nova_$CELL -e "select host from services where services.binary='nova-compute' and deleted=0 order by host asc;")
>   diff -Z <(echo "x$novadb_svc_records") <(echo "x${PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[@]}") && echo "OK" || echo "CHECK FAILED"
> done

- echo "${PULL_OPENSTACK_CONFIGURATION_DATABASES[@]}" | grep -Eq '\bovs_neutron\b' && echo "OK" || echo "CHECK FAILED" ensures that the Networking service (neutron) database is renamed from ovs_neutron.
- nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;' ensures that the default cell is renamed to $DEFAULT_CELL_NAME, and that the cell UUIDs are retained.
- for CELL in $(echo $RENAMED_CELLS); do ensures that the registered Compute services names have not changed.
- c1dbs=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} -e 'SHOW databases;') ensures that the Compute service cells databases are extracted to separate database servers, and renamed from nova to nova_cell<X>.
- diff -Z <(echo "x$novadb_svc_records") <(echo "x${PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[@]}") && echo "OK" || echo "CHECK FAILED" ensures that the registered Compute service name has not changed.
Delete the mariadb-copy-data pod and the mariadb-data persistent volume claim that contains the database backup:

Note: Consider taking a snapshot of them before deleting.

$ oc delete pod mariadb-copy-data
$ oc delete pvc mariadb-data
During the pre-checks and post-checks, the mariadb-client pod might return a pod security warning related to the restricted:latest security context constraint. This warning is due to default security context constraints and does not prevent the admission controller from creating a pod. You see a warning for the short-lived pod, but it does not interfere with functionality. For more information, see About pod security standards and warnings.
3.6. Migrating OVN data
Migrate the data in the OVN databases from the original Red Hat OpenStack Platform deployment to ovsdb-server instances that are running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
Prerequisites
- The OpenStackControlPlane resource is created.
- NetworkAttachmentDefinition custom resources (CRs) for the original cluster are defined. Specifically, the internalapi network is defined.
- The original Networking service (neutron) and OVN northd are not running. A verification sketch follows this list.
- There is network routability between the control plane services and the adopted cluster.
- The cloud is migrated to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver.
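For example, you can verify on each Controller node that the relevant systemd units are stopped. The tripleo_* unit names below are typical for director-deployed environments but are assumptions; confirm the exact names in your environment, for example with systemctl list-units 'tripleo_*':

# Each command should report "inactive" and return a non-zero exit code.
$ $CONTROLLER1_SSH sudo systemctl is-active tripleo_neutron_api.service
$ $CONTROLLER1_SSH sudo systemctl is-active tripleo_ovn_cluster_northd.service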
Define the following shell variables. Replace the example values with values that are correct for your environment:
STORAGE_CLASS=local-storage
OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9:18.0
SOURCE_OVSDB_IP=172.17.0.100 # For IPv4
SOURCE_OVSDB_IP=[fd00:bbbb::100] # For IPv6

To get the value to set SOURCE_OVSDB_IP, query the puppet-generated configurations in a Controller node:

$ grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
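The output is similar to the following example. The file path and the VIP address are illustrative and differ between deployments; use the IP address from the connection strings as the SOURCE_OVSDB_IP value:

/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini:ovn_nb_connection=tcp:172.17.0.100:6641
/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini:ovn_sb_connection=tcp:172.17.0.100:6642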
Procedure
Prepare a temporary PersistentVolumeClaim and the helper pod for the OVN backup. Adjust the storage requests for a large database, if needed:

$ oc apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ovn-data-cert
spec:
  commonName: ovn-data-cert
  secretName: ovn-data-cert
  issuerRef:
    name: rootca-internal
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovn-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovn-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $OVSDB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: ovn-data
    - mountPath: /etc/pki/tls/misc
      name: ovn-data-cert
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: ovn-data
    persistentVolumeClaim:
      claimName: ovn-data
  - name: ovn-data-cert
    secret:
      secretName: ovn-data-cert
EOF

Wait for the pod to be ready:
$ oc wait --for=condition=Ready pod/ovn-copy-data --timeout=30s

If the podified internalapi CIDR is different from the source internalapi CIDR, add iptables accept rules on the Controller nodes. Replace {PODIFIED_INTERNALAPI_NETWORK} with the CIDR of the control plane internalapi network:

$ $CONTROLLER1_SSH sudo iptables -I INPUT -s {PODIFIED_INTERNALAPI_NETWORK} -p tcp -m tcp --dport 6641 -m conntrack --ctstate NEW -j ACCEPT
$ $CONTROLLER1_SSH sudo iptables -I INPUT -s {PODIFIED_INTERNALAPI_NETWORK} -p tcp -m tcp --dport 6642 -m conntrack --ctstate NEW -j ACCEPT
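To confirm that the rules were inserted, you can list the matching entries. This check is optional and illustrative:

$ $CONTROLLER1_SSH sudo iptables -nL INPUT | grep -E 'dpt:664[12]'

Back up your OVN databases: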
If you did not enable TLS everywhere, run the following commands:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db" $ oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"If you enabled TLS everywhere, run the following command:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db" $ oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
Start the control plane OVN database services prior to import, with
northd disabled:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        replicas: 0
'

Wait for the OVN database services to reach the
Running phase:

$ oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-nb
$ oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-sb

Fetch the OVN database IP addresses on the
clusterIP service network:

PODIFIED_OVSDB_NB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-nb-0" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_OVSDB_SB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-sb-0" -ojsonpath='{.items[0].spec.clusterIP}')

If you are using IPv6, adjust the address to the format expected by
ovsdb-* tools:

PODIFIED_OVSDB_NB_IP=[$PODIFIED_OVSDB_NB_IP]
PODIFIED_OVSDB_SB_IP=[$PODIFIED_OVSDB_SB_IP]

Upgrade the database schema for the backup files:
If you did not enable TLS everywhere, use the following commands:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema" $ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"If you enabled TLS everywhere, use the following command:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema" $ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
Restore the database backup to the new OVN database servers:
If you did not enable TLS everywhere, use the following commands:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db" $ oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"If you enabled TLS everywhere, use the following command:
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db" $ oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
Check that the data was successfully migrated by running the following commands against the new database servers, for example:
$ oc exec -it ovsdbserver-nb-0 -- ovn-nbctl show
$ oc exec -it ovsdbserver-sb-0 -- ovn-sbctl list Chassis
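For a closer spot check, you can run the same read-only query against the source and the adopted northbound databases and compare the output. This is an informal aid, and it assumes that ovn-nbctl is available in the helper image, that the source database is still reachable without TLS, and that the source cloud is quiesced; expect minor drift otherwise:

# Dump the logical topology from both databases and compare.
$ oc exec ovn-copy-data -- bash -c "ovn-nbctl --db=tcp:$SOURCE_OVSDB_IP:6641 show" > /tmp/nb-source.txt
$ oc exec ovsdbserver-nb-0 -- ovn-nbctl show > /tmp/nb-adopted.txt
$ diff -u /tmp/nb-source.txt /tmp/nb-adopted.txt

Start the control plane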
ovn-northd service to keep both OVN databases in sync:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnNorthd:
        replicas: 1
'

If you are running OVN gateway services on RHOCP nodes, enable the control plane
ovn-controller service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnController:
        nicMappings:
          physNet: NIC
'

where:
- physNet defines the name of your physical network.
- NIC is the name of the physical interface that is connected to your physical network.

Note: Running OVN gateways on RHOCP nodes might be prone to data plane downtime during Open vSwitch upgrades. Consider running OVN gateways on dedicated
Networkerdata plane nodes for production deployments instead.
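For example, with the conventional RHOSP physical network name datacentre attached to the interface enp6s0 (both values are illustrative; substitute your own network name and interface):

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnController:
        nicMappings:
          datacentre: enp6s0
'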
Delete the ovn-copy-data helper pod and the temporary ovn-data PersistentVolumeClaim that is used to store the OVN database backup files:

$ oc delete --ignore-not-found=true pod ovn-copy-data
$ oc delete --ignore-not-found=true pvc ovn-data

Note: Consider taking a snapshot of the ovn-data PersistentVolumeClaim before deleting it. For more information, see About volume snapshots in OpenShift Container Platform storage overview.

Stop the adopted OVN database servers:
ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"
                "tripleo_ovn_cluster_south_db_server.service")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            echo "Stopping the $service in controller $i"
            if ${!SSH_CMD} sudo systemctl is-active $service; then
                ${!SSH_CMD} sudo systemctl stop $service
            fi
        fi
    done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                echo "ERROR: Service $service still running on controller $i"
            else
                echo "OK: Service $service is not running on controller $i"
            fi
        fi
    done
done