Chapter 3. Migrating databases to the control plane
To begin creating the control plane, enable backend services and import the databases from your original Red Hat OpenStack Platform 17.1 deployment.
3.1. Retrieving topology-specific service configuration
Prerequisites
Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
CONTROLLER_SSH="ssh -F ~/director_standalone/vagrant_ssh_config vagrant@standalone"
MARIADB_IMAGE=registry.redhat.io/rhosp-dev-preview/openstack-mariadb-rhel9:18.0
SOURCE_MARIADB_IP=172.17.0.2
SOURCE_DB_ROOT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
MARIADB_CLIENT_ANNOTATIONS='--annotations=k8s.v1.cni.cncf.io/networks=internalapi'
Procedure
Export shell variables for the following outputs so that you can compare them with post-adoption values later on. Test the connection to the original database:
export PULL_OPENSTACK_CONFIGURATION_DATABASES=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo "$PULL_OPENSTACK_CONFIGURATION_DATABASES"
Note that the nova, nova_api, and nova_cell0 databases reside on the same database host.
Run mysqlcheck on the original database to look for inaccuracies:
export PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysqlcheck --all-databases -h $SOURCE_MARIADB_IP -u root -p"$SOURCE_DB_ROOT_PASSWORD" | grep -v OK)
echo "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK"
Get the Compute service (nova) cells mappings from the database:
export PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" nova_api -e \
  'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
echo "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS"
Get the host names of the registered Compute services:
export PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" nova_api -e \
  "select host from nova.services where services.binary='nova-compute';")
echo "$PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES"
Get the list of the mapped Compute service cells:
export PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS=$($CONTROLLER_SSH sudo podman exec -it nova_api nova-manage cell_v2 list_cells)
echo "$PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS"
After you shut down the source control plane services, if any of the exported values are lost, they can no longer be evaluated again. Preserve the exported values in an environment file to protect against such a situation.
Store exported variables for future use:
cat > ~/.source_cloud_exported_variables << EOF
PULL_OPENSTACK_CONFIGURATION_DATABASES="$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh $SOURCE_MARIADB_IP -uroot -p$SOURCE_DB_ROOT_PASSWORD -e 'SHOW databases;')"
PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK="$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysqlcheck --all-databases -h $SOURCE_MARIADB_IP -u root -p$SOURCE_DB_ROOT_PASSWORD | grep -v OK)"
PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS="$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh $SOURCE_MARIADB_IP -uroot -p$SOURCE_DB_ROOT_PASSWORD nova_api -e \
  'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')"
PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES="$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh $SOURCE_MARIADB_IP -uroot -p$SOURCE_DB_ROOT_PASSWORD nova_api -e \
  "select host from nova.services where services.binary='nova-compute';")"
PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS="$($CONTROLLER_SSH sudo podman exec -it nova_api nova-manage cell_v2 list_cells)"
EOF
chmod 0600 ~/.source_cloud_exported_variables
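The save-and-restore pattern above can be exercised locally before running it against the real cloud. This is a minimal sketch with placeholder values (a /tmp path and a canned database list stand in for the real file and the oc/mysql outputs):

```shell
# Write an environment file with a placeholder value, restrict its
# permissions, and source it back -- the same pattern as above.
cat > /tmp/source_cloud_exported_variables << EOF
PULL_OPENSTACK_CONFIGURATION_DATABASES="nova
nova_api
nova_cell0"
EOF
chmod 0600 /tmp/source_cloud_exported_variables

# Later (for example, after the source control plane is shut down),
# restore the values by sourcing the file:
. /tmp/source_cloud_exported_variables
echo "$PULL_OPENSTACK_CONFIGURATION_DATABASES"
```

Because the heredoc delimiter is unquoted, the command substitutions in the real version are evaluated once, at write time, which is exactly what preserves the pre-shutdown values.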
Optional: If there are neutron-sriov-nic-agent agents running in the deployment, get their configuration:
podman run -i --rm --userns=keep-id -u $UID $MARIADB_IMAGE mysql \
  -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" ovs_neutron -e \
  "select host, configurations from agents where agents.binary='neutron-sriov-nic-agent';"
This configuration is required later, during the data plane adoption.
3.2. Deploying backend services
Create the OpenStackControlPlane custom resource (CR) with basic backend services deployed, and all the Red Hat OpenStack Platform (RHOSP) services disabled. This CR is the foundation of the control plane.
In subsequent steps, you import the original databases and then add RHOSP control plane services.
Prerequisites
- The cloud that you want to adopt is up and running, and it is on the RHOSP 17.1 release.
- All control plane and data plane hosts of the source cloud are up and running, and continue to run throughout the adoption procedure.
- The openstack-operator is deployed, but OpenStackControlPlane is not deployed. For production environments, the deployment method will likely be different.
- If TLS Everywhere is enabled on the source environment, the tls root CA from the source environment must be copied over to the rootca-internal issuer.
- There are free PVs available to be claimed (for MariaDB and RabbitMQ).
Set the desired admin password for the control plane deployment. This can be the original deployment’s admin password or something else.
ADMIN_PASSWORD=SomePassword
To use the existing RHOSP deployment password:
ADMIN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' AdminPassword:' | awk -F ': ' '{ print $2; }')
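The grep/awk pattern used above can be sanity-checked against a minimal stand-in file before pointing it at your real passwords file. The file below is a hypothetical sample, not a real tripleo-standalone-passwords.yaml:

```shell
# Create a stand-in passwords file with the same layout as
# tripleo-standalone-passwords.yaml (indented 'Key: value' entries).
cat > /tmp/sample-passwords.yaml << 'EOF'
parameter_defaults:
  AdminPassword: ExampleAdmin123
  MysqlRootPassword: ExampleRoot456
EOF

# Same extraction pattern as above: match the indented key, then split
# each line on ': ' and print the value field.
ADMIN_PASSWORD=$(grep ' AdminPassword:' /tmp/sample-passwords.yaml | awk -F ': ' '{ print $2; }')
echo "$ADMIN_PASSWORD"   # ExampleAdmin123
```

The leading space in the grep pattern matters: it anchors on the indented key and avoids matching other keys that merely contain the substring AdminPassword.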
Set service password variables to match the original deployment. Database passwords can differ in the control plane environment, but you must synchronize the service account passwords.
For example, in developer environments with director Standalone, the passwords can be extracted like this:
AODH_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' AodhPassword:' | awk -F ': ' '{ print $2; }')
BARBICAN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' BarbicanPassword:' | awk -F ': ' '{ print $2; }')
CEILOMETER_METERING_SECRET=$(cat ~/tripleo-standalone-passwords.yaml | grep ' CeilometerMeteringSecret:' | awk -F ': ' '{ print $2; }')
CEILOMETER_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' CeilometerPassword:' | awk -F ': ' '{ print $2; }')
CINDER_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' CinderPassword:' | awk -F ': ' '{ print $2; }')
GLANCE_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' GlancePassword:' | awk -F ': ' '{ print $2; }')
HEAT_AUTH_ENCRYPTION_KEY=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatAuthEncryptionKey:' | awk -F ': ' '{ print $2; }')
HEAT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatPassword:' | awk -F ': ' '{ print $2; }')
IRONIC_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' IronicPassword:' | awk -F ': ' '{ print $2; }')
MANILA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' ManilaPassword:' | awk -F ': ' '{ print $2; }')
NEUTRON_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' NeutronPassword:' | awk -F ': ' '{ print $2; }')
NOVA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' NovaPassword:' | awk -F ': ' '{ print $2; }')
OCTAVIA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' OctaviaPassword:' | awk -F ': ' '{ print $2; }')
PLACEMENT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' PlacementPassword:' | awk -F ': ' '{ print $2; }')
SWIFT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' SwiftPassword:' | awk -F ': ' '{ print $2; }')
Procedure
Make sure you are using the Red Hat OpenShift Container Platform namespace where you want the control plane deployed:
oc project openstack
- Create the OSP secret.
- If the $ADMIN_PASSWORD is different from the password already set in osp-secret, amend the AdminPassword key in osp-secret accordingly:
oc set data secret/osp-secret "AdminPassword=$ADMIN_PASSWORD"
- Set service account passwords in osp-secret to match the service account passwords from the original deployment:
oc set data secret/osp-secret "AodhPassword=$AODH_PASSWORD"
oc set data secret/osp-secret "BarbicanPassword=$BARBICAN_PASSWORD"
oc set data secret/osp-secret "CeilometerMeteringSecret=$CEILOMETER_METERING_SECRET"
oc set data secret/osp-secret "CeilometerPassword=$CEILOMETER_PASSWORD"
oc set data secret/osp-secret "CinderPassword=$CINDER_PASSWORD"
oc set data secret/osp-secret "GlancePassword=$GLANCE_PASSWORD"
oc set data secret/osp-secret "HeatAuthEncryptionKey=$HEAT_AUTH_ENCRYPTION_KEY"
oc set data secret/osp-secret "HeatPassword=$HEAT_PASSWORD"
oc set data secret/osp-secret "IronicPassword=$IRONIC_PASSWORD"
oc set data secret/osp-secret "IronicInspectorPassword=$IRONIC_PASSWORD"
oc set data secret/osp-secret "ManilaPassword=$MANILA_PASSWORD"
oc set data secret/osp-secret "NeutronPassword=$NEUTRON_PASSWORD"
oc set data secret/osp-secret "NovaPassword=$NOVA_PASSWORD"
oc set data secret/osp-secret "OctaviaPassword=$OCTAVIA_PASSWORD"
oc set data secret/osp-secret "PlacementPassword=$PLACEMENT_PASSWORD"
oc set data secret/osp-secret "SwiftPassword=$SWIFT_PASSWORD"
- Deploy OpenStackControlPlane. Make sure to only enable the DNS, MariaDB, Memcached, and RabbitMQ services. All other services must be disabled.
If the source environment enables TLS Everywhere, modify the spec:tls section with the following override before applying it:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  tls:
    podLevel:
      enabled: true
      internal:
        ca:
          customIssuer: rootca-internal
      libvirt:
        ca:
          customIssuer: rootca-internal
      ovn:
        ca:
          customIssuer: rootca-internal
    ingress:
      ca:
        customIssuer: rootca-internal
      enabled: true
If the source environment does not enable TLS Everywhere, modify the spec:tls section with the following override before applying it:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  tls:
    podLevel:
      enabled: false
oc apply -f - <<EOF
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  barbican:
    enabled: false
    template:
      barbicanAPI: {}
      barbicanWorker: {}
      barbicanKeystoneListener: {}
  cinder:
    enabled: false
    template:
      cinderAPI: {}
      cinderScheduler: {}
      cinderBackup: {}
      cinderVolumes: {}
  dns:
    template:
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      options:
      - key: server
        values:
        - 192.168.122.1
      replicas: 1
  glance:
    enabled: false
    template:
      glanceAPIs: {}
  heat:
    enabled: false
    template: {}
  horizon:
    enabled: false
    template: {}
  ironic:
    enabled: false
    template:
      ironicConductors: []
  keystone:
    enabled: false
    template: {}
  manila:
    enabled: false
    template:
      manilaAPI: {}
      manilaScheduler: {}
      manilaShares: {}
  mariadb:
    enabled: false
    templates: {}
  galera:
    enabled: true
    templates:
      openstack:
        secret: osp-secret
        replicas: 1
        storageRequest: 500M
      openstack-cell1:
        secret: osp-secret
        replicas: 1
        storageRequest: 500M
  memcached:
    enabled: true
    templates:
      memcached:
        replicas: 1
  neutron:
    enabled: false
    template: {}
  nova:
    enabled: false
    template: {}
  ovn:
    enabled: false
    template:
      ovnController:
        networkAttachment: tenant
        nodeSelector:
          node: non-existing-node-name
      ovnNorthd:
        replicas: 0
      ovnDBCluster:
        ovndbcluster-nb:
          dbType: NB
          networkAttachment: internalapi
        ovndbcluster-sb:
          dbType: SB
          networkAttachment: internalapi
  placement:
    enabled: false
    template: {}
  rabbitmq:
    templates:
      rabbitmq:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: false
    template: {}
  swift:
    enabled: false
    template:
      swiftRing:
        ringReplicas: 1
      swiftStorage:
        replicas: 0
      swiftProxy:
        replicas: 1
EOF
Verification
Check that MariaDB is running:
oc get pod openstack-galera-0 -o jsonpath='{.status.phase}{"\n"}'
oc get pod openstack-cell1-galera-0 -o jsonpath='{.status.phase}{"\n"}'
3.3. Configuring a Ceph backend
If the original deployment uses a Ceph storage backend for any service, for example the Image Service (glance), Block Storage service (cinder), Compute service (nova), or Shared File Systems service (manila), you must use the same backend in the adopted deployment and configure the custom resources (CRs) accordingly.
If you use the Shared File Systems service (manila), on director environments the CephFS driver in the Shared File Systems service is configured to use its own keypair. For convenience, modify the openstack user so that you can use it across all Red Hat OpenStack Platform services.
Using the same user across the services serves two purposes:
- The capabilities that the user requires to interact with the Shared File Systems service become far simpler, and hence more secure, with RHOSO 18.0.
- It is simpler to create a common ceph secret (keyring and ceph config file) and propagate the secret to all services that need it.
To run ceph commands, you must use SSH to connect to a Ceph storage node and run sudo cephadm shell. This brings up a Ceph orchestrator container that allows you to run administrative commands against the Ceph cluster. If you deployed the Ceph cluster by using director, you can launch the cephadm shell from an RHOSP Controller node.
ceph auth caps client.openstack \
  mgr 'allow *' \
  mon 'allow r, profile rbd' \
  osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data'
Prerequisites
- The OpenStackControlPlane custom resource (CR) must already exist.
- Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
CEPH_SSH="ssh -i <path to SSH key> root@<node IP>"
CEPH_KEY=$($CEPH_SSH "cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0")
CEPH_CONF=$($CEPH_SSH "cat /etc/ceph/ceph.conf | base64 -w 0")
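The base64 -w 0 flag used for CEPH_KEY and CEPH_CONF disables line wrapping so the encoded value fits in a single Secret data field. The round trip can be verified locally with any small stand-in file (the file below is a sample, not your real ceph.conf):

```shell
# Stand-in for /etc/ceph/ceph.conf; the real file lives on the Ceph node.
printf '[global]\nfsid = 00000000-0000-0000-0000-000000000000\n' > /tmp/sample-ceph.conf

# Encode without line wrapping (-w 0), as required for a Secret data value.
ENCODED=$(base64 -w 0 /tmp/sample-ceph.conf)

# Round-trip check: decoding must reproduce the original content.
echo "$ENCODED" | base64 -d
```

If the encoded string contains newlines (that is, -w 0 was omitted), the Secret created in the next step is rejected or corrupted, so this quick check is worth doing.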
Procedure
Create the ceph-conf-files secret, containing the Ceph configuration:
oc apply -f - <<EOF
apiVersion: v1
data:
  ceph.client.openstack.keyring: $CEPH_KEY
  ceph.conf: $CEPH_CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
EOF
The content of the file should look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
stringData:
  ceph.client.openstack.keyring: |
    [client.openstack]
    key = <secret key>
    caps mgr = "allow *"
    caps mon = "allow r, profile rbd"
    caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data"
  ceph.conf: |
    [global]
    fsid = 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
    mon_host = 10.1.1.2,10.1.1.3,10.1.1.4
Configure extraMounts within the OpenStackControlPlane CR:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - CinderVolume
          - CinderBackup
          - GlanceAPI
          - ManilaShare
          extraVolType: Ceph
          volumes:
          - name: ceph
            projected:
              sources:
              - secret:
                  name: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
'
3.4. Creating an NFS Ganesha cluster
If you use the Ceph via NFS backend with the Shared File Systems service (manila), prior to adoption you must create a new clustered NFS service on the Ceph cluster. This service replaces the standalone, Pacemaker-controlled ceph-nfs service that was used on Red Hat OpenStack Platform 17.1.
Procedure
- You must identify the Ceph nodes to deploy the new clustered NFS service. This service must be deployed on the StorageNFS isolated network so that it is easier for clients to mount their existing shares through the new NFS export locations. You must propagate the StorageNFS network to the target nodes where the ceph-nfs service will be deployed. The following steps are relevant if the Ceph Storage nodes were deployed via director.
- Identify the node definition file used in the environment. This is the input file associated with the openstack overcloud node provision command. For example, this file may be called overcloud-baremetal-deploy.yaml.
Edit the networks associated with the Red Hat Ceph Storage nodes to include the StorageNFS network:
- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
  - hostname: cephstorage-0
    name: ceph-0
  - hostname: cephstorage-1
    name: ceph-1
  - hostname: cephstorage-2
    name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/network/nic-configs/ceph-storage.j2
      network_config_update: true
    networks:
    - network: ctlplane
      vif: true
    - network: storage
    - network: storage_mgmt
    - network: storage_nfs
Edit the network configuration template file for the Red Hat Ceph Storage nodes to include an interface connecting to the StorageNFS network. In the example above, the path to the network configuration template file is /home/stack/network/nic-configs/ceph-storage.j2. This file is modified to include the following NIC template:
- type: vlan
  device: nic2
  vlan_id: {{ storage_nfs_vlan_id }}
  addresses:
  - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }}
  routes: {{ storage_nfs_host_routes }}
Re-run the openstack overcloud node provision command to update the Red Hat Ceph Storage nodes:
openstack overcloud node provision \
  --stack overcloud \
  --network-config -y \
  -o overcloud-baremetal-deployed-storage_nfs.yaml \
  --concurrency 2 \
  /home/stack/network/baremetal_deployment.yaml
When the update is complete, ensure that the Red Hat Ceph Storage nodes have a new interface created and tagged with the appropriate VLAN associated with StorageNFS.
Identify an IP address from the StorageNFS network to use as the Virtual IP address for the Ceph NFS service. This IP address must be provided in place of the {{ VIP }} in the example below. You can query used IP addresses with:
openstack port list -c "Fixed IP Addresses" --network storage_nfs
- Pick an appropriate size for the NFS cluster. The NFS service provides active/active high availability when the cluster size is more than one node. It is recommended that the {{ cluster_size }} is at least one less than the number of hosts identified. This solution has been well tested with a 3-node NFS cluster.
- The ingress-mode argument must be set to haproxy-protocol. No other ingress-mode is supported. This ingress mode allows enforcing client restrictions through the Shared File Systems service. For more information on deploying the clustered Ceph NFS service, see the Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability) in the Red Hat Ceph Storage 7 Operations Guide. The following commands are run inside a cephadm shell to create a clustered Ceph NFS service:
# wait for shell to come up, then execute:
ceph orch host ls

# Identify the hosts that can host the NFS service.
# Repeat the following command to label each host identified:
ceph orch host label add <HOST> nfs

# Set the appropriate {{ cluster_size }} and {{ VIP }}:
ceph nfs cluster create cephfs \
    "{{ cluster_size }} label:nfs" \
    --ingress \
    --virtual-ip={{ VIP }} \
    --ingress-mode=haproxy-protocol

# Check the status of the nfs cluster with these commands:
ceph nfs cluster ls
ceph nfs cluster info cephfs
3.5. Stopping Red Hat OpenStack Platform services
Before you start the adoption, you must stop the Red Hat OpenStack Platform (RHOSP) services.
This is an important step to avoid inconsistencies in the data migrated during the data plane adoption procedure, caused by resource changes after the database has been copied to the new deployment.
Some services are easy to stop because they only perform short asynchronous operations, but other services are a bit more complex to gracefully stop because they perform synchronous or long running operations that you might want to complete instead of aborting them.
Since gracefully stopping all services is non-trivial and beyond the scope of this guide, the following procedure uses the force method and presents recommendations on how to check some things in the services.
Note that you should not stop the infrastructure management services yet, such as:
- database
- RabbitMQ
- HAProxy Load Balancer
- ceph-nfs
- Compute service
- containerized modular libvirt daemons
- Object Storage service (swift) backend services
Prerequisites
- Confirm that there are no long-running operations that require the services you plan to stop.
Ensure that there are no ongoing instance live migrations, volume migrations (online or offline), volume creation, backup restore, attaching, detaching, and so on.
openstack server list --all-projects -c ID -c Status | grep -E '\| .+ing \|'
openstack volume list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
openstack volume backup list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
openstack share list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
openstack image list -c ID -c Status | grep -E '\| .+ing \|'
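The grep filter above matches any status column ending in "ing" (migrating, attaching, deleting, and so on), which is how in-progress operations are spotted in the table output. A quick local check with canned CLI-style rows (sample data, not a live cloud):

```shell
# Sample rows in the shape that 'openstack server list -c ID -c Status' prints.
cat > /tmp/sample-status.txt << 'EOF'
| 11111111-aaaa | ACTIVE    |
| 22222222-bbbb | migrating |
| 33333333-cccc | SHUTOFF   |
EOF

# Only in-progress rows ('...ing' statuses) should match.
grep -E '\| .+ing \|' /tmp/sample-status.txt
```

Only the migrating row matches; stable states such as ACTIVE and SHUTOFF are filtered out, and the `grep -vi error` stages in the volume and share commands additionally drop rows whose status merely contains "error".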
- Collect the topology-specific configuration of the services before stopping them, because it must be gathered while they are live. The topology-specific configuration is necessary for migrating the databases. For more information, see Retrieving topology-specific service configuration.
Define the following shell variables. The values that are used are examples and refer to a single node standalone director deployment. Replace these example values with values that are correct for your environment:
CONTROLLER1_SSH="ssh -i <path to SSH key> root@<node IP>"
CONTROLLER2_SSH=""
CONTROLLER3_SSH=""
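The stop script later in this procedure iterates CONTROLLER1_SSH through CONTROLLER3_SSH by bash indirect expansion (${!SSH_CMD}) and silently skips the empty ones, which is why unused controllers can be left as empty strings. The mechanism can be seen in isolation here, with echo standing in for the real ssh commands:

```shell
# Stand-in values: only controller 1 is defined, as in a single-node
# standalone deployment.
CONTROLLER1_SSH="echo controller1:"
CONTROLLER2_SSH=""
CONTROLLER3_SSH=""

for i in 1 2 3; do
    SSH_CMD=CONTROLLER${i}_SSH          # the *name* of the variable, e.g. CONTROLLER1_SSH
    if [ ! -z "${!SSH_CMD}" ]; then     # ${!SSH_CMD} expands to that variable's value
        ${!SSH_CMD} "would stop services on controller $i"
    fi
done
```

Only controller 1 produces output; the loop body never runs for the empty entries, so the same script works unchanged for one-node and three-node control planes.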
Procedure
You can stop RHOSP services at any moment, but you might leave your environment in an undesired state. You should confirm that there are no ongoing operations.
- Connect to all the controller nodes.
- Remove any constraints between infrastructure and RHOSP control plane services.
- Stop the control plane services.
- Verify the control plane services are stopped.
The cinder-backup service on RHOSP 17.1 can run as Active-Passive under Pacemaker or as Active-Active, so you must check how it is running and stop it.
If the deployment enables CephFS through NFS as a backend for the Shared File Systems service (manila), there are Pacemaker ordering and co-location constraints that govern the Virtual IP address assigned to the ceph-nfs service, the ceph-nfs service itself, and the manila-share service. These constraints must be removed:
# check the co-location and ordering constraints concerning "manila-share"
sudo pcs constraint list --full

# remove these constraints
sudo pcs constraint remove colocation-openstack-manila-share-ceph-nfs-INFINITY
sudo pcs constraint remove order-ceph-nfs-openstack-manila-share-Optional
The following steps to disable RHOSP control plane services can be automated with a simple script that relies on the previously defined environment variables:
# Update the services list to be stopped
ServicesToStop=("tripleo_horizon.service"
                "tripleo_keystone.service"
                "tripleo_barbican_api.service"
                "tripleo_barbican_worker.service"
                "tripleo_barbican_keystone_listener.service"
                "tripleo_cinder_api.service"
                "tripleo_cinder_api_cron.service"
                "tripleo_cinder_scheduler.service"
                "tripleo_cinder_volume.service"
                "tripleo_cinder_backup.service"
                "tripleo_glance_api.service"
                "tripleo_manila_api.service"
                "tripleo_manila_api_cron.service"
                "tripleo_manila_scheduler.service"
                "tripleo_neutron_api.service"
                "tripleo_placement_api.service"
                "tripleo_nova_api_cron.service"
                "tripleo_nova_api.service"
                "tripleo_nova_conductor.service"
                "tripleo_nova_metadata.service"
                "tripleo_nova_scheduler.service"
                "tripleo_nova_vnc_proxy.service"
                "tripleo_aodh_api.service"
                "tripleo_aodh_api_cron.service"
                "tripleo_aodh_evaluator.service"
                "tripleo_aodh_listener.service"
                "tripleo_aodh_notifier.service"
                "tripleo_ceilometer_agent_central.service"
                "tripleo_ceilometer_agent_compute.service"
                "tripleo_ceilometer_agent_ipmi.service"
                "tripleo_ceilometer_agent_notification.service"
                "tripleo_ovn_cluster_northd.service"
                "tripleo_ironic_neutron_agent.service"
                "tripleo_ironic_api.service"
                "tripleo_ironic_inspector.service"
                "tripleo_ironic_conductor.service")

PacemakerResourcesToStop=("openstack-cinder-volume"
                          "openstack-cinder-backup"
                          "openstack-manila-share")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            echo "Stopping the $service in controller $i"
            if ${!SSH_CMD} sudo systemctl is-active $service; then
                ${!SSH_CMD} sudo systemctl stop $service
            fi
        fi
    done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                echo "ERROR: Service $service still running on controller $i"
            else
                echo "OK: Service $service is not running on controller $i"
            fi
        fi
    done
done

echo "Stopping pacemaker OpenStack services"
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStop[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                echo "Stopping $resource"
                ${!SSH_CMD} sudo pcs resource disable $resource
            else
                echo "Service $resource not present"
            fi
        done
        break
    fi
done

echo "Checking pacemaker OpenStack services"
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStop[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                if ! ${!SSH_CMD} sudo pcs resource status $resource | grep Started; then
                    echo "OK: Service $resource is stopped"
                else
                    echo "ERROR: Service $resource is started"
                fi
            fi
        done
        break
    fi
done
3.6. Migrating databases to MariaDB instances
This section describes how to move the databases from the original Red Hat OpenStack Platform (RHOSP) deployment to the MariaDB instances in the Red Hat OpenShift Container Platform cluster.
This example scenario describes a simple single-cell setup. Real multi-stack topologies, which are recommended for production use, result in a different cell database layout and should use different naming schemes (not covered here).
Prerequisites
Make sure that the previous adoption steps have been performed successfully.
- The OpenStackControlPlane resource must already be created.
- The control plane MariaDB and RabbitMQ are running. No other control plane services are running.
- The topology-specific service configuration has been retrieved. For more information, see Retrieving topology-specific service configuration.
- RHOSP services have been stopped. For more information, see Stopping Red Hat OpenStack Platform services.
- There must be network routability between the original MariaDB and the MariaDB for the control plane.
- Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
PODIFIED_MARIADB_IP=$(oc get svc --selector "mariadb/name=openstack" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_CELL1_MARIADB_IP=$(oc get svc --selector "mariadb/name=openstack-cell1" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_DB_ROOT_PASSWORD=$(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d)

# The CHARACTER_SET and collation should match the source DB;
# if they do not, it will break foreign key relationships
# for any tables that are created in the future as part of db sync
CHARACTER_SET=utf8
COLLATION=utf8_general_ci

STORAGE_CLASS=local-storage
MARIADB_IMAGE=registry.redhat.io/rhosp-dev-preview/openstack-mariadb-rhel9:18.0

# Replace with your environment's MariaDB Galera cluster VIP and backend IPs:
SOURCE_MARIADB_IP=172.17.0.2
declare -A SOURCE_GALERA_MEMBERS
SOURCE_GALERA_MEMBERS=(
  ["standalone.localdomain"]=172.17.0.100
  # ...
)
SOURCE_DB_ROOT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
- Prepare the MariaDB copy directory and the adoption helper pod.
Create a temporary folder to store the adoption helper pod (pick storage requests to fit the MySQL database size):
mkdir ~/adoption-db cd ~/adoption-db
oc apply -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $MARIADB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: mariadb-data
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: mariadb-data
    persistentVolumeClaim:
      claimName: mariadb-data
EOF
Wait for the pod to come up:
oc wait --for condition=Ready pod/mariadb-copy-data --timeout=30s
Procedure
Check that the source Galera database cluster members are online and synced:
for i in "${!SOURCE_GALERA_MEMBERS[@]}"; do
  echo "Checking for the database node $i WSREP status Synced"
  oc rsh mariadb-copy-data mysql \
    -h "${SOURCE_GALERA_MEMBERS[$i]}" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" \
    -e "show global status like 'wsrep_local_state_comment'" | \
    grep -qE "\bSynced\b"
done
Test the connection to the source database by listing its databases:
oc rsh mariadb-copy-data mysql -h "${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" -e "SHOW databases;"
Verify that the mysqlcheck output that you recorded earlier contains no problems:
. ~/.source_cloud_exported_variables
test -z "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" || [ "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" = " " ]
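The test -z ... || ... line above succeeds (exit status 0) only when the recorded mysqlcheck output is empty or a single space, so a non-empty result stops a scripted run at this point. The guard behaves like this, illustrated with stand-in values:

```shell
# Empty result: the guard passes and the procedure can continue.
PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK=""
if test -z "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" || \
   [ "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" = " " ]; then
    echo "OK: no tables reported problems"
else
    echo "ERROR: inspect the mysqlcheck output before continuing"
fi
```

If the guard fails, repair the reported tables on the source database and re-run mysqlcheck before proceeding with the dump.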
Test connection to control plane DBs (show databases):
oc run mariadb-client --image $MARIADB_IMAGE -i --rm --restart=Never -- \
  mysql -rsh "$PODIFIED_MARIADB_IP" -uroot -p"$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;'
oc run mariadb-client --image $MARIADB_IMAGE -i --rm --restart=Never -- \
  mysql -rsh "$PODIFIED_CELL1_MARIADB_IP" -uroot -p"$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;'
Note: You need to transition the Compute service (nova) services imported later on into a superconductor architecture. For that, delete the old service records in the cell databases, starting from cell1. New records will be registered with different hostnames provided by the Compute service operator. All Compute services, except the compute agent, have no internal state, and their service records can be safely deleted. You also need to rename the former default cell to cell1.
Create a dump of the original databases:
oc rsh mariadb-copy-data << EOF
  mysql -h"${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" \
  -N -e "show databases" | grep -E -v "schema|mysql|gnocchi" | \
  while read dbname; do
    echo "Dumping \${dbname}";
    mysqldump -h"${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" \
      --single-transaction --complete-insert --skip-lock-tables --lock-tables=0 \
      "\${dbname}" > /backup/"\${dbname}".sql;
  done
EOF
Restore the databases from .sql files into the control plane MariaDB:
oc rsh mariadb-copy-data << EOF
  # db schemas to rename on import
  declare -A db_name_map
  db_name_map['nova']='nova_cell1'
  db_name_map['ovs_neutron']='neutron'
  db_name_map['ironic-inspector']='ironic_inspector'

  # db servers to import into
  declare -A db_server_map
  db_server_map['default']=${PODIFIED_MARIADB_IP}
  db_server_map['nova_cell1']=${PODIFIED_CELL1_MARIADB_IP}

  # db server root password map
  declare -A db_server_password_map
  db_server_password_map['default']=${PODIFIED_DB_ROOT_PASSWORD}
  db_server_password_map['nova_cell1']=${PODIFIED_DB_ROOT_PASSWORD}

  cd /backup
  for db_file in \$(ls *.sql); do
    db_name=\$(echo \${db_file} | awk -F'.' '{ print \$1; }')
    if [[ -v "db_name_map[\${db_name}]" ]]; then
      echo "renaming \${db_name} to \${db_name_map[\${db_name}]}"
      db_name=\${db_name_map[\${db_name}]}
    fi
    db_server=\${db_server_map["default"]}
    if [[ -v "db_server_map[\${db_name}]" ]]; then
      db_server=\${db_server_map[\${db_name}]}
    fi
    db_password=\${db_server_password_map['default']}
    if [[ -v "db_server_password_map[\${db_name}]" ]]; then
      db_password=\${db_server_password_map[\${db_name}]}
    fi
    echo "creating \${db_name} in \${db_server}"
    mysql -h"\${db_server}" -uroot "-p\${db_password}" -e \
      "CREATE DATABASE IF NOT EXISTS \${db_name} DEFAULT \
        CHARACTER SET ${CHARACTER_SET} DEFAULT COLLATE ${COLLATION};"
    echo "importing \${db_name} into \${db_server}"
    mysql -h "\${db_server}" -uroot "-p\${db_password}" "\${db_name}" < "\${db_file}"
  done

  mysql -h "\${db_server_map['default']}" -uroot -p"\${db_server_password_map['default']}" -e \
    "update nova_api.cell_mappings set name='cell1' where name='default';"
  mysql -h "\${db_server_map['nova_cell1']}" -uroot -p"\${db_server_password_map['nova_cell1']}" -e \
    "delete from nova_cell1.services where host not like '%nova-cell1-%' and services.binary != 'nova-compute';"
EOF
Verification
Compare the following outputs with the topology-specific configuration. For more information, see Retrieving topology-specific service configuration.
Check that the databases were imported correctly:
. ~/.source_cloud_exported_variables

# use 'oc exec' and 'mysql -rs' to maintain formatting
dbs=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo $dbs | grep -Eq '\bkeystone\b'

# ensure neutron db is renamed from ovs_neutron
echo $dbs | grep -Eq '\bneutron\b'
echo $PULL_OPENSTACK_CONFIGURATION_DATABASES | grep -Eq '\bovs_neutron\b'

# ensure nova cell1 db is extracted to a separate db server and renamed from nova to nova_cell1
c1dbs=$(oc exec openstack-cell1-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo $c1dbs | grep -Eq '\bnova_cell1\b'

# ensure default cell renamed to cell1, and the cell UUIDs retained intact
novadb_mapped_cells=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" \
  nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
uuidf='\S{8,}-\S{4,}-\S{4,}-\S{4,}-\S{12,}'
left_behind=$(comm -23 \
  <(echo $PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS | grep -oE " $uuidf \S+") \
  <(echo $novadb_mapped_cells | tr -s "| " " " | grep -oE " $uuidf \S+"))
changed=$(comm -13 \
  <(echo $PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS | grep -oE " $uuidf \S+") \
  <(echo $novadb_mapped_cells | tr -s "| " " " | grep -oE " $uuidf \S+"))
test $(grep -Ec ' \S+$' <<<$left_behind) -eq 1
default=$(grep -E ' default$' <<<$left_behind)
test $(grep -Ec ' \S+$' <<<$changed) -eq 1
grep -qE " $(awk '{print $1}' <<<$default) cell1$" <<<$changed

# ensure the registered Compute service name has not changed
novadb_svc_records=$(oc exec openstack-cell1-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" \
  nova_cell1 -e "select host from services where services.binary='nova-compute' order by host asc;")
diff -Z <(echo $novadb_svc_records) <(echo $PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES)
Note: During the pre/post checks, the mariadb-client pod might return a pod security warning related to the restricted:latest security context constraint. This warning is caused by the default security context constraints and does not prevent pod creation by the admission controller. You will see a warning for the short-lived pod, but it does not interfere with functionality.

Delete the mariadb-copy-data pod and the mariadb-data persistent volume claim that contains the database backups. Consider making a snapshot of the persistent volume claim before you delete it:

oc delete pod mariadb-copy-data
oc delete pvc mariadb-data
For more information, see About pod security standards and warnings.
3.7. Migrating OVN data
This section describes how to move the OVN northbound and southbound databases from the original Red Hat OpenStack Platform deployment to the ovsdb-server instances running in the Red Hat OpenShift Container Platform cluster. Although the control plane Networking service (neutron) ML2/OVN driver and the OVN northd service can reconstruct the databases on startup, the reconstruction may be time-consuming on large existing clusters. The following procedure speeds up the data migration and avoids unnecessary data plane disruptions caused by incomplete OpenFlow table contents.
Prerequisites
Make sure the previous Adoption steps have been performed successfully.
- The OpenStackControlPlane resource must already be created at this point.
- NetworkAttachmentDefinition custom resource definitions (CRDs) for the original cluster are already defined. Specifically, the openstack/internalapi network is defined.
- The control plane MariaDB and RabbitMQ might already run. The Networking service and OVN are not running yet.
- The original OVN version is older than or equal to the control plane OVN version.
- The original Neutron Server and OVN northd services are stopped.
There must be network routability between:
- The adoption host and the original OVN.
- The adoption host and the control plane OVN.
Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
STORAGE_CLASS=local-storage
OVSDB_IMAGE=registry.redhat.io/rhosp-dev-preview/openstack-ovn-base-rhel9:18.0
SOURCE_OVSDB_IP=172.17.1.49
You can get the value to set SOURCE_OVSDB_IP by querying the puppet-generated configurations:

grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
Procedure
Prepare the OVN database copy directory and the adoption helper pod. Choose the storage requests to fit the OVN database sizes:
oc apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ovn-data-cert
  namespace: openstack
spec:
  commonName: ovn-data-cert
  secretName: ovn-data-cert
  issuerRef:
    name: rootca-internal
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovn-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovn-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $OVSDB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: ovn-data
    - mountPath: /etc/pki/tls/misc
      name: ovn-data-cert
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: ovn-data
    persistentVolumeClaim:
      claimName: ovn-data
  - name: ovn-data-cert
    secret:
      secretName: ovn-data-cert
EOF
Wait for the pod to come up
oc wait --for=condition=Ready pod/ovn-copy-data --timeout=30s
Back up the OVN databases on an environment without TLS everywhere:

oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
Back up the OVN databases on a TLS everywhere environment:

oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
Start the control plane OVN database services prior to import, keeping ovn-northd and ovn-controller stopped. The nodeSelector that points at a non-existing node name keeps ovn-controller from being scheduled until you remove it in a later step:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        replicas: 0
      ovnController:
        networkAttachment: tenant
        nodeSelector:
          node: non-existing-node-name
'
Wait for the OVN database pods to reach the Running phase:

oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-nb
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-sb
Fetch the control plane OVN IP addresses on the clusterIP service network:

PODIFIED_OVSDB_NB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-nb-0" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_OVSDB_SB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-sb-0" -ojsonpath='{.items[0].spec.clusterIP}')
Upgrade database schema for the backup files on an environment without TLS everywhere.
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema" oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
Upgrade database schema for the backup files on a TLS everywhere environment.
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema" oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
Restore database backup to the control plane OVN database servers on an environment without TLS everywhere.
oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db" oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
Restore database backup to control plane OVN database servers on a TLS everywhere environment.
oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db" oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
Check that the control plane OVN databases contain objects from backup, for example:
oc exec -it ovsdbserver-nb-0 -- ovn-nbctl show
oc exec -it ovsdbserver-sb-0 -- ovn-sbctl list Chassis
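A quick way to compare against the source is to count the top-level objects on both sides. The helper below tallies switch and router entries from ovn-nbctl show output; the unindented "switch ..." and "router ..." line format is an assumption about your OVN version:

```shell
# count_nb_objects: read `ovn-nbctl show` output on stdin and print
# the number of logical switches and routers it lists.
count_nb_objects() {
  awk '/^switch /{s++} /^router /{r++} END{ printf "switches=%d routers=%d\n", s+0, r+0 }'
}

# For example, run against the control plane northbound database:
#   oc exec ovsdbserver-nb-0 -- ovn-nbctl show | count_nb_objects
```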
Finally, start the ovn-northd service, which keeps the OVN northbound and southbound databases in sync:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnNorthd:
        replicas: 1
'
Also enable ovn-controller:

oc patch openstackcontrolplane openstack --type=json -p="[{'op': 'remove', 'path': '/spec/ovn/template/ovnController/nodeSelector'}]"
Delete the ovn-copy-data pod and the ovn-data persistent volume claim that contains the OVN database backups. Consider making a snapshot of the persistent volume claim before you delete it:

oc delete pod ovn-copy-data
oc delete pvc ovn-data
Stop the old OVN database servers:

ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"
                "tripleo_ovn_cluster_south_db_server.service")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      echo "Stopping the $service in controller $i"
      if ${!SSH_CMD} sudo systemctl is-active $service; then
        ${!SSH_CMD} sudo systemctl stop $service
      fi
    fi
  done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
        echo "ERROR: Service $service still running on controller $i"
      else
        echo "OK: Service $service is not running on controller $i"
      fi
    fi
  done
done