Chapter 4. Adopting Red Hat OpenStack Platform control plane services
Adopt your Red Hat OpenStack Platform 17.1 control plane services to deploy them in the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 control plane.
4.1. Adopting the Identity service
Prerequisites
- Previous adoption steps completed. Notably, the source databases must already be imported into the control plane MariaDB, as described in Migrating databases to MariaDB instances.
Ensure that you copy the fernet keys. Create the keystone secret, containing fernet keys:

oc apply -f - <<EOF
apiVersion: v1
data:
  CredentialKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/0 | base64 -w 0)
  CredentialKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/1 | base64 -w 0)
  FernetKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/0 | base64 -w 0)
  FernetKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/1 | base64 -w 0)
kind: Secret
metadata:
  name: keystone
  namespace: openstack
type: Opaque
EOF
Procedure
Patch OpenStackControlPlane to deploy the Identity service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  keystone:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
'
Create an alias to use the openstack command in the adopted deployment:

$ alias openstack="oc exec -t openstackclient -- openstack"
Clean up old services and endpoints that still point to the old control plane, excluding the Identity service and its endpoints:
$ openstack endpoint list | grep keystone | awk '/admin/{ print $2; }' | xargs ${BASH_ALIASES[openstack]} endpoint delete || true

$ for service in aodh heat heat-cfn barbican cinderv3 glance manila manilav2 neutron nova placement swift ironic-inspector ironic; do
    openstack service list | awk "/ $service /{ print \$2; }" | xargs ${BASH_ALIASES[openstack]} service delete || true
  done
Verification
See that Identity service endpoints are defined and pointing to the control plane FQDNs:
$ openstack endpoint list | grep keystone
4.2. Adopting the Key Manager service
Adopting the Key Manager service (barbican) means that an existing OpenStackControlPlane custom resource (CR), where the Key Manager service is initially disabled, is patched to start the service with the configuration parameters provided by the source environment.
When the procedure is over, the expectation is to see the BarbicanAPI, BarbicanWorker, and BarbicanKeystoneListener services up and running. The Identity service (keystone) endpoints should also be updated, and the same crypto plugin as in the source cloud will be available. If the conditions above are met, the adoption is considered concluded.
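Once the patch below is applied, a quick way to confirm that these three components are running is a pod listing; this is a minimal sketch that assumes the operator labels its pods with service=barbican, mirroring the labels used for other services in this guide:

$ oc get pods -l service=barbican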
This procedure configures the Key Manager service to use the simple_crypto backend. Additional backends are available, such as PKCS11 and DogTag; however, they are not supported in this release.
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, RabbitMQ, and the Identity service (keystone) should already be adopted.
Procedure
Add the KEK secret. In this case, update and use osp-secret, which contains other service passwords:

oc set data secret/osp-secret "BarbicanSimpleCryptoKEK=$($CONTROLLER1_SSH "python3 -c \"import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf'); print(c['simple_crypto_plugin']['kek'])\"" | base64 -w 0)"
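To confirm that the key landed in the secret correctly, you can decode it and compare it against the kek value in the source barbican.conf; a minimal sketch using standard oc jsonpath output:

$ oc get secret osp-secret -o jsonpath='{.data.BarbicanSimpleCryptoKEK}' | base64 -d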
Patch OpenStackControlPlane to deploy the Key Manager service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  barbican:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: barbican
      databaseUser: barbican
      rabbitMqClusterName: rabbitmq
      secret: osp-secret
      simpleCryptoBackendSecret: osp-secret
      serviceAccount: barbican
      serviceUser: barbican
      passwordSelectors:
        database: BarbicanDatabasePassword
        service: BarbicanPassword
        simplecryptokek: BarbicanSimpleCryptoKEK
      barbicanAPI:
        replicas: 1
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 1
      barbicanKeystoneListener:
        replicas: 1
'
Verification
Check that the Key Manager service endpoints are defined and pointing to the control plane FQDNs:
$ openstack endpoint list | grep key-manager
Check that the Barbican API service is registered in the Identity service:
$ openstack service list | grep key-manager
$ openstack endpoint list | grep key-manager
List secrets:
$ openstack secret list
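Optionally, confirm that writes work as well by storing and listing a test secret; a short sketch (the secret name and payload are arbitrary):

$ openstack secret store --name adoption-test --payload 'test-payload'
$ openstack secret list | grep adoption-test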
4.3. Adopting the Networking service
Adopting the Networking service (neutron) means that an existing OpenStackControlPlane custom resource (CR), where the Networking service is initially disabled, is patched to start the service with the configuration parameters provided by the source environment.
When the procedure is over, the expectation is to see the NeutronAPI service running: the Identity service (keystone) endpoints should be updated, and the same backend as in the source cloud will be available. If the conditions above are met, the adoption is considered concluded.
This guide also assumes that:
- A director environment (the source cloud) is running on one side.
- A single-node OpenShift (SNO) or CodeReady Containers (CRC) environment is running on the other side.
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, the Identity service (keystone), and the OVN data (see Migrating OVN data) should already be adopted.
Procedure
Patch OpenStackControlPlane to deploy the Networking service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  neutron:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      databaseAccount: neutron
      secret: osp-secret
      networkAttachments:
      - internalapi
'
Verification
Inspect the resulting Networking service pods:
NEUTRON_API_POD=`oc get pods -l service=neutron | tail -n 1 | cut -f 1 -d' '`
oc exec -t $NEUTRON_API_POD -c neutron-api -- cat /etc/neutron/neutron.conf
Check that the Neutron API service is registered in the Identity service:

$ openstack service list | grep network

$ openstack endpoint list | grep network
| 6a805bd6c9f54658ad2f24e5a0ae0ab6 | regionOne | neutron | network | True | public   | http://neutron-public-openstack.apps-crc.testing |
| b943243e596847a9a317c8ce1800fa98 | regionOne | neutron | network | True | internal | http://neutron-internal.openstack.svc:9696       |
| f97f2b8f7559476bb7a5eafe3d33cee7 | regionOne | neutron | network | True | admin    | http://192.168.122.99:9696                       |
Create sample resources. You can test whether the user can create networks, subnets, ports, or routers.
$ openstack network create net
$ openstack subnet create --network net --subnet-range 10.0.0.0/24 subnet
$ openstack router create router
4.4. Adopting the Object Storage service
This section only applies if you are using OpenStack Swift as the Object Storage service (swift). If you are using the Object Storage API of Ceph RGW, you can skip this section.
Prerequisites
- Previous adoption steps completed.
- The storage backend services of the Object Storage service must still be running.
- The storage network has been properly configured on the Red Hat OpenShift Container Platform cluster.
Procedure
Create the swift-conf secret, containing the Object Storage service hash path suffix and prefix:

oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: swift-conf
  namespace: openstack
type: Opaque
data:
  swift.conf: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf | base64 -w0)
EOF
Create the swift-ring-files configmap, containing the Object Storage service ring files:

oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: swift-ring-files
binaryData:
  swiftrings.tar.gz: $($CONTROLLER1_SSH "cd /var/lib/config-data/puppet-generated/swift/etc/swift && tar cz *.builder *.ring.gz backups/ | base64 -w0")
  account.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/account.ring.gz")
  container.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/container.ring.gz")
  object.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/object.ring.gz")
EOF
Patch OpenStackControlPlane to deploy the Object Storage service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  swift:
    enabled: true
    template:
      memcachedInstance: memcached
      swiftRing:
        ringReplicas: 1
      swiftStorage:
        replicas: 0
        networkAttachments:
        - storage
        storageClass: local-storage
        storageRequest: 10Gi
      swiftProxy:
        secret: osp-secret
        replicas: 1
        passwordSelectors:
          service: SwiftPassword
        serviceUser: swift
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
'
Verification
Inspect the resulting Object Storage service pods:
$ oc get pods -l component=swift-proxy
Check that the Object Storage service proxy service is registered in the Identity service:
$ openstack service list | grep swift
| b5b9b1d3c79241aa867fa2d05f2bbd52 | swift | object-store |

$ openstack endpoint list | grep swift
| 32ee4bd555414ab48f2dc90a19e1bcd5 | regionOne | swift | object-store | True | public   | https://swift-public-openstack.apps-crc.testing/v1/AUTH_%(tenant_id)s |
| db4b8547d3ae4e7999154b203c6a5bed | regionOne | swift | object-store | True | internal | http://swift-internal.openstack.svc:8080/v1/AUTH_%(tenant_id)s        |
Check that you are able to upload and download objects:
echo "Hello World!" > obj openstack container create test +---------------------------------------+-----------+------------------------------------+ | account | container | x-trans-id | +---------------------------------------+-----------+------------------------------------+ | AUTH_4d9be0a9193e4577820d187acdd2714a | test | txe5f9a10ce21e4cddad473-0065ce41b9 | +---------------------------------------+-----------+------------------------------------+ openstack object create test obj +--------+-----------+----------------------------------+ | object | container | etag | +--------+-----------+----------------------------------+ | obj | test | d41d8cd98f00b204e9800998ecf8427e | +--------+-----------+----------------------------------+ openstack object save test obj --file - Hello World!
At this point data is still stored on the previously existing nodes. For more information about migrating the actual data from the old to the new deployment, see Migrating the Object Storage service (swift) data from RHOSP to Red Hat OpenStack Services on OpenShift (RHOSO) nodes.
4.5. Adopting the Image service
Adopting the Image service (glance) means that an existing OpenStackControlPlane custom resource (CR), where the Image service is initially disabled, is patched to start the service with the configuration parameters provided by the source environment.
When the procedure is over, the expectation is to see the GlanceAPI service up and running: the Identity service endpoints are updated and the same backend as in the source cloud is available. If the conditions above are met, the adoption is considered concluded.
This guide also assumes that:
- A director environment (the source cloud) is running on one side.
- A single-node OpenShift (SNO) or CodeReady Containers (CRC) environment is running on the other side.
- (Optional) An internal/external Ceph cluster is reachable by both crc and director.
4.5.1. Adopting the Image service that is deployed with an Object Storage service backend
Adopt the Image service (glance) that you deployed with an Object Storage service (swift) backend. When the Image service is deployed with the Object Storage service (swift) as a backend in the Red Hat OpenStack Platform environment based on director, the control plane glanceAPI instance is deployed with the following configuration:

spec:
  glance:
    ...
    customServiceConfig: |
      [DEFAULT]
      enabled_backends = default_backend:swift
      [glance_store]
      default_backend = default_backend
      [default_backend]
      swift_store_create_container_on_put = True
      swift_store_auth_version = 3
      swift_store_auth_address = {{ .KeystoneInternalURL }}
      swift_store_endpoint_type = internalURL
      swift_store_user = service:glance
      swift_store_key = {{ .ServicePassword }}
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, the Identity service (keystone), and the Key Manager service (barbican) should already be adopted.
Procedure
Write the patch manifest into a file, for example glance_swift.patch:

spec:
  glance:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      storageClass: "local-storage"
      storageRequest: 10G
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:swift
        [glance_store]
        default_backend = default_backend
        [default_backend]
        swift_store_create_container_on_put = True
        swift_store_auth_version = 3
        swift_store_auth_address = {{ .KeystoneInternalURL }}
        swift_store_endpoint_type = internalURL
        swift_store_user = service:glance
        swift_store_key = {{ .ServicePassword }}
      glanceAPIs:
        default:
          replicas: 1
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
Having the Object Storage service as a backend establishes a dependency between the two services, and any deployed GlanceAPI instance would not work if the Image service is configured with an Object Storage service that is not yet available in the OpenStackControlPlane. Once the Object Storage service, and in particular SwiftProxy, has been adopted, you can proceed with the GlanceAPI adoption. For more information, see Adopting the Object Storage service.

Verify that SwiftProxy is available:

$ oc get pod -l component=swift-proxy | grep Running
swift-proxy-75cb47f65-92rxq 3/3 Running 0
Patch the GlanceAPI service deployed in the control plane context:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_swift.patch
4.5.2. Adopting the Image service that is deployed with a Block Storage service backend
Adopt the Image service (glance) that you deployed with a Block Storage service (cinder) backend. When the Image service is deployed with the Block Storage service as a backend in the Red Hat OpenStack Platform environment based on director, the control plane glanceAPI instance is deployed with the following configuration:

spec:
  glance:
    ...
    customServiceConfig: |
      [DEFAULT]
      enabled_backends = default_backend:cinder
      [glance_store]
      default_backend = default_backend
      [default_backend]
      rootwrap_config = /etc/glance/rootwrap.conf
      description = Default cinder backend
      cinder_store_auth_address = {{ .KeystoneInternalURL }}
      cinder_store_user_name = {{ .ServiceUser }}
      cinder_store_password = {{ .ServicePassword }}
      cinder_store_project_name = service
      cinder_catalog_info = volumev3::internalURL
      cinder_use_multipath = true
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, the Identity service (keystone), and the Key Manager service (barbican) should already be adopted.
Procedure
Write the patch manifest into a file, for example glance_cinder.patch:

spec:
  glance:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      storageClass: "local-storage"
      storageRequest: 10G
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:cinder
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rootwrap_config = /etc/glance/rootwrap.conf
        description = Default cinder backend
        cinder_store_auth_address = {{ .KeystoneInternalURL }}
        cinder_store_user_name = {{ .ServiceUser }}
        cinder_store_password = {{ .ServicePassword }}
        cinder_store_project_name = service
        cinder_catalog_info = volumev3::internalURL
        cinder_use_multipath = true
      glanceAPIs:
        default:
          replicas: 1
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
Having the Block Storage service as a backend establishes a dependency between the two services, and any deployed GlanceAPI instance would not work if the Image service is configured with a Block Storage service that is not yet available in the OpenStackControlPlane. Once the Block Storage service, and in particular CinderVolume, has been adopted, you can proceed with the GlanceAPI adoption.

Verify that CinderVolume is available:

$ oc get pod -l component=cinder-volume | grep Running
cinder-volume-75cb47f65-92rxq 3/3 Running 0
Patch the GlanceAPI service deployed in the control plane context:

oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_cinder.patch
4.5.3. Adopting the Image service that is deployed with an NFS Ganesha backend
Adopt the Image Service (glance) that you deployed with an NFS Ganesha backend. The following steps assume that:
- The storage network has been propagated to the RHOSP control plane.
- The Image service is able to reach the storage network and connect to the nfs-server through port 2049.
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, the Identity service (keystone), and the Key Manager service (barbican) should already be adopted.
In the source cloud, verify the NFS Ganesha parameters that the overcloud uses to configure the Image service backend. In particular, find among the director heat templates the following variables, which are usually an override of the default content provided by /usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml:

GlanceBackend: file
GlanceNfsEnabled: true
GlanceNfsShare: 192.168.24.1:/var/nfs
In the example above, as the first variable shows, the Image service has no notion of an NFS Ganesha backend: the File driver is used in this scenario, and behind the scenes the filesystem_store_datadir, which usually points to /var/lib/glance/images/, is mapped to the export value provided by the GlanceNfsShare variable. If the GlanceNfsShare is not exported through a network that is supposed to be propagated to the adopted Red Hat OpenStack Platform control plane, an extra action is required by the human administrator, who must stop the nfs-server and remap the export to the storage network. This action usually happens when the Image service is stopped in the source Controller nodes.
In the control plane, the Image service is attached to the storage network, propagated via the associated NetworkAttachmentDefinition custom resource, and the resulting pods already have the right permissions to handle the Image service traffic through this network. In a deployed RHOSP control plane, you can verify that the network mapping matches what has been deployed in the director-based environment by checking both the NodeNetworkConfigurationPolicy (nncp) and the NetworkAttachmentDefinition (net-attach-def):

$ oc get nncp
NAME                        STATUS      REASON
enp6s0-crc-8cf2w-master-0   Available   SuccessfullyConfigured

$ oc get net-attach-def
NAME
ctlplane
internalapi
storage
tenant

$ oc get ipaddresspool -n metallb-system
NAME          AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
ctlplane      true          false             ["192.168.122.80-192.168.122.90"]
internalapi   true          false             ["172.17.0.80-172.17.0.90"]
storage       true          false             ["172.18.0.80-172.18.0.90"]
tenant        true          false             ["172.19.0.80-172.19.0.90"]
The above represents an example of the output that should be checked in the Red Hat OpenShift Container Platform environment to make sure there are no issues with the propagated networks.
Procedure
Adopt the Image service and create a new default GlanceAPI instance connected with the existing NFS Ganesha share:

cat << EOF > glance_nfs_patch.yaml
spec:
  extraMounts:
  - extraVol:
    - extraVolType: Nfs
      mounts:
      - mountPath: /var/lib/glance/images
        name: nfs
      propagation:
      - Glance
      volumes:
      - name: nfs
        nfs:
          path: /var/nfs
          server: 172.17.3.20
    name: r1
    region: r1
  glance:
    enabled: true
    template:
      databaseInstance: openstack
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images/
      storageClass: "local-storage"
      storageRequest: 10G
      glanceAPIs:
        default:
          replicas: 1
          type: single
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
EOF
Note: In glance_nfs_patch.yaml, replace the nfs/server IP address with the IP used to reach the nfs-server, and make sure the nfs/path points to the exported path on the nfs-server.

Patch OpenStackControlPlane to deploy the Image service with an NFS Ganesha backend:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_nfs_patch.yaml
Verification
When GlanceAPI is active, you can see a single API instance:
$ oc get pods -l service=glance
NAME                      READY   STATUS    RESTARTS
glance-default-single-0   3/3     Running   0
and the description of the pod must report:
Mounts:
...
nfs:
  Type:      NFS (an NFS mount that lasts the lifetime of a pod)
  Server:    {{ server ip address }}
  Path:      {{ nfs export path }}
  ReadOnly:  false
...
Check the mountpoint:
oc rsh -c glance-api glance-default-single-0

sh-5.1# mount
...
{{ ip address }}:/var/nfs on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.18.0.5,local_lock=none,addr=172.18.0.5)
...
Confirm that the UUID has been created in the exported directory on the NFS Ganesha node. For example:
$ oc rsh openstackclient
$ openstack image list

sh-5.1$ curl -L -o /tmp/cirros-0.5.2-x86_64-disk.img http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
...
sh-5.1$ openstack image create --container-format bare --disk-format raw --file /tmp/cirros-0.5.2-x86_64-disk.img cirros
...
sh-5.1$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 634482ca-4002-4a6d-b1d5-64502ad02630 | cirros | active |
+--------------------------------------+--------+--------+
On the nfs-server node, the same uuid is in the exported /var/nfs:

$ ls /var/nfs/
634482ca-4002-4a6d-b1d5-64502ad02630
4.5.4. Adopting the Image service that is deployed with a Red Hat Ceph Storage backend
Adopt the Image service (glance) that you deployed with a Red Hat Ceph Storage backend. Use the customServiceConfig parameter to inject the right configuration into the GlanceAPI instance.
Prerequisites
- Previous adoption steps completed. Notably, MariaDB, the Identity service (keystone), and the Key Manager service (barbican) should already be adopted.
Make sure the Ceph-related secret (ceph-conf-files) was created in the openstack namespace and that the extraMounts property of the OpenStackControlPlane custom resource (CR) has been configured properly. These tasks are described in an earlier adoption step, Configuring a Ceph backend.

cat << EOF > glance_patch.yaml
spec:
  glance:
    enabled: true
    template:
      databaseInstance: openstack
      customServiceConfig: |
        [DEFAULT]
        enabled_backends=default_backend:rbd
        [glance_store]
        default_backend=default_backend
        [default_backend]
        rbd_store_ceph_conf=/etc/ceph/ceph.conf
        rbd_store_user=openstack
        rbd_store_pool=images
        store_description=Ceph glance store backend.
      storageClass: "local-storage"
      storageRequest: 10G
      glanceAPIs:
        default:
          replicas: 1
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
EOF
If you have previously backed up your RHOSP services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct. For more information, see Pulling the configuration from a director deployment.
os-diff diff /tmp/collect_tripleo_configs/glance/etc/glance/glance-api.conf glance_patch.yaml --crd
This produces the difference between both ini configuration files.
Procedure
Patch the OpenStackControlPlane CR to deploy the Image service with a Red Hat Ceph Storage backend:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_patch.yaml
4.5.5. Verifying the Image service adoption
Verify that you successfully adopted your Image Service (glance) to the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment.
Procedure
Test the Image Service (glance) from the Red Hat OpenStack Platform CLI. You can compare and make sure the configuration has been correctly applied to the Image service pods:
os-diff diff /etc/glance/glance.conf.d/02-config.conf glance_patch.yaml --frompod -p glance-api
If no line appears, then the configuration is correctly done.
Inspect the resulting glance pods:
GLANCE_POD=`oc get pod | grep glance-default-external-0 | cut -f 1 -d' '`
oc exec -t $GLANCE_POD -c glance-api -- cat /etc/glance/glance.conf.d/02-config.conf

[DEFAULT]
enabled_backends=default_backend:rbd
[glance_store]
default_backend=default_backend
[default_backend]
rbd_store_ceph_conf=/etc/ceph/ceph.conf
rbd_store_user=openstack
rbd_store_pool=images
store_description=Ceph glance store backend.
If you use a Ceph backend, ensure that the Ceph secrets are properly mounted:
oc exec -t $GLANCE_POD -c glance-api -- ls /etc/ceph
ceph.client.openstack.keyring
ceph.conf
Check that the service is active and the endpoints are properly updated in the RHOSP CLI:
(openstack)$ service list | grep image
| fc52dbffef36434d906eeb99adfc6186 | glance | image |

(openstack)$ endpoint list | grep image
| 569ed81064f84d4a91e0d2d807e4c1f1 | regionOne | glance | image | True | internal | http://glance-internal-openstack.apps-crc.testing |
| 5843fae70cba4e73b29d4aff3e8b616c | regionOne | glance | image | True | public   | http://glance-public-openstack.apps-crc.testing   |
| 709859219bc24ab9ac548eab74ad4dd5 | regionOne | glance | image | True | admin    | http://glance-admin-openstack.apps-crc.testing    |
Check that the images that you previously listed in the source Cloud are available in the adopted service:
(openstack)$ image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| c3158cad-d50b-452f-bec1-f250562f5c1f | cirros | active |
+--------------------------------------+--------+--------+
4.6. Adopting the Placement service
Prerequisites
Previous adoption steps completed. Notably:
- The databases must already be imported into the control plane MariaDB (see Migrating databases to MariaDB instances).
- The Identity service must already be adopted (see Adopting the Identity service).
- The Memcached operator must be deployed (there is nothing to import for it from the source environment).
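A quick way to confirm that the Memcached instance is up before patching; a sketch under the assumption that the operator labels its pods with service=memcached:

$ oc get pods -l service=memcached -n openstack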
Procedure
Patch OpenStackControlPlane to deploy the Placement service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  placement:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: placement
      secret: osp-secret
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
'
Verification
Check that Placement endpoints are defined and pointing to the control plane FQDNs and that Placement API responds:
alias openstack="oc exec -t openstackclient -- openstack"

openstack endpoint list | grep placement

# Without OpenStack CLI placement plugin installed:
PLACEMENT_PUBLIC_URL=$(openstack endpoint list -c 'Service Name' -c 'Service Type' -c URL | grep placement | grep public | awk '{ print $6; }')
oc exec -t openstackclient -- curl "$PLACEMENT_PUBLIC_URL"

# With OpenStack CLI placement plugin installed:
openstack resource class list
4.7. Adopting the Compute service
This example scenario describes a simple single-cell setup. Real multi-cell topologies, which are recommended for production use, result in a different cells database layout and should use different naming schemes (not covered here).
Prerequisites
Previous adoption steps completed. Notably:
- The databases must already be imported into the control plane MariaDB (see Migrating databases to MariaDB instances).
- The Identity service must already be adopted (see Adopting the Identity service).
- The Key Manager service must already be adopted (see Adopting the Key Manager service).
- The Placement service must already be adopted (see Adopting the Placement service).
- The Image service must already be adopted (see Adopting the Image service).
- The OVN data must already be migrated (see Migrating OVN data).
- The Networking service must already be adopted (see Adopting the Networking service).
- The Bare Metal Provisioning service must already be adopted.
- Required topology-specific service configuration. For more information, see Retrieving topology-specific service configuration.
- Red Hat OpenStack Platform services have been stopped. For more information, see Stopping Red Hat OpenStack Platform services.
- Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
alias openstack="oc exec -t openstackclient -- openstack"
This procedure assumes that the Compute service metadata is deployed at the top level and not at each cell level, so this example imports it the same way. If the source deployment has a per-cell metadata deployment, adjust the patch below as needed. The metadata service cannot run in cell0.
Patch OpenStackControlPlane to deploy the Compute service:

oc patch openstackcontrolplane openstack -n openstack --type=merge --patch '
spec:
  nova:
    enabled: true
    apiOverride:
      route: {}
    template:
      secret: osp-secret
      apiServiceTemplate:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      metadataServiceTemplate:
        enabled: true # deploy single nova metadata on the top level
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      schedulerServiceTemplate:
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      cellTemplates:
        cell0:
          conductorServiceTemplate:
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
        cell1:
          metadataServiceTemplate:
            enabled: false # enable here to run it in a cell instead
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
          conductorServiceTemplate:
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
'
If you are adopting the Compute service (nova) with the Bare Metal Provisioning service (ironic), append the following novaComputeTemplates in the cell1 section of the Compute service CR patch.

NOTE: Set the [DEFAULT]host configuration option to match the hostname of the node running the ironic compute driver in the source cloud.

        cell1:
          novaComputeTemplates:
            standalone:
              customServiceConfig: |
                [DEFAULT]
                host = standalone.localdomain
                [workarounds]
                disable_compute_service_check_for_ffu=true
Wait for Compute service control plane services' custom resources (CRs) to become ready:
oc wait --for condition=Ready --timeout=300s Nova/nova
The local conductor services are started for each cell, while the superconductor runs in cell0. Note that disable_compute_service_check_for_ffu is mandatory for all imported Nova services until the external data plane is imported and the Nova Compute services are fast-forward upgraded. For more information, see Adopting Compute services to the RHOSO data plane and Performing a fast-forward upgrade on Compute services.
Verification
Check that Compute service endpoints are defined and pointing to the control plane FQDNs and that Nova API responds.
$ openstack endpoint list | grep nova
$ openstack server list
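Optionally, list the registered Compute services; until the data plane adoption is complete, the compute hosts reported from the source cloud may show as down, which is expected at this stage:

$ openstack compute service list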
Compare the following outputs with the topology specific configuration in Retrieving topology-specific service configuration.
Query the superconductor to check that cell1 exists, and compare it to pre-adoption values:
. ~/.source_cloud_exported_variables
echo $PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS
oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_cells | grep -F '| cell1 |'
The expected changes to happen:
- cell1's nova database and user name become nova_cell1.
- The default cell is renamed to cell1 (in a multi-cell setup, it should become indexed as the last cell instead).
- The RabbitMQ transport URL no longer uses guest.
At this point, the Compute service control plane services do not control the existing Compute service Compute workloads. The control plane manages the data plane only after the data adoption process is successfully completed. For more information, see Adopting Compute services to the RHOSO data plane.
4.8. Adopting the Block Storage service
Adopting a director-deployed Block Storage service (cinder) into Red Hat OpenStack Platform usually entails:
- Checking existing limitations.
- Considering the placement of the Block Storage services.
- Preparing the Red Hat OpenShift Container Platform nodes where volume and backup services will run.
- Crafting the manifest based on the existing cinder.conf file.
- Deploying the Block Storage service.
- Validating the new deployment.
This guide provides the necessary knowledge to complete these steps in most situations, but it still requires knowledge of how RHOSP services work and of the structure of a Block Storage service configuration file.
4.8.1. Limitations for adopting the Block Storage service
There are currently limitations that are worth highlighting; some are related to this guideline and some to the operator:
- There is no global nodeSelector for all Block Storage service (cinder) volumes, so it needs to be specified per backend.
- There is no global customServiceConfig or customServiceConfigSecrets for all Block Storage service volumes, so it needs to be specified per backend.
- Adoption of LVM backends, where the volume data is stored on the compute nodes, is not currently documented in this process.
- Support for Block Storage service backends that require kernel modules not included in RHEL has not been tested in operator-deployed Red Hat OpenStack Platform.
- Adoption of DCN/Edge deployment is not currently described in this guide.
4.8.2. Red Hat OpenShift Container Platform preparation for Block Storage service adoption
Before deploying Red Hat OpenStack Platform (RHOSP) in Red Hat OpenShift Container Platform, you must ensure that the networks are ready, that you have decided on the node selection, and that any necessary changes to the RHOCP nodes have been made. For the Block Storage service (cinder) volume and backup services, all three of these must be carefully considered.
- Node Selection
You might need, or want, to restrict the RHOCP nodes where Block Storage service volume and backup services can run.
The best example of when you need to do node selection for a specific Block Storage service is when you deploy the Block Storage service with the LVM driver. In that scenario, the LVM data where the volumes are stored only exists on a specific host, so you need to pin the Block Storage volume service to that specific RHOCP node. Running the service on any other RHOCP node would not work. Since nodeSelector only works on labels, you cannot use the RHOCP host node name to restrict the LVM backend; you need to identify it using a unique label, an existing label, or a new label:

$ oc label nodes worker0 lvm=cinder-volumes

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm-iscsi:
          nodeSelector:
            lvm: cinder-volumes
< . . . >
As mentioned in About node selector, an example where you need to use labels is when you use FC storage and you do not have HBA cards in all your RHOCP nodes. In this scenario, you need to restrict all the Block Storage service volume backends (not only the FC one) as well as the backup services.
Depending on the Block Storage service backends, their configuration, and the usage of the Block Storage service, you can have network-intensive Block Storage volume services with lots of I/O, as well as Block Storage backup services that are not only network intensive but also memory and CPU intensive. This may be a concern for the RHOCP human operators, and they may want to use the nodeSelector to prevent these services from interfering with their other RHOCP workloads. For more information about node selection, see About node selector.

When selecting the nodes where the Block Storage service volume is going to run, remember that the Block Storage volume service may also use local storage when downloading an Image service (glance) image for the create volume from image operation, and it can require a considerable amount of space when having concurrent operations and not using Block Storage service volume cache.
If you do not have nodes with enough local disk space for the temporary images, you can use a remote NFS location for the images. You had to manually set this up in director deployments, but with operators, you can do it automatically using the extra volumes feature (extraMounts); see the sketch after this paragraph.
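The following is a minimal sketch of such a mount for the image conversion directory; the export path, server IP, and the mount path are placeholders that you must adapt to your environment, and the matching image_conversion_dir option must point at the mounted path:

spec:
  extraMounts:
  - extraVol:
    - extraVolType: Nfs
      mounts:
      - mountPath: /var/lib/cinder/conversion   # placeholder conversion dir
        name: cinder-conversion
      propagation:
      - CinderVolume
      volumes:
      - name: cinder-conversion
        nfs:
          path: /var/nfs/cinder-conversion      # placeholder export path
          server: 172.17.3.20                   # placeholder NFS server IP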
- Transport protocols
Due to the specifics of the storage transport protocols, some changes may be required on the RHOCP side. Although this is something that must be documented by the vendor, here we provide some generic instructions that can serve as a guide for the different transport protocols.
Check the backend sections in your cinder.conf file that are listed in the enabled_backends configuration option to figure out the transport storage protocol used by the backend.

Depending on the backend, you can find the transport protocol:
- Looking at the volume_driver configuration option, as it may contain the protocol itself: RBD, iSCSI, FC…
- Looking at the target_protocol configuration option

Warning: Any time a MachineConfig is used to make changes to RHOCP nodes, the node will reboot. Act accordingly.
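A quick way to surface these options from a locally downloaded cinder.conf (see the download step later in this chapter); a sketch using plain grep:

$ grep -E '^\s*(enabled_backends|volume_driver|target_protocol)\s*=' cinder.conf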
- NFS
- There is nothing to do for NFS. RHOCP can connect to NFS backends without any additional changes.
- RBD/Ceph
- There is nothing to do for RBD/Ceph in terms of preparing the nodes, RHOCP can connect to Ceph backends without any additional changes. Credentials and configuration files will need to be provided to the services though.
- iSCSI
Connecting to iSCSI volumes requires that the iSCSI initiator is running on the RHOCP hosts where the volume and backup services are going to run. Because the Linux Open-iSCSI initiator does not currently support network namespaces, you must only run one instance of the service for the normal RHOCP usage, the RHOCP CSI plugins, and the RHOSP services.
If you are not already running iscsid on the RHOCP nodes, then you need to apply a MachineConfig similar to this one:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
If you are using labels to restrict the nodes where the Block Storage services are running, you need to use a MachineConfigPool as described in About node selector to limit the effects of the MachineConfig to only the nodes where your services may run.

If you are using a single-node deployment to test the process, replace worker with master in the MachineConfig.
- FC
There is nothing to do for FC volumes to work, but the Block Storage service volume and backup services need to run on an RHOCP host that has HBAs, so if there are nodes that do not have HBAs, you need to use labels to restrict where these services can run, as mentioned in About node selector.
This also means that for virtualized RHOCP clusters using FC you need to expose the host’s HBAs inside the VM.
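To check whether a given RHOCP node has usable FC HBAs before labeling, you can look for fc_host entries in sysfs; a sketch, where worker0 is a placeholder node name:

$ oc debug node/worker0 -- chroot /host ls /sys/class/fc_host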
- NVMe-oF
Connecting to NVMe-oF volumes requires that the nvme kernel modules are loaded on the RHOCP hosts.
If you are not already loading the nvme-fabrics module on the RHOCP nodes where volume and backup services are going to run, then you need to apply a MachineConfig similar to this one:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modules-load.d/nvme_fabrics.conf
        overwrite: false
        # Mode must be decimal, this is 0644
        mode: 420
        user:
          name: root
        group:
          name: root
        contents:
          # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
          # This is the rfc2397 text/plain string format
          source: data:,nvme-fabrics
If you are using labels to restrict the nodes where Block Storage services are running, you need to use a MachineConfigPool as described in About node selector to limit the effects of the MachineConfig to only the nodes where your services may run.

If you are using a single-node deployment to test the process, replace worker with master in the MachineConfig.

You are only loading the nvme-fabrics module because it takes care of loading the transport-specific modules (tcp, rdma, fc) as needed.

For production deployments using NVMe-oF volumes, it is recommended that you use multipathing. For NVMe-oF volumes, RHOSP uses native multipathing, called ANA.

Once the RHOCP nodes have rebooted and are loading the nvme-fabrics module, you can confirm that the Operating System is configured and supports ANA by checking on the host:

cat /sys/module/nvme_core/parameters/multipath
Important: ANA does not use the Linux Multipathing Device Mapper, but the current RHOSP code requires multipathd to be running on Compute nodes for the Compute service (nova) to be able to use multipathing.
- Multipathing
For iSCSI and FC protocols, using multipathing is recommended. Multipathing has four parts:

- Prepare the RHOCP hosts
- Configure the Block Storage services
- Prepare the Compute service computes
- Configure the Compute service
To prepare the RHOCP hosts, you need to ensure that the Linux Multipath Device Mapper is configured and running on the RHOCP hosts, using a MachineConfig like this one:

# Includes the /etc/multipathd.conf contents and the systemd unit changes
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-enable-multipathd
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/multipath.conf
        overwrite: false
        # Mode must be decimal, this is 0600
        mode: 384
        user:
          name: root
        group:
          name: root
        contents:
          # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
          # This is the rfc2397 text/plain string format
          source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
    systemd:
      units:
      - enabled: true
        name: multipathd.service
If you are using labels to restrict the nodes where Block Storage services are running, you need to use a MachineConfigPool as described in About node selector to limit the effects of the MachineConfig to only the nodes where your services may run.

If you are using a single-node deployment to test the process, replace worker with master in the MachineConfig.

To configure the Block Storage services to use multipathing, enable the use_multipath_for_image_xfer configuration option in all the backend sections and in the [DEFAULT] section for the backup service. This is the default in control plane deployments. Multipathing works as long as the service is running on the RHOCP host. Do not override this option by setting use_multipath_for_image_xfer = false.
4.8.3. Preparing the Block Storage service configurations for adoption
The Block Storage service (cinder) is configured using configuration snippets instead of using configuration parameters defined by the installer. For more information, see Service configurations.
The recommended way to deploy Block Storage service volume backends has changed to remove old limitations, add flexibility, and improve operations.
When deploying with director, you used to run a single Block Storage volume service with all your backends (each backend ran in its own process). Although that way of deploying is still supported, it is not recommended: using a volume service per backend is a superior deployment model.
With an LVM and a Ceph backend you have two entries in cinderVolumes and, as mentioned in the limitations section, you cannot set global defaults for all volume services, so you have to define the configuration for each of them, like this:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm:
          customServiceConfig: |
            [DEFAULT]
            debug = True
            [lvm]
< . . . >
        ceph:
          customServiceConfig: |
            [DEFAULT]
            debug = True
            [ceph]
< . . . >
As a reminder, for volume backends that have sensitive information, using a Secret and the customServiceConfigSecrets key is the recommended approach; see the sketch after this paragraph.
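A minimal sketch of that pattern, with hypothetical backend and credential names:

apiVersion: v1
kind: Secret
metadata:
  name: openstack-cinder-my-backend-cfg  # hypothetical name
  namespace: openstack
type: Opaque
stringData:
  my-backend.conf: |
    [my-backend]
    san_login = admin        # hypothetical credentials
    san_password = secret
---
# Referenced from the cinder template, for example:
# cinderVolumes:
#   my-backend:
#     customServiceConfigSecrets:
#     - openstack-cinder-my-backend-cfg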
For adoption, instead of using a whole deployment manifest, you use a targeted patch, like you did with other services, and in this patch you enable the different Block Storage services with their specific configurations.
Check that all configuration options are still valid for the new Red Hat OpenStack Platform version. Configuration options may have been deprecated, removed, or added. This applies to both backend driver specific configuration options and other generic options.
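One way to spot removed or renamed options is the oslo.config validator, run against your old file from inside one of the new cinder containers; a sketch, assuming the tool is present in the image and the cinder option namespace is registered:

$ oc cp cinder.conf cinder-scheduler-0:/tmp/cinder.conf
$ oc exec -t cinder-scheduler-0 -- oslo-config-validator --config-file /tmp/cinder.conf --namespace cinder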
4.8.3.1. Preparing the Block Storage service configuration
Creating the Cinder configuration entails:
Procedure
- Determine what part of the configuration is generic for all the Block Storage service (cinder) services and remove anything that would change when deployed in Red Hat OpenShift Container Platform, like the connection in the [database] section, the transport_url and log_dir in [DEFAULT], and the whole [coordination] and [barbican] sections. This configuration goes into the customServiceConfig (or a Secret and then used in customServiceConfigSecrets) at the cinder: template: level.
Determine if there’s any scheduler specific configuration and add it to the
customServiceConfig
section incinder: template: cinderScheduler
. -
Determine if there’s any API specific configuration and add it to the
customServiceConfig
section incinder: template: cinderAPI
. -
If you have Block Storage service backup deployed, then you get the Block Storage service backup relevant configuration options and add them to
customServiceConfig
(or aSecret
and then used incustomServiceConfigSecrets
) at thecinder: template: cinderBackup:
level. You should remove thehost
configuration in the[DEFAULT]
section to facilitate supporting multiple replicas in the future. -
Determine the individual volume backend configuration for each of the drivers. The configuration will not only be the specific driver section, it should also include the
[backend_defaults]
section and FC zoning sections is they are being used, because the Block Storage service operator doesn’t support acustomServiceConfig
section global for all volume services. Each backend would have its own section undercinder: template: cinderVolumes
and the configuration would go incustomServiceConfig
(or aSecret
and then used incustomServiceConfigSecrets
). Check if any of the Block Storage service volume drivers being used requires a custom vendor image. If they do, find the location of the image in the vendor’s instruction available in the Red Hat OpenStack Platform Block Storage service ecosystem page and add it under the specific’s driver section using the
containerImage
key. The following example shows a CRD for a Pure Storage array with a certified driver:spec: cinder: enabled: true template: cinderVolume: pure: containerImage: registry.connect.redhat.com/purestorage/openstack-cinder-volume-pure-rhosp-18-0' customServiceConfigSecrets: - openstack-cinder-pure-cfg < . . . >
External files: Block Storage services sometimes use external files, for example for a custom policy, or to store credentials, or SSL CA bundles to connect to a storage array, and you need to make those files available to the right containers. To achieve this, you use Secrets or ConfigMaps to store the information in RHOCP and then the extraMounts key. For example, for the Ceph credentials stored in a Secret called ceph-conf-files, you patch the top-level extraMounts in OpenstackControlPlane:

spec:
  extraMounts:
  - extraVol:
    - extraVolType: Ceph
      mounts:
      - mountPath: /etc/ceph
        name: ceph
        readOnly: true
      propagation:
      - CinderVolume
      - CinderBackup
      - Glance
      volumes:
      - name: ceph
        projected:
          sources:
          - secret:
              name: ceph-conf-files
But for a service-specific one, like the API policy, you do it directly on the service itself. In this example, you include the Block Storage API configuration that references the policy you are adding from a ConfigMap called my-cinder-conf that has a key policy with the contents of the policy:

spec:
  cinder:
    enabled: true
    template:
      cinderAPI:
        customServiceConfig: |
          [oslo_policy]
          policy_file=/etc/cinder/api/policy.yaml
      extraMounts:
      - extraVol:
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/cinder/api
            name: policy
            readOnly: true
          propagation:
          - CinderAPI
          volumes:
          - name: policy
            projected:
              sources:
              - configMap:
                  name: my-cinder-conf
                  items:
                  - key: policy
                    path: policy.yaml
4.8.4. Deploying the Block Storage services
Assuming you have already stopped the Block Storage service (cinder) services, prepared the Red Hat OpenShift Container Platform nodes, deployed the Red Hat OpenStack Platform (RHOSP) operators and a bare RHOSP manifest, migrated the database, and prepared the patch manifest with the Block Storage service configuration, you must apply the patch and wait for the operator to apply the changes and deploy the Block Storage services.
Prerequisites
- Previous Adoption steps completed. Notably, Block Storage service must have been stopped and the service databases must already be imported into the control plane MariaDB.
- Identity service (keystone) and Key Manager service (barbican) should be already adopted.
- The storage network has been properly configured on the RHOCP cluster.
You need the contents of the cinder.conf file. Download the file so that you can access it locally:

$CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf > cinder.conf
Procedure
It is recommended to write the patch manifest into a file, for example cinder.patch, and then apply it with a command like:

oc patch openstackcontrolplane openstack --type=merge --patch-file=cinder.patch
For example, for the RBD deployment from the Development Guide, the cinder.patch file would look like this:

spec:
  extraMounts:
  - extraVol:
    - extraVolType: Ceph
      mounts:
      - mountPath: /etc/ceph
        name: ceph
        readOnly: true
      propagation:
      - CinderVolume
      - CinderBackup
      - Glance
      volumes:
      - name: ceph
        projected:
          sources:
          - secret:
              name: ceph-conf-files
  cinder:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: cinder
      secret: osp-secret
      cinderAPI:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
        customServiceConfig: |
          [DEFAULT]
          default_volume_type=tripleo
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 1
        customServiceConfig: |
          [DEFAULT]
          backup_driver=cinder.backup.drivers.ceph.CephBackupDriver
          backup_ceph_conf=/etc/ceph/ceph.conf
          backup_ceph_user=openstack
          backup_ceph_pool=backups
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
          replicas: 1
          customServiceConfig: |
            [tripleo_ceph]
            backend_host=hostgroup
            volume_backend_name=tripleo_ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            report_discard_supported=True
Once the services have been deployed, you need to clean up the old scheduler and backup services, which appear as down, while the new ones appear as up:

openstack volume service list

+------------------+------------------------+------+---------+-------+----------------------------+
| Binary           | Host                   | Zone | Status  | State | Updated At                 |
+------------------+------------------------+------+---------+-------+----------------------------+
| cinder-backup    | standalone.localdomain | nova | enabled | down  | 2023-06-28T11:00:59.000000 |
| cinder-scheduler | standalone.localdomain | nova | enabled | down  | 2023-06-28T11:00:29.000000 |
| cinder-volume    | hostgroup@tripleo_ceph | nova | enabled | up    | 2023-06-28T17:00:03.000000 |
| cinder-scheduler | cinder-scheduler-0     | nova | enabled | up    | 2023-06-28T17:00:02.000000 |
| cinder-backup    | cinder-backup-0        | nova | enabled | up    | 2023-06-28T17:00:01.000000 |
+------------------+------------------------+------+---------+-------+----------------------------+
In this case, you need to remove the services for host standalone.localdomain:

oc exec -it cinder-scheduler-0 -- cinder-manage service remove cinder-backup standalone.localdomain
oc exec -it cinder-scheduler-0 -- cinder-manage service remove cinder-scheduler standalone.localdomain
The name of the backup service has not been preserved because the opportunity was taken to change its configuration to support Active-Active, even though that is not being used right now because there is only one replica.
Now that the Block Storage services are running, the DB schema migration has been completed and you can proceed to apply the DB data migrations. While it is not necessary to run these data migrations at this precise moment, because you can run them right before the next upgrade, for adoption it is best to run them now to make sure there are no issues before running production workloads on the deployment.
The command to run the DB data migrations is:
oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations
Verification
Before you can run any checks, you need to set the right cloud configuration for the openstack command to be able to connect to your RHOCP control plane.
Ensure that the openstack alias is defined:

alias openstack="oc exec -t openstackclient -- openstack"
Now you can run a set of tests to confirm that the deployment is using your old database contents:
See that Block Storage service endpoints are defined and pointing to the control plane FQDNs:
openstack endpoint list --service cinderv3
Check that the Block Storage services are running and up. The API service does not appear in the list, but if you get a response you know it is up as well:
openstack volume service list
Check that your old volume types, volumes, snapshots, and backups are there:
openstack volume type list
openstack volume list
openstack volume snapshot list
openstack volume backup list
To confirm that the configuration is working, the following basic operations are recommended:
Create a volume from an image to check that the connection to Image Service (glance) is working.
openstack volume create --image cirros --bootable --size 1 disk_new
Create a new volume from the backup of the old attached volume. For example:
openstack --os-volume-api-version 3.47 volume create --backup backup restored
Do not boot a Compute service (nova) instance using the new volume from an image, and do not try to detach the old volume, because the Compute service and the Block Storage service are not connected yet.
4.9. Adopting the Dashboard service
Prerequisites
- Previous Adoption steps completed. Notably, Memcached and Identity service (keystone) should be already adopted.
Procedure
Patch `OpenStackControlPlane` to deploy the Dashboard service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  horizon:
    enabled: true
    apiOverride:
      route: {}
    template:
      memcachedInstance: memcached
      secret: osp-secret
'
Verification
See that the Dashboard service instance is successfully deployed and ready:
oc get horizon
Check that the Dashboard service is reachable and returns status code `200`:

PUBLIC_URL=$(oc get horizon horizon -o jsonpath='{.status.endpoint}')
curl --silent --output /dev/stderr --head --write-out "%{http_code}" "$PUBLIC_URL/dashboard/auth/login/?next=/dashboard/" -k | grep 200
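Immediately after the patch, the route and pods may still be rolling out, so this check can fail transiently. A retry sketch that polls for up to two minutes (timings are illustrative):

for i in $(seq 1 24); do
    code=$(curl -k --silent --output /dev/null --write-out "%{http_code}" "$PUBLIC_URL/dashboard/auth/login/?next=/dashboard/")
    [ "$code" = "200" ] && echo "Dashboard is up" && break
    sleep 5
done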
4.12. Adopting the Orchestration service
Adopting the Orchestration service (heat) means that an existing `OpenStackControlPlane` custom resource (CR), where the Orchestration service is initially disabled, should be patched to start the service with the configuration parameters provided by the source environment.
After the adoption process has been completed, you can expect to have CRs for `Heat`, `HeatAPI`, `HeatEngine`, and `HeatCFNAPI`. Additionally, endpoints should be created within the Identity service (keystone) to facilitate the above-mentioned services.
This guide also assumes that:
- A director environment (the source Cloud) is running on one side;
- A Red Hat OpenShift Container Platform environment is running on the other side.
Prerequisites
- Previous Adoption steps completed. Notably, MariaDB and Identity service should be already adopted.
- In addition, if your existing Orchestration service stacks contain resources from other services, such as the Networking service (neutron), Compute service (nova), or Object Storage service (swift), those services must be adopted before you adopt the Orchestration service.
Procedure
Patch the `osp-secret` to update the `HeatAuthEncryptionKey` and `HeatPassword`. These values must match what is configured in the existing director Orchestration service configuration. You can retrieve and verify the existing `auth_encryption_key` and service passwords via:

[stack@rhosp17 ~]$ grep -E 'HeatPassword|HeatAuth' ~/overcloud-deploy/overcloud/overcloud-passwords.yaml
HeatAuthEncryptionKey: Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2
HeatPassword: dU2N0Vr2bdelYH7eQonAwPfI3
Verify on one of the Controllers that this is indeed the value in use:
[stack@rhosp17 ~]$ ansible -i overcloud-deploy/overcloud/config-download/overcloud/tripleo-ansible-inventory.yaml overcloud-controller-0 -m shell -a "grep auth_encryption_key /var/lib/config-data/puppet-generated/heat/etc/heat/heat.conf | grep -Ev '^#|^$'" -b
overcloud-controller-0 | CHANGED | rc=0 >>
auth_encryption_key=Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2
This password needs to be base64 encoded and added to the `osp-secret`:

❯ echo Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2 | base64
UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIK

❯ oc patch secret osp-secret --type='json' -p='[{"op" : "replace" ,"path" : "/data/HeatAuthEncryptionKey" ,"value" : "UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIK"}]'
secret/osp-secret patched
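Note that `echo` without `-n` appends a newline that ends up in the decoded key; if the key must match the source value byte for byte, prefer `echo -n` or `printf '%s'`. Alternatively, `oc set data` base64-encodes a literal value itself, so the two commands above can be collapsed into one. A sketch, assuming you run it on a host that has both the overcloud passwords file and `oc` access:

# Sketch: read the key from the passwords file and let oc do the encoding.
HEAT_KEY=$(awk '/^HeatAuthEncryptionKey:/ { print $2 }' ~/overcloud-deploy/overcloud/overcloud-passwords.yaml)
oc set data secret/osp-secret "HeatAuthEncryptionKey=$HEAT_KEY"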
Patch `OpenStackControlPlane` to deploy the Orchestration service:

oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  heat:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: heat
      secret: osp-secret
      memcachedInstance: memcached
      passwordSelectors:
        authEncryptionKey: HeatAuthEncryptionKey
        database: HeatDatabasePassword
        service: HeatPassword
'
Verification
Ensure that all of the CRs reach the "Setup complete" state:
❯ oc get Heat,HeatAPI,HeatEngine,HeatCFNAPI
NAME                           STATUS   MESSAGE
heat.heat.openstack.org/heat   True     Setup complete

NAME                                  STATUS   MESSAGE
heatapi.heat.openstack.org/heat-api   True     Setup complete

NAME                                        STATUS   MESSAGE
heatengine.heat.openstack.org/heat-engine   True     Setup complete

NAME                                        STATUS   MESSAGE
heatcfnapi.heat.openstack.org/heat-cfnapi   True     Setup complete
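Instead of polling `oc get`, you can block until the operator reports readiness. A sketch, assuming the CR exposes a `Ready` condition as other RHOSO service CRs do:

# Sketch: wait up to five minutes for the Heat CR to report Ready.
oc wait heat/heat --for=condition=Ready --timeout=300s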
Check that the Orchestration service is registered in Identity service:
oc exec -it openstackclient -- openstack service list -c Name -c Type

+------------+----------------+
| Name       | Type           |
+------------+----------------+
| heat       | orchestration  |
| glance     | image          |
| heat-cfn   | cloudformation |
| ceilometer | Ceilometer     |
| keystone   | identity       |
| placement  | placement      |
| cinderv3   | volumev3       |
| nova       | compute        |
| neutron    | network        |
+------------+----------------+
❯ oc exec -it openstackclient -- openstack endpoint list --service=heat -f yaml
- Enabled: true
  ID: 1da7df5b25b94d1cae85e3ad736b25a5
  Interface: public
  Region: regionOne
  Service Name: heat
  Service Type: orchestration
  URL: http://heat-api-public-openstack-operators.apps.okd.bne-shift.net/v1/%(tenant_id)s
- Enabled: true
  ID: 414dd03d8e9d462988113ea0e3a330b0
  Interface: internal
  Region: regionOne
  Service Name: heat
  Service Type: orchestration
  URL: http://heat-api-internal.openstack-operators.svc:8004/v1/%(tenant_id)s
Check that the Orchestration service engine services are up:
oc exec -it openstackclient -- openstack orchestration service list -f yaml
- Binary: heat-engine
  Engine ID: b16ad899-815a-4b0c-9f2e-e6d9c74aa200
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:01.000000'
- Binary: heat-engine
  Engine ID: 887ed392-0799-4310-b95c-ac2d3e6f965f
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:00.000000'
- Binary: heat-engine
  Engine ID: 26ed9668-b3f2-48aa-92e8-2862252485ea
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:00.000000'
- Binary: heat-engine
  Engine ID: 1011943b-9fea-4f53-b543-d841297245fd
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:01.000000'
Verify that you can see your Orchestration service stacks again:
❯ openstack stack list -f yaml
- Creation Time: '2023-10-11T22:03:20Z'
  ID: 20f95925-7443-49cb-9561-a1ab736749ba
  Project: 4eacd0d1cab04427bc315805c28e66c9
  Stack Name: test-networks
  Stack Status: CREATE_COMPLETE
  Updated Time: null
4.13. Adopting Telemetry services
Adopting Telemetry means that an existing `OpenStackControlPlane` custom resource (CR), where the Telemetry services are initially disabled, should be patched to start the services with the configuration parameters provided by the source environment.
This guide also assumes that:
- A director environment (the source Cloud) is running on one side;
- A single-node OpenShift (`SNO`) or `CodeReadyContainers` environment is running on the other side.
Prerequisites
- Previous Adoption steps completed. MariaDB, the Identity service (keystone) and the data plane should be already adopted.
Procedure
Create a patch file for the `OpenStackControlPlane` CR to deploy the Ceilometer services:

cat << EOF > ceilometer_patch.yaml
spec:
  ceilometer:
    enabled: true
    template:
      centralImage: registry.redhat.io/rhosp-dev-preview/openstack-ceilometer-central-rhel9:18.0
      computeImage: registry.redhat.io/rhosp-dev-preview/openstack-ceilometer-compute-rhel9:18.0
      customServiceConfig: |
        [DEFAULT]
        debug=true
      ipmiImage: registry.redhat.io/rhosp-dev-preview/openstack-ceilometer-ipmi-rhel9:18.0
      nodeExporterImage: quay.io/prometheus/node-exporter:v1.5.0
      notificationImage: registry.redhat.io/rhosp-dev-preview/openstack-ceilometer-notification-rhel9:18.0
      secret: osp-secret
      sgCoreImage: quay.io/infrawatch/sg-core:v5.1.1
EOF
Optional: If you previously backed up your RHOSP services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct. This will produce the difference between both ini configuration files:
os-diff diff /tmp/collect_tripleo_configs/ceilometer/etc/ceilometer/ceilometer.conf ceilometer_patch.yaml --crd
For more information, see Reviewing the Red Hat OpenStack Platform control plane configuration.
Patch the `OpenStackControlPlane` CR with the patch file:

oc patch openstackcontrolplane openstack --type=merge --patch-file ceilometer_patch.yaml
Verification
Inspect the resulting Ceilometer pods:
CEILOMETER_POD=$(oc get pods -l service=ceilometer | tail -n 1 | cut -f 1 -d' ')
oc exec -t $CEILOMETER_POD -c ceilometer-central-agent -- cat /etc/ceilometer/ceilometer.conf
Inspect the resulting Ceilometer IPMI agent pod on data plane nodes:
podman ps | grep ceilometer-ipmi
Inspect enabled pollsters:
oc get secret ceilometer-config-data -o jsonpath="{.data['polling\.yaml']}" | base64 -d
Enable pollsters according to requirements:
cat << EOF > polling.yaml
---
sources:
  - name: pollsters
    interval: 300
    meters:
      - volume.size
      - image.size
      - cpu
      - memory
EOF

oc patch secret ceilometer-config-data --patch="{\"data\": { \"polling.yaml\": \"$(base64 -w0 polling.yaml)\"}}"
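To confirm that the patch took effect, read the secret back and check for your meters. Note that the Ceilometer pods may need to be restarted to pick up the new polling definition; this is a hedged assumption, so check your operator's reconcile behavior:

# Sketch: verify the patched polling definition contains the new meters.
oc get secret ceilometer-config-data -o jsonpath="{.data['polling\.yaml']}" | base64 -d | grep -E 'volume\.size|image\.size'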
4.14. Adopting autoscaling
Adopting autoscaling means that an existing `OpenStackControlPlane` custom resource (CR), where the Aodh services are initially disabled, should be patched to start the services with the configuration parameters provided by the source environment.
This guide also assumes that:
- A director environment (the source Cloud) is running on one side;
- A single-node OpenShift (`SNO`) or `CodeReadyContainers` environment is running on the other side.
Prerequisites
- Previous Adoption steps completed. MariaDB, the Identity service (keystone), the Orchestration service (heat), and Telemetry should be already adopted.
Procedure
Create a patch file for the `OpenStackControlPlane` CR to deploy the autoscaling services:

cat << EOF > aodh_patch.yaml
spec:
  autoscaling:
    enabled: true
    prometheus:
      deployPrometheus: false
    aodh:
      customServiceConfig: |
        [DEFAULT]
        debug=true
      secret: osp-secret
      apiImage: "registry.redhat.io/rhosp-dev-preview/openstack-aodh-api-rhel9:18.0"
      evaluatorImage: "registry.redhat.io/rhosp-dev-preview/openstack-aodh-evaluator-rhel9:18.0"
      notifierImage: "registry.redhat.io/rhosp-dev-preview/openstack-aodh-notifier-rhel9:18.0"
      listenerImage: "registry.redhat.io/rhosp-dev-preview/openstack-aodh-listener-rhel9:18.0"
      passwordSelectors:
      databaseUser: aodh
      databaseInstance: openstack
      memcachedInstance: memcached
EOF
Optional: If you have previously backed up your RHOSP services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct. This will produce the difference between both ini configuration files:
os-diff diff /tmp/collect_tripleo_configs/aodh/etc/aodh/aodh.conf aodh_patch.yaml --crd
For more information, see Reviewing the Red Hat OpenStack Platform control plane configuration.
Patch the `OpenStackControlPlane` CR with the patch file to deploy the Aodh services:

oc patch openstackcontrolplane openstack --type=merge --patch-file aodh_patch.yaml
Verification
If autoscaling services are enabled, inspect Aodh pods:
AODH_POD=$(oc get pods -l service=aodh | tail -n 1 | cut -f 1 -d' ')
oc exec -t $AODH_POD -c aodh-api -- cat /etc/aodh/aodh.conf
Check whether Aodh API service is registered in Identity service:
openstack endpoint list | grep aodh

| 6a805bd6c9f54658ad2f24e5a0ae0ab6 | regionOne | aodh | alarming | True | public   | http://aodh-public-openstack.apps-crc.testing |
| b943243e596847a9a317c8ce1800fa98 | regionOne | aodh | alarming | True | internal | http://aodh-internal.openstack.svc:8042       |
| f97f2b8f7559476bb7a5eafe3d33cee7 | regionOne | aodh | alarming | True | admin    | http://192.168.122.99:8042                    |
Create sample resources to test whether you can create alarms:
openstack alarm create \ --name low_alarm \ --type gnocchi_resources_threshold \ --metric cpu \ --resource-id b7ac84e4-b5ca-4f9e-a15c-ece7aaf68987 \ --threshold 35000000000 \ --comparison-operator lt \ --aggregation-method rate:mean \ --granularity 300 \ --evaluation-periods 3 \ --alarm-action 'log:\\' \ --ok-action 'log:\\' \ --resource-type instance
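To confirm that the alarm was stored and is being evaluated, list it back. The state is typically `insufficient data` until enough measures have arrived:

# Sketch: list alarms, then inspect one by its ID.
openstack alarm list -f value -c alarm_id -c name -c state
# openstack alarm show <alarm-id>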
4.15. Reviewing the Red Hat OpenStack Platform control plane configuration
Before starting the adoption workflow, pull the configuration from the Red Hat OpenStack Platform services and director on your file system to back up the configuration files. You can then use the files later, during the configuration of the adopted services, and for the record to compare and make sure nothing has been missed or misconfigured.
Make sure you installed and configured the os-diff tool. For more information, see Comparing configuration files between deployments.
4.15.1. Pulling the configuration from a director deployment
You can pull configuration from your Red Hat OpenStack Platform (RHOSP) services.
All the services are described in a YAML file:
Procedure
Update your SSH parameters according to your environment in the `os-diff.cfg` file. Os-diff uses these parameters to connect to your director node, and to query and download the configuration files:
ssh_cmd=ssh -F ssh.config standalone
container_engine=podman
connection=ssh
remote_config_path=/tmp/tripleo
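The `ssh.config` file referenced by `ssh_cmd` is a standard OpenSSH client configuration. A minimal sketch; the address, user, and key path below are illustrative assumptions, so adjust them to your environment:

# Sketch of ssh.config; all values are illustrative.
Host standalone
    # assumption: the director / standalone node address
    HostName 192.168.122.100
    # assumption: adjust to your login user
    User root
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null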
Make sure that the SSH command you provide in the `ssh_cmd` parameter is correct and uses key authentication.

Enable or disable the services that you want in the `/etc/os-diff/config.yaml` file. Make sure that you have the correct rights to edit the file, for example:

chown ospng:ospng /etc/os-diff/config.yaml
Example with default Identity service (keystone):
# service name and file location
services:
  # Service name
  keystone:
    # Bool to enable/disable a service (not implemented yet)
    enable: true
    # Pod name, in both OCP and podman context.
    # It could be strict match or will only just grep the podman_name
    # and work with all the pods which matched with pod_name.
    # To enable/disable use strict_pod_name_match: true/false
    podman_name: keystone
    pod_name: keystone
    container_name: keystone-api
    # pod options
    # strict match for getting pod id in TripleO and podman context
    strict_pod_name_match: false
    # Path of the config files you want to analyze.
    # It could be whatever path you want:
    # /etc/<service_name> or /etc or /usr/share/<something> or even /
    # @TODO: need to implement loop over path to support multiple paths such as:
    # - /etc
    # - /usr/share
    path:
      - /etc/
      - /etc/keystone
      - /etc/keystone/keystone.conf
      - /etc/keystone/logging.conf
Repeat this step for each RHOSP service that you want to disable or enable.
If you are using non-containerized services, such as the `ovs-external-ids`, os-diff can pull configuration or command output:

services:
  ovs_external_ids:
    hosts:
      - standalone
    service_command: "ovs-vsctl list Open_vSwitch . | grep external_ids | awk -F ': ' '{ print $2; }'"
    cat_output: true
    path:
      - ovs_external_ids.json
    config_mapping:
      ovn-bridge-mappings: edpm_ovn_bridge_mappings
      ovn-bridge: edpm_ovn_bridge
      ovn-encap-type: edpm_ovn_encap_type
      ovn-match-northd-version: ovn_match_northd_version
      ovn-monitor-all: ovn_monitor_all
      ovn-remote-probe-interval: edpm_ovn_remote_probe_interval
      ovn-ofctrl-wait-before-clear: edpm_ovn_ofctrl_wait_before_clear
This service is not a Red Hat OpenStack Platform service executed in a container, so the description and the behavior are different. It is important to correctly configure an SSH config file or equivalent for non-standard services such as OVS. The `ovs_external_ids` service does not run in a container, and the OVS data is stored on each host of the cloud: controller_1/controller_2/…

With the `hosts` key, os-diff loops over each host and runs the command in the `service_command` key:

ovs_external_ids:
  path:
    - ovs_external_ids.json
  hosts:
    - standalone
The `service_command` key provides the required information. It can be a simple `cat` of a config file. If you want os-diff to get the output of the command and store it in a file specified by the `path` key, set `cat_output` to true. Then you can provide a mapping between, in this case, the EDPM CRD fields and the ovs-vsctl output with `config_mapping`:

service_command: 'ovs-vsctl list Open_vSwitch . | grep external_ids | awk -F '': '' ''{ print $2; }'''
cat_output: true
config_mapping:
  ovn-bridge: edpm_ovn_bridge
  ovn-bridge-mappings: edpm_ovn_bridge_mappings
  ovn-encap-type: edpm_ovn_encap_type
  ovn-match-northd-version: ovn_match_northd_version
  ovn-monitor-all: ovn_monitor_all
  ovn-ofctrl-wait-before-clear: edpm_ovn_ofctrl_wait_before_clear
  ovn-remote-probe-interval: edpm_ovn_remote_probe_interval
Then you can use the following command to compare the values:
os-diff diff ovs_external_ids.json edpm.crd --crd --service ovs_external_ids
For example, to check the `/etc/yum.conf` on every host, you must put the following statement in the `config.yaml` file. The following example uses a service entry called `yum_config`:

services:
  yum_config:
    hosts:
      - undercloud
      - controller_1
      - compute_1
      - compute_2
    service_command: "cat /etc/yum.conf"
    cat_output: true
    path:
      - yum.conf
Pull the configuration:
This command pulls all the configuration files that are described in the `/etc/os-diff/config.yaml` file. Os-diff can update this file automatically according to your running environment with the `--update` or `--update-only` option. This option sets the podman information into the `config.yaml` for all running containers. It can be useful later, when all the Red Hat OpenStack Platform services are turned off.

Note that when the `config.yaml` file is populated automatically you must provide the configuration paths manually for each service.
# will only update the /etc/os-diff/config.yaml
os-diff pull --update-only

# will update the /etc/os-diff/config.yaml and pull configuration
os-diff pull --update

# will only pull configuration
os-diff pull
The configuration is pulled and stored by default in `/tmp/tripleo/`.
Verification
You should have a directory for each service in your local path, for example:
▾ tmp/ ▾ tripleo/ ▾ glance/ ▾ keystone/
4.16. Rolling back the control plane adoption
If you encountered a problem during the adoption of the Red Hat OpenStack Platform (RHOSP) control plane services that prevents you from completing the adoption procedure, you can roll back the control plane adoption.
The rollback operation is only possible during the control plane stages of the adoption procedure. If you altered the data plane nodes in any way during the procedure, rollback is not possible.
During the control plane adoption, services on the source cloud’s control plane are stopped but not removed. The databases on the source control plane are not edited by the adoption procedure. The destination control plane receives a copy of the original control plane databases. The rollback procedure assumes that the data plane has not yet been touched by the adoption procedure and that it is still connected to the source control plane.
The rollback procedure consists of the following steps:
- Restoring the functionality of the source control plane.
- Removing the partially or fully deployed destination control plane.
Procedure
To restore the source cloud to a working state, start the RHOSP control plane services that you previously stopped during the adoption procedure:
ServicesToStart=("tripleo_horizon.service"
                 "tripleo_keystone.service"
                 "tripleo_barbican_api.service"
                 "tripleo_barbican_worker.service"
                 "tripleo_barbican_keystone_listener.service"
                 "tripleo_cinder_api.service"
                 "tripleo_cinder_api_cron.service"
                 "tripleo_cinder_scheduler.service"
                 "tripleo_cinder_volume.service"
                 "tripleo_cinder_backup.service"
                 "tripleo_glance_api.service"
                 "tripleo_manila_api.service"
                 "tripleo_manila_api_cron.service"
                 "tripleo_manila_scheduler.service"
                 "tripleo_neutron_api.service"
                 "tripleo_placement_api.service"
                 "tripleo_nova_api_cron.service"
                 "tripleo_nova_api.service"
                 "tripleo_nova_conductor.service"
                 "tripleo_nova_metadata.service"
                 "tripleo_nova_scheduler.service"
                 "tripleo_nova_vnc_proxy.service"
                 "tripleo_aodh_api.service"
                 "tripleo_aodh_api_cron.service"
                 "tripleo_aodh_evaluator.service"
                 "tripleo_aodh_listener.service"
                 "tripleo_aodh_notifier.service"
                 "tripleo_ceilometer_agent_central.service"
                 "tripleo_ceilometer_agent_compute.service"
                 "tripleo_ceilometer_agent_ipmi.service"
                 "tripleo_ceilometer_agent_notification.service"
                 "tripleo_ovn_cluster_north_db_server.service"
                 "tripleo_ovn_cluster_south_db_server.service"
                 "tripleo_ovn_cluster_northd.service")

PacemakerResourcesToStart=("galera-bundle"
                           "haproxy-bundle"
                           "rabbitmq-bundle"
                           "openstack-cinder-volume"
                           "openstack-cinder-backup"
                           "openstack-manila-share")

echo "Starting systemd OpenStack services"
for service in ${ServicesToStart[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ${!SSH_CMD} sudo systemctl is-enabled $service &> /dev/null; then
                echo "Starting the $service in controller $i"
                ${!SSH_CMD} sudo systemctl start $service
            fi
        fi
    done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStart[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ${!SSH_CMD} sudo systemctl is-enabled $service &> /dev/null; then
                if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=active >/dev/null; then
                    echo "ERROR: Service $service is not running on controller $i"
                else
                    echo "OK: Service $service is running in controller $i"
                fi
            fi
        fi
    done
done

echo "Starting pacemaker OpenStack services"
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStart[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                echo "Starting $resource"
                ${!SSH_CMD} sudo pcs resource enable $resource
            else
                echo "Service $resource not present"
            fi
        done
        break
    fi
done

echo "Checking pacemaker OpenStack services"
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStart[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                if ${!SSH_CMD} sudo pcs resource status $resource | grep Started >/dev/null; then
                    echo "OK: Service $resource is started"
                else
                    echo "ERROR: Service $resource is stopped"
                fi
            fi
        done
        break
    fi
done
If the Ceph NFS service is running on the deployment as a Shared File Systems service (manila) backend, you must restore the pacemaker ordering and colocation constraints involving the "openstack-manila-share" service:
sudo pcs constraint order start ceph-nfs then openstack-manila-share kind=Optional id=order-ceph-nfs-openstack-manila-share-Optional
sudo pcs constraint colocation add openstack-manila-share with ceph-nfs score=INFINITY id=colocation-openstack-manila-share-ceph-nfs-INFINITY
- Verify that the source cloud is operational again, for example by running `openstack` CLI commands or using the Dashboard service (horizon).
CLI commands or using the Dashboard service (horizon). Remove the partially or fully deployed control plane so that another adoption attempt can be made later:
oc delete --ignore-not-found=true --wait=false openstackcontrolplane/openstack
oc patch openstackcontrolplane openstack --type=merge --patch '
metadata:
  finalizers: []
' || true

while oc get pod | grep rabbitmq-server-0; do
    sleep 2
done
while oc get pod | grep openstack-galera-0; do
    sleep 2
done

oc delete --ignore-not-found=true --wait=false pod mariadb-copy-data
oc delete --ignore-not-found=true --wait=false pvc mariadb-data
oc delete --ignore-not-found=true --wait=false pod ovn-copy-data
oc delete --ignore-not-found=true secret osp-secret
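To confirm that the cleanup completed, a quick sketch of checks that should all come back empty or report "not found":

# Sketch: verify no destination control plane resources remain.
oc get openstackcontrolplane
oc get pod | grep -E 'rabbitmq-server|openstack-galera|mariadb-copy-data|ovn-copy-data' || echo "no leftover pods"
oc get pvc mariadb-data 2>/dev/null || echo "mariadb-data PVC removed"
oc get secret osp-secret 2>/dev/null || echo "osp-secret removed"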
After restoring the source control plane services, their internal state may have changed. Before you retry the adoption procedure, it is important to verify that the control plane resources have been removed and that there are no leftovers which could affect the next adoption attempt. Notably, the previously created copies of the database contents must not be used in another adoption attempt; new copies must be made according to the adoption procedure documentation.