Chapter 4. Adopting Red Hat OpenStack Platform control plane services
Adopt your Red Hat OpenStack Platform 17.1 control plane services to deploy them in the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 control plane.
4.1. Adopting the Identity service
To adopt the Identity service (keystone), you patch an existing OpenStackControlPlane custom resource (CR) where the Identity service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
Prerequisites
Create the keystone secret that includes the Fernet keys that were copied from the RHOSP environment:
$ oc apply -f - <<EOF
apiVersion: v1
data:
  CredentialKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/0 | base64 -w 0)
  CredentialKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/1 | base64 -w 0)
  FernetKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/0 | base64 -w 0)
  FernetKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/1 | base64 -w 0)
kind: Secret
metadata:
  name: keystone
type: Opaque
EOF
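Each data value in this secret is the raw key file encoded as unwrapped base64 (base64 -w 0). The following is a minimal local sketch of that encoding step, using a made-up key string in place of the controller files:

```shell
# Hypothetical stand-in for one key file; in the real procedure the bytes
# come from $CONTROLLER1_SSH reading the fernet-keys/credential-keys files.
key='c_aT94zbVYFkAIPBitXabc1234567890abcdefghijk='

# base64 -w 0 produces a single unwrapped line, which is the form that a
# Kubernetes Secret data field expects.
encoded=$(printf '%s' "$key" | base64 -w 0)

# Decoding returns the original key bytes unchanged.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

If a key fails to decode after adoption, a wrapped or truncated base64 value is a common cause.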
Procedure
Patch the OpenStackControlPlane CR to deploy the Identity service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  keystone:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: <172.17.0.80>
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
'

where:

- <172.17.0.80>
- Specifies the load balancer IP in your environment. If you use IPv6, use the IPv6 load balancer IP instead, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Create an alias to use the openstack command in the Red Hat OpenStack Services on OpenShift (RHOSO) deployment:

$ alias openstack="oc exec -t openstackclient -- openstack"

Remove services and endpoints that still point to the RHOSP control plane, excluding the Identity service and its endpoints:

$ openstack endpoint list | grep keystone | awk '/admin/{ print $2; }' | xargs ${BASH_ALIASES[openstack]} endpoint delete || true
$ for service in aodh heat heat-cfn barbican cinderv3 glance gnocchi manila manilav2 neutron nova placement swift ironic-inspector ironic octavia; do
    openstack service list | awk "/ $service /{ print \$2; }" | xargs -r ${BASH_ALIASES[openstack]} service delete || true
  done
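The cleanup loop relies on awk to pick the ID column out of the table that openstack service list prints. The following is a small offline sketch of that extraction, using a fabricated two-row table instead of live command output:

```shell
# Fabricated 'openstack service list' output; IDs appear in the second
# whitespace-separated field because of the leading '|' column separator.
list_output='| 1b2c3d | aodh    | alarming |
| 4e5f6a | neutron | network  |'

service=neutron

# Same pattern as the adoption loop: match the service name surrounded by
# spaces, then print the ID field.
id=$(printf '%s\n' "$list_output" | awk "/ $service /{ print \$2; }")
echo "$id"
```

The surrounding spaces in the pattern matter: / neutron / avoids accidentally matching a service whose name merely contains the string.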
Verification
- Verify that you can access the OpenStackClient pod. For more information, see Accessing the OpenStackClient pod in Maintaining the Red Hat OpenStack Services on OpenShift deployment.
- Confirm that the Identity service endpoints are defined and are pointing to the control plane FQDNs:

  $ openstack endpoint list | grep keystone

- Wait for the OpenStackControlPlane resource to become Ready:

  $ oc wait --for=condition=Ready --timeout=1m OpenStackControlPlane openstack
4.2. Configuring LDAP with domain-specific drivers

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If you need to integrate the Identity service (keystone) with one or more LDAP servers using domain-specific configurations, you can enable domain-specific drivers and provide the necessary LDAP settings.
This involves two main steps:
- Create the secret that holds the domain-specific LDAP configuration files that the Identity service uses. Each file within the secret corresponds to an LDAP domain.
- Patch the OpenStackControlPlane custom resource (CR) to enable domain-specific drivers for the Identity service and mount a secret that contains the LDAP configurations.
Procedure
To create the keystone-domains secret that stores the LDAP configuration files that the Identity service uses, create a local file that includes your LDAP configuration, for example, keystone.myldapdomain.conf. The following example file includes the configuration for a single LDAP domain. If you have multiple LDAP domains, create a configuration file for each, for example, keystone.DOMAIN_ONE.conf, keystone.DOMAIN_TWO.conf:

[identity]
driver = ldap

[ldap]
url = ldap://<ldap_server_host>:<ldap_server_port>
user = <bind_dn_user>
password = <bind_dn_password>
suffix = <user_tree_dn>
query_scope = sub

# User configuration
user_tree_dn = <user_tree_dn>
user_objectclass = <user_object_class>
user_id_attribute = <user_id_attribute>
user_name_attribute = <user_name_attribute>
user_mail_attribute = <user_mail_attribute>
user_enabled_attribute = <user_enabled_attribute>
user_enabled_default = true

# Group configuration
group_tree_dn = <group_tree_dn>
group_objectclass = <group_object_class>
group_id_attribute = <group_id_attribute>
group_name_attribute = <group_name_attribute>
group_member_attribute = <group_member_attribute>
group_members_are_ids = true
Replace the values, such as <ldap_server_host>, <bind_dn_user>, <user_tree_dn>, and so on, with your LDAP server details.
Create the secret from this file:

$ oc create secret generic keystone-domains --from-file=<keystone.DOMAIN_NAME.conf>

Replace <keystone.DOMAIN_NAME.conf> with the name of your local configuration file. If applicable, include additional configuration files by using the --from-file option. After creating the secret, you can remove the local configuration file if it is no longer needed, or store it securely.

Important: The name of the file that you provide to --from-file, for example keystone.DOMAIN_NAME.conf, is critical. The Identity service uses this filename to map incoming authentication requests for a domain to the correct LDAP configuration. Ensure that DOMAIN_NAME matches the name of the domain that you are configuring in the Identity service.
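Because the Identity service derives the domain name from the filename (keystone.DOMAIN_NAME.conf), a quick local check can catch a mismatch before you create the secret. The following sketch uses a hypothetical domain named myldapdomain; the parsing is illustrative, not part of the documented procedure:

```shell
# Throwaway domain config modeled on the example above; values are
# placeholders, not a real LDAP server.
cat > keystone.myldapdomain.conf <<'EOF'
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com:389
user = cn=admin,dc=example,dc=com
EOF

# The domain name is whatever sits between "keystone." and ".conf".
fname=keystone.myldapdomain.conf
domain=${fname#keystone.}
domain=${domain%.conf}
echo "$domain"

# Sanity-check that the [identity] driver is ldap before packing the file
# into the keystone-domains secret.
driver=$(awk -F' *= *' '/^driver/{ print $2 }' "$fname")
echo "$driver"
```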
Patch the OpenStackControlPlane CR:

$ oc patch openstackcontrolplane <cr_name> --type=merge -p '
spec:
  keystone:
    template:
      customServiceConfig: |
        [identity]
        domain_specific_drivers_enabled = true
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
                - Keystone
              extraVolType: Conf
              volumes:
                - name: keystone-domains
                  secret:
                    secretName: keystone-domains
              mounts:
                - name: keystone-domains
                  mountPath: "/etc/keystone/domains"
                  readOnly: true
'

Replace <cr_name> with the name of your OpenStackControlPlane CR, for example, openstack.

This patch does the following:

- Sets spec.keystone.template.customServiceConfig to enable domain-specific drivers. Ensure that you do not overwrite any previously defined value.
- Defines spec.keystone.template.extraMounts to mount a secret named keystone-domains into the Identity service pods at /etc/keystone/domains. This secret contains your LDAP configuration files.

Note: You might need to wait a few minutes for the changes to propagate and for the Identity service pods to be updated.
Verification
Verify that users from the LDAP domain are accessible:

$ oc exec -t openstackclient -- openstack user list --domain <domain_name>

Replace <domain_name> with your LDAP domain name. This command returns a list of users from your LDAP server.
Verify that groups from the LDAP domain are accessible:

$ oc exec -t openstackclient -- openstack group list --domain <domain_name>

This command returns a list of groups from your LDAP server.
Test authentication with an LDAP user:

$ oc exec -t openstackclient -- openstack --os-auth-url <keystone_auth_url> --os-identity-api-version 3 --os-user-domain-name <domain_name> --os-username <ldap_username> --os-password <ldap_password> token issue

- Replace <keystone_auth_url> with the Identity service authentication URL.
- Replace <ldap_username> and <ldap_password> with valid LDAP user credentials.

If successful, this command returns a token, confirming that LDAP authentication is working correctly.
Verify group membership for an LDAP user:

$ oc exec -t openstackclient -- openstack group contains user --group-domain <domain_name> --user-domain <domain_name> <group_name> <username>

Replace <domain_name>, <group_name>, and <username> with the appropriate values from your LDAP server. This command verifies that the user is properly associated with the group through LDAP.
4.3. Adopting the Key Manager service
To adopt the Key Manager service (barbican), you patch an existing OpenStackControlPlane custom resource (CR) where the Key Manager service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. You configure the Key Manager service to use the simple_crypto back end.
The Key Manager service adoption is complete if you see the following results:
- The BarbicanAPI, BarbicanWorker, and BarbicanKeystoneListener services are up and running.
- Keystone endpoints are updated, and the same crypto plugin of the source cloud is available.
To configure hardware security module (HSM) integration with Proteccio HSM, see Adopting the Key Manager service with Proteccio HSM integration.
Procedure
Add the KEK secret:

$ oc set data secret/osp-secret "BarbicanSimpleCryptoKEK=$($CONTROLLER1_SSH "python3 -c \"import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf'); print(c['simple_crypto_plugin']['kek'])\"")"

Patch the OpenStackControlPlane CR to deploy the Key Manager service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  barbican:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: barbican
      messagingBus:
        cluster: rabbitmq
      secret: osp-secret
      simpleCryptoBackendSecret: osp-secret
      serviceAccount: barbican
      serviceUser: barbican
      passwordSelectors:
        service: BarbicanPassword
        simplecryptokek: BarbicanSimpleCryptoKEK
      barbicanAPI:
        replicas: 1
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: <172.17.0.80>
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 1
      barbicanKeystoneListener:
        replicas: 1
'

where:

- <172.17.0.80>
- Specifies the load balancer IP in your environment. If you use IPv6, use the IPv6 load balancer IP instead, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
- messagingBus.cluster
- For more information about RHOSO RabbitMQ clusters, see RHOSO RabbitMQ clusters in Monitoring high availability services.
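The KEK extraction embedded in the oc set data command is a configparser one-liner run over SSH. You can exercise the same one-liner locally against a fabricated barbican.conf to see what it returns; the file content here is made up:

```shell
# Fabricated stand-in for the controller's barbican.conf; the real file is
# /var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf.
cat > barbican.conf <<'EOF'
[DEFAULT]
debug = False

[simple_crypto_plugin]
kek = dGhpcyBpcyBhIHRlc3Qga2VrIHZhbHVlISEhISEhISE=
EOF

# Same configparser call as the adoption command, pointed at the local copy.
kek=$(python3 -c "import configparser; c = configparser.ConfigParser(); c.read('barbican.conf'); print(c['simple_crypto_plugin']['kek'])")
echo "$kek"
```

Running the extraction interactively first is a useful way to confirm that the source kek option exists before you write it into osp-secret.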
Verification
Ensure that the Identity service (keystone) endpoints are defined and are pointing to the control plane FQDNs:

$ openstack endpoint list | grep key-manager

Ensure that the Barbican API service is registered in the Identity service:

$ openstack service list | grep key-manager
$ openstack endpoint list | grep key-manager

List the secrets:

$ openstack secret list
4.4. Adopting the Key Manager service with HSM integration

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Adopt the Key Manager service (barbican) from director to Red Hat OpenStack Services on OpenShift (RHOSO) when your source environment includes hardware security module (HSM) integration to preserve HSM functionality and maintain access to HSM-backed secrets. HSM provides enhanced security for cryptographic operations by storing encryption keys in dedicated hardware devices.
For additional information about the Key Manager service before you start the adoption, see the following resources:
- Key Manager service configuration documentation
- Hardware security module vendor-specific documentation
- OpenStack Barbican PKCS#11 plugin documentation
4.4.1. Key Manager service HSM adoption approaches

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
The Key Manager service (barbican) adoption approach depends on your source director environment configuration.
- Use the standard adoption approach if your environment includes only the simple_crypto plugin for secret storage and has no HSM integration.
- Use the HSM-enabled adoption approach if your source environment has HSM integration that uses Public Key Cryptography Standard (PKCS) #11, Key Management Interoperability Protocol (KMIP), or other HSM back ends alongside simple_crypto.

- Standard adoption approach
- Uses the existing Key Manager service adoption procedure
- Migrates a simple crypto back-end configuration
- Provides a single-step adoption process
- Is suitable for development, testing, and standard production environments
- HSM-enabled adoption approach
- Uses the enhanced barbican_adoption role with HSM awareness
- Configures HSM integration through a simple boolean flag (barbican_hsm_enabled: true)
- Automatically creates required Kubernetes secrets (hsm-login and proteccio-data)
- Preserves HSM metadata during database migration
- Supports both simple crypto and HSM back ends in the target environment
- Requires HSM-specific configuration variables and custom container images with HSM client libraries (built using the rhoso_proteccio_hsm Ansible role)
- Uses HSM client certificates and configuration files accessible via URLs
- Requires proper HSM partition and key configuration that matches your source environment
The HSM-enabled adoption approach currently supports:
- Proteccio (Eviden Trustway): Fully supported with PKCS#11 integration
- Luna (Thales): PKCS#11 support available
- nCipher (Entrust): PKCS#11 support available
HSM adoption requires additional configuration steps, including:
- Custom Barbican container images with HSM client libraries that are built using the rhoso_proteccio_hsm Ansible role
- HSM client certificates and configuration files that are accessible by using URLs
- Proper HSM partition and key configuration that matches your source environment
These approaches are mutually exclusive. Choose an approach based on your source environment configuration.
| Source environment characteristic | Approach | Rationale |
|---|---|---|
| Only simple_crypto | Standard adoption | No HSM complexity needed |
| HSM integration present (PKCS#11, KMIP, and so on) | HSM-enabled adoption | Preserves HSM functionality and secrets |
| Development or testing environment | Standard adoption | Simpler setup and maintenance |
| Production with compliance requirements | HSM-enabled adoption | Maintains security compliance |
| Unknown back-end configuration | Check source environment first | Determine appropriate approach |
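For the last table row, checking the source environment amounts to looking for HSM-specific section headers in barbican.conf, as the adoption procedure later in this chapter does with grep. The following is a local sketch against a fabricated file; on a real controller the file is under /var/lib/config-data/puppet-generated/barbican/etc/barbican/:

```shell
# Fabricated barbican.conf fragment containing both plugin sections.
cat > barbican.conf <<'EOF'
[simple_crypto_plugin]
kek = abc123

[p11_crypto_plugin]
plugin_name = PKCS11
EOF

# A [p11_crypto_plugin] section indicates HSM integration, which calls for
# the HSM-enabled adoption approach; otherwise use the standard approach.
if grep -q '^\[p11_crypto_plugin\]' barbican.conf; then
  approach="hsm-enabled"
else
  approach="standard"
fi
echo "$approach"
```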
4.4.2. Adopting the Key Manager service with Proteccio HSM integration

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
To adopt the Key Manager service (barbican) with Proteccio hardware security module (HSM) integration, you use the enhanced Barbican adoption role with HSM support enabled through a configuration flag. This approach preserves HSM integration while adopting all existing secrets from your source Red Hat OpenStack Platform (RHOSP) environment. When you run the data plane adoption tests with HSM support enabled, the adoption process performs the following actions:
- Extracts the simple crypto KEK from the source configuration.
- Creates the required HSM secrets (hsm-login and proteccio-data) in the target namespace.
- Deploys Barbican with HSM-enabled configuration by using the PKCS#11 plugin.
- Verifies the HSM functionality and secret migration.
The Key Manager service Proteccio HSM adoption is complete if you see the following results:
- The BarbicanAPI and BarbicanWorker services are up and running with HSM-enabled configuration.
- All secrets from the source RHOSP 17.1 environment are available in Red Hat OpenStack Services on OpenShift (RHOSO) 18.0.
- The PKCS11 crypto plugin is available alongside simple_crypto for new secret storage.
- HSM functionality is verified and operational.
If your environment does not include Proteccio HSM, to adopt the Key Manager service by using simple_crypto, see Adopting the Key Manager service.
The enhanced Key Manager service adoption role supports HSM configuration through a simple boolean flag. This approach integrates seamlessly with the standard data plane adoption framework while providing HSM support.
Prerequisites
- You have a running director environment with Proteccio HSM integration (the source cloud).
- You have a Single Node OpenShift or OpenShift Local running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have SSH access to the source director undercloud and Controller nodes.
- You have configured HSM variables in your adoption configuration files.
- Custom Key Manager service container images with the Proteccio client libraries are available in your registry.
The HSM adoption process requires proper configuration of HSM-related variables. The adoption role automatically creates the required Kubernetes secrets (hsm-login and proteccio-data) when barbican_hsm_enabled is set to true. Ensure that your environment includes the following:
- All HSM-related variables are properly set in your configuration files
- The Proteccio client ISO, certificates, and configuration files are accessible from the configured URLs
- Custom Key Manager service images with Proteccio client are built and available in your container registry
Without proper HSM configuration, your HSM-protected secrets become inaccessible after adoption.
Procedure
Configure HSM integration variables in your adoption configuration (Zuul job vars or CI framework configuration):
# Enable HSM integration for the Barbican adoption role
barbican_hsm_enabled: true

# HSM login credentials
proteccio_login_password: "your_hsm_password"

# Kubernetes secret names (defaults shown)
proteccio_login_secret_name: "hsm-login"
proteccio_client_data_secret_name: "proteccio-data"

# HSM partition and key configuration
cifmw_hsm_proteccio_partition: "VHSM1"
cifmw_hsm_mkek_label: "adoption_mkek_1"
cifmw_hsm_hmac_label: "adoption_hmac_1"
cifmw_hsm_proteccio_library_path: "/usr/lib64/libnethsm.so"
cifmw_hsm_key_wrap_mechanism: "CKM_AES_CBC_PAD"

# HSM client sources (URLs to download Proteccio client files)
cifmw_hsm_proteccio_client_src: "<URL_of_Proteccio_ISO_file>"
cifmw_hsm_proteccio_conf_src: "<URL_of_proteccio.rc_config_file>"
cifmw_hsm_proteccio_client_crt_src: "<URL_of_client_certificate_file>"
cifmw_hsm_proteccio_client_key_src: "<URL_of_client_certificate_key>"
cifmw_hsm_proteccio_server_crt_src:
  - "<URL_of_HSM_certificate_file>"

where:

- <URL_of_Proteccio_ISO_file>
- Specifies the full URL (including "http://" or "https://") of the Proteccio client ISO image file.
- <URL_of_proteccio.rc_config_file>
- Specifies the full URL (including "http://" or "https://") of the proteccio.rc configuration file in your RHOSO environment.
- <URL_of_client_certificate_file>
- Specifies the full URL (including "http://" or "https://") of the HSM client certificate file.
- <URL_of_client_certificate_key>
- Specifies the full URL (including "http://" or "https://") of the client key file.
- <URL_of_HSM_certificate_file>
- Specifies the full URL (including "http://" or "https://") of the HSM certificate file.
- Run the data plane adoption tests with HSM support enabled:
Verification
Ensure that the Identity service (keystone) endpoints are defined and are pointing to the control plane FQDNs:

$ openstack endpoint list | grep key-manager

Ensure that the Barbican API service is registered in the Identity service:

$ openstack service list | grep key-manager

Verify that all secrets from the source environment are available:

$ openstack secret list

Confirm that Barbican services are running:

$ oc get pods -n openstack -l service=barbican -o wide

Test secret creation to verify HSM functionality:

$ openstack secret store --name adoption-verification --payload 'HSM adoption successful'

Verify that the HSM back end is operational:

$ openstack secret get <secret_id> --payload

where:
- <secret_id>
- Specifies the ID of the HSM secret.
4.4.3. Adopting the Key Manager service with HSM integration

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
When your source director environment includes hardware security module (HSM) integration, you must use the HSM-enabled adoption approach to preserve HSM functionality and maintain access to HSM-backed secrets.
Prerequisites
- The source director environment with HSM integration is configured.
- HSM client software and certificates are available from accessible URLs.
- The target Red Hat OpenStack Services on OpenShift (RHOSO) environment with HSM infrastructure is accessible.
- HSM-enabled Key Manager service (barbican) container images are built and available in your registry.
If you use the automated adoption process by setting barbican_hsm_enabled: true, the required HSM secrets (hsm-login and proteccio-data) are created automatically. You only need to manually create the secret when you perform the manual adoption steps.
Procedure
Confirm that your source environment configuration includes HSM integration:
$ ssh tripleo-admin@controller-0.ctlplane \
    "sudo cat /var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf | grep -A5 '\[.*plugin\]'"

If you see [p11_crypto_plugin] or other HSM-specific sections, continue with the HSM adoption.

Extract the simple crypto key encryption key (KEK) from your source environment:

$ SIMPLE_CRYPTO_KEK=$(ssh tripleo-admin@controller-0.ctlplane \
    "sudo python3 -c \"import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf'); print(c['simple_crypto_plugin']['kek'])\"")

Add the KEK to the target environment:

$ oc set data secret/osp-secret "BarbicanSimpleCryptoKEK=${SIMPLE_CRYPTO_KEK}"

If you are not using the automated adoption, create HSM-specific secrets in the target environment:
# Create HSM login credentials secret
$ oc create secret generic hsm-login \
    --from-literal=PKCS11Pin=<your_hsm_password> \
    -n openstack

# Create HSM client configuration and certificates secret
$ oc create secret generic proteccio-data \
    --from-file=client.crt=<path_to_client_cert> \
    --from-file=client.key=<path_to_client_key> \
    --from-file=10_8_60_93.CRT=<path_to_server_cert> \
    --from-file=proteccio.rc=<path_to_hsm_config> \
    -n openstack

where:
- <your_hsm_password>
- Specifies the HSM password for your RHOSO environment.
- <path_to_client_cert>
- Specifies the path to the HSM client certificate.
- <path_to_client_key>
- Specifies the path to the client key.
- <path_to_server_cert>
- Specifies the path to the server certificate.
- <path_to_hsm_config>
- Specifies the path to your HSM configuration file in your RHOSO environment.

Note: When you use the automated adoption by setting barbican_hsm_enabled: true, the barbican_adoption role creates these secrets automatically. The secret names default to hsm-login and proteccio-data.
Patch the OpenStackControlPlane custom resource to deploy the Key Manager service with HSM support:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  barbican:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: barbican
      rabbitMqClusterName: rabbitmq
      secret: osp-secret
      simpleCryptoBackendSecret: osp-secret
      serviceAccount: barbican
      serviceUser: barbican
      passwordSelectors:
        database: BarbicanDatabasePassword
        service: BarbicanPassword
        simplecryptokek: BarbicanSimpleCryptoKEK
      customServiceConfig: |
        [p11_crypto_plugin]
        plugin_name = PKCS11
        library_path = /usr/lib64/libnethsm.so
        token_labels = VHSM1
        mkek_label = adoption_mkek_1
        hmac_label = adoption_hmac_1
        encryption_mechanism = CKM_AES_CBC
        hmac_key_type = CKK_GENERIC_SECRET
        hmac_keygen_mechanism = CKM_GENERIC_SECRET_KEY_GEN
        hmac_mechanism = CKM_SHA256_HMAC
        key_wrap_mechanism = CKM_AES_CBC_PAD
        key_wrap_generate_iv = true
        always_set_cka_sensitive = true
        os_locking_ok = false
      globalDefaultSecretStore: pkcs11
      enabledSecretStores: ["simple_crypto", "pkcs11"]
      pkcs11:
        loginSecret: hsm-login
        clientDataSecret: proteccio-data
        clientDataPath: /etc/proteccio
      barbicanAPI:
        replicas: 1
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 1
      barbicanKeystoneListener:
        replicas: 1
'

- library_path specifies the path to the PKCS#11 library, for example, /usr/lib64/libnethsm.so for Proteccio.
- token_labels specifies the HSM partition name, for example, VHSM1.
- mkek_label and hmac_label specify key labels that are configured in the HSM.
- loginSecret specifies the name of the Kubernetes secret that contains the HSM PIN.
- clientDataSecret specifies the name of the Kubernetes secret that contains the HSM certificates and configuration.
Verification
Verify that both secret stores are available:

$ openstack secret store list

Test the HSM back-end functionality:

$ openstack secret store --name "hsm-test-$(date +%s)" \
    --payload "test-payload" \
    --algorithm aes --mode cbc --bit-length 256

Verify that the migrated secrets are accessible:

$ openstack secret list

Check that the Key Manager service pods are operational:

$ oc get pods -l service=barbican
NAME                                          READY   STATUS    RESTARTS      AGE
barbican-api-5d65949b4-xhkd7                  2/2     Running   7 (10m ago)   29d
barbican-keystone-listener-687cbdc77d-4kjnk   2/2     Running   3 (11m ago)   29d
barbican-worker-5c4b947d5c-l9jdh              2/2     Running   3 (11m ago)   29d
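If you want to script the pod health check instead of reading the table by eye, the READY and STATUS columns can be filtered with awk. The following sketch runs over a fabricated copy of the output shown above rather than a live cluster:

```shell
# Fabricated 'oc get pods' output matching the verification example.
pods='barbican-api-5d65949b4-xhkd7                  2/2   Running   7 (10m ago)   29d
barbican-keystone-listener-687cbdc77d-4kjnk   2/2   Running   3 (11m ago)   29d
barbican-worker-5c4b947d5c-l9jdh              2/2   Running   3 (11m ago)   29d'

# Count pods whose containers are all ready (2/2) and whose status is Running.
ready=$(printf '%s\n' "$pods" | awk '$2 == "2/2" && $3 == "Running"' | wc -l)
echo "$ready"
```

Against a live cluster you would pipe oc get pods --no-headers into the same awk filter.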
HSM adoption preserves both simple crypto and HSM-backed secrets. The migration process maintains HSM metadata and secret references, which ensures continued access to existing secrets while enabling new secrets to use either back end.
4.4.4. Troubleshooting Key Manager HSM adoption

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Review troubleshooting guidance for common issues that you might encounter while you perform the HSM-enabled Key Manager (Barbican) service adoption.
If issues persist after following the troubleshooting guide:
- Collect adoption logs and configuration for analysis.
- Check the HSM vendor documentation for vendor-specific troubleshooting.
- Verify HSM server status and connectivity independently.
- Review the adoption summary report for additional diagnostic information.
4.4.4.1. Resolving configuration validation failures

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the adoption fails with validation errors about placeholder values, replace the placeholder values with your environment’s configuration values.
Example error:
TASK [Validate all required variables are set] ****
fatal: [localhost]: FAILED! => {
"msg": "Required variable proteccio_certs_path contains placeholder value."
}
Procedure
- Edit your hardware security module configuration in the Zuul job vars or CI framework configuration file.
Check the following key variables and replace all placeholder values with actual configuration values for your environment:
cifmw_hsm_password: <your_actual_hsm_password>
cifmw_barbican_proteccio_partition: <VHSM1>
cifmw_barbican_proteccio_mkek_label: <your_mkek_label>
cifmw_barbican_proteccio_hmac_label: <your_hmac_label>
cifmw_hsm_proteccio_client_src: <https://your-server/path/to/Proteccio.iso>
cifmw_hsm_proteccio_conf_src: <https://your-server/path/to/proteccio.rc>

- Verify that no placeholder values remain in your configuration.
4.4.4.2. Resolving missing HSM file prerequisites

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.

If the adoption fails because hardware security module (HSM) certificates or client software cannot be found, update your configuration to point to the correct file locations.
Example error:
TASK [Validate Proteccio prerequisites exist] ****
fatal: [localhost]: FAILED! => {
"msg": "Proteccio client ISO not found: /opt/proteccio/Proteccio3.06.05.iso"
}
Procedure
Verify that all required HSM files are accessible from the configured URLs. For example:
$ curl -I https://your-server/path/to/Proteccio3.06.05.iso
$ curl -I https://your-server/path/to/proteccio.rc
$ curl -I https://your-server/path/to/client.crt
$ curl -I https://your-server/path/to/client.key

If the files are in different locations, update the URL variables in your configuration. For example:

cifmw_hsm_proteccio_client_src: "https://correct-server/path/to/Proteccio3.06.05.iso"
cifmw_hsm_proteccio_conf_src: "https://correct-server/path/to/proteccio.rc"
cifmw_hsm_proteccio_client_crt_src: "https://correct-server/path/to/client.crt"
cifmw_hsm_proteccio_client_key_src: "https://correct-server/path/to/client.key"

- Check the network connectivity and authentication to ensure that the URLs are accessible from the CI environment.
4.4.4.3. Resolving source environment connectivity issues

The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the adoption cannot connect to the source Red Hat OpenStack Platform environment to extract the configuration, check your SSH connectivity to the source Controller node and update the configuration if needed.
Example error:
TASK [detect source environment HSM configuration] ****
fatal: [localhost]: FAILED! => {
"msg": "SSH connection to source environment failed"
}
Procedure
Verify SSH connectivity to the source Controller node:
$ ssh -o StrictHostKeyChecking=no tripleo-admin@controller-0.ctlplane

Update the controller1_ssh variable if needed:

controller1_ssh: "ssh -o StrictHostKeyChecking=no tripleo-admin@<controller_ip>"

where:

<controller_ip>
- Specifies the IP address of your Controller node.

- Ensure that the SSH keys are properly configured for passwordless access.
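One way to keep the passwordless hop reproducible is to pin it in an SSH client configuration. The following is a sketch only; the host aliases, host names, and key path are assumptions to adapt to your environment:

```
# ~/.ssh/config (illustrative host names and key path)
Host undercloud
    HostName undercloud.example.com
    User stack
    IdentityFile ~/.ssh/adoption_key
    StrictHostKeyChecking no

Host controller-0.ctlplane
    User tripleo-admin
    IdentityFile ~/.ssh/adoption_key
    ProxyJump undercloud
    StrictHostKeyChecking no
```

With such a configuration in place, `ssh controller-0.ctlplane` resolves the full chain without prompting for a password.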
4.4.4.4. Resolving HSM secret creation failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If hardware security module (HSM) secrets cannot be created in the target environment, check whether you need to update the names of your secrets in your source configuration file.
Example error:
TASK [Create HSM secrets in target environment] ****
fatal: [localhost]: FAILED! => {
"msg": "Failed to create secret proteccio-data"
}
Procedure
Verify target environment access:
$ export KUBECONFIG=/path/to/.kube/config
$ oc get secrets -n openstack

Check if secrets already exist:

$ oc get secret proteccio-data hsm-login -n openstack

If secrets exist with different names, update the configuration variables:
proteccio_login_secret_name: "your-hsm-login-secret"
proteccio_client_data_secret_name: "your-proteccio-data-secret"
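If you prefer to create the secrets explicitly under the expected names, the shape is roughly as follows. This is a sketch: the key names and file contents are assumptions, and must match what the adoption role actually expects in your environment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: proteccio-data        # must match proteccio_client_data_secret_name
  namespace: openstack
type: Opaque
data:
  proteccio.rc: <base64-encoded client configuration>
  client.crt: <base64-encoded client certificate>
  client.key: <base64-encoded client key>
---
apiVersion: v1
kind: Secret
metadata:
  name: hsm-login             # must match proteccio_login_secret_name
  namespace: openstack
type: Opaque
data:
  hsm-login: <base64-encoded HSM password>
```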
4.4.4.5. Resolving custom image registry issues
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If custom Barbican images cannot be pushed to or pulled from the configured registry, you can verify the authentication, test image push permissions, and then update the configuration as needed.
Example error:
TASK [Create Proteccio-enabled Barbican images] ****
fatal: [localhost]: FAILED! => {
"msg": "Failed to push image to registry"
}
Procedure
Verify registry authentication:
$ podman login <registry_url>

where:
<registry_url>- Specifies the URL of your configured registry.
Test image push permissions:
$ podman tag hello-world <registry>/<namespace>/test:latest
$ podman push <registry>/<namespace>/test:latest

where:
<registry>- Specifies the name of your registry server.
<namespace>- Specifies the namespace of your container image.
Update registry configuration variables if needed:
cifmw_update_containers_registry: "your-registry:5001"
cifmw_update_containers_org: "your-namespace"
cifmw_image_registry_verify_tls: false
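The registry and namespace variables above combine into the image reference that is pushed. A small sketch of that composition, assuming a hypothetical helper name and image name:

```shell
# Compose the image reference that would be pushed, from the registry
# configuration variables. image_ref is an illustrative helper, not a
# CI framework function.
image_ref() {
  local registry="$1" org="$2" name="$3" tag="$4"
  printf '%s/%s/%s:%s\n' "$registry" "$org" "$name" "$tag"
}

image_ref "your-registry:5001" "your-namespace" "barbican-api" "proteccio"
# → your-registry:5001/your-namespace/barbican-api:proteccio
```

If the composed reference does not match what your registry accepts (for example, a missing namespace or an unexpected port), adjust the variables rather than the individual push commands.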
4.4.4.6. Resolving HSM back-end detection failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the adoption role cannot detect hardware security module (HSM) configuration in the source environment, you must force the HSM adoption.
Example error:
TASK [detect source environment HSM configuration] ****
ok: [localhost] => {
"msg": "No HSM configuration found - using standard adoption"
}
Procedure
Manually verify that the HSM configuration exists in the source environment:
$ ssh tripleo-admin@controller-0.ctlplane \
    "sudo grep -A 10 '\[p11_crypto_plugin\]' \
    /var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf"

If HSM is configured but not detected, force HSM adoption by setting the barbican_hsm_enabled variable:

# In your Zuul job vars or CI framework configuration
barbican_hsm_enabled: true

This configuration ensures that the barbican_adoption role uses the HSM-enabled patch for Key Manager service (barbican) deployment.
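In a CI framework values file, the override might look as follows. This is a sketch: only `barbican_hsm_enabled` is taken from this procedure, and the commented keys are illustrative:

```yaml
# ci-vars.yaml (illustrative file name)
barbican_hsm_enabled: true   # force the HSM-enabled adoption path
# Optionally point the role at the HSM client sources in the same file:
# cifmw_hsm_proteccio_conf_src: "https://server/path/to/proteccio.rc"
```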
4.4.4.7. Resolving database migration issues
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If hardware security module (HSM) metadata is not preserved during database migration, check the database logs for any errors and verify that the source database includes the HSM secrets.
Example error:
TASK [Verify database migration preserves HSM references] ****
ok: [localhost] => {
"msg": "HSM secrets found in migrated database: 0"
}
Procedure
Verify that the source database contains the HSM secrets:
$ ssh tripleo-admin@controller-0.ctlplane \
    "sudo mysql barbican -e 'SELECT COUNT(*) FROM secret_store_metadata WHERE key=\"plugin_name\" AND value=\"PKCS11\";'"

Check the database migration logs for errors:

$ oc logs deployment/barbican-api | grep -i migration

- If the migration failed, restore the database from backup and retry.
4.4.4.8. Resolving service startup failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the Key Manager service (barbican) services fail to start after the hardware security module (HSM) configuration is applied, check the configuration in the pod.
Example error:
$ oc get pods -l service=barbican
NAME READY STATUS RESTARTS AGE
barbican-api-xyz 0/1 Error 0 2m
Procedure
Check pod logs for HSM connectivity issues:
$ oc logs barbican-api-xyz

Verify that the HSM library is accessible:

$ oc exec barbican-api-xyz -- ls -la /usr/lib64/libnethsm.so

Check the HSM configuration in the pod:
$ oc exec barbican-api-xyz -- cat /etc/proteccio/proteccio.rc
4.4.4.9. Resolving performance and connectivity issues
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the hardware security module (HSM) operations are slow or fail intermittently, check the HSM connectivity and monitor the HSM server logs.
Procedure
Test HSM connectivity from Key Manager service (barbican) pods:
$ oc exec barbican-api-xyz -- pkcs11-tool --module /usr/lib64/libnethsm.so --list-slots

Check HSM server connectivity:

$ oc exec barbican-api-xyz -- nc -zv <hsm_server_ip> <hsm_port>

where:
<hsm_server_ip>- Specifies the IP address of the HSM server.
<hsm_port>- Specifies the port of your HSM server.
- Monitor HSM server logs for authentication or capacity issues.
4.4.5. Troubleshooting Key Manager service Proteccio HSM adoption
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Use this reference to troubleshoot common issues that might occur during Key Manager service (barbican) adoption with Proteccio HSM integration. If Proteccio HSM issues persist, consult the Eviden Trustway documentation and ensure that HSM server configuration matches the client settings.
4.4.5.1. Resolving prerequisite validation failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the adoption script fails during the prerequisites check, verify that your configuration includes all the required Proteccio files and that the HSM Ansible role is available.
Example error:
ERROR: Required file proteccio_files/YOUR_CERT_FILE not found
ERROR: Cannot connect to OpenShift cluster
ERROR: Proteccio HSM Ansible role not found
Procedure
Verify that all required Proteccio files are present:
$ ls -la /path/to/your/proteccio_files/

Ensure that your configured certificate files, private key, HSM certificate file, and configuration file exist as specified in your proteccio_required_files configuration.

Test OpenShift cluster connectivity:

$ oc cluster-info
$ oc get pods -n openstack

Verify that the HSM Ansible role is available:

$ ls -la /path/to/your/roles/ansible-role-rhoso-proteccio-hsm/
4.4.5.2. Resolving SSH connection failures to the source environment
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If you cannot connect to the source director environment, verify your SSH key access and test the SSH commands that the adoption uses.
Example error:
Warning: Permanently added 'YOUR_UNDERCLOUD_HOST' (ED25519) to the list of known hosts.
Permission denied (publickey).
Procedure
Verify SSH key access to the undercloud:
$ ssh YOUR_UNDERCLOUD_HOST echo "Connection test"

Test the specific SSH commands used by the adoption:

$ sudo ssh -t YOUR_UNDERCLOUD_HOST 'sudo -u stack bash -lc "echo test"'
$ sudo ssh -t YOUR_UNDERCLOUD_HOST 'sudo -u stack ssh -t tripleo-admin@YOUR_CONTROLLER_HOST.ctlplane "echo test"'

- If the connection fails, verify the SSH configuration and ensure that the undercloud hostname resolves correctly.
4.4.5.3. Resolving database import failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the source database export or import fails, check the source Galera container, database connectivity, and the source Key Manager service (barbican) configuration.
Example error:
Error: no container with name or ID "galera-bundle-podman-0" found
mysqldump: Got error: 1045: "Access denied for user 'barbican'@'localhost'"
Procedure
Verify that the source Galera container is running:
$ sudo ssh -t YOUR_UNDERCLOUD_HOST 'sudo -u stack ssh -t tripleo-admin@YOUR_CONTROLLER_HOST.ctlplane "sudo podman ps | grep galera"'

Test database connectivity with the extracted credentials:

$ sudo ssh -t YOUR_UNDERCLOUD_HOST 'sudo -u stack ssh -t tripleo-admin@YOUR_CONTROLLER_HOST.ctlplane "sudo podman exec galera-bundle-podman-0 mysql -u barbican -p<password> -e \"SELECT 1;\""'

where:
<password>- Specifies your database password.
Check the source Key Manager service configuration for the correct database password:
$ sudo ssh -t YOUR_UNDERCLOUD_HOST 'sudo -u stack ssh -t tripleo-admin@YOUR_CONTROLLER_HOST.ctlplane "sudo grep connection /var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf"'
4.4.5.4. Resolving custom image pull failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If Proteccio custom images fail to pull or start, verify image registry access, image pull secrets, and registry authentication.
Example error:
Failed to pull image "<Your Custom Pod Image and Tag>": rpc error
Pod has unbound immediate PersistentVolumeClaims
Procedure
Verify image registry access:
$ podman pull <custom_pod_image_and_tag>

where:
<custom_pod_image_and_tag>- Specifies your custom pod image and the image tag.
Check image pull secrets and registry authentication:
$ oc get secrets -n openstack | grep pull
$ oc describe pod <barbican_pod_name> -n openstack

where:
<barbican_pod_name>- Specifies your Barbican pod name.
Verify that the OpenStackVersion resource was applied correctly:

$ oc get openstackversion openstack -n openstack -o yaml
4.4.5.5. Resolving HSM certificate mounting issues
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If Proteccio client certificates are not properly mounted in pods, check the secret creation and ensure that the Key Manager service (barbican) configuration includes the correct volume mounts.
Example error:
$ oc exec <barbican-pod> -c barbican-api -- ls -la /etc/proteccio/
ls: cannot access '/etc/proteccio/': No such file or directory
Procedure
Verify that the proteccio-data secret was created correctly:

$ oc describe secret proteccio-data -n openstack

Check that the secret contains the expected files:

$ oc get secret proteccio-data -n openstack -o yaml

Verify that the Key Manager service configuration includes the correct volume mounts:

$ oc get barbican barbican -n openstack -o yaml | grep -A10 pkcs11
4.4.5.6. Resolving service startup failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the Key Manager service (barbican) services fail to start after the hardware security module (HSM) configuration is applied, check the configuration in the pod.
Example error:
$ oc get pods -l service=barbican
NAME READY STATUS RESTARTS AGE
barbican-api-xyz 0/1 Error 0 2m
Procedure
Check pod logs for HSM connectivity issues:
$ oc logs barbican-api-xyz

Verify that the HSM library is accessible:

$ oc exec barbican-api-xyz -- ls -la /usr/lib64/libnethsm.so

Check the HSM configuration in the pod:
$ oc exec barbican-api-xyz -- cat /etc/proteccio/proteccio.rc
4.4.5.7. Resolving adoption verification failures
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
If the secrets from the source environment are not accessible after adoption, verify that the database import completed successfully, test API connectivity, and check for schema adoption issues.
Example error:
$ openstack secret list
# Returns empty list or HTTP 500 errors
Procedure
Verify that the database import completed successfully:
$ oc exec openstack-galera-0 -n openstack -- mysql -u root -p<password> barbican -e "SELECT COUNT(*) FROM secrets;"

where:
<password>- Specifies your database password.
Check for schema adoption issues:
$ oc logs job.batch/barbican-db-sync -n openstack

Test API connectivity:

$ oc exec openstackclient -n openstack -- curl -s -k -H "X-Auth-Token: $(openstack token issue -f value -c id)" https://barbican-internal.openstack.svc:9311/v1/secrets

- Verify that projects and users were adopted correctly, as secrets are project-scoped.
4.4.6. Rolling back the HSM adoption
If the hardware security module (HSM) adoption fails, you can restore your environment to its original state and attempt the adoption again.
The content in this section is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview.
Procedure
Restore the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 database backup:
$ oc exec -i openstack-galera-0 -n openstack -- mysql -u root -p<password> barbican < /path/to/your/backups/rhoso18_barbican_backup.sql

where:
- <password>
- Specifies your database password.
Reset to standard images:
$ oc delete openstackversion openstack -n openstack

Restore the base control plane configuration:
$ oc apply -f /path/to/your/base_controlplane.yaml
Next steps
To avoid additional issues when attempting your adoption again, consider the following suggestions:
- Check the adoption logs that are stored in your configured working directory with timestamped summary reports.
- For HSM-specific issues, consult the Proteccio documentation and verify HSM connectivity from the target environment.
- Run the adoption in dry-run mode (./run_proteccio_adoption.sh option 3) to validate the environment before making changes.
4.5. Adopting the Networking service
To adopt the Networking service (neutron), you patch an existing OpenStackControlPlane custom resource (CR) that has the Networking service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
The Networking service adoption is complete if you see the following results:
- The NeutronAPI service is running.
- The Identity service (keystone) endpoints are updated, and the same back end of the source cloud is available.
Prerequisites
- Ensure that Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
- Adopt the Identity service. For more information, see Adopting the Identity service.
- Migrate your OVN databases to ovsdb-server instances that run in the Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Migrating OVN data.
Procedure
Patch the OpenStackControlPlane CR to deploy the Networking service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  neutron:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: <172.17.0.80>
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      databaseAccount: neutron
      secret: osp-secret
      networkAttachments:
      - internalapi
'

where:
- <172.17.0.80>
Specifies the load balancer IP in your environment. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.

Note

If you used the neutron-dhcp-agent in your RHOSP 17.1 deployment and you still need to use it after adoption, you must enable dhcp_agent_notification for the neutron-api service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  neutron:
    template:
      customServiceConfig: |
        [DEFAULT]
        dhcp_agent_notification = True
'
Verification
Inspect the resulting Networking service pods:
$ oc get pods -l service=neutron

Ensure that the Neutron API service is registered in the Identity service:

$ openstack service list | grep network

$ openstack endpoint list | grep network
| 6a805bd6c9f54658ad2f24e5a0ae0ab6 | regionOne | neutron | network | True | public   | http://neutron-public-openstack.apps-crc.testing |
| b943243e596847a9a317c8ce1800fa98 | regionOne | neutron | network | True | internal | http://neutron-internal.openstack.svc:9696       |

Create sample resources so that you can test whether the user can create networks, subnets, ports, or routers:

$ openstack network create net
$ openstack subnet create --network net --subnet-range 10.0.0.0/24 subnet
$ openstack router create router
4.6. Configuring control plane networking for spine-leaf topologies
If you are adopting a spine-leaf or Distributed Compute Node (DCN) deployment, update the control plane networking for communication across sites. Add subnets for remote sites to your existing NetConfig custom resource (CR) and update NetworkAttachmentDefinition CRs with routes to enable connectivity between the central control plane and remote sites.
Prerequisites
- You have deployed the Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
- You have configured a NetConfig CR for the central site. For more information, see Configuring isolated networks.
- You have the network topology information for all remote sites, including:
- IP address ranges for each service network at each site
- VLAN IDs for each service network at each site
- Gateway addresses for inter-site routing
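It can help to record this topology information in one place before editing any CRs. For example, as a simple worksheet (the values are illustrative and match the VLAN plan used later in this procedure; this file is not consumed by any operator):

```yaml
# Site topology worksheet (illustrative, not a consumed resource)
central:
  internalapi: { cidr: 172.17.0.0/24,  vlan: 20, gateway: 172.17.0.1 }
  storage:     { cidr: 172.18.0.0/24,  vlan: 21, gateway: 172.18.0.1 }
edge1:
  internalapi: { cidr: 172.17.10.0/24, vlan: 30 }
  storage:     { cidr: 172.18.10.0/24, vlan: 31 }
edge2:
  internalapi: { cidr: 172.17.20.0/24, vlan: 40 }
  storage:     { cidr: 172.18.20.0/24, vlan: 41 }
```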
Procedure
Update your existing NetConfig CR to add subnets for each remote site. Each service network must include a subnet for the central site and each remote site. Use unique VLAN IDs for each site. For example:

- Central site: VLANs 20-23
- Edge site 1: VLANs 30-33
- Edge site 2: VLANs 40-43
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: netconfig
spec:
  networks:
  - name: ctlplane
    dnsDomain: ctlplane.example.com
    subnets:
    - name: <subnet1>
      allocationRanges:
      - end: 192.168.122.120
        start: 192.168.122.100
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
    - name: <ctlplanesite1>
      allocationRanges:
      - end: 192.168.133.120
        start: 192.168.133.100
      cidr: 192.168.133.0/24
      gateway: 192.168.133.1
    - name: <ctlplanesite2>
      allocationRanges:
      - end: 192.168.144.120
        start: 192.168.144.100
      cidr: 192.168.144.0/24
      gateway: 192.168.144.1
  - name: internalapi
    dnsDomain: internalapi.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      cidr: 172.17.0.0/24
      vlan: 20
    - name: internalapisite1
      allocationRanges:
      - end: 172.17.10.250
        start: 172.17.10.100
      cidr: 172.17.10.0/24
      vlan: 30
    - name: internalapisite2
      allocationRanges:
      - end: 172.17.20.250
        start: 172.17.20.100
      cidr: 172.17.20.0/24
      vlan: 40
  - name: storage
    dnsDomain: storage.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.18.0.250
        start: 172.18.0.100
      cidr: 172.18.0.0/24
      vlan: 21
    - name: storagesite1
      allocationRanges:
      - end: 172.18.10.250
        start: 172.18.10.100
      cidr: 172.18.10.0/24
      vlan: 31
    - name: storagesite2
      allocationRanges:
      - end: 172.18.20.250
        start: 172.18.20.100
      cidr: 172.18.20.0/24
      vlan: 41
  - name: tenant
    dnsDomain: tenant.example.com
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.19.0.250
        start: 172.19.0.100
      cidr: 172.19.0.0/24
      vlan: 22
    - name: tenantsite1
      allocationRanges:
      - end: 172.19.10.250
        start: 172.19.10.100
      cidr: 172.19.10.0/24
      vlan: 32
    - name: tenantsite2
      allocationRanges:
      - end: 172.19.20.250
        start: 172.19.20.100
      cidr: 172.19.20.0/24
      vlan: 42

where:
<subnet1>- Specifies a user-defined subnet name for the central site subnet.
<ctlplanesite1>- Specifies a user-defined subnet for the first DCN edge site.
<ctlplanesite2>
- Specifies a user-defined subnet for the second DCN edge site.

Note

You must have the storagemgmt network on OpenShift nodes when using DCN with Swift storage. It is not necessary when using Red Hat Ceph Storage.
Update the NetworkAttachmentDefinition CR for the internalapi network to include routes to remote site subnets. These routes fields enable control plane pods attached to the internalapi network, such as the OVN Southbound database, to communicate with Compute nodes at remote sites through the central site gateway, and are required for DCN:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "internalapi",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.30",
        "range_end": "172.17.0.70",
        "routes": [
          { "dst": "172.17.10.0/24", "gw": "172.17.0.1" },
          { "dst": "172.17.20.0/24", "gw": "172.17.0.1" }
        ]
      }
    }

Update the NetworkAttachmentDefinition CR for the ctlplane network to include routes to remote site subnets:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ctlplane
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "ctlplane",
      "type": "macvlan",
      "master": "ospbr",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.122.0/24",
        "range_start": "192.168.122.30",
        "range_end": "192.168.122.70",
        "routes": [
          { "dst": "192.168.133.0/24", "gw": "192.168.122.1" },
          { "dst": "192.168.144.0/24", "gw": "192.168.122.1" }
        ]
      }
    }

Update the NetworkAttachmentDefinition CR for the storage network to include routes to remote site subnets:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "storage",
      "type": "macvlan",
      "master": "storage",
      "ipam": {
        "type": "whereabouts",
        "range": "172.18.0.0/24",
        "range_start": "172.18.0.30",
        "range_end": "172.18.0.70",
        "routes": [
          { "dst": "172.18.10.0/24", "gw": "172.18.0.1" },
          { "dst": "172.18.20.0/24", "gw": "172.18.0.1" }
        ]
      }
    }

Update the NetworkAttachmentDefinition CR for the tenant network to include routes to remote site subnets:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tenant",
      "type": "macvlan",
      "master": "tenant",
      "ipam": {
        "type": "whereabouts",
        "range": "172.19.0.0/24",
        "range_start": "172.19.0.30",
        "range_end": "172.19.0.70",
        "routes": [
          { "dst": "172.19.10.0/24", "gw": "172.19.0.1" },
          { "dst": "172.19.20.0/24", "gw": "172.19.0.1" }
        ]
      }
    }

Note

Adjust the IP ranges, subnets, and gateway addresses in all NAD configurations to match your network topology. The master interface name must match the interface on the OpenShift nodes where the VLAN is configured.

If you have already deployed OVN services, restart the OVN Southbound database pods to pick up the new routes:

$ oc delete pod -l service=ovsdbserver-sb

The pods are automatically recreated with the updated network configuration.
Configure the Networking service (neutron) to recognize all site physnets. In the OpenStackControlPlane CR, ensure that the Networking service configuration includes all physnets:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  neutron:
    template:
      customServiceConfig: |
        [ml2_type_vlan]
        network_vlan_ranges = leaf0:1:1000,leaf1:1:1000,leaf2:1:1000
        [ovn]
        ovn_emit_need_to_frag = false

where:
- leaf0
- Represents the physnet for the central site.
- leaf1
- Represents the physnet for the first remote site.
- leaf2
- Represents the physnet for the second remote site.

Note

Adjust the physnet names to match your Red Hat OpenStack Platform deployment. Common conventions include leaf0/leaf1/leaf2 or datacentre/dcn1/dcn2.
Verification
Verify that the NetConfig CR is created with all subnets:

$ oc get netconfig netconfig -o yaml | grep -A2 "name: subnet1\|name: .*site"

Verify that each NetworkAttachmentDefinition includes routes to remote site subnets:

$ for nad in ctlplane internalapi storage tenant; do
    echo "=== $nad ==="
    oc get net-attach-def $nad -o jsonpath='{.spec.config}' | jq '.ipam.routes'
  done

After restarting the OVN SB pods, verify that they have routes to remote site subnets:

$ oc exec $(oc get pod -l service=ovsdbserver-sb -o name | head -1) -- ip route show | grep 172.17

Sample output:

172.17.10.0/24 via 172.17.0.1 dev internalapi
172.17.20.0/24 via 172.17.0.1 dev internalapi
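The per-NAD route check can also be rehearsed offline against a saved NAD config. A minimal sketch using grep; the embedded sample JSON and the CIDR list are illustrative (in a live cluster the JSON would come from `oc get net-attach-def <name> -o jsonpath='{.spec.config}'`):

```shell
# Assert that a saved NAD config contains a route for each remote CIDR.
# The JSON below is an illustrative sample, not cluster output.
nad_config='{"ipam":{"routes":[{"dst":"172.17.10.0/24","gw":"172.17.0.1"},{"dst":"172.17.20.0/24","gw":"172.17.0.1"}]}}'

for cidr in 172.17.10.0/24 172.17.20.0/24 172.17.30.0/24; do
  if printf '%s' "$nad_config" | grep -q "\"dst\":\"$cidr\""; then
    echo "route present: $cidr"
  else
    echo "route MISSING: $cidr"
  fi
done
```

A plain substring match like this is sensitive to JSON whitespace; when jq is available, a structured query such as the one in the verification step above is more robust.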
4.7. Adopting the Object Storage service
If you are using Object Storage as a service, adopt the Object Storage service (swift) to the Red Hat OpenStack Services on OpenShift (RHOSO) environment. If you are using the Object Storage API of the Ceph Object Gateway (RGW), skip the following procedure.
Prerequisites
- The Object Storage service storage back-end services are running in the Red Hat OpenStack Platform (RHOSP) deployment.
- The storage network is properly configured on the Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Create the swift-conf secret that includes the Object Storage service hash path suffix and prefix:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: swift-conf
type: Opaque
data:
  swift.conf: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf | base64 -w0)
EOF

Create the swift-ring-files ConfigMap that includes the Object Storage service ring files:

$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: swift-ring-files
binaryData:
  swiftrings.tar.gz: $($CONTROLLER1_SSH "cd /var/lib/config-data/puppet-generated/swift/etc/swift && tar cz *.builder *.ring.gz backups/ | base64 -w0")
  account.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/account.ring.gz")
  container.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/container.ring.gz")
  object.ring.gz: $($CONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/object.ring.gz")
EOF

Patch the OpenStackControlPlane custom resource to deploy the Object Storage service:

$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  swift:
    enabled: true
    template:
      memcachedInstance: memcached
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 0
        networkAttachments:
        - storage
        storageClass: local-storage
        storageRequest: 10Gi
      swiftProxy:
        secret: osp-secret
        replicas: 2
        encryptionEnabled: false
        passwordSelectors:
          service: SwiftPassword
          serviceUser: swift
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: <172.17.0.80>
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
'

Note

If SwiftEncryptionEnabled: true was set in Red Hat OpenStack Platform, ensure that spec.swift.swiftProxy.encryptionEnabled is set to true and that the Key Manager service (barbican) adoption is complete before proceeding.

- spec.swift.swiftStorage.storageClass must match the RHOSO deployment storage class.
- metallb.universe.tf/loadBalancerIPs: <172.17.0.80> specifies the load balancer IP in your environment. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
- spec.swift.swiftProxy.networkAttachments must match the network attachment for the previous Object Storage service configuration from the RHOSP deployment.

Verification
Inspect the resulting Object Storage service pods:
$ oc get pods -l component=swift-proxy

Verify that the Object Storage proxy service is registered in the Identity service (keystone):

$ openstack service list | grep swift
| b5b9b1d3c79241aa867fa2d05f2bbd52 | swift | object-store |

$ openstack endpoint list | grep swift
| 32ee4bd555414ab48f2dc90a19e1bcd5 | regionOne | swift | object-store | True | public   | https://swift-public-openstack.apps-crc.testing/v1/AUTH_%(tenant_id)s |
| db4b8547d3ae4e7999154b203c6a5bed | regionOne | swift | object-store | True | internal | http://swift-internal.openstack.svc:8080/v1/AUTH_%(tenant_id)s        |

Verify that you are able to upload and download objects:

$ openstack container create test
+---------------------------------------+-----------+------------------------------------+
| account                               | container | x-trans-id                         |
+---------------------------------------+-----------+------------------------------------+
| AUTH_4d9be0a9193e4577820d187acdd2714a | test      | txe5f9a10ce21e4cddad473-0065ce41b9 |
+---------------------------------------+-----------+------------------------------------+

$ openstack object create test --name obj <(echo "Hello World!")
+--------+-----------+----------------------------------+
| object | container | etag                             |
+--------+-----------+----------------------------------+
| obj    | test      | d41d8cd98f00b204e9800998ecf8427e |
+--------+-----------+----------------------------------+

$ openstack object save test obj --file -
Hello World!
The Object Storage data is still stored on the existing RHOSP nodes. For more information about migrating the actual data from the RHOSP deployment to the RHOSO deployment, see Migrating the Object Storage service (swift) data from RHOSP to Red Hat OpenStack Services on OpenShift (RHOSO) nodes.
4.8. Adopting the Image service
To adopt the Image service (glance), you patch an existing OpenStackControlPlane custom resource (CR) that has the Image service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
The Image service adoption is complete if you see the following results:
- The GlanceAPI service is up and running.
- The Identity service endpoints are updated, and the same back end of the source cloud is available.
To complete the Image service adoption, ensure that your environment meets the following criteria:
- You have a running director environment (the source cloud).
- You have a Single Node OpenShift or OpenShift Local that is running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
- Optional: An internal or external Ceph cluster is reachable by both crc and director.
If you have image quotas in RHOSP 17.1, these quotas are not transferred to Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 because the image quota system in 18.0 is disabled by default. For more information about enabling image quotas in 18.0, see Configuring image quotas in Customizing persistent storage. If you enable image quotas in RHOSO 18.0, the new quotas replace the legacy quotas from RHOSP 17.1.
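As an illustration, enabling image quotas in RHOSO 18.0 relies on unified limits registered in the Identity service. The following is a minimal sketch, assuming the resource names image_count_total and image_size_total from the upstream Image service quota documentation; the limit values are examples only:

```shell
# Hypothetical example: register default image quotas as unified limits in
# the Identity service. Resource names and values are illustrative only.
openstack registered limit create --service glance \
    --default-limit 100 --region RegionOne image_count_total
openstack registered limit create --service glance \
    --default-limit 10240 --region RegionOne image_size_total
```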
4.8.1. Adopting the Image service that is deployed with an Object Storage service back end
Adopt the Image Service (glance) that you deployed with an Object Storage service (swift) back end in the Red Hat OpenStack Platform (RHOSP) environment. The control plane glanceAPI instance is deployed with the following configuration. You use this configuration in the patch manifest that deploys the Image service with the object storage back end:
..
spec:
glance:
...
customServiceConfig: |
[DEFAULT]
enabled_backends = default_backend:swift
[glance_store]
default_backend = default_backend
[default_backend]
swift_store_create_container_on_put = True
swift_store_auth_version = 3
swift_store_auth_address = {{ .KeystoneInternalURL }}
swift_store_endpoint_type = internalURL
swift_store_user = service:glance
swift_store_key = {{ .ServicePassword }}
Prerequisites
- You have completed the previous adoption steps.
Procedure
Create a new file, for example,
glance_swift.patch, and include the following content:spec: glance: enabled: true apiOverride: route: {} template: secret: osp-secret databaseInstance: openstack storage: storageRequest: 10G customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_endpoint_type = internalURL swift_store_user = service:glance swift_store_key = {{ .ServicePassword }} glanceAPIs: default: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer networkAttachments: - storage
where:
- <172.17.0.80>
Specifies the load balancer IP in your environment. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Note
The Object Storage service as a back end establishes a dependency with the Image service. Any deployed
GlanceAPI instances do not work if the Image service is configured with the Object Storage service that is not available in the OpenStackControlPlane custom resource. After the Object Storage service, and in particular SwiftProxy, is adopted, you can proceed with the GlanceAPI adoption. For more information, see Adopting the Object Storage service.
Verify that
SwiftProxy is available:$ oc get pod -l component=swift-proxy | grep Running swift-proxy-75cb47f65-92rxq 3/3 Running 0
Patch the
GlanceAPI service that is deployed in the control plane:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_swift.patch
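After the patch is applied, you can watch the adopted GlanceAPI pods come up. This sketch uses the same service=glance label that the verification steps later in this chapter rely on:

```shell
# Check that the GlanceAPI pods reach the Running state
oc get pods -l service=glance
```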
4.8.2. Adopting the Image service that is deployed with a Block Storage service back end
Adopt the Image Service (glance) that you deployed with a Block Storage service (cinder) back end in the Red Hat OpenStack Platform (RHOSP) environment. The control plane glanceAPI instance is deployed with the following configuration. You use this configuration in the patch manifest that deploys the Image service with the block storage back end:
..
spec:
glance:
...
customServiceConfig: |
[DEFAULT]
enabled_backends = default_backend:cinder
[glance_store]
default_backend = default_backend
[default_backend]
description = Default cinder backend
cinder_store_auth_address = {{ .KeystoneInternalURL }}
cinder_store_user_name = {{ .ServiceUser }}
cinder_store_password = {{ .ServicePassword }}
cinder_store_project_name = service
cinder_catalog_info = volumev3::internalURL
cinder_use_multipath = true
[oslo_concurrency]
lock_path = /var/lib/glance/tmp
Prerequisites
- You have completed the previous adoption steps.
Procedure
Create a new file, for example,
glance_cinder.patch, and include the following content:spec: glance: enabled: true apiOverride: route: {} template: secret: osp-secret databaseInstance: openstack storage: storageRequest: 10G customServiceConfig: | [DEFAULT] enabled_backends = default_backend:cinder [glance_store] default_backend = default_backend [default_backend] description = Default cinder backend cinder_store_auth_address = {{ .KeystoneInternalURL }} cinder_store_user_name = {{ .ServiceUser }} cinder_store_password = {{ .ServicePassword }} cinder_store_project_name = service cinder_catalog_info = volumev3::internalURL cinder_use_multipath = true [oslo_concurrency] lock_path = /var/lib/glance/tmp glanceAPIs: default: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer networkAttachments: - storage
where:
- <172.17.0.80>
Specifies the load balancer IP. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Note
The Block Storage service as a back end establishes a dependency with the Image service. Any deployed
GlanceAPI instances do not work if the Image service is configured with the Block Storage service that is not available in the OpenStackControlPlane custom resource. After the Block Storage service, and in particular CinderVolume, is adopted, you can proceed with the GlanceAPI adoption. For more information, see Adopting the Block Storage service.
Verify that
CinderVolume is available:$ oc get pod -l component=cinder-volume | grep Running cinder-volume-75cb47f65-92rxq 3/3 Running 0
Patch the
GlanceAPI service that is deployed in the control plane:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_cinder.patch
4.8.3. Adopting the Image service that is deployed with an NFS back end
Adopt the Image Service (glance) that you deployed with an NFS back end. To complete the following procedure, ensure that your environment meets the following criteria:
- The Storage network is propagated to the Red Hat OpenStack Platform (RHOSP) control plane.
- The Image service can reach the Storage network and connect to the nfs-server through port 2049.
Prerequisites
- You have completed the previous adoption steps.
In the source cloud, verify the NFS parameters that the overcloud uses to configure the Image service back end. Specifically, in your director heat templates, find the following variables that override the default content that is provided by the
glance-nfs.yaml file in the /usr/share/openstack-tripleo-heat-templates/environments/storage directory:GlanceBackend: file GlanceNfsEnabled: true GlanceNfsShare: 192.168.24.1:/var/nfs
Note
In this example, the
GlanceBackend variable shows that the Image service has no notion of an NFS back end. The variable is using the File driver and, in the background, the filesystem_store_datadir. The filesystem_store_datadir is mapped to the export value provided by the GlanceNfsShare variable instead of /var/lib/glance/images/. If you do not export the GlanceNfsShare through a network that is propagated to the adopted Red Hat OpenStack Services on OpenShift (RHOSO) control plane, you must stop the nfs-server and remap the export to the storage network. Before doing so, ensure that the Image service is stopped in the source Controller nodes.
In the control plane, the Image service is attached to the Storage network, then propagated through the associated
NetworkAttachmentDefinition custom resource (CR), and the resulting pods already have the right permissions to handle the Image service traffic through this network. In a deployed RHOSP control plane, you can verify that the network mapping matches with what has been deployed in the director-based environment by checking both the NodeNetworkConfigurationPolicy (nncp) and the NetworkAttachmentDefinition (net-attach-def). The following is an example of the output that you should check in the Red Hat OpenShift Container Platform (RHOCP) environment to make sure that there are no issues with the propagated networks:$ oc get nncp NAME STATUS REASON enp6s0-crc-8cf2w-master-0 Available SuccessfullyConfigured $ oc get net-attach-def NAME ctlplane internalapi storage tenant $ oc get ipaddresspool -n metallb-system NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES ctlplane true false ["192.168.122.80-192.168.122.90"] internalapi true false ["172.17.0.80-172.17.0.90"] storage true false ["172.18.0.80-172.18.0.90"] tenant true false ["172.19.0.80-172.19.0.90"]
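If you must remap the NFS export to the storage network, the change can be sketched as follows. The addresses, export path, and mount options are assumptions for this example; adjust them to your environment:

```shell
# Run on the nfs-server node after the source Image service is stopped.
systemctl stop nfs-server
# Allow clients on the RHOSO storage network (example subnet) to mount the share
echo "/var/nfs 172.18.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -r
systemctl start nfs-server
exportfs -v   # confirm the export is now served to the storage network
```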
Procedure
Adopt the Image service and create a new
default GlanceAPI instance that is connected with the existing NFS share:$ cat << EOF > glance_nfs_patch.yaml spec: extraMounts: - extraVol: - extraVolType: Nfs mounts: - mountPath: /var/lib/glance/images name: nfs propagation: - Glance volumes: - name: nfs nfs: path: <exported_path> server: <ip_address> name: r1 region: r1 glance: enabled: true template: databaseInstance: openstack customServiceConfig: | [DEFAULT] enabled_backends = default_backend:file [glance_store] default_backend = default_backend [default_backend] filesystem_store_datadir = /var/lib/glance/images/ storage: storageRequest: 10G keystoneEndpoint: nfs glanceAPIs: nfs: replicas: 3 type: single override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer networkAttachments: - storage EOF
where:
- <exported_path>
-
Specifies the exported path in the
nfs-server. - <ip_address>
-
Specifies the IP address that you use to communicate with the
nfs-server. - <172.17.0.80>
-
Specifies the load balancer IP in your environment. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Patch the
OpenStackControlPlane CR to deploy the Image service with an NFS back end:$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_nfs_patch.yaml
Patch the
OpenStackControlPlane CR to remove the default Image service:$ oc patch openstackcontrolplane openstack --type=json -p="[{'op': 'remove', 'path': '/spec/glance/template/glanceAPIs/default'}]"
Verification
When
GlanceAPI is active, confirm that you can see a single API instance:$ oc get pods -l service=glance NAME READY STATUS RESTARTS glance-nfs-single-0 2/2 Running 0 glance-nfs-single-1 2/2 Running 0 glance-nfs-single-2 2/2 Running 0
Ensure that the description of the pod reports the following output:
Mounts: ... nfs: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: {{ server ip address }} Path: {{ nfs export path }} ReadOnly: false ...
Check that the mountpoint that points to
/var/lib/glance/images is mapped to the expected nfs server ip and nfs path that you defined in the new GlanceAPI instance:$ oc rsh -c glance-api glance-nfs-single-0 sh-5.1# mount ... ... {{ ip address }}:/var/nfs on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.18.0.5,local_lock=none,addr=172.18.0.5) ... ...
Confirm that the UUID is created in the exported directory on the NFS node. For example:
$ oc rsh openstackclient $ openstack image list sh-5.1$ curl -L -o /tmp/cirros-0.6.3-x86_64-disk.img http://download.cirros-cloud.net/0.6.3/cirros-0.6.3-x86_64-disk.img ... ... sh-5.1$ openstack image create --container-format bare --disk-format raw --file /tmp/cirros-0.6.3-x86_64-disk.img cirros ... ... sh-5.1$ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | 634482ca-4002-4a6d-b1d5-64502ad02630 | cirros | active | +--------------------------------------+--------+--------+
On the
nfs-server node, the same uuid is in the exported /var/nfs:$ ls /var/nfs/ 634482ca-4002-4a6d-b1d5-64502ad02630
4.8.4. Adopting the Image service that is deployed with a Red Hat Ceph Storage back end
Adopt the Image service (glance) that you deployed with a Red Hat Ceph Storage back end. Use the customServiceConfig parameter to inject the right configuration into the GlanceAPI instance.
Prerequisites
- You have completed the previous adoption steps.
Ensure that the Ceph-related secret (
ceph-conf-files) is created in the openstack namespace and that the extraMounts property of the OpenStackControlPlane custom resource (CR) is configured properly. For more information, see Configuring a Ceph back end.$ cat << EOF > glance_patch.yaml spec: glance: enabled: true template: databaseInstance: openstack customServiceConfig: | [DEFAULT] enabled_backends=default_backend:rbd [glance_store] default_backend=default_backend [default_backend] rbd_store_ceph_conf=/etc/ceph/ceph.conf rbd_store_user=openstack rbd_store_pool=images store_description=Ceph glance store backend. storage: storageRequest: 10G glanceAPIs: default: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer networkAttachments: - storage EOF
where:
- <172.17.0.80>
-
Specifies the load balancer IP. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
If you backed up your Red Hat OpenStack Platform (RHOSP) services configuration file from the original environment, you can compare it with the configuration file that you adopted and ensure that the configuration is correct. For more information, see Pulling the configuration from a director deployment.
os-diff diff /tmp/collect_tripleo_configs/glance/etc/glance/glance-api.conf glance_patch.yaml --crd
This command produces the difference between the two configuration files.
Procedure
Patch the
OpenStackControlPlane CR to deploy the Image service with a Red Hat Ceph Storage back end:$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_patch.yaml
4.8.5. Adopting the Image service with multiple Red Hat Ceph Storage back ends (DCN)
Adopt the Image Service (glance) in a Distributed Compute Node (DCN) deployment where multiple Red Hat Ceph Storage clusters provide storage at different sites. This configuration deploys multiple GlanceAPI instances: a central API with access to all Red Hat Ceph Storage clusters, and edge APIs at each DCN site with access to their local cluster and the central cluster.
During adoption, the Image service instances that ran on edge site Compute nodes are migrated to run on Red Hat OpenShift Container Platform (RHOCP) at the central site. Although the control path for API requests now traverses the WAN to reach the Image service running on Red Hat OpenShift Container Platform (RHOCP), the data path remains local. Image data continues to be stored in the Red Hat Ceph Storage cluster at each edge site. When you create a virtual machine or volume from an image, the operation occurs at the local Red Hat Ceph Storage cluster. This architecture uses Red Hat Ceph Storage shallow copies (copy-on-write clones) to enable fast boot times without transferring image data across the WAN.
The virtual IP addresses (VIPs) used by Compute service nodes to reach the Image service change during adoption. Before adoption, edge site nodes contact a local Image service VIP on the internalapi subnet. After adoption, they contact a Red Hat OpenShift Container Platform (RHOCP) service endpoint on a different internalapi subnet. The following table shows an example of this change:
| Site | Before adoption | After adoption |
|---|---|---|
| Central | Identity service catalog VIP |
Identity service catalog updated to |
| DCN1 |
|
|
| DCN2 |
|
|
In Red Hat OpenStack Platform, the internal Image service endpoint at edge sites used TCP port 9293. After adoption, all Image service endpoints use port 9292. The new endpoints are backed by MetalLB load balancer IPs that you assign using the metallb.universe.tf/loadBalancerIPs annotation on each GlanceAPI. When you patch the OpenStackControlPlane custom resource (CR), Red Hat OpenShift Container Platform (RHOCP) creates internal Kubernetes services (for example, glance-dcn1-internal.openstack.svc) that resolve to those MetalLB IPs. The Compute service nodes are configured to use these endpoints when you adopt the data plane. For more information, see Adopting Compute services with multiple Ceph back ends (DCN). The examples in this procedure use http:// for the Image service endpoints. If your Red Hat OpenStack Platform deployment uses TLS for internal endpoints, use https:// and ensure that you have completed the TLS migration. For more information, see Migrating TLS-e to the RHOSO deployment.
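After you patch the control plane, you can confirm that the per-site internal services were created. The service names in the comment are assumptions based on the GlanceAPI names used in this example:

```shell
# Expect one internal service per GlanceAPI, for example
# glance-central-internal, glance-dcn1-internal, glance-dcn2-internal
oc get svc -n openstack | grep glance
```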
Prerequisites
- You have completed the previous adoption steps.
-
The per-site Red Hat Ceph Storage secrets (
ceph-conf-central, ceph-conf-dcn1, ceph-conf-dcn2) exist and contain the configuration and keyrings for each site’s Red Hat Ceph Storage cluster. For more information, see Configuring a Red Hat Ceph Storage back end.
The
extraMounts property of the OpenStackControlPlane CR is configured to mount the Red Hat Ceph Storage configuration to all Glance instances.
You have stopped the Image service on all DCN nodes. If your deployment includes
DistributedComputeHCIScaleOut or DistributedComputeScaleOut nodes, you have also stopped HAProxy on those nodes. For more information, see Stopping Red Hat OpenStack Platform services.
Procedure
Create a patch file for the Image service with multiple Red Hat Ceph Storage back ends. Use MetalLB load balancer IPs for the Image service endpoints:
Example DCN deployment with a central site and two edge sites:
$ cat << EOF > glance_dcn_patch.yaml spec: glance: enabled: true template: databaseInstance: openstack databaseAccount: glance keystoneEndpoint: central storage: storageRequest: <10G> glanceAPIs: central: type: split replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer networkAttachments: - storage customServiceConfig: | [DEFAULT] enabled_import_methods = [web-download,copy-image,glance-direct] enabled_backends = central:rbd,dcn1:rbd,dcn2:rbd [glance_store] default_backend = central [central] rbd_store_ceph_conf = /etc/ceph/central.conf store_description = "Central RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True [dcn1] rbd_store_ceph_conf = /etc/ceph/dcn1.conf store_description = "DCN1 RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True [dcn2] rbd_store_ceph_conf = /etc/ceph/dcn2.conf store_description = "DCN2 RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True dcn1: type: edge replicas: 2 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.81> spec: type: LoadBalancer networkAttachments: - storage customServiceConfig: | [DEFAULT] enabled_import_methods = [web-download,copy-image,glance-direct] enabled_backends = central:rbd,dcn1:rbd [glance_store] default_backend = dcn1 [central] rbd_store_ceph_conf = /etc/ceph/central.conf store_description = "Central RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True [dcn1] rbd_store_ceph_conf = /etc/ceph/dcn1.conf store_description = "DCN1 RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True dcn2: type: 
edge replicas: 2 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.82> spec: type: LoadBalancer networkAttachments: - storage customServiceConfig: | [DEFAULT] enabled_import_methods = [web-download,copy-image,glance-direct] enabled_backends = central:rbd,dcn2:rbd [glance_store] default_backend = dcn2 [central] rbd_store_ceph_conf = /etc/ceph/central.conf store_description = "Central RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True [dcn2] rbd_store_ceph_conf = /etc/ceph/dcn2.conf store_description = "DCN2 RBD backend" rbd_store_pool = images rbd_store_user = openstack rbd_thin_provisioning = True EOFwhere:
- <172.17.0.80>
Specifies the load balancer IP for the central Image service API.
- <172.17.0.81>
Specifies the load balancer IP for the DCN1 edge Image service API.
- <172.17.0.82>
Specifies the load balancer IP for the DCN2 edge Image service API.
You must configure the Compute nodes at each site to use their local Image service endpoints. For example, Compute nodes at central use 172.17.0.80, Compute nodes at dcn1 use 172.17.0.81, and Compute nodes at dcn2 use 172.17.0.82. This configuration is applied when you adopt the data plane by adding a per-site ConfigMap with the
glance_api_servers setting to each OpenStackDataPlaneNodeSet. For more information, see Adopting Compute services to the data plane.
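As an illustration, a per-site ConfigMap for dcn1 might look like the following sketch. The ConfigMap name, file name, and endpoint URL are assumptions for this example; the authoritative procedure for attaching it to an OpenStackDataPlaneNodeSet is in the data plane adoption documentation:

```shell
# Hypothetical per-site Compute configuration pointing dcn1 nodes at their
# local Image service endpoint. Names and the URL are examples only.
oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: glance-dcn1-endpoint
  namespace: openstack
data:
  03-glance-dcn1.conf: |
    [glance]
    api_servers = http://glance-dcn1-internal.openstack.svc:9292
EOF
```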
Note
- The central GlanceAPI uses type: split and has access to all Red Hat Ceph Storage clusters. The keystoneEndpoint: central setting registers this API as the public endpoint in the Identity service.
- Each edge GlanceAPI uses type: edge and has access to its local Red Hat Ceph Storage cluster plus the central cluster. This enables image copying between sites.
- Set the storageRequest PVC size based on the storage requirements of each edge site.
- Adjust the number of edge sites and their names to match your DCN deployment.
Patch the
OpenStackControlPlane CR to deploy the Image service with multiple Red Hat Ceph Storage back ends:$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_dcn_patch.yaml
Verify that the Image service stores are available for each site:
$ glance stores-info +----------+----------------------------------------------------------------------------------+ | Property | Value | +----------+----------------------------------------------------------------------------------+ | stores | [{"id": "central", "description": "Central RBD backend", "default": "true"}, | | | {"id": "dcn1", "description": "dcn1 RBD backend"}, {"id": "dcn2", "description": | | | "dcn2 RBD backend"}] | +----------+----------------------------------------------------------------------------------+
The output should list one store for each Red Hat Ceph Storage back end configured in the central
GlanceAPI, and the central store should be marked as the default. If any stores are missing, check the customServiceConfig in the glanceAPIs section of the patch and verify that the Red Hat Ceph Storage configuration files are present in the ceph-conf-central secret.
Verify that image import methods include
copy-image, which is required for copying images between stores:$ glance import-info +----------------+----------------------------------------------------------------------------------+ | Property | Value | +----------------+----------------------------------------------------------------------------------+ | import-methods | {"description": "Import methods available.", "type": "array", "value": ["web- | | | download", "copy-image", "glance-direct"]} | +----------------+----------------------------------------------------------------------------------+
Upload a test image to the central store. Note the image ID:
$ glance image-create --disk-format raw --container-format bare --name test-image \ --file <image-file> --store central
Verify that the image ID from the previous command is shown in the central Red Hat Ceph Storage cluster’s
images pool:$ sudo cephadm shell --config /etc/ceph/central.conf --keyring /etc/ceph/central.client.openstack.keyring \ -- rbd -p images --cluster central ls -l NAME SIZE PARENT FMT PROT LOCK <image-id> 20 MiB 2
Copy the image to an edge site using the
copy-image import method:$ glance image-import <image-id> --stores dcn1 --import-method copy-image
After the import completes, verify that the
stores field on the image now includes both central and dcn1:$ glance image-show <image-id> | grep stores | stores | central,dcn1 |
Verify that the image was copied to the DCN1 Red Hat Ceph Storage cluster:
$ sudo cephadm shell --config /etc/ceph/dcn1.conf --keyring /etc/ceph/dcn1.client.openstack.keyring \ -- rbd -p images --cluster dcn1 ls -l NAME SIZE PARENT FMT PROT LOCK <image-id> 20 MiB 2
The image is now present on the DCN1 Red Hat Ceph Storage cluster, confirming that the Image service can copy images between sites. Repeat the
glance image-import command for each additional edge site to distribute the image to all DCN locations.
4.8.6. Verifying the Image service adoption
Verify that you adopted the Image Service (glance) to the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment.
Procedure
Test the Image service from the Red Hat OpenStack Platform CLI. You can compare and ensure that the configuration is applied to the Image service pods:
$ os-diff diff /etc/glance/glance.conf.d/02-config.conf glance_patch.yaml --frompod -p glance-api
If no line appears, then the configuration is correct.
Inspect the resulting Image service pods:
GLANCE_POD=`oc get pod |grep glance-default | cut -f 1 -d' ' | head -n 1` oc exec -t $GLANCE_POD -c glance-api -- cat /etc/glance/glance.conf.d/02-config.conf [DEFAULT] enabled_backends=default_backend:rbd [glance_store] default_backend=default_backend [default_backend] rbd_store_ceph_conf=/etc/ceph/ceph.conf rbd_store_user=openstack rbd_store_pool=images store_description=Ceph glance store backend.
If you use a Red Hat Ceph Storage back end, ensure that the Red Hat Ceph Storage secrets are mounted:
$ oc exec -t $GLANCE_POD -c glance-api -- ls /etc/ceph ceph.client.openstack.keyring ceph.conf
Check that the service is active, and that the endpoints are updated in the RHOSP CLI:
$ oc rsh openstackclient $ openstack service list | grep image | fc52dbffef36434d906eeb99adfc6186 | glance | image | $ openstack endpoint list | grep image | 569ed81064f84d4a91e0d2d807e4c1f1 | regionOne | glance | image | True | internal | http://glance-internal-openstack.apps-crc.testing | | 5843fae70cba4e73b29d4aff3e8b616c | regionOne | glance | image | True | public | http://glance-public-openstack.apps-crc.testing |
Check that the images that you previously listed in the source cloud are available in the adopted service:
$ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | c3158cad-d50b-452f-bec1-f250562f5c1f | cirros | active | +--------------------------------------+--------+--------+
4.9. Adopting the Placement service
To adopt the Placement service, you patch an existing OpenStackControlPlane custom resource (CR) that has the Placement service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
Prerequisites
- You import your databases to MariaDB instances on the control plane. For more information, see Migrating databases to MariaDB instances.
- You adopt the Identity service (keystone). For more information, see Adopting the Identity service.
Procedure
Patch the
OpenStackControlPlane CR to deploy the Placement service:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: placement: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: placement secret: osp-secret override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer '
where:
- <172.17.0.80>
-
Specifies the load balancer IP in your environment. If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Verification
Check that the Placement service endpoints are defined and pointing to the control plane FQDNs, and that the Placement API responds:
$ alias openstack="oc exec -t openstackclient -- openstack" $ openstack endpoint list | grep placement # Without OpenStack CLI placement plugin installed: $ PLACEMENT_PUBLIC_URL=$(openstack endpoint list -c 'Service Name' -c 'Service Type' -c URL | grep placement | grep public | awk '{ print $6; }') $ oc exec -t openstackclient -- curl "$PLACEMENT_PUBLIC_URL" # With OpenStack CLI placement plugin installed: $ openstack resource class list
4.10. Adopting the Bare Metal Provisioning service
Review information about your Bare Metal Provisioning service (ironic) configuration and then adopt the Bare Metal Provisioning service to the Red Hat OpenStack Services on OpenShift control plane.
4.10.1. Bare Metal Provisioning service configurations
You configure the Bare Metal Provisioning service (ironic) by using configuration snippets. For more information about configuring the control plane with the Bare Metal Provisioning service, see Customizing the Red Hat OpenStack Services on OpenShift deployment.
Some Bare Metal Provisioning service configuration is overridden in director, for example, PXE Loader file names are often overridden at intermediate layers. You must pay attention to the settings you apply in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The ironic-operator applies a reasonable working default configuration, but if you override it with your prior configuration, your experience might not be ideal or your new Bare Metal Provisioning service might fail to operate. Similarly, additional configuration might be necessary, for example, if you enabled and used additional hardware types in your ironic.conf file.
The model of reasonable defaults includes commonly used hardware-types and driver interfaces. For example, the redfish-virtual-media boot interface and the ramdisk deploy interface are enabled by default. If you add new bare metal nodes after the adoption is complete, the driver interface selection occurs based on the order of precedence in the configuration if you do not explicitly set it on the node creation request or as an established default in the ironic.conf file.
Some configuration parameters, for example network UUID values, do not need to be set on an individual node level because they are centrally configured in the ironic.conf file, where the setting controls security behavior.
It is critical that you carry the following parameters, formatted as [section] and parameter name, from the prior deployment to the new deployment. These parameters govern the underlying behavior of the service, and any values that were set in the previous configuration must be preserved.
- [neutron]cleaning_network
- [neutron]provisioning_network
- [neutron]rescuing_network
- [neutron]inspection_network
- [conductor]automated_clean
- [deploy]erase_devices_priority
- [deploy]erase_devices_metadata_priority
- [conductor]force_power_state_during_sync
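A hedged sketch of how these settings can be carried forward: reproduce them in the conductor customServiceConfig in the same [section] and parameter form they had in the source ironic.conf. All values here are placeholders, not recommendations; copy the real values from your prior deployment.

```yaml
# Placeholder values for illustration only; copy the real values from your
# source ironic.conf.
ironicConductors:
  - customServiceConfig: |
      [neutron]
      cleaning_network=<cleaning_network_uuid>
      provisioning_network=<provisioning_network_uuid>
      [conductor]
      automated_clean=true
      force_power_state_during_sync=false
      [deploy]
      erase_devices_priority=0
      erase_devices_metadata_priority=10
```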
You can set the following parameters individually on a node. However, you might choose to use embedded configuration options to avoid the need to set the parameters individually when creating or managing bare metal nodes. Check your prior ironic.conf file for these parameters, and if set, apply a specific override configuration.
- [conductor]bootloader
- [conductor]rescue_ramdisk
- [conductor]rescue_kernel
- [conductor]deploy_kernel
- [conductor]deploy_ramdisk
The kernel_append_params settings, formerly pxe_append_params, in the [pxe] and [redfish] configuration sections apply boot-time options, such as console, to the deployment ramdisk, and therefore often must be changed.
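For example, a serial-console setting carried over from a prior pxe_append_params value might look like the following. The option string is an assumed example, not a default; use the values from your prior deployment.

```ini
# Assumed example console options; substitute your prior pxe_append_params value.
[pxe]
kernel_append_params = nofb nomodeset console=tty0 console=ttyS0,115200
[redfish]
kernel_append_params = nofb nomodeset console=tty0 console=ttyS0,115200
```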
You cannot migrate hardware types that are set with the enabled_hardware_types parameter in the ironic.conf file, or hardware type driver interfaces that start with staging-, into the adopted configuration.
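Before adopting, you can check the downloaded ironic.conf for these non-migratable settings. A minimal sketch, using assumed sample file content in place of your real file:

```shell
# Stand-in for the downloaded ironic.conf; substitute your real file.
cat > /tmp/ironic.conf.sample <<'EOF'
[DEFAULT]
enabled_hardware_types = ipmi,redfish,staging-example
EOF
# List the configured hardware types, one per line; review any staging- entries.
awk -F' *= *' '/^enabled_hardware_types/ {print $2}' /tmp/ironic.conf.sample \
  | tr ',' '\n'
```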
4.10.2. Deploying the Bare Metal Provisioning service
To deploy the Bare Metal Provisioning service (ironic), you patch an existing OpenStackControlPlane custom resource (CR) that has the Bare Metal Provisioning service disabled. The ironic-operator applies the configuration and starts the Bare Metal Provisioning services. After the services are running, the Bare Metal Provisioning service automatically begins polling the power state of the bare-metal nodes that it manages.
By default, RHOSO versions 18.0 and later of the Bare Metal Provisioning service include a new multi-tenant aware role-based access control (RBAC) model. As a result, bare-metal nodes might be missing when you run the openstack baremetal node list command after you adopt the Bare Metal Provisioning service. Your nodes are not deleted. Due to the increased access restrictions in the RBAC model, you must identify which project owns the missing bare-metal nodes and set the owner field on each missing bare-metal node.
Prerequisites
- You have imported the service databases into the control plane database.
The Bare Metal Provisioning service is disabled in the RHOSO 18.0 control plane. The following command must return the string
false:$ oc get openstackcontrolplanes.core.openstack.org <name> -o jsonpath='{.spec.ironic.enabled}'-
Replace
<name>with the name of your existingOpenStackControlPlaneCR, for example,openstack-control-plane.
The Identity service (keystone), Networking service (neutron), and Image Service (glance) are operational.
NoteIf you use the Bare Metal Provisioning service in a Bare Metal as a Service configuration, do not adopt the Compute service (nova) before you adopt the Bare Metal Provisioning service.
- For the Bare Metal Provisioning service conductor services, the services must be able to reach Baseboard Management Controllers of hardware that is configured to be managed by the Bare Metal Provisioning service. If this hardware is unreachable, the nodes might enter "maintenance" state and be unavailable until connectivity is restored later.
You have downloaded the
ironic.conffile locally:$CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/ironic/etc/ironic/ironic.conf > ironic.confNoteThis configuration file must come from one of the Controller nodes and not a director undercloud node. The director undercloud node operates with different configuration that does not apply when you adopt the Overcloud Ironic deployment.
-
If you are adopting the Ironic Inspector service, you need the value of the
IronicInspectorSubnetsdirector parameter. Use the same values to populate thedhcpRangesparameter in the RHOSO environment. You have defined the following shell variables. Replace the following example values with values that apply to your environment:
$ alias openstack="oc exec -t openstackclient -- openstack"
Procedure
Patch the
OpenStackControlPlanecustom resource (CR) to deploy the Bare Metal Provisioning service:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: ironic: enabled: true template: rpcTransport: oslo databaseInstance: openstack ironicAPI: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <loadBalancer_IP> spec: type: LoadBalancer ironicConductors: - replicas: 1 networkAttachments: - baremetal provisionNetwork: baremetal storageRequest: 10G customServiceConfig: | [neutron] cleaning_network=<cleaning network uuid> provisioning_network=<provisioning network uuid> rescuing_network=<rescuing network uuid> inspection_network=<introspection network uuid> [conductor] automated_clean=true ironicInspector: replicas: 1 inspectionNetwork: baremetal networkAttachments: - baremetal dhcpRanges: - name: inspector-0 cidr: 172.20.1.0/24 start: 172.20.1.190 end: 172.20.1.199 gateway: 172.20.1.1 serviceUser: ironic-inspector databaseAccount: ironic-inspector passwordSelectors: database: IronicInspectorDatabasePassword service: IronicInspectorPassword ironicNeutronAgent: replicas: 1 messagingBus: cluster: rabbitmq secret: osp-secret 'where:
<loadBalancer_IP>-
Specifies the load balancer IP in your environment. If you use IPv6, specify the IPv6 load balancer IP for your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80. messagingBus.Cluster- For more information about RHOSO RabbitMQ clusters, see RHOSO RabbitMQ clusters in Monitoring high availability services.
Wait for the Bare Metal Provisioning service control plane services CRs to become ready:
$ oc wait --for condition=Ready --timeout=300s ironics.ironic.openstack.org ironicVerify that the individual services are ready:
$ oc wait --for condition=Ready --timeout=300s ironicapis.ironic.openstack.org ironic-api $ oc wait --for condition=Ready --timeout=300s ironicconductors.ironic.openstack.org ironic-conductor $ oc wait --for condition=Ready --timeout=300s ironicinspectors.ironic.openstack.org ironic-inspector $ oc wait --for condition=Ready --timeout=300s ironicneutronagents.ironic.openstack.org ironic-ironic-neutron-agentUpdate the DNS Nameservers on the provisioning, cleaning, and rescue networks:
NoteFor name resolution to work for Bare Metal Provisioning service operations, you must set the DNS nameserver to use the internal DNS servers in the RHOSO control plane:
$ openstack subnet set --dns-nameserver 192.168.122.80 provisioning-subnetVerify that no Bare Metal Provisioning service nodes are missing from the node list:
$ openstack baremetal node listImportantIf the
openstack baremetal node list command output reports an incorrect power status, wait a few minutes and re-run the command to see if the output syncs with the actual state of the hardware being managed. The time that the Bare Metal Provisioning service requires to review and reconcile the power state of bare-metal nodes depends on the number of operating conductors, which you set with the replicas parameter, in the Bare Metal Provisioning service deployment being adopted. If any Bare Metal Provisioning service nodes are missing from the
openstack baremetal node listcommand, temporarily disable the new RBAC policy to see the nodes again:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=false enforce_new_defaults=false 'After this configuration is applied, the operator restarts the Ironic API service and disables the new RBAC policy that is enabled by default.
View the bare-metal nodes that do not have an owner assigned:
$ openstack baremetal node list --long -c UUID -c Owner -c 'Provisioning State'Assign all bare-metal nodes with no owner to a new project, for example, the admin project:
ADMIN_PROJECT_ID=$(openstack project show -c id -f value --domain default admin) for node in $(openstack baremetal node list -f json -c UUID -c Owner | jq -r '.[] | select(.Owner == null) | .UUID'); do openstack baremetal node set --owner $ADMIN_PROJECT_ID $node; doneRe-apply the default RBAC by removing the
customServiceConfigsection or by setting the following values in thecustomServiceConfigsection totrue. For example:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=true enforce_new_defaults=true '
Verification
Verify the list of endpoints:
$ openstack endpoint list |grep ironicVerify the list of bare-metal nodes:
$ openstack baremetal node listReset the deploy images on all bare-metal nodes to use the new centrally configured images:
NoteAfter adoption, bare-metal nodes might still reference the old deployment’s kernel and ramdisk images in their
driver_infofields. Resetting these values causes the Bare Metal Provisioning service to use the new centrally configureddeploy_kernelanddeploy_ramdiskvalues from theironic.conffile.for node in $(openstack baremetal node list -c UUID -f value); do openstack baremetal node set $node \ --driver-info deploy_ramdisk= \ --driver-info deploy_kernel= done
4.11. Adopting the Compute service
To adopt the Compute service (nova), you patch an existing OpenStackControlPlane custom resource (CR) where the Compute service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The following procedure describes a single-cell setup.
Prerequisites
- You have completed the previous adoption steps.
You have defined the following shell variables. Replace the following example values with the values that are correct for your environment:
alias openstack="oc exec -t openstackclient -- openstack" DEFAULT_CELL_NAME="cell3" RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"-
The source cloud
defaultcell takes a new$DEFAULT_CELL_NAME. In a multi-cell adoption scenario, the default cell might retain its original name,DEFAULT_CELL_NAME=default, or become renamed as a cell that is free for use. Do not use other existing cell names forDEFAULT_CELL_NAME, except fordefault. If you deployed the source cloud with a
defaultcell, and want to rename it during adoption, define the new name that you want to use, as shown in the following example:DEFAULT_CELL_NAME="cell1" RENAMED_CELLS="cell1"
Procedure
Patch the
OpenStackControlPlaneCR to deploy the Compute service:NoteThis procedure assumes that Compute service metadata is deployed on the top level and not on each cell level. If the RHOSP deployment has a per-cell metadata deployment, adjust the following patch as needed. You cannot run the metadata service in
cell0. To enable the metadata services of a local cell, set theenabledproperty in themetadataServiceTemplatefield of the local cell totruein theOpenStackControlPlaneCR.$ rm -f celltemplates $ for CELL in $(echo $RENAMED_CELLS); do > cat >> celltemplates << EOF > ${CELL}: > hasAPIAccess: true > cellDatabaseAccount: nova-$CELL > cellDatabaseInstance: openstack-$CELL > cellMessageBusInstance: rabbitmq-$CELL > metadataServiceTemplate: > enabled: false > override: > service: > metadata: > annotations: > metallb.universe.tf/address-pool: internalapi > metallb.universe.tf/allow-shared-ip: internalapi > metallb.universe.tf/loadBalancerIPs: 172.17.0.$(( 79 + ${CELL##*cell} )) > spec: > type: LoadBalancer > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true > conductorServiceTemplate: > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true >EOF >done $ cat > oscp-patch.yaml << EOF >spec: > nova: > enabled: true > apiOverride: > route: {} > template: > secret: osp-secret > apiDatabaseAccount: nova-api > apiServiceTemplate: > override: > service: > internal: > metadata: > annotations: > metallb.universe.tf/address-pool: internalapi > metallb.universe.tf/allow-shared-ip: internalapi > metallb.universe.tf/loadBalancerIPs: <172.17.0.80> > spec: > type: LoadBalancer > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true > metadataServiceTemplate: > enabled: true > override: > service: > metadata: > annotations: > metallb.universe.tf/address-pool: internalapi > metallb.universe.tf/allow-shared-ip: internalapi > metallb.universe.tf/loadBalancerIPs: <172.17.0.80> > spec: > type: LoadBalancer > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true > schedulerServiceTemplate: > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true > cellTemplates: > cell0: > hasAPIAccess: true > cellDatabaseAccount: nova-cell0 > cellDatabaseInstance: 
openstack > cellMessageBusInstance: rabbitmq > conductorServiceTemplate: > customServiceConfig: | > [workarounds] > disable_compute_service_check_for_ffu=true >EOF $ cat celltemplates >> oscp-patch.yaml $ oc patch openstackcontrolplane openstack --type=merge --patch-file=oscp-patch.yaml-
${CELL}.hasAPIAccessspecifies upcall access to the API. In the source cloud, cells are always configured with the main Nova API database upcall access. You can disable upcall access to the API by settinghasAPIAccesstofalse. However, do not make changes to the API during adoption. -
${CELL}.cellDatabaseInstance specifies the database instance that is used by the cell. The database instance names must match the names that are defined in the OpenStackControlPlane CR that you created when you deployed the back-end services, as described in Deploying back-end services.
${CELL}.cellMessageBusInstancespecifies the message bus instance that is used by the cell. The message bus instance names must match the names that are defined in theOpenStackControlPlaneCR. -
metallb.universe.tf/loadBalancerIPs: <172.17.0.80> specifies the load balancer IP in your environment. If you use IPv6, specify the IPv6 load balancer IP for your environment, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
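The per-cell metadata loadBalancerIPs value in the heredoc above is derived from the cell name. A minimal sketch of that arithmetic, assuming the example 172.17.0.x addressing:

```shell
# cellN gets metadata loadBalancerIP 172.17.0.(79+N), so cell1 -> .80.
cell_lb_ip() {
  local cell=$1
  # Strip the leading "cell" prefix to get N, then add 79.
  echo "172.17.0.$(( 79 + ${cell##*cell} ))"
}
cell_lb_ip cell1   # 172.17.0.80
cell_lb_ip cell2   # 172.17.0.81
```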
-
If you are adopting the Compute service with the Bare Metal Provisioning service (ironic), append the
novaComputeTemplatesfield with the following content in each cell in the Compute service CR patch. For example:cell1: novaComputeTemplates: standalone: customServiceConfig: | [DEFAULT] host = <hostname> [workarounds] disable_compute_service_check_for_ffu=true computeDriver: ironic.IronicDriver ...-
Replace
<hostname>with the hostname of the node that is running theironicCompute driver in the source cloud.
Wait for the CRs for the Compute control plane services to be ready:
$ oc wait --for condition=Ready --timeout=300s Nova/novaNoteThe local Conductor services are started for each cell, while the superconductor runs in
cell0. Note thatdisable_compute_service_check_for_ffuis mandatory for all imported Compute services until the external data plane is imported, and until the Compute services are fast-forward upgraded. For more information, see Adopting Compute services to the RHOSO data plane and Performing a fast-forward upgrade on Compute services.
Verification
Check that Compute service endpoints are defined and pointing to the control plane FQDNs, and that the Nova API responds:
$ openstack endpoint list | grep nova $ openstack server list- Compare the outputs with the topology-specific configuration in Retrieving topology-specific service configuration.
Query the superconductor to check that the expected cells exist, and compare it to its pre-adoption values:
for CELL in $(echo $CELLS); do set +u . ~/.source_cloud_exported_variables_$CELL set -u RCELL=$CELL [ "$CELL" = "default" ] && RCELL=$DEFAULT_CELL_NAME echo "comparing $CELL to $RCELL" echo $PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS | grep -F "| $CELL |" oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_cells | grep -F "| $RCELL |" doneThe following changes are expected for each cell:
-
The
cellXnovadatabase and username becomenova_cellX. -
The
defaultcell is renamed toDEFAULT_CELL_NAME. Thedefaultcell might retain the original name if there are multiple cells. -
The RabbitMQ transport URL no longer uses
guest.
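The CELL-to-RCELL renaming used by the comparison loop above can be sketched as a small helper; DEFAULT_CELL_NAME=cell3 is an assumed example value:

```shell
DEFAULT_CELL_NAME="cell3"   # example value; use the name you chose earlier
# Only the "default" cell is renamed during adoption; other cells keep their names.
rename_cell() {
  if [ "$1" = "default" ]; then
    echo "$DEFAULT_CELL_NAME"
  else
    echo "$1"
  fi
}
rename_cell default   # cell3
rename_cell cell1     # cell1
```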
At this point, the Compute service control plane services do not control the existing Compute service workloads. The control plane manages the data plane only after the data adoption process is completed. For more information, see Adopting Compute services to the RHOSO data plane.
To import external Compute services to the RHOSO data plane, you must upgrade them first. For more information, see Adopting Compute services to the RHOSO data plane, and Performing a fast-forward upgrade on Compute services.
4.12. Adopting the Block Storage service
To adopt a director-deployed Block Storage service (cinder), create the manifest based on the existing cinder.conf file, deploy the Block Storage service, and validate the new deployment.
Prerequisites
- You have reviewed the Block Storage service limitations. For more information, see Limitations for adopting the Block Storage service.
- You have planned the placement of the Block Storage services.
- You have prepared the Red Hat OpenShift Container Platform (RHOCP) nodes where the volume and backup services run. For more information, see RHOCP preparation for Block Storage service adoption.
- The Block Storage service (cinder) is stopped.
- The service databases are imported into the control plane MariaDB.
- The Identity service (keystone) is adopted.
- If your Red Hat OpenStack Platform 17.1 deployment included the Key Manager service (barbican), the Key Manager service is adopted.
- The Storage network is correctly configured on the RHOCP cluster.
The contents of
cinder.conffile. Download the file so that you can access it locally:$CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf > cinder.conf
Procedure
Create a new file, for example,
cinder_api.patch, and apply the configuration:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>Replace
<patch_name>with the name of your patch file.The following example shows a
cinder_api.patchfile:spec: extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true propagation: - CinderVolume - CinderBackup - Glance volumes: - name: ceph projected: sources: - secret: name: ceph-conf-files cinder: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: cinder secret: osp-secret cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: <172.17.0.80> spec: type: LoadBalancer replicas: 1 customServiceConfig: | [DEFAULT] default_volume_type=tripleo cinderScheduler: replicas: 0 cinderBackup: networkAttachments: - storage replicas: 0 cinderVolumes: ceph: networkAttachments: - storage replicas: 0where:
- <172.17.0.80>
-
Specifies the load balancer IP in your environment. If you use IPv6, specify the IPv6 load balancer IP for your environment, for example,
metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Retrieve the list of the previous scheduler and backup services:
$ openstack volume service list +------------------+------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+------------------------+------+---------+-------+----------------------------+ | cinder-scheduler | standalone.localdomain | nova | enabled | down | 2024-11-04T17:47:14.000000 | | cinder-backup | standalone.localdomain | nova | enabled | down | 2024-11-04T17:47:14.000000 | | cinder-volume | hostgroup@tripleo_ceph | nova | enabled | down | 2024-11-04T17:47:14.000000 | +------------------+------------------------+------+---------+-------+----------------------------+Remove services for hosts that are in the
downstate:$ oc exec -t cinder-api-0 -c cinder-api -- cinder-manage service remove <service_binary> <service_host>-
Replace
<service_binary>with the name of the binary, for example,cinder-backup. -
Replace
<service_host>with the host name, for example,cinder-backup-0.
Deploy the scheduler, backup, and volume services:
Create another file, for example,
cinder_services.patch, and apply the configuration:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>-
Replace
<patch_name>with the name of your patch file. The following example shows a
cinder_services.patchfile for a Ceph RBD deployment:spec: cinder: enabled: true template: cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] backup_driver=cinder.backup.drivers.ceph.CephBackupDriver backup_ceph_conf=/etc/ceph/ceph.conf backup_ceph_user=openstack backup_ceph_pool=backups cinderVolumes: ceph: networkAttachments: - storage replicas: 1 customServiceConfig: | [tripleo_ceph] backend_host=hostgroup volume_backend_name=tripleo_ceph volume_driver=cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf=/etc/ceph/ceph.conf rbd_user=openstack rbd_pool=volumes rbd_flatten_volume_from_snapshot=False report_discard_supported=TrueNoteEnsure that you use the same configuration group name for the driver that you used in the source cluster. In this example, the driver configuration group in
customServiceConfigis calledtripleo_cephbecause it reflects the value of the configuration group name in thecinder.conffile of the source OpenStack cluster.
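To confirm the driver configuration group name in your source cinder.conf, you can read it from the enabled_backends setting. A minimal sketch, with assumed sample content standing in for your downloaded file:

```shell
# Stand-in for the downloaded cinder.conf; substitute your real file.
cat > /tmp/cinder.conf.sample <<'EOF'
[DEFAULT]
enabled_backends = tripleo_ceph
[tripleo_ceph]
volume_backend_name = tripleo_ceph
EOF
# Print the backend configuration group name(s) listed in enabled_backends.
awk -F' *= *' '/^enabled_backends/ {print $2}' /tmp/cinder.conf.sample
```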
Configure the NetApp NFS Block Storage volume service:
Create a secret that includes sensitive information such as hostnames, passwords, and usernames to access the third-party NetApp NFS storage. You can find the credentials in the
cinder.conffile that was generated from the director deployment:$ oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: labels: service: cinder component: cinder-volume name: cinder-volume-ontap-secrets type: Opaque stringData: ontap-cinder-secrets: | [tripleo_netapp] netapp_login= netapp_username netapp_password= netapp_password netapp_vserver= netapp_vserver nas_host= netapp_nfsip nas_share_path=/netapp_nfspath netapp_pool_name_search_pattern=(netapp_poolpattern) EOFPatch the
OpenStackControlPlaneCR to deploy NetApp NFS Block Storage volume back end:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<cinder_netappNFS.patch>Replace
<cinder_netappNFS.patch>with the name of the patch file for your NetApp NFS Block Storage volume back end.The following example shows a
cinder_netappNFS.patchfile that configures a NetApp NFS Block Storage volume service:spec: cinder: enabled: true template: cinderVolumes: ontap-nfs: networkAttachments: - storage customServiceConfig: | [tripleo_netapp] volume_backend_name=ontap-nfs volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver nfs_snapshot_support=true nas_secure_file_operations=false nas_secure_file_permissions=false netapp_server_hostname= netapp_backendip netapp_server_port=80 netapp_storage_protocol=nfs netapp_storage_family=ontap_cluster customServiceConfigSecrets: - cinder-volume-ontap-secrets
Configure the NetApp iSCSI Block Storage volume service:
Create a secret that includes sensitive information such as hostnames, passwords, and usernames to access the third-party NetApp iSCSI storage. You can find the credentials in the
cinder.conffile that was generated from the director deployment:$ oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: labels: service: cinder component: cinder-volume name: cinder-volume-ontap-secrets type: Opaque stringData: ontap-cinder-secrets: | [tripleo_netapp] netapp_server_hostname = netapp_host netapp_login = netapp_username netapp_password = netapp_password netapp_vserver = netapp_vserver netapp_pool_name_search_pattern=(netapp_poolpattern) EOF
Patch the
OpenStackControlPlanecustom resource (CR) to deploy the NetApp iSCSI Block Storage volume back end:$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<cinder_netappISCSI.patch>Replace
<cinder_netappISCSI.patch>with the name of the patch file for your NetApp iSCSI Block Storage volume back end.The following example shows a
cinder_netappISCSI.patchfile that configures a NetApp iSCSI Block Storage volume service:spec: cinder: enabled: true template: cinderVolumes: ontap-iscsi: networkAttachments: - storage customServiceConfig: | [tripleo_netapp] volume_backend_name=ontap-iscsi volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver netapp_storage_protocol=iscsi netapp_storage_family=ontap_cluster consistencygroup_support=True customServiceConfigSecrets: - cinder-volume-ontap-secrets
Check if all the services are up and running:
$ openstack volume service list +------------------+--------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+--------------------------+------+---------+-------+----------------------------+ | cinder-volume | hostgroup@tripleo_netapp | nova | enabled | up | 2023-06-28T17:00:03.000000 | | cinder-scheduler | cinder-scheduler-0 | nova | enabled | up | 2023-06-28T17:00:02.000000 | | cinder-backup | cinder-backup-0 | nova | enabled | up | 2023-06-28T17:00:01.000000 | +------------------+--------------------------+------+---------+-------+----------------------------+Apply the DB data migrations:
NoteYou are not required to run the data migrations at this step, but you must run them before the next upgrade. However, for adoption, you can run the migrations now to ensure that there are no issues before you run production workloads on the deployment.
$ oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations
Verification
Ensure that the
openstackalias is defined:$ alias openstack="oc exec -t openstackclient -- openstack"Confirm that Block Storage service endpoints are defined and pointing to the control plane FQDNs:
$ openstack endpoint list --service <endpoint>-
Replace
<endpoint>with the name of the endpoint that you want to confirm.
Confirm that the Block Storage services are running:
$ openstack volume service listNoteCinder API services do not appear in the list. However, if you get a response from the
openstack volume service listcommand, that means at least one of the cinder API services is running.Confirm that you have your previous volume types, volumes, snapshots, and backups:
$ openstack volume type list $ openstack volume list $ openstack volume snapshot list $ openstack volume backup listTo confirm that the configuration is working, perform the following steps:
Create a volume from an image to check that the connection to Image Service (glance) is working:
$ openstack volume create --image cirros --bootable --size 1 disk_newBack up the previous attached volume:
$ openstack --os-volume-api-version 3.47 volume create --backup <backup_name>Replace
<backup_name> with the name of your new backup location. NoteDo not boot a Compute service (nova) instance by using the new volume that you created from the image, or try to detach the previous volume, because the Compute service and the Block Storage service are not yet connected.
4.13. Adopt the Block Storage service with multiple Red Hat Ceph Storage back ends (DCN)
Adopt the Block Storage service (cinder) in a Distributed Compute Node (DCN) deployment where multiple Red Hat Ceph Storage clusters provide storage at different sites. You can deploy multiple CinderVolume instances, one for each availability zone, with each volume service configured to use its local Red Hat Ceph Storage cluster.
During adoption, the Block Storage service volume services that ran on edge site Compute nodes are migrated to run on Red Hat OpenShift Container Platform (RHOCP) at the central site. Although the control path for API requests now traverses the WAN to reach the Block Storage service running on Red Hat OpenShift Container Platform (RHOCP), the data path remains local. Volume data continues to be stored in the Red Hat Ceph Storage cluster at each edge site. When you create a volume or clone a volume from a snapshot, the operation occurs entirely within the local Red Hat Ceph Storage cluster. This preserves data locality.
Prerequisites
- You have completed the previous adoption steps.
-
The per-site Red Hat Ceph Storage secrets (
ceph-conf-central,ceph-conf-dcn1,ceph-conf-dcn2) exist and contain the configuration and keyrings for each site’s Red Hat Ceph Storage cluster. For more information, see Configuring a Red Hat Ceph Storage back end. -
The
extraMountsproperty of theOpenStackControlPlanecustom resource (CR) is configured to mount the Red Hat Ceph Storage configuration to all Block Storage service instances. -
You have stopped the Block Storage service on all DCN nodes. For more information, see Stopping Red Hat OpenStack Platform services. On edge sites, the Block Storage service volume service runs on Compute nodes with the service name
tripleo_cinder_volume.service.
Procedure
Retrieve the
fsidfor each Red Hat Ceph Storage cluster in your DCN deployment. Thefsidis used as therbd_secret_uuidfor libvirt integration:$ oc get secret ceph-conf-central -o json | jq -r '.data | to_entries[] | select(.key | endswith(".conf")) | "\(.key): \(.value | @base64d)"' | grep fsidCreate a patch file for the Block Storage service with multiple Red Hat Ceph Storage back ends. The following example shows a DCN deployment with a central site and two edge sites:
$ cat << EOF > cinder_dcn_patch.yaml spec: cinder: enabled: true template: cinderAPI: customServiceConfig: | [DEFAULT] default_availability_zone = az-central cinderScheduler: replicas: 1 cinderVolumes: central: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] enabled_backends = central glance_api_servers = http://glance-central-internal.openstack.svc:9292 [central] backend_host = hostgroup volume_backend_name = central volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf = /etc/ceph/central.conf rbd_user = openstack rbd_pool = volumes rbd_flatten_volume_from_snapshot = False report_discard_supported = True rbd_secret_uuid = <central_fsid> rbd_cluster_name = central backend_availability_zone = az-central dcn1: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] enabled_backends = dcn1 glance_api_servers = http://glance-dcn1-internal.openstack.svc:9292 [dcn1] backend_host = hostgroup volume_backend_name = dcn1 volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf = /etc/ceph/dcn1.conf rbd_user = openstack rbd_pool = volumes rbd_flatten_volume_from_snapshot = False report_discard_supported = True rbd_secret_uuid = <dcn1_fsid> rbd_cluster_name = dcn1 backend_availability_zone = az-dcn1 dcn2: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] enabled_backends = dcn2 glance_api_servers = http://glance-dcn2-internal.openstack.svc:9292 [dcn2] backend_host = hostgroup volume_backend_name = dcn2 volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf = /etc/ceph/dcn2.conf rbd_user = openstack rbd_pool = volumes rbd_flatten_volume_from_snapshot = False report_discard_supported = True rbd_secret_uuid = <dcn2_fsid> rbd_cluster_name = dcn2 backend_availability_zone = az-dcn2 EOFwhere:
<central_fsid>-
Specifies the
fsid of the central Red Hat Ceph Storage cluster, used as the libvirt secret UUID. <dcn1_fsid>-
Specifies the
fsid of the DCN1 edge Red Hat Ceph Storage cluster. <dcn2_fsid>-
Specifies the
fsid of the DCN2 edge Red Hat Ceph Storage cluster.
Note-
You must configure each
CinderVolume with the backend_availability_zone value that matches your Compute service availability zone for that site, because cross_az_attach = False is set in the Compute service configuration. If the names do not match, instances cannot attach volumes. Replace the examples (az-central, az-dcn1, az-dcn2) with the names used in your Red Hat OpenStack Platform deployment.
-
Each
CinderVolume points to its local Image service API endpoint through glance_api_servers. This ensures that volume creation from images uses the local Image service and Red Hat Ceph Storage cluster. The examples use http:// for the Image service endpoints. If your Red Hat OpenStack Platform deployment uses TLS for internal endpoints, use https:// instead, and ensure that you have completed the TLS migration. For more information, see Migrating TLS-e to the RHOSO deployment.
-
The
rbd_cluster_name setting identifies which Red Hat Ceph Storage cluster configuration to use from the mounted secrets.
- Adjust the number of edge sites and their names to match your DCN deployment.
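The fsid placeholders above can be filled in by parsing the cluster configuration files. The following is a minimal local sketch of the parsing step only (the file content is a made-up example); in practice you would read the file from the ceph-conf-central secret as shown earlier in this section:

```shell
# Local sketch of the fsid parsing step only; the file content below is a
# made-up example, not a real cluster configuration.
cat > /tmp/central.conf <<'EOF'
[global]
fsid = 5f2c1b2e-0c5d-4e3a-9a7d-1b2c3d4e5f60
mon_host = 192.0.2.10
EOF

# Split on "=" (with optional spaces) and print the fsid value.
central_fsid=$(awk -F' *= *' '$1 == "fsid" { print $2 }' /tmp/central.conf)
echo "$central_fsid"
```

You can then substitute the value for the `<central_fsid>` placeholder in cinder_dcn_patch.yaml, for example with sed.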
Patch the
OpenStackControlPlane CR to deploy the Block Storage service with multiple Red Hat Ceph Storage back ends:$ oc patch openstackcontrolplane openstack --type=merge --patch-file cinder_dcn_patch.yaml
Configure the Block Storage service backup service. In this example, the backup service runs at the central site and uses the central Red Hat Ceph Storage cluster. Add the
cinderBackups section to your patch file and re-apply it:$ cat << EOF >> cinder_dcn_patch.yaml cinderBackups: central: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] backup_driver=cinder.backup.drivers.ceph.CephBackupDriver backup_ceph_conf=/etc/ceph/central.conf backup_ceph_user=openstack backup_ceph_pool=backups storage_availability_zone=az-central EOF $ oc patch openstackcontrolplane openstack --type=merge --patch-file cinder_dcn_patch.yaml
Note
Unlike a single-site Red Hat Ceph Storage deployment where the backup config references
/etc/ceph/ceph.conf, in a DCN deployment the Red Hat Ceph Storage configuration files in the ceph-conf-files secret are named by cluster. Set backup_ceph_conf to the path of the Red Hat Ceph Storage configuration file for whichever cluster hosts your backups pool. In this example the file is named central.conf, so the path is /etc/ceph/central.conf. Using a path that does not match a file in the secret causes the backup service to fail with a conf_read_file error.
Set
storage_availability_zone to match the availability zone of the volumes that you want to back up. The backup scheduler uses this value to route backup requests to a service in the correct zone. If the backup service zone does not match the volume zone, backup creation fails with a Service not found for creating backup error.
Verify that the Block Storage service volume services are running for each availability zone:
$ openstack volume service list --service cinder-volume +------------------+---------------------+------------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+---------------------+------------+---------+-------+----------------------------+ | cinder-volume | hostgroup@central | az-central | enabled | up | 2024-01-01T00:00:00.000000 | | cinder-volume | hostgroup@dcn1 | az-dcn1 | enabled | up | 2024-01-01T00:00:00.000000 | | cinder-volume | hostgroup@dcn2 | az-dcn2 | enabled | up | 2024-01-01T00:00:00.000000 | +------------------+---------------------+------------+---------+-------+----------------------------+Verify that the Block Storage service backup service is running and in the correct availability zone:
$ openstack volume service list --service cinder-backup +---------------+-------------------------+------------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +---------------+-------------------------+------------+---------+-------+----------------------------+ | cinder-backup | cinder-backup-central-0 | az-central | enabled | up | 2024-01-01T00:00:00.000000 | +---------------+-------------------------+------------+---------+-------+----------------------------+Test the backup service by creating a volume, backing it up, and restoring the backup:
$ openstack volume create --size 1 backup-test-vol $ openstack volume backup create --name backup-test-backup backup-test-vol $ openstack volume backup show backup-test-backup +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | container | backups | | fail_reason | None | | name | backup-test-backup | | size | 1 | | status | available | +-----------------------+--------------------------------------+ $ openstack volume backup restore backup-test-backup backup-test-restore
Note
Some versions of the OpenStack client display a cannot unpack non-iterable VolumeBackupsRestore object error after the restore command. This is a known issue in the client; the restore operation itself might have succeeded. Verify by checking the restored volume status directly:$ openstack volume show backup-test-restore -c status -c availability_zone -c os-vol-host-attr:host -f value available az-central hostgroup@central#central
4.14. Adopting the Dashboard service
To adopt the Dashboard service (horizon), you patch an existing OpenStackControlPlane custom resource (CR) that has the Dashboard service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment.
Prerequisites
- You adopted Memcached. For more information, see Deploying back-end services.
- You adopted the Identity service (keystone). For more information, see Adopting the Identity service.
Procedure
Patch the
OpenStackControlPlane CR to deploy the Dashboard service:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: horizon: enabled: true apiOverride: route: {} template: memcachedInstance: memcached secret: osp-secret '
Verification
Verify that the Dashboard service instance is successfully deployed and ready:
$ oc get horizon
Confirm that the Dashboard service is reachable and returns a
200 status code:PUBLIC_URL=$(oc get horizon horizon -o jsonpath='{.status.endpoint}') curl --silent --output /dev/stderr --head --write-out "%{http_code}" "$PUBLIC_URL/dashboard/auth/login/?next=/dashboard/" -k | grep 200
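The single curl probe above can be wrapped in a simple retry loop, which is useful while the route is still converging. In the sketch below, probe is a stub standing in for the curl command so that the snippet runs locally; substitute the real curl invocation in your environment:

```shell
# Retry wrapper sketch. `probe` is a stub standing in for the curl
# command above; it always "succeeds" here so the loop exits on the
# first attempt.
probe() { echo 200; }

for attempt in 1 2 3; do
  # grep -q makes the HTTP status check usable as an exit code.
  if probe | grep -q 200; then
    echo "dashboard reachable after $attempt attempt(s)"
    break
  fi
  sleep 1
done
```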
4.16. Adopting the Orchestration service
To adopt the Orchestration service (heat), you patch an existing OpenStackControlPlane custom resource (CR), where the Orchestration service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
After you complete the adoption process, you have CRs for Heat, HeatAPI, HeatEngine, and HeatCFNAPI, and endpoints within the Identity service (keystone) to facilitate these services.
Prerequisites
- The source director environment is running.
- The target Red Hat OpenShift Container Platform (RHOCP) environment is running.
- You adopted MariaDB and the Identity service.
If your existing Orchestration service stacks contain resources from other services, such as Networking service (neutron), Compute service (nova), Object Storage service (swift), and so on, adopt those services before adopting the Orchestration service.
Procedure
Retrieve the existing
auth_encryption_key and service passwords. You use these passwords to patch the osp-secret. In the following example, the auth_encryption_key is used as HeatAuthEncryptionKey and the service password is used as HeatPassword:[stack@rhosp17 ~]$ grep -E 'HeatPassword|HeatAuth|HeatStackDomainAdmin' ~/overcloud-deploy/overcloud/overcloud-passwords.yaml HeatAuthEncryptionKey: Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2 HeatPassword: dU2N0Vr2bdelYH7eQonAwPfI3 HeatStackDomainAdminPassword: dU2N0Vr2bdelYH7eQonAwPfI3
Log in to a Controller node and verify the
auth_encryption_key value in use:[stack@rhosp17 ~]$ ansible -i overcloud-deploy/overcloud/config-download/overcloud/tripleo-ansible-inventory.yaml overcloud-controller-0 -m shell -a "grep auth_encryption_key /var/lib/config-data/puppet-generated/heat/etc/heat/heat.conf | grep -Ev '^#|^$'" -b overcloud-controller-0 | CHANGED | rc=0 >> auth_encryption_key=Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2
Encode the password to Base64 format:
$ echo Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2 | base64 UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIKPatch the
osp-secret to update the HeatAuthEncryptionKey and HeatPassword parameters. These values must match the values in the director Orchestration service configuration:$ oc patch secret osp-secret --type='json' -p='[{"op" : "replace" ,"path" : "/data/HeatAuthEncryptionKey" ,"value" : "UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIK"}]' secret/osp-secret patched
Patch the
OpenStackControlPlane CR to deploy the Orchestration service:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: heat: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: heat secret: osp-secret memcachedInstance: memcached passwordSelectors: authEncryptionKey: HeatAuthEncryptionKey service: HeatPassword stackDomainAdminPassword: HeatStackDomainAdminPassword '
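The Base64 encoding performed earlier in this procedure can be sanity-checked locally before you patch the secret: decoding the value must reproduce the original key. Note that plain echo appends a trailing newline, which then becomes part of the encoded value; this sketch uses echo -n to encode the example key alone:

```shell
# Round-trip check for the Base64 value (example key from this procedure).
# `echo -n` avoids encoding a trailing newline into the value.
key="Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2"
encoded=$(echo -n "$key" | base64)
decoded=$(echo "$encoded" | base64 -d)
[ "$decoded" = "$key" ] && echo "round trip OK"
```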
Verification
Ensure that the statuses of all the CRs are
Setup complete:$ oc get Heat,HeatAPI,HeatEngine,HeatCFNAPI NAME STATUS MESSAGE heat.heat.openstack.org/heat True Setup complete NAME STATUS MESSAGE heatapi.heat.openstack.org/heat-api True Setup complete NAME STATUS MESSAGE heatengine.heat.openstack.org/heat-engine True Setup complete NAME STATUS MESSAGE heatcfnapi.heat.openstack.org/heat-cfnapi True Setup complete
Check that the Orchestration service is registered in the Identity service:
$ oc exec -it openstackclient -- openstack service list -c Name -c Type +------------+----------------+ | Name | Type | +------------+----------------+ | heat | orchestration | | glance | image | | heat-cfn | cloudformation | | ceilometer | Ceilometer | | keystone | identity | | placement | placement | | cinderv3 | volumev3 | | nova | compute | | neutron | network | +------------+----------------+$ oc exec -it openstackclient -- openstack endpoint list --service=heat -f yaml - Enabled: true ID: 1da7df5b25b94d1cae85e3ad736b25a5 Interface: public Region: regionOne Service Name: heat Service Type: orchestration URL: http://heat-api-public-openstack-operators.apps.okd.bne-shift.net/v1/%(tenant_id)s - Enabled: true ID: 414dd03d8e9d462988113ea0e3a330b0 Interface: internal Region: regionOne Service Name: heat Service Type: orchestration URL: http://heat-api-internal.openstack-operators.svc:8004/v1/%(tenant_id)s
Check that the Orchestration service engine services are running:
$ oc exec -it openstackclient -- openstack orchestration service list -f yaml - Binary: heat-engine Engine ID: b16ad899-815a-4b0c-9f2e-e6d9c74aa200 Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:01.000000' - Binary: heat-engine Engine ID: 887ed392-0799-4310-b95c-ac2d3e6f965f Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:00.000000' - Binary: heat-engine Engine ID: 26ed9668-b3f2-48aa-92e8-2862252485ea Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:00.000000' - Binary: heat-engine Engine ID: 1011943b-9fea-4f53-b543-d841297245fd Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:01.000000'
Verify that you can see your Orchestration service stacks:
$ openstack stack list -f yaml - Creation Time: '2023-10-11T22:03:20Z' ID: 20f95925-7443-49cb-9561-a1ab736749ba Project: 4eacd0d1cab04427bc315805c28e66c9 Stack Name: test-networks Stack Status: CREATE_COMPLETE Updated Time: null
4.17. Adopting the Load-balancing service
To adopt the Load-balancing service (octavia), you patch an existing OpenStackControlPlane custom resource (CR) where the Load-balancing service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. After completing the data plane adoption, you must trigger a failover of existing load balancers to upgrade their amphora virtual machines to use the new image and to establish connectivity with the new control plane.
Procedure
Migrate the server certificate authority (CA) passphrase from the previous deployment:
SERVER_CA_PASSPHRASE=$($CONTROLLER1_SSH grep ^ca_private_key_passphrase /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf) oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: octavia-ca-passphrase type: Opaque data: server-ca-passphrase: $(echo -n $SERVER_CA_PASSPHRASE | base64 -w0) EOFTo isolate the management network, add the network interface for the VLAN base interface:
$ oc get --no-headers nncp | cut -f 1 -d ' ' | grep -v nncp-dns | while read; do interfaces=$(oc get nncp $REPLY -o jsonpath="{.spec.desiredState.interfaces[*].name}") (echo $interfaces | grep -w -q "octbr\|enp6s0.24") || \ oc patch nncp $REPLY --type json --patch ' [{ "op": "add", "path": "/spec/desiredState/interfaces/-", "value": { "description": "Octavia VLAN host interface", "name": "enp6s0.24", "state": "up", "type": "vlan", "vlan": { "base-iface": "<enp6s0>", "id": 24 } } }, { "op": "add", "path": "/spec/desiredState/interfaces/-", "value": { "description": "Octavia Bridge", "mtu": <mtu>, "state": "up", "type": "linux-bridge", "name": "octbr", "bridge": { "options": { "stp": { "enabled": "false" } }, "port": [ { "name": "enp6s0.24" } ] } } }]' donewhere:
- <enp6s0>
- Specifies the name of the network interface in your RHOCP setup.
- <mtu>
-
Specifies the
mtu value in your environment.
To connect the pods that manage the load balancer virtual machines (amphorae) with the Open vSwitch pods that the OVN operator manages, configure the Load-balancing service network attachment definition:
$ cat > octavia-nad.yaml << EOF_CAT apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: octavia name: octavia spec: config: | { "cniVersion": "0.3.1", "name": "octavia", "type": "bridge", "bridge": "octbr", "ipam": { "type": "whereabouts", "range": "172.23.0.0/24", "range_start": "172.23.0.30", "range_end": "172.23.0.70", "routes": [ { "dst": "172.24.0.0/16", "gw" : "172.23.0.150" } ] } } EOF_CAT
Create the
NetworkAttachmentDefinition CR:$ oc apply -f octavia-nad.yaml
Enable the Load-balancing service in RHOCP:
$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: ovn: template: ovnController: networkAttachment: tenant nicMappings: octavia: octbr octavia: enabled: true template: amphoraImageContainerImage: quay.io/gthiemonge/octavia-amphora-image octaviaHousekeeping: networkAttachments: - octavia octaviaHealthManager: networkAttachments: - octavia octaviaWorker: networkAttachments: - octavia '
Wait for the Load-balancing service control plane services CRs to be ready:
$ oc wait --for condition=Ready --timeout=600s octavia.octavia.openstack.org/octavia
Ensure that the Load-balancing service is registered in the Identity service:
$ alias openstack="oc exec -t openstackclient -- openstack" $ openstack service list | grep load-balancer | bd078ca6f90c4b86a48801f45eb6f0d7 | octavia | load-balancer | $ openstack endpoint list --service load-balancer +----------------------------------+-----------+--------------+---------------+---------+-----------+---------------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+---------------+---------+-----------+---------------------------------------------------+ | f1ae7756b6164baf9cb82a1a670067a2 | regionOne | octavia | load-balancer | True | public | https://octavia-public-openstack.apps-crc.testing | | ff3222b4621843669e89843395213049 | regionOne | octavia | load-balancer | True | internal | http://octavia-internal.openstack.svc:9876 | +----------------------------------+-----------+--------------+---------------+---------+-----------+---------------------------------------------------+
Next steps
After you complete the data plane adoption, you must upgrade existing load balancers and remove old resources. For more information, see Post-adoption tasks for the Load-balancing service.
4.18. Adopting Telemetry services
To adopt Telemetry services, you patch an existing OpenStackControlPlane custom resource (CR) that has Telemetry services disabled to start the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) 17.1 environment.
If you adopt Telemetry services, the observability solution that is used in the RHOSP 17.1 environment, Service Telemetry Framework, is removed from the cluster. The new solution is deployed in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, allowing for metrics, and optionally logs, to be retrieved and stored in the new back ends.
You cannot automatically migrate old data because different back ends are used. Metrics and logs are considered short-lived data and are not intended to be migrated to the RHOSO environment. For information about adopting legacy autoscaling stack templates to the RHOSO environment, see Adopting Autoscaling services.
Prerequisites
- The director environment is running (the source cloud).
A Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
- Previous adoption steps are completed.
Procedure
Create the
cluster-observability-operator Subscription CR:$ oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-observability-operator namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: cluster-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF
Wait for the installation to succeed:
$ oc wait --for jsonpath="{.status.phase}"=Succeeded csv --namespace=openshift-operators -l operators.coreos.com/cluster-observability-operator.openshift-operators
Patch the
OpenStackControlPlane CR to deploy Ceilometer services:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: enabled: true template: ceilometer: passwordSelector: ceilometerService: CeilometerPassword enabled: true secret: osp-secret serviceUser: ceilometer '
Enable the metrics storage back end:
$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: template: metricStorage: enabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G '
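A rough local sanity check of the pvcStorageRequest value against the retention period follows. The ingestion rate is an assumption for illustration, and the 2-bytes-per-sample figure is a common Prometheus on-disk rule of thumb, not a guarantee:

```shell
# Rough sizing check: bytes needed ~= samples/s * bytes/sample * retention.
samples_per_second=10000   # assumed ingestion rate for this sketch
bytes_per_sample=2         # Prometheus rule-of-thumb on-disk cost
retention_hours=24         # matches retention: 24h above
needed_bytes=$((samples_per_second * bytes_per_sample * retention_hours * 3600))
echo "approximately $((needed_bytes / 1024 / 1024)) MiB needed"
```

Compare the result against pvcStorageRequest (20G in the example) and leave generous headroom for the write-ahead log, compaction, and growth.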
Verification
Verify that the
alertmanager and prometheus pods are available:$ oc get pods -l alertmanager=metric-storage NAME READY STATUS RESTARTS AGE alertmanager-metric-storage-0 2/2 Running 0 46s alertmanager-metric-storage-1 2/2 Running 0 46s $ oc get pods -l prometheus=metric-storage NAME READY STATUS RESTARTS AGE prometheus-metric-storage-0 3/3 Running 0 46s
Inspect the resulting Ceilometer pods:
CEILOMETER_POD=`oc get pods -l service=ceilometer | tail -n 1 | cut -f 1 -d' '` oc exec -t $CEILOMETER_POD -c ceilometer-central-agent -- cat /etc/ceilometer/ceilometer.conf
Inspect enabled pollsters:
$ oc get secret ceilometer-config-data -o jsonpath="{.data['polling\.yaml\.j2']}" | base64 -d
Optional: Override default pollsters according to the requirements of your environment:
$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: template: ceilometer: defaultConfigOverwrite: polling.yaml.j2: | --- sources: - name: pollsters interval: 100 meters: - volume.* - image.size enabled: true secret: osp-secret '
Next steps
Optional: Patch the
OpenStackControlPlane CR to include logging:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: template: logging: enabled: false ipaddr: 172.17.0.80 port: 10514 cloNamespace: openshift-logging '
4.19. Adopting autoscaling services
To adopt services that enable autoscaling, you patch an existing OpenStackControlPlane custom resource (CR) where the Alarming services (aodh) are disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment.
Prerequisites
- The source director environment is running.
- A Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
You have adopted the following services:
- MariaDB
- Identity service (keystone)
- Orchestration service (heat)
- Telemetry service
Procedure
Patch the
OpenStackControlPlane CR to deploy the autoscaling services:$ oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: enabled: true template: autoscaling: enabled: true aodh: passwordSelector: aodhService: AodhPassword databaseAccount: aodh databaseInstance: openstack secret: osp-secret serviceUser: aodh heatInstance: heat '
Inspect the aodh pods:
$ AODH_POD=`oc get pods -l service=aodh | tail -n 1 | cut -f 1 -d' '` $ oc exec -t $AODH_POD -c aodh-api -- cat /etc/aodh/aodh.conf
Check whether the aodh API service is registered in the Identity service:
$ openstack endpoint list | grep aodh | d05d120153cd4f9b8310ac396b572926 | regionOne | aodh | alarming | True | internal | http://aodh-internal.openstack.svc:8042 | | d6daee0183494d7a9a5faee681c79046 | regionOne | aodh | alarming | True | public | http://aodh-public.openstack.svc:8042 |
Optional: Create aodh alarms with the
PrometheusAlarm alarm type:
Note
You must use the
PrometheusAlarm alarm type instead of GnocchiAggregationByResourcesAlarm.$ openstack alarm create --name high_cpu_alarm \ --type prometheus \ --query "(rate(ceilometer_cpu{resource_name=~'cirros'})) * 100" \ --alarm-action 'log://' \ --granularity 15 \ --evaluation-periods 3 \ --comparison-operator gt \ --threshold 7000000000
Verify that the alarm is enabled:
$ openstack alarm list +--------------------------------------+------------+------------------+-------------------+----------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+------------+------------------+-------------------+----------+ | 209dc2e9-f9d6-40e5-aecc-e767ce50e9c0 | prometheus | prometheus_alarm | ok | low | True | +--------------------------------------+------------+------------------+-------------------+----------+
4.20. Pulling the configuration from a director deployment
Before you start the data plane adoption workflow, back up the configuration from the Red Hat OpenStack Platform (RHOSP) services and director. You can then use the files during the configuration of the adopted services to ensure that nothing is missed or misconfigured.
Prerequisites
- The os-diff tool is installed and configured. For more information, see Comparing configuration files between deployments.
Procedure
Update your ssh parameters according to your environment in the
os-diff.cfg. Os-diff uses the ssh parameters to connect to your director node, and then query and download the configuration files:ssh_cmd=ssh -F ssh.config standalone container_engine=podman connection=ssh remote_config_path=/tmp/tripleo
Ensure that the ssh command you provide in the
ssh_cmd parameter is correct and includes key authentication.
Enable the services that you want to include in the
/etc/os-diff/config.yaml file, and disable the services that you want to exclude from the file. Ensure that you have the correct permissions to edit the file:$ chown ospng:ospng /etc/os-diff/config.yaml
The following example enables the default Identity service (keystone) to be included in the
/etc/os-diff/config.yaml file:# service name and file location services: # Service name keystone: # Bool to enable/disable a service (not implemented yet) enable: true # Pod name, in both OCP and podman context. # It could be strict match or will only just grep the podman_name # and work with all the pods which matched with pod_name. # To enable/disable use strict_pod_name_match: true/false podman_name: keystone pod_name: keystone container_name: keystone-api # pod options # strict match for getting pod id in TripleO and podman context strict_pod_name_match: false # Path of the config files you want to analyze. # It could be whatever path you want: # /etc/<service_name> or /etc or /usr/share/<something> or even / # @TODO: need to implement loop over path to support multiple paths such as: # - /etc # - /usr/share path: - /etc/ - /etc/keystone - /etc/keystone/keystone.conf - /etc/keystone/logging.conf
Repeat this step for each RHOSP service that you want to disable or enable.
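As a quick local check, you can list which services a configuration shaped like the example above enables. This is a hypothetical grep-style helper, not part of os-diff, and it assumes the flat two-level layout shown:

```shell
# Hypothetical helper (not part of os-diff): list services that have
# `enable: true` in a config.yaml shaped like the example above.
cat > /tmp/os-diff-config.yaml <<'EOF'
services:
  keystone:
    enable: true
  glance:
    enable: false
EOF

# Remember the last service name seen (2-space indent), print it when the
# following `enable: true` line (4-space indent) appears.
enabled=$(awk '/^  [a-z]/ { svc=$1; sub(/:$/, "", svc) } /^    enable: true/ { print svc }' /tmp/os-diff-config.yaml)
echo "$enabled"
```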
If you use non-containerized services, such as the
ovs-external-ids, pull the configuration or the command output. For example:services: ovs_external_ids: hosts: - standalone service_command: "ovs-vsctl list Open_vSwitch . | grep external_ids | awk -F ': ' '{ print $2; }'" cat_output: true path: - ovs_external_ids.json config_mapping: ovn-bridge-mappings: edpm_ovn_bridge_mappings ovn-bridge: edpm_ovn_bridge ovn-encap-type: edpm_ovn_encap_type ovn-monitor-all: ovn_monitor_all ovn-remote-probe-interval: edpm_ovn_remote_probe_interval ovn-ofctrl-wait-before-clear: edpm_ovn_ofctrl_wait_before_clear
Note
You must correctly configure an SSH configuration file or equivalent for non-standard services, such as OVS. The
ovs_external_ids service does not run in a container, and the OVS data is stored on each host of your cloud, for example, controller_1/controller_2/, and so on.
hostsspecifies the list of hosts, for example,compute-1,compute-2. -
service_command: "ovs-vsctl list Open_vSwitch . | grep external_ids | awk -F ': ' '{ print $2; }'" runs against the hosts.
cat_output: true provides os-diff with the output of the command and stores the output in a file that is specified by the key path.
config_mapping provides a mapping between, in this example, the data plane custom resource definition and the ovs-vsctl output.
ovn-bridge-mappings: edpm_ovn_bridge_mappings must be a list of strings, for example, ["datacentre:br-ex"].
Compare the values:
$ os-diff diff ovs_external_ids.json edpm.crd --crd --service ovs_external_idsFor example, to check the
/etc/yum.conf on every host, you must put the following statement in the config.yaml file. The following example uses a file called yum_config:services: yum_config: hosts: - undercloud - controller_1 - compute_1 - compute_2 service_command: "cat /etc/yum.conf" cat_output: true path: - yum.conf
Pull the configuration:
NoteThe following command pulls all the configuration files that are included in the
/etc/os-diff/config.yaml file. You can configure os-diff to update this file automatically according to your running environment by using the --update or --update-only option. These options set the podman information into the config.yaml for all running containers. The podman information can be useful later, when all the Red Hat OpenStack Platform services are turned off.
Note that when the
config.yaml file is populated automatically, you must provide the configuration paths manually for each service.# will only update the /etc/os-diff/config.yaml os-diff pull --update-only
# will update the /etc/os-diff/config.yaml and pull configuration os-diff pull --update
# will pull the configuration without updating the /etc/os-diff/config.yaml os-diff pull
The configuration is pulled and stored by default in the following directory:
/tmp/tripleo/
Verification
Verify that you have a directory for each service configuration in your local path:
▾ tmp/ ▾ tripleo/ ▾ glance/ ▾ keystone/
4.21. Rolling back the control plane adoption
If you encountered a problem and are unable to complete the adoption of the Red Hat OpenStack Platform (RHOSP) control plane services, you can roll back the control plane adoption.
Do not attempt the rollback if you altered the data plane nodes in any way. You can only roll back the control plane adoption if you altered the control plane.
During the control plane adoption, services on the RHOSP control plane are stopped but not removed. The databases on the RHOSP control plane are not edited during the adoption procedure. The Red Hat OpenStack Services on OpenShift (RHOSO) control plane receives a copy of the original control plane databases. The rollback procedure assumes that the data plane has not yet been modified by the adoption procedure, and it is still connected to the RHOSP control plane.
The rollback procedure consists of the following steps:
- Restoring the functionality of the RHOSP control plane.
- Removing the partially or fully deployed RHOSO control plane.
Procedure
To restore the source cloud to a working state, start the RHOSP control plane services that you previously stopped during the adoption procedure:
ServicesToStart=("tripleo_horizon.service" "tripleo_keystone.service" "tripleo_barbican_api.service" "tripleo_barbican_worker.service" "tripleo_barbican_keystone_listener.service" "tripleo_cinder_api.service" "tripleo_cinder_api_cron.service" "tripleo_cinder_scheduler.service" "tripleo_cinder_volume.service" "tripleo_cinder_backup.service" "tripleo_glance_api.service" "tripleo_manila_api.service" "tripleo_manila_api_cron.service" "tripleo_manila_scheduler.service" "tripleo_neutron_api.service" "tripleo_placement_api.service" "tripleo_nova_api_cron.service" "tripleo_nova_api.service" "tripleo_nova_conductor.service" "tripleo_nova_metadata.service" "tripleo_nova_scheduler.service" "tripleo_nova_vnc_proxy.service" "tripleo_aodh_api.service" "tripleo_aodh_api_cron.service" "tripleo_aodh_evaluator.service" "tripleo_aodh_listener.service" "tripleo_aodh_notifier.service" "tripleo_ceilometer_agent_central.service" "tripleo_ceilometer_agent_compute.service" "tripleo_ceilometer_agent_ipmi.service" "tripleo_ceilometer_agent_notification.service" "tripleo_ovn_cluster_north_db_server.service" "tripleo_ovn_cluster_south_db_server.service" "tripleo_ovn_cluster_northd.service" "tripleo_octavia_api.service" "tripleo_octavia_health_manager.service" "tripleo_octavia_rsyslog.service" "tripleo_octavia_driver_agent.service" "tripleo_octavia_housekeeping.service" "tripleo_octavia_worker.service") PacemakerResourcesToStart=("galera-bundle" "haproxy-bundle" "rabbitmq-bundle" "openstack-cinder-volume" "openstack-cinder-backup" "openstack-manila-share") echo "Starting systemd OpenStack services" for service in ${ServicesToStart[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then if ${!SSH_CMD} sudo systemctl is-enabled $service &> /dev/null; then echo "Starting the $service in controller $i" ${!SSH_CMD} sudo systemctl start $service fi fi done done echo "Checking systemd OpenStack services" for service in ${ServicesToStart[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then if ${!SSH_CMD} sudo systemctl is-enabled $service &> /dev/null; then if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=active >/dev/null; then echo "ERROR: Service $service is not running on controller $i" else echo "OK: Service $service is running in controller $i" fi fi fi done done echo "Starting pacemaker OpenStack services" for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then echo "Using controller $i to run pacemaker commands" for resource in ${PacemakerResourcesToStart[*]}; do if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then echo "Starting $resource" ${!SSH_CMD} sudo pcs resource enable $resource else echo "Service $resource not present" fi done break fi done echo "Checking pacemaker OpenStack services" for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then echo "Using controller $i to run pacemaker commands" for resource in ${PacemakerResourcesToStart[*]}; do if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then if ${!SSH_CMD} sudo pcs resource status $resource | grep Started >/dev/null; then echo "OK: Service $resource is started" else echo "ERROR: Service $resource is stopped" fi fi done break fi done
If the Ceph NFS service is running on the deployment as a Shared File Systems service (manila) back end, you must restore the Pacemaker order and colocation constraints for the
openstack-manila-share service:$ sudo pcs constraint order start ceph-nfs then openstack-manila-share kind=Optional id=order-ceph-nfs-openstack-manila-share-Optional $ sudo pcs constraint colocation add openstack-manila-share with ceph-nfs score=INFINITY id=colocation-openstack-manila-share-ceph-nfs-INFINITY
-
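The start and check script above relies on Bash indirect expansion: SSH_CMD holds the name of a CONTROLLER<i>_SSH variable, and ${!SSH_CMD} expands to that variable's value, which lets one loop address every controller. A minimal standalone illustration:

```shell
# Minimal illustration of the Bash indirect expansion used by the
# rollback script. CONTROLLER1_SSH is a made-up example value.
CONTROLLER1_SSH="ssh root@controller-1"

i=1
SSH_CMD=CONTROLLER${i}_SSH      # SSH_CMD now holds a variable *name*...
echo "${!SSH_CMD}"              # ...and ${!SSH_CMD} yields its value
```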
Verify that the source cloud is operational again, for example, you can run
openstack CLI commands such as openstack server list, or check that you can access the Dashboard service (horizon).
$ oc delete --ignore-not-found=true --wait=false openstackcontrolplane/openstack $ oc patch openstackcontrolplane openstack --type=merge --patch ' metadata: finalizers: [] ' || true $ while oc get pod | grep rabbitmq-server-0; do sleep 2; done $ while oc get pod | grep openstack-galera-0; do sleep 2; done $ oc delete --ignore-not-found=true --wait=false pod mariadb-copy-data $ oc delete --ignore-not-found=true --wait=false pvc mariadb-data $ oc delete --ignore-not-found=true --wait=false pod ovn-copy-data $ oc delete --ignore-not-found=true secret osp-secret
Next steps
After you restore the RHOSP control plane services, their internal state might have changed. Before you retry the adoption procedure, verify that all the control plane resources are removed and that there are no leftovers which could affect the following adoption procedure attempt. You must not use previously created copies of the database contents in another adoption attempt. You must make a new copy of the latest state of the original source database contents. For more information about making new copies of the database, see Migrating databases to the control plane.