Chapter 10. Changing the cloud provider credentials configuration
For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider.
To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode.
10.1. Rotating or removing cloud provider credentials
After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.
To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
10.1.1. Rotating cloud provider credentials with the Cloud Credential Operator utility
The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud®.
10.1.1.1. Rotating API keys
You can rotate API keys for your existing service IDs and update the corresponding secrets.
Prerequisites
- You have configured the ccoctl binary.
- You have existing service IDs in a live OpenShift Container Platform cluster.
Procedure
- Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets:

  $ ccoctl <provider_name> refresh-keys \
      --kubeconfig <openshift_kubeconfig_file> \
      --credentials-requests-dir <path_to_credential_requests_directory> \
      --name <name>

  Note: If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
10.1.2. Rotating cloud provider credentials manually
If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.
The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.
Prerequisites
Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:
- For mint mode, Amazon Web Services (AWS) and Google Cloud are supported.
- For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported.
- You have changed the credentials that are used to interface with your cloud provider.
- The new credentials have sufficient permissions for the mode that the CCO is configured to use in your cluster.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets. In the table on the Secrets page, find the root secret for your cloud provider.

  Platform          Secret name
  AWS               aws-creds
  Azure             azure-credentials
  Google Cloud      gcp-credentials
  RHOSP             openstack-credentials
  VMware vSphere    vsphere-creds

- Click the Options menu in the same row as the secret and select Edit Secret.
- Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
- Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.

Note: If the vSphere CSI Driver Operator is enabled, this step is not required.

To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command:

$ oc patch kubecontrollermanager cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
    --type=merge

While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true. To view the status, run the following command:

$ oc get co kube-controller-manager

If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects:
- Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
- Get the names and namespaces of all referenced component secrets:

  $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
      -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

  where <provider_spec> is the corresponding value for your cloud provider:
  - AWS: AWSProviderSpec
  - Google Cloud: GCPProviderSpec
  Partial example output for AWS

  {
    "name": "ebs-cloud-credentials",
    "namespace": "openshift-cluster-csi-drivers"
  }
  {
    "name": "cloud-credential-operator-iam-ro-creds",
    "namespace": "openshift-cloud-credential-operator"
  }
- Delete each of the referenced component secrets:

  $ oc delete secret <secret_name> \
      -n <secret_namespace>

  Example deletion of an AWS secret

  $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

  You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets causes the CCO to delete the existing credentials from the platform and create new ones.
Verification
To verify that the credentials have changed:

- In the Administrator perspective of the web console, navigate to Workloads → Secrets.
- Verify that the contents of the Value field or fields have changed.
10.1.3. Removing cloud provider credentials
For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions.
After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates.
Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked.
Prerequisites
- Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and Google Cloud.
Procedure
- In the Administrator perspective of the web console, navigate to Workloads → Secrets. In the table on the Secrets page, find the root secret for your cloud provider.

  Platform        Secret name
  AWS             aws-creds
  Google Cloud    gcp-credentials

- Click the Options menu in the same row as the secret and select Delete Secret.
10.2. Enabling token-based authentication
After installing an OpenShift Container Platform cluster on Microsoft Azure or Amazon Web Services (AWS), you can enable Microsoft Entra Workload ID or Security Token Service (STS) to use short-term credentials.
10.2.1. Configuring the Cloud Credential Operator utility
To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility (ccoctl) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
Procedure
- Set a variable for the OpenShift Container Platform release image by running the following command:

  $ RELEASE_IMAGE=$(oc get clusterversion -o jsonpath={..desired.image})

- Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

  $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)

  Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

- Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

  $ oc image extract $CCO_IMAGE \
      --file="/usr/bin/ccoctl.<rhel_version>" \
      -a ~/.pull-secret

  For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
  - rhel8: Specify this value for hosts that use RHEL 8.
  - rhel9: Specify this value for hosts that use RHEL 9.

  Note: The ccoctl binary is created in the directory from which you ran the command, not in /usr/bin/. You must rename the ccoctl.<rhel_version> binary to ccoctl.

- Change the permissions to make ccoctl executable by running the following command:

  $ chmod 775 ccoctl
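As a quick offline illustration of what mode 775 grants, the following sketch applies the same permissions to a throwaway file and reads the mode back; it does not touch the real ccoctl binary:

```shell
# Create a scratch file, grant rwxrwxr-x as the procedure does for ccoctl,
# and print the resulting octal mode. Uses GNU stat, as found on a RHEL host.
tmpfile=$(mktemp)
chmod 775 "$tmpfile"
mode=$(stat -c '%a' "$tmpfile")
echo "$mode"
rm -f "$tmpfile"
```

Mode 775 makes the binary readable and executable by everyone and writable by its owner and group.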
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

$ ./ccoctl

Example output

OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
10.2.2. Enabling Microsoft Entra Workload ID on an existing cluster
If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster.
The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
- After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails.
Prerequisites
- You have installed an OpenShift Container Platform cluster on Microsoft Azure.
- You have access to the cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have extracted and prepared the Cloud Credential Operator utility (ccoctl) binary.
- You have access to your Azure account by using the Azure CLI (az).
Procedure
- Create an output directory for the manifests that the ccoctl utility generates. This procedure uses ./output_dir as an example.
- Extract the service account public signing key for the cluster to the output directory by running the following command:

  $ oc get secret/next-bound-service-account-signing-key \
      -n openshift-kube-apiserver-operator \
      -ojsonpath='{ .data.service-account\.pub }' | base64 -d \
      > output_dir/serviceaccount-signer.public

  This procedure uses a file named serviceaccount-signer.public as an example.
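The pipeline above relies on the fact that Kubernetes stores secret data base64-encoded, so `base64 -d` recovers the original PEM text. A minimal offline sketch of that round trip, using placeholder key material rather than a real signing key:

```shell
# Placeholder PEM text standing in for the real service account public key.
mock_pub='-----BEGIN PUBLIC KEY-----
placeholder-key-material
-----END PUBLIC KEY-----'
# Encode as Kubernetes would store it in the secret's data field,
# then decode as the extraction pipeline does.
encoded=$(printf '%s' "$mock_pub" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$decoded"
```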
- Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command:

  $ ./ccoctl azure create-oidc-issuer \
      --name <azure_infra_name> \
      --output-dir ./output_dir \
      --region <azure_region> \
      --subscription-id <azure_subscription_id> \
      --tenant-id <azure_tenant_id> \
      --public-key-file ./output_dir/serviceaccount-signer.public

  where:
  - The value of the --name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value.
  - The --region value is the region of the existing cluster.
  - The --subscription-id value is the subscription ID of the existing cluster.
  - The --public-key-file value is the file that contains the service account public signing key for the cluster.
- Verify that the configuration file for the Azure pod identity webhook was created by running the following command:

  $ ll ./output_dir/manifests

  Example output

  total 8
  -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml
  -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml

  The file azure-ad-pod-identity-webhook-config.yaml contains the Azure pod identity webhook configuration.
- Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command:

  $ OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print $2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`

- Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command:

  $ oc patch authentication cluster \
      --type=merge \
      -p "{\"spec\":{\"serviceAccountIssuer\":\"${OIDC_ISSUER_URL}\"}}"

- Monitor the configuration update progress by running the following command:

  $ oc adm wait-for-stable-cluster

  This process might take 15 minutes or longer. The following output indicates that the process is complete:

  All clusteroperators are stable

- Restart all of the pods in the cluster by running the following command:

  $ oc adm reboot-machine-config-pool mcp/worker mcp/master

  Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

- Monitor the restart and update process by running the following command:

  $ oc adm wait-for-node-reboot nodes --all

  This process might take 15 minutes or longer. The following output indicates that the process is complete:

  All nodes rebooted

- Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command:

  $ oc patch cloudcredential cluster \
      --type=merge \
      --patch '{"spec":{"credentialsMode":"Manual"}}'

- Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

  $ oc adm release extract \
      --credentials-requests \
      --included \
      --to <path_to_directory_for_credentials_requests> \
      --registry-config ~/.pull-secret

  Note: This command might take a few moments to run.
- Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command:

  $ AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`

- Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command:

  $ ccoctl azure create-managed-identities \
      --name <azure_infra_name> \
      --output-dir ./output_dir \
      --region <azure_region> \
      --subscription-id <azure_subscription_id> \
      --credentials-requests-dir <path_to_directory_for_credentials_requests> \
      --issuer-url "${OIDC_ISSUER_URL}" \
      --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \
      --installation-resource-group-name "${AZURE_INSTALL_RG}" \
      --network-resource-group-name <azure_resource_group>

  Note: The preceding command does not show all available options. For a complete list of options, including those that might be necessary for your specific use case, run ccoctl azure create-managed-identities --help.

- Apply the Azure pod identity webhook configuration for Workload ID by running the following command:

  $ oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml

- Apply the secrets generated by the ccoctl utility by running the following command:

  $ find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {}

  This process might take several minutes.
- Restart all of the pods in the cluster by running the following command:

  $ oc adm reboot-machine-config-pool mcp/worker mcp/master

  Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

- Monitor the restart and update process by running the following command:

  $ oc adm wait-for-node-reboot nodes --all

  This process might take 15 minutes or longer. The following output indicates that the process is complete:

  All nodes rebooted

- Monitor the configuration update progress by running the following command:

  $ oc adm wait-for-stable-cluster

  This process might take 15 minutes or longer. The following output indicates that the process is complete:

  All clusteroperators are stable

- Optional: Remove the Azure root credentials secret by running the following command:

  $ oc delete secret -n kube-system azure-credentials
10.2.3. Enabling AWS Security Token Service (STS) on an existing cluster
If you did not configure your Amazon Web Services (AWS) OpenShift Container Platform cluster to use Security Token Service (STS) during installation, you can enable this authentication method on an existing cluster.
The process to enable STS on an existing cluster is disruptive and takes a significant amount of time. Before proceeding, observe the following considerations:
- Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour.
- During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready.
- Do not update the cluster until this process is complete.
Prerequisites
- You have installed an OpenShift Container Platform cluster on AWS.
- You have access to the cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have extracted and prepared the Cloud Credential Operator utility (ccoctl) binary.
- You have access to your AWS account by using the AWS CLI (aws).
Procedure
- Create an output directory for ccoctl generated manifests by running the following command:

  $ mkdir ./output_dir

- Create the AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider:

  - Extract the service account public signing key for the cluster by running the following command:

    $ oc get secret/next-bound-service-account-signing-key \
        -n openshift-kube-apiserver-operator \
        -ojsonpath='{ .data.service-account\.pub }' | base64 -d \
        > output_dir/serviceaccount-signer.public

    This procedure uses a file named serviceaccount-signer.public as an example.

  - Create the AWS IAM identity provider and S3 bucket by running the following command:

    $ ./ccoctl aws create-identity-provider \
        --output-dir output_dir \
        --name <name_you_choose> \
        --region us-east-2 \
        --public-key-file output_dir/serviceaccount-signer.public

  - Save or note the Amazon Resource Name (ARN) for the IAM identity provider. You can find this information in the final line of the output of the previous command.
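If you prefer to capture the ARN in a variable for the later create-iam-roles step, a last-line extraction pattern like the following works. The output text below is canned and only approximates what ccoctl prints, so adjust the sed expression to match the actual output:

```shell
# Canned stand-in for the create-identity-provider output; the real command
# prints the identity provider ARN on its final line.
sample_output='Saving identity provider files...
Identity Provider created with ARN: arn:aws:iam::123456789012:oidc-provider/demo-oidc.s3.us-east-2.amazonaws.com'
# Take the last line and strip everything up to the ARN itself.
arn=$(printf '%s\n' "$sample_output" | tail -n 1 | sed 's/.*ARN: //')
echo "$arn"
```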
- Update the cluster authentication configuration:

  - Extract the OIDC issuer URL and update the authentication configuration of the cluster by running the following commands:

    $ OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print $2 }' output_dir/manifests/cluster-authentication-02-config.yaml`
    $ oc patch authentication cluster --type=merge -p "{\"spec\":{\"serviceAccountIssuer\":\"${OIDC_ISSUER_URL}\"}}"

  - Monitor the configuration update progress by running the following command:

    $ oc adm wait-for-stable-cluster

    This process might take 15 minutes or longer. The following output indicates that the process is complete:

    All clusteroperators are stable
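The awk one-liner simply prints the second field of the serviceAccountIssuer line. You can see what it extracts by running it against a small sample of the generated cluster-authentication-02-config.yaml; the issuer URL below is a placeholder, not a real endpoint:

```shell
# Minimal sample of the generated authentication manifest.
cat > /tmp/cluster-auth-sample.yaml <<'EOF'
apiVersion: config.openshift.io/v1
kind: Authentication
spec:
  serviceAccountIssuer: https://demo-oidc.s3.us-east-2.amazonaws.com
EOF
# Same extraction the procedure uses: match the line, print the value field.
OIDC_ISSUER_URL=$(awk '/serviceAccountIssuer/ { print $2 }' /tmp/cluster-auth-sample.yaml)
echo "$OIDC_ISSUER_URL"
```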
- Restart pods to apply the issuer update:

  - Restart all of the pods in the cluster by running the following command:

    $ oc adm reboot-machine-config-pool mcp/worker mcp/master

    Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key.

  - Monitor the restart and update process by running the following command:

    $ oc adm wait-for-node-reboot nodes --all

    This process might take 15 minutes or longer. The following output indicates that the process is complete:

    All nodes rebooted
- Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command:

  $ oc patch cloudcredential cluster \
      --type=merge \
      --patch '{"spec":{"credentialsMode":"Manual"}}'

- Extract CredentialsRequests objects:

  - Create a CLUSTER_VERSION environment variable by running the following command:

    $ CLUSTER_VERSION=$(oc get clusterversion version -o json | jq -r '.status.desired.version')

  - Create a CLUSTER_IMAGE environment variable by running the following command:

    $ CLUSTER_IMAGE=$(oc get clusterversion version -o json | jq -r ".status.history[] | select(.version == \"${CLUSTER_VERSION}\") | .image")

  - Extract CredentialsRequests objects from the release image by running the following command:

    $ oc adm release extract \
        --credentials-requests \
        --cloud=aws \
        --from ${CLUSTER_IMAGE} \
        --to output_dir/cred-reqs
- Create AWS IAM roles and apply secrets:

  - Create an IAM role for each CredentialsRequests object by running the following command:

    $ ./ccoctl aws create-iam-roles \
        --output-dir ./output_dir/ \
        --name <name_you_choose> \
        --identity-provider-arn <identity_provider_arn> \
        --region us-east-2 \
        --credentials-requests-dir ./output_dir/cred-reqs/

    where:
    - --output-dir specifies the output directory you created earlier.
    - --name specifies a globally unique name. This name functions as a prefix for AWS resources created by this command.
    - --identity-provider-arn specifies the ARN for the IAM identity provider.
    - --region specifies the AWS region of the cluster.
    - --credentials-requests-dir specifies the relative path to the folder where you extracted the CredentialsRequest files with the oc adm release extract command.

  - Apply the generated secrets by running the following command:

    $ find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {}
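The find/xargs invocation pairs -print0 with -0 so that null-delimited file names survive spaces and other special characters. The sketch below exercises the same pattern against throwaway files, with echo standing in for oc replace:

```shell
# Build a scratch manifest directory with two files that match the
# -iname filter and one that it should skip.
demo_dir=$(mktemp -d)
touch "$demo_dir/openshift-demo-a.yaml" "$demo_dir/openshift-demo-b.yaml" "$demo_dir/skip-me.txt"
# Same null-delimited pattern as the procedure; echo replaces `oc replace -f`.
applied=$(find "$demo_dir" -iname "openshift*yaml" -print0 | xargs -0 -I {} echo "would apply: {}" | sort)
printf '%s\n' "$applied"
rm -rf "$demo_dir"
```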
- Finish the configuration process by restarting the cluster:

  - Restart all of the pods in the cluster by running the following command:

    $ oc adm reboot-machine-config-pool mcp/worker mcp/master

  - Monitor the restart and update process by running the following command:

    $ oc adm wait-for-node-reboot nodes --all

    This process might take 15 minutes or longer. The following output indicates that the process is complete:

    All nodes rebooted

  - Monitor the configuration update progress by running the following command:

    $ oc adm wait-for-stable-cluster

    This process might take 15 minutes or longer. The following output indicates that the process is complete:

    All clusteroperators are stable

- Optional: Remove the AWS root credentials secret by running the following command:

  $ oc delete secret -n kube-system aws-creds
10.2.4. Verifying that a cluster uses short-term credentials
You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster.
Prerequisites
- You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility (ccoctl) to implement short-term credentials.
- You installed the OpenShift CLI (oc).
- You are logged in as a user with cluster-admin privileges.
Procedure
- Verify that the CCO is configured to operate in manual mode by running the following command:

  $ oc get cloudcredentials cluster \
      -o=jsonpath={.spec.credentialsMode}

  The following output confirms that the CCO is operating in manual mode:

  Example output

  Manual

- Verify that the cluster does not have root credentials by running the following command:

  $ oc get secrets \
      -n kube-system <secret_name>

  where <secret_name> is the name of the root secret for your cloud provider:

  Platform                     Secret name
  Amazon Web Services (AWS)    aws-creds
  Microsoft Azure              azure-credentials
  Google Cloud                 gcp-credentials

  An error confirms that the root secret is not present on the cluster.

  Example output for an AWS cluster

  Error from server (NotFound): secrets "aws-creds" not found

- Verify that the components are using short-term security credentials for individual components by running the following command:
  $ oc get authentication cluster \
      -o jsonpath \
      --template='{ .spec.serviceAccountIssuer }'

  This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster.

- Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command:

  $ oc get secrets \
      -n openshift-image-registry installer-cloud-credentials \
      -o jsonpath='{.data}'

  An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID.

- Azure clusters: Verify that the pod identity webhook is running by running the following command:
  $ oc get pods \
      -n openshift-cloud-credential-operator

  Example output

  NAME                                         READY   STATUS    RESTARTS   AGE
  cloud-credential-operator-59cf744f78-r8pbq   2/2     Running   2          71m
  pod-identity-webhook-548f977b4c-859lz        1/1     Running   1          70m