1.3. Known issues
Review the known issues for Red Hat Advanced Cluster Management for Kubernetes. The following list contains known issues for this release, or known issues that continued from the previous release.
1.3.1. Installation known issues
When an OpenShift Container Platform cluster is in the upgrade stage, the cluster pods are restarted and the cluster might remain in an upgrade failed status for one to five minutes. This behavior is expected and resolves after a few minutes.
Certificate manager must not exist on a cluster when you install Red Hat Advanced Cluster Management for Kubernetes. If certificate manager already exists on the cluster, the Red Hat Advanced Cluster Management for Kubernetes installation fails.
To resolve this issue, verify whether certificate manager is present in your cluster by running the following command:
kubectl get crd | grep certificates.certmanager
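A pre-install check along these lines can be scripted. The following is a minimal sketch, not part of the product; the check_no_certmanager helper name and its messages are assumptions, and the function takes the kubectl get crd output as an argument so it can be exercised without a cluster:

```shell
#!/bin/sh
# Sketch of a pre-install check (not part of the product): inspect the
# output of `kubectl get crd` for certificate manager CRDs.
check_no_certmanager() {
  # $1: output of `kubectl get crd`
  if printf '%s\n' "$1" | grep -q 'certificates.certmanager'; then
    echo "certificate manager CRDs found: remove certificate manager before installing"
    return 1
  fi
  echo "no certificate manager CRDs found"
  return 0
}

# On a live cluster you would run:
#   check_no_certmanager "$(kubectl get crd)"
```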
1.3.2. Web console known issues
1.3.2.1. LDAP user names are case-sensitive
LDAP user names are case-sensitive. You must use the name exactly the way it is configured in your LDAP directory.
1.3.2.2. Supported browser versions
The product supports Mozilla Firefox 74.0 or the latest version that is available for Linux, macOS, and Windows. Upgrade to the latest version for the best console compatibility.
1.3.2.3. Unable to search using values with empty spaces
From the console and Visual Web Terminal, users are unable to search for values that contain an empty space.
1.3.2.4. Logging out as kubeadmin opens an extra browser tab
When you are logged in as kubeadmin and you click the Log out option in the drop-down menu, the console returns to the login screen, but a browser tab opens with a /logout URL. The page is blank and you can close the tab without impact to your console.
1.3.3. Cluster management known issues
1.3.3.1. Imported cluster might display inconsistent data
After a cluster is imported, log in to the imported cluster and make sure all pods that are deployed by the Klusterlet are running. Otherwise, you might see inconsistent data in the console.
For example, if a policy controller is not running, the violation counts on the Governance and risk page and in the cluster status might not match. For instance, you might see 0 violations listed in the Overview status, but 12 violations reported on the Governance and risk page.
In this case, the inconsistency between the pages represents a disconnection between the policy-controller addon on the managed clusters and the policy controller on the hub cluster. Additionally, the managed cluster might not have enough resources to run all of the Klusterlet components. As a result, the policy is not propagated to the managed cluster, or the violation is not reported back from the managed cluster.
1.3.3.2. Importing clusters might require two attempts
When you import a cluster that was previously managed and detached by a Red Hat Advanced Cluster Management hub cluster, the import process might fail the first time. The cluster status is pending import. Run the command again, and the import should be successful.
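Because the second attempt usually succeeds, the import command can be wrapped in a small retry loop. A sketch under assumptions: run_import is a placeholder for whatever import command the console generated for you, not a real CLI:

```shell
#!/bin/sh
# Retry sketch for a flaky import. `run_import` in the usage note is a
# placeholder for your own import command; replace it before use.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i failed, retrying..." >&2
    i=$((i + 1))
  done
  return 1
}

# Usage on a live cluster:
#   retry 2 run_import
```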
1.3.3.3. Klusterlet runs on a detached cluster
If you detach an online cluster immediately after it was attached, the Klusterlet starts to run on the detached cluster before the manifestwork syncs. Removal of the managed cluster from the hub cluster does not uninstall the Klusterlet. Complete the following steps to fix the issue:
- Download the cleanup-managed-cluster script from the deploy Git repository.
- Run the cleanup-managed-cluster.sh script by entering the following command:

  ./cleanup-managed-cluster.sh
1.3.3.4. Cannot import IBM OpenShift Kubernetes Service version 3.11 clusters
You cannot import IBM Red Hat OpenShift Kubernetes Service version 3.11 clusters. Later versions of IBM OpenShift Kubernetes Service are supported.
1.3.3.5. Namespace is not deleted automatically on OpenShift Container Platform 3.11
When you detach managed clusters on OpenShift Container Platform 3.11, the open-cluster-management-agent namespace is not automatically deleted. Manually remove the namespace by running the following command:
oc delete ns open-cluster-management-agent
1.3.3.6. Provisioned cluster access key is not updated in the namespace
When you change your cloud provider access key, the provisioned cluster access key is not updated in the namespace. Run the following command for your cloud provider to update the access key:
Amazon Web Services (AWS)

oc patch secret {CLUSTER-NAME}-aws-creds -n {CLUSTER-NAME} --type json -p='[{"op": "add", "path": "/stringData", "value":{"aws_access_key_id": "{YOUR-NEW-ACCESS-KEY-ID}","aws_secret_access_key":"{YOUR-NEW-aws_secret_access_key}"} }]'

Google Cloud Platform (GCP)

oc set data secret/{CLUSTER-NAME}-gcp-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.gcp/osServiceAccount.json

Microsoft Azure

oc set data secret/{CLUSTER-NAME}-azure-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.azure/osServiceAccount.json
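To make the AWS payload easier to audit before patching, the JSON can be built by a helper first. This is a sketch; the aws_creds_patch function is an assumption, and only the payload shape comes from the documented command:

```shell
#!/bin/sh
# Sketch: build the JSON patch payload for the AWS credentials secret.
# The helper name is an assumption; the payload format mirrors the
# documented `oc patch` command above.
aws_creds_patch() {
  # $1: new access key ID, $2: new secret access key
  printf '[{"op": "add", "path": "/stringData", "value":{"aws_access_key_id": "%s","aws_secret_access_key":"%s"} }]' "$1" "$2"
}

# Usage on a live cluster (CLUSTER_NAME set to your cluster name):
#   oc patch secret "${CLUSTER_NAME}-aws-creds" -n "$CLUSTER_NAME" \
#     --type json -p="$(aws_creds_patch "$NEW_ID" "$NEW_KEY")"
```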
1.3.3.7. Resources remain after detaching an offline managed cluster
When you detach a managed cluster that is in an offline state, some resources cannot be removed from the managed cluster. Complete the following steps to remove the additional resources:
- Make sure you have the oc command line interface configured.
- Make sure you have KUBECONFIG configured on your managed cluster. If you run oc get ns | grep open-cluster-management-agent, you should see two namespaces:

  open-cluster-management-agent Active 10m
  open-cluster-management-agent-addon Active 10m

- Download the cleanup-managed-cluster script from the deploy Git repository.
- Run the cleanup-managed-cluster.sh script by entering the following command:

  ./cleanup-managed-cluster.sh

- Run the following command to ensure that both namespaces are removed:

  oc get ns | grep open-cluster-management-agent
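The final namespace check can be polled until the cleanup completes. A sketch; the agent_namespaces_gone helper is an assumption and takes the oc get ns output as an argument so it can be tested without a cluster:

```shell
#!/bin/sh
# Sketch: report whether the open-cluster-management-agent namespaces are
# still present in `oc get ns` output. Helper name is an assumption.
agent_namespaces_gone() {
  # $1: output of `oc get ns`
  if printf '%s\n' "$1" | grep -q 'open-cluster-management-agent'; then
    return 1
  fi
  return 0
}

# On a live cluster you might poll:
#   until agent_namespaces_gone "$(oc get ns)"; do sleep 5; done
```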
1.3.3.8. Cannot run management ingress as non-root user
You must be logged in as root to run the management-ingress service.
1.3.3.9. Node data from the managed cluster might not display in search results
Search maps RBAC for resources in the hub cluster. Depending on user RBAC settings for Red Hat Advanced Cluster Management, users might not see node data from the managed cluster. Results from search might be different from what is displayed on the Nodes page for a cluster.
1.3.4. Application management known issues
1.3.4.1. YAML manifest cannot create multiple resources
The managedclusteraction does not support multiple resources. You cannot apply a YAML manifest that contains multiple resources from the console create resources feature.
1.3.4.2. Pipeline card count might differ from search results
Search results for your pipeline return an accurate number of resources, but that number might be different from the number on the pipeline card because the card only displays resources that are used by an application.
For instance, after you search for kind:channel, you might see you have 10 channels, but the pipeline card on the console might represent only 5 channels that are used.
1.3.4.3. Namespace subscription remains in FAILED state
When you subscribe to a namespace channel and the subscription remains in the FAILED state after you fix other associated resources, such as the channel, secret, configmap, or placement rule, the namespace subscription is not continuously reconciled.
To force the subscription to reconcile again and get out of the FAILED state, complete the following steps:
- Log in to your hub cluster.
- Manually add a label to the subscription using the following command:
oc label subscriptions.apps.open-cluster-management.io the_subscription_name reconcile=true
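If several subscriptions are stuck, the label command can be generated for each one and reviewed before running. A sketch; the reconcile_cmds helper and the subscription names in the usage note are assumptions:

```shell
#!/bin/sh
# Sketch: print the `oc label` command for each stuck subscription so the
# commands can be reviewed before they are executed.
reconcile_cmds() {
  for sub in "$@"; do
    printf 'oc label subscriptions.apps.open-cluster-management.io %s reconcile=true\n' "$sub"
  done
}

# Review the output, then pipe it to a shell to execute:
#   reconcile_cmds sub-a sub-b | sh
```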
1.3.4.4. Deployable resources in a namespace channel
You need to manually create deployable resources within the channel namespace.
To create deployable resources correctly, add the following two required labels to your deployable so that the subscription controller can identify which deployable resources to add:
labels:
  apps.open-cluster-management.io/channel: <channel name>
  apps.open-cluster-management.io/channel-type: Namespace
Do not specify the template namespace in the spec.template.metadata.namespace field of each deployable.
For the namespace type channel and subscription, all the deployable templates are deployed to the subscription namespace on managed clusters. As a result, those deployable templates that are defined outside of the subscription namespace are skipped.
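Putting the labels together, a deployable for a namespace channel might resemble the following sketch. All names, the channel namespace, and the wrapped ConfigMap are placeholder assumptions:

```yaml
# Sketch of a deployable in a namespace channel; every name is a placeholder.
apiVersion: apps.open-cluster-management.io/v1
kind: Deployable
metadata:
  name: example-configmap-deployable
  namespace: mychannel-ns          # the channel namespace
  labels:
    apps.open-cluster-management.io/channel: mychannel
    apps.open-cluster-management.io/channel-type: Namespace
spec:
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-configmap      # no template namespace is specified
    data:
      key: value
```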
See Creating and managing channels for more information.
1.3.4.5. Edit role for application error
A user with the Editor role should only have read or update authority on an application, but the editor can erroneously also create and delete an application. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, complete the following procedure:
- Run oc edit clusterrole applications.app.k8s.io-v1beta1-edit -o yaml to open the application edit cluster role.
- Remove create and delete from the verbs list.
- Save the change.
1.3.4.6. Edit role for placement rule error
A user with the Editor role should only have read or update authority on a placement rule, but the editor can erroneously also create and delete placement rules. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, complete the following procedure:
- Run oc edit clusterrole placementrules.apps.open-cluster-management.io-v1-edit to open the placement rule edit cluster role.
- Remove create and delete from the verbs list.
- Save the change.
1.3.4.7. Applications are not deployed after a placement rule update
If applications are not deploying after an update to a placement rule, verify that the klusterlet-addon-appmgr pod is running. The klusterlet-addon-appmgr is the subscription container that needs to run on endpoint clusters.
You can run oc get pods -n open-cluster-management-agent-addon to verify.
You can also search for kind:pod cluster:yourcluster in the console and see if the klusterlet-addon-appmgr is running.
If you still cannot verify that the pod is running, try importing the cluster again and verify again.
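The pod check can also be scripted. A sketch; the appmgr_running helper is an assumption and parses the oc get pods output passed to it, so it can be tested without a cluster:

```shell
#!/bin/sh
# Sketch: check `oc get pods -n open-cluster-management-agent-addon` output
# for a running klusterlet-addon-appmgr pod. Helper name is an assumption.
appmgr_running() {
  # $1: output of `oc get pods -n open-cluster-management-agent-addon`
  printf '%s\n' "$1" | grep 'klusterlet-addon-appmgr' | grep -q 'Running'
}

# On a live cluster:
#   appmgr_running "$(oc get pods -n open-cluster-management-agent-addon)" \
#     && echo "subscription container is running"
```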
1.3.4.8. Subscription operator does not create an SCC
Learn about OpenShift Container Platform SCCs at Managing Security Context Constraints (SCC). An SCC is an additional configuration that is required on the managed cluster.
Different deployments have different security contexts and different service accounts. The subscription operator cannot create an SCC automatically, and administrators control permissions for pods. A Security Context Constraints (SCC) CR is required to enable appropriate permissions for the relative service accounts to create pods in a non-default namespace.
To manually create an SCC CR in your namespace, complete the following steps:
- Find the service account that is defined in your deployments. For example, see the following nginx deployments:

  nginx-ingress-52edb
  nginx-ingress-52edb-backend

- Create an SCC CR in your namespace to assign the required permissions to the service account or accounts. The SCC CR sets kind: SecurityContextConstraints and lists the settings that your deployment requires.
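For illustration, an SCC CR for the example nginx service accounts might resemble the following sketch. The namespace and every setting here are assumptions; adjust them to the permissions that your deployment actually needs:

```yaml
# Sketch of an SCC granting the nginx service accounts permission to run;
# the values are illustrative assumptions, not a recommended policy.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ingress-nginx-scc
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
  - system:serviceaccount:ingress-nginx:nginx-ingress-52edb
  - system:serviceaccount:ingress-nginx:nginx-ingress-52edb-backend
```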
1.3.4.9. Application channels in unique namespaces
Creating more than one channel in the same namespace can cause errors with the hub cluster. For instance, namespace charts-v1 is used by the installer as a Helm type channel, so do not create any additional channels in charts-v1.
It is best practice to create each channel in a unique namespace. However, a Git channel can share a namespace with another type of channel including Git, Helm, Kubernetes Namespace, and Object store.
1.3.5. Security known issues
1.3.5.1. Internal error 500 during login to the console
When Red Hat Advanced Cluster Management for Kubernetes is installed and the OpenShift Container Platform is customized with a custom ingress certificate, a 500 Internal Error message appears. You are unable to access the console because the OpenShift Container Platform certificate is not included in the Red Hat Advanced Cluster Management for Kubernetes management ingress. Add the OpenShift Container Platform certificate by completing the following steps:
- Create a ConfigMap that includes the certificate authority that was used to sign the new certificate. Your ConfigMap must be identical to the one you created in the openshift-config namespace. Run the following command:

  oc create configmap custom-ca \
    --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
    -n open-cluster-management

- Edit your multiclusterhub YAML file by running the following command:

  oc edit multiclusterhub multiclusterhub

- Update the spec section by editing the parameter value for customCAConfigmap. The parameter might resemble the following content:

  customCAConfigmap: custom-ca
After you complete the steps, wait a few minutes for the changes to propagate to the charts and log in again. The OpenShift Container Platform certificate is added.
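As a non-interactive alternative to oc edit in the last step, the same field can be set with a merge patch. A sketch; the custom_ca_patch helper is an assumption, while oc patch --type merge is standard oc behavior:

```shell
#!/bin/sh
# Sketch: build the merge-patch payload that sets customCAConfigmap.
# Wrapping it in a function keeps the JSON auditable before patching.
custom_ca_patch() {
  # $1: name of the ConfigMap that holds the certificate authority
  printf '{"spec":{"customCAConfigmap":"%s"}}' "$1"
}

# Usage on a live cluster:
#   oc patch multiclusterhub multiclusterhub -n open-cluster-management \
#     --type merge -p "$(custom_ca_patch custom-ca)"
```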
1.3.5.2. Cluster names are not visible in policy violations
All cluster violations from specific policies are listed in the policy detail panel. If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -
1.3.5.3. Empty status in policies
The policies that are applied to the cluster are considered NonCompliant when clusters are not running. When you view violation details, the status parameter is empty.
1.3.5.4. Placement rule and policy binding empty
After you create or modify a policy, the placement rule and the policy binding might be empty in the policy details of the Red Hat Advanced Cluster Management console. This generally occurs because the policy is disabled or because other updates were made to the policy. Ensure that the settings for the policy are correct in the YAML view.
1.3.5.5. Certificate recovery after removing cert-manager Helm releases
If you remove the cert-manager and cert-manager-webhook Helm releases, the Helm releases are triggered to automatically redeploy the charts and generate a new certificate. The new certificate must be synced to the other Helm charts that create other Red Hat Advanced Cluster Management components. To recover the certificate components from the hub cluster, complete the following steps:
- Remove the Helm release for cert-manager by running the following commands:

  oc delete helmrelease cert-manager-5ffd5
  oc delete helmrelease cert-manager-webhook-5ca82

- Verify that the Helm release is recreated and the pods are running.
- Make sure that the certificate is generated by running the following command:

  oc get certificates.certmanager.k8s.io

  You might receive the following response:

  NAME                 READY   SECRET               AGE   EXPIRATION
  multicloud-ca-cert   True    multicloud-ca-cert   61m   2025-09-27T17:10:47Z

- Update the other components with this certificate by downloading and running the generate-update-issuer-cert-manifest.sh script.
- Verify that all of the secrets from oc get certificates.certmanager.k8s.io have the ready state True.
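The last verification step can be scripted by scanning the READY column. A sketch; the all_certs_ready helper is an assumption and parses the command output passed to it, so it can be tested offline:

```shell
#!/bin/sh
# Sketch: return success only if every certificate row reports READY=True.
# Assumes the default tabular output with READY as the second column.
all_certs_ready() {
  # $1: output of `oc get certificates.certmanager.k8s.io`
  printf '%s\n' "$1" | awk 'NR > 1 && $2 != "True" { bad = 1 } END { exit bad }'
}

# On a live cluster:
#   all_certs_ready "$(oc get certificates.certmanager.k8s.io)" && echo OK
```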