1.3. Known issues


Review the known issues for Red Hat Advanced Cluster Management for Kubernetes. The following list contains known issues for this release, or known issues that continued from the previous release.

1.3.1. Installation known issues

1.3.1.1. OpenShift Container Platform upgrade failed status

When an OpenShift Container Platform cluster is in the upgrade stage, the cluster pods are restarted and the cluster might remain in upgrade failed status for 1 to 5 minutes. This behavior is expected and resolves after a few minutes.

1.3.1.2. Certificate manager must not exist on a cluster

Certificate manager must not exist on a cluster when you install Red Hat Advanced Cluster Management for Kubernetes. If certificate manager already exists on the cluster, the installation fails.

To resolve this issue, verify whether certificate manager is present in your cluster by running the following command:

kubectl get crd | grep certificates.certmanager
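For example, a minimal pre-install gate might wrap this check. The helper below is a hypothetical sketch, not part of the product; it reads a CRD listing on standard input so you can pipe the kubectl get crd output into it:

```shell
# Hypothetical pre-install check: reads a CRD listing (for example, the
# output of `kubectl get crd`) on stdin and reports whether cert-manager
# CRDs are present.
check_certmanager() {
  if grep -q 'certificates\.certmanager'; then
    echo present
  else
    echo absent
  fi
}

# Usage sketch: kubectl get crd | check_certmanager
```

If the helper prints present, remove certificate manager before you install the product.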

1.3.2. Web console known issues

1.3.2.1. LDAP user names are case-sensitive

LDAP user names are case-sensitive. You must use the name exactly the way it is configured in your LDAP directory.

1.3.2.2. Supported browsers

The product supports Mozilla Firefox 74.0 or the latest version that is available for Linux, macOS, and Windows. Upgrade to the latest version for the best console compatibility.

1.3.2.3. Unable to search using values with empty spaces

From the console and Visual Web Terminal, users are unable to search for values that contain an empty space.

1.3.2.4. Logging out as kubeadmin opens an extra browser tab

When you are logged in as kubeadmin and you click the Log out option in the drop-down menu, the console returns to the login screen, but a browser tab opens with a /logout URL. The page is blank and you can close the tab without impact to your console.

1.3.3. Cluster management known issues

1.3.3.1. Console might show inconsistent data after a cluster is imported

After a cluster is imported, log in to the imported cluster and make sure all pods that are deployed by the Klusterlet are running. Otherwise, you might see inconsistent data in the console.

For example, if a policy controller is not running, the violation counts on the Governance and risk page and in the cluster status might not match.

For instance, you might see 0 violations listed in the Overview status, but you might have 12 violations reported on the Governance and risk page.

In this case, inconsistency between the pages represents a disconnection between the policy-controller-addon on managed clusters and the policy controller on the hub cluster. Additionally, the managed cluster might not have enough resources to run all the Klusterlet components.

As a result, the policy might not be propagated to the managed cluster, or the violation might not be reported back from the managed cluster.

1.3.3.2. Importing clusters might require two attempts

When you import a cluster that was previously managed and detached by a Red Hat Advanced Cluster Management hub cluster, the import process might fail the first time, and the cluster status remains pending import. Run the import command again; the second attempt should be successful.

1.3.3.3. Klusterlet runs on a detached cluster

If you detach an online cluster immediately after it was attached, the Klusterlet starts to run on the detached cluster before the manifestwork syncs. Removal of the managed cluster from the hub cluster does not uninstall the Klusterlet. Complete the following steps to fix the issue:

  1. Download the cleanup-managed-cluster script from the deploy Git repository.
  2. Run the cleanup-managed-cluster.sh script by entering the following command:

    ./cleanup-managed-cluster.sh

1.3.3.4. Cannot import IBM OpenShift Kubernetes Service version 3.11 clusters

You cannot import IBM Red Hat OpenShift Kubernetes Service version 3.11 clusters. Later versions of IBM OpenShift Kubernetes Service are supported.

1.3.3.5. Namespace remains after detaching an OpenShift Container Platform 3.11 cluster

When you detach managed clusters on OpenShift Container Platform 3.11, the open-cluster-management-agent namespace is not automatically deleted. Manually remove the namespace by running the following command:

oc delete ns open-cluster-management-agent

1.3.3.6. Provisioned cluster secret is not updated when the cloud provider access key changes

When you change your cloud provider access key, the access key for the provisioned cluster is not updated in the namespace. Run the following command for your cloud provider to update the access key:

  • Amazon Web Services (AWS)

    oc patch secret {CLUSTER-NAME}-aws-creds -n {CLUSTER-NAME} --type json -p='[{"op": "add", "path": "/stringData", "value":{"aws_access_key_id": "{YOUR-NEW-ACCESS-KEY-ID}","aws_secret_access_key":"{YOUR-NEW-aws_secret_access_key}"} }]'
  • Google Cloud Platform (GCP)

    oc set data secret/{CLUSTER-NAME}-gcp-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.gcp/osServiceAccount.json
  • Microsoft Azure

    oc set data secret/{CLUSTER-NAME}-azure-creds -n {CLUSTER-NAME} --from-file=osServiceAccount.json=$HOME/.azure/osServiceAccount.json
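Because the AWS patch payload is easy to misquote, it can help to assemble the JSON in one place. The following helper is a hypothetical sketch, not part of the product; the key names match the {CLUSTER-NAME}-aws-creds secret shown above:

```shell
# Hypothetical helper: assembles the JSON patch used in the AWS command
# above, so the quoting only has to be written once.
aws_creds_patch() {
  printf '[{"op": "add", "path": "/stringData", "value":{"aws_access_key_id": "%s","aws_secret_access_key":"%s"} }]' "$1" "$2"
}

# Usage sketch:
# oc patch secret {CLUSTER-NAME}-aws-creds -n {CLUSTER-NAME} --type json -p="$(aws_creds_patch "$NEW_ID" "$NEW_KEY")"
```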

1.3.3.7. Resources remain after detaching an offline managed cluster

When you detach a managed cluster that is in an offline state, some resources cannot be removed from the managed cluster. Complete the following steps to remove the additional resources:

  1. Make sure you have the oc command line interface configured.
  2. Make sure you have KUBECONFIG configured on your managed cluster.

    If you run oc get ns | grep open-cluster-management-agent you should see two namespaces:

    open-cluster-management-agent         Active   10m
    open-cluster-management-agent-addon   Active   10m
  3. Download the cleanup-managed-cluster script from the deploy Git repository.
  4. Run the cleanup-managed-cluster.sh script by entering the following command:

    ./cleanup-managed-cluster.sh
  5. Run the following command to ensure that both namespaces are removed:

    oc get ns | grep open-cluster-management-agent

1.3.3.8. Cannot run management ingress as non-root user

You must be logged in as root to run the management-ingress service.

1.3.4. Application management known issues

1.3.4.1. YAML manifest cannot create multiple resources

The managedclusteraction does not support multiple resources. You cannot apply a YAML manifest that contains multiple resources by using the create resources feature in the console.

1.3.4.2. Pipeline card count might differ from search results

Search results for your pipeline return an accurate number of resources, but that number might differ from the count on the pipeline card because the card does not display resources that are not yet used by an application.

For instance, after you search for kind:channel, you might see that you have 10 channels, but the pipeline card on the console might represent only the 5 channels that are used.

1.3.4.3. Namespace channel subscription remains in FAILED state

When you subscribe to a namespace channel and the subscription remains in FAILED state after you fix other associated resources, such as the channel, secret, ConfigMap, or placement rule, the namespace subscription is not continuously reconciled.

To force the subscription to reconcile again and get out of the FAILED state, complete the following steps:

  1. Log in to your hub cluster.
  2. Manually add a label to the subscription by running the following command:

    oc label subscriptions.apps.open-cluster-management.io the_subscription_name reconcile=true

1.3.4.4. Deployable resources in a namespace channel

You need to manually create deployable resources within the channel namespace.

To create deployable resources correctly, add the following two required labels to each deployable so that the subscription controller can identify which deployable resources to add:

labels:
    apps.open-cluster-management.io/channel: <channel name>
    apps.open-cluster-management.io/channel-type: Namespace

Do not specify a template namespace in the spec.template.metadata.namespace field of each deployable.

For the namespace type channel and subscription, all the deployable templates are deployed to the subscription namespace on managed clusters. As a result, those deployable templates that are defined outside of the subscription namespace are skipped.
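For reference, a minimal deployable that carries the required labels might resemble the following sketch. The channel name ch-nginx and the ConfigMap content are illustrative placeholders:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Deployable
metadata:
  name: example-configmap
  namespace: ch-nginx    # the channel namespace
  labels:
    apps.open-cluster-management.io/channel: ch-nginx
    apps.open-cluster-management.io/channel-type: Namespace
spec:
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-configmap    # no namespace in the template metadata
    data:
      key: value
```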

See Creating and managing channels for more information.

1.3.4.5. Edit role for application error

A user with the Editor role should only have read or update authority on an application, but the editor can erroneously also create and delete an application. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, complete the following procedure:

  1. Run oc edit clusterrole applications.app.k8s.io-v1beta1-edit -o yaml to open the application edit cluster role.
  2. Remove create and delete from the verbs list.
  3. Save the change.
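After the edit, the rule for applications in the cluster role might resemble the following excerpt; this is an illustrative sketch, and the exact verb list in your cluster might differ:

```yaml
rules:
- apiGroups:
  - app.k8s.io
  resources:
  - applications
  verbs:       # create and delete removed
  - get
  - list
  - watch
  - update
  - patch
```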

1.3.4.6. Edit role for placement rule error

A user with the Editor role should only have read or update authority on a placement rule, but the editor can erroneously also create and delete placement rules. Red Hat OpenShift Operator Lifecycle Manager default settings change the setting for the product. To work around the issue, complete the following procedure:

  1. Run oc edit clusterrole placementrules.apps.open-cluster-management.io-v1-edit to open the placement rule edit cluster role.
  2. Remove create and delete from the verbs list.
  3. Save the change.

1.3.4.7. Applications are not deployed after an update to a placement rule

If applications are not deploying after an update to a placement rule, verify that the klusterlet-addon-appmgr pod is running. klusterlet-addon-appmgr is the subscription container that needs to run on endpoint clusters.

You can run oc get pods -n open-cluster-management-agent-addon to verify.

You can also search for kind:pod cluster:yourcluster in the console and see if the klusterlet-addon-appmgr is running.

If you cannot verify that the pod is running, attempt to import the cluster again and verify again.

1.3.4.8. Subscription operator does not create an SCC

The subscription operator does not create a security context constraint (SCC) automatically; this is an additional configuration that is required on the managed cluster. Learn about OpenShift Container Platform SCCs at Managing Security Context Constraints (SCC).

Different deployments have different security contexts and different service accounts. Because the subscription operator cannot create an SCC automatically, administrators control permissions for pods. A SecurityContextConstraints custom resource is required to enable appropriate permissions for the relevant service accounts to create pods in the non-default namespace:

To manually create an SCC CR in your namespace, complete the following:

  1. Find the service account that is defined in the deployments. For example, see the following nginx deployments:

     nginx-ingress-52edb
     nginx-ingress-52edb-backend
  2. Create an SCC CR in your namespace to assign the required permissions to the service account or accounts. See the following example where kind: SecurityContextConstraints is added:

     apiVersion: security.openshift.io/v1
     defaultAddCapabilities:
     kind: SecurityContextConstraints
     metadata:
       name: ingress-nginx
       namespace: ns-sub-1
     priority: null
     readOnlyRootFilesystem: false
     requiredDropCapabilities:
     fsGroup:
       type: RunAsAny
     runAsUser:
       type: RunAsAny
     seLinuxContext:
       type: RunAsAny
     users:
     - system:serviceaccount:my-operator:nginx-ingress-52edb
     - system:serviceaccount:my-operator:nginx-ingress-52edb-backend

1.3.4.9. Application channels in unique namespaces

Creating more than one channel in the same namespace can cause errors with the hub cluster. For instance, namespace charts-v1 is used by the installer as a Helm type channel, so do not create any additional channels in charts-v1.

It is best practice to create each channel in a unique namespace. However, a Git channel can share a namespace with another type of channel including Git, Helm, Kubernetes Namespace, and Object store.
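As an illustration, a Git channel in its own namespace might resemble the following sketch. The names and the repository URL are placeholders, and the spec.type value might be Git or GitHub, depending on your release:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-git-channel
  namespace: ch-git            # dedicated namespace for this channel
spec:
  type: Git                    # might be GitHub in some releases
  pathname: https://github.com/example/repo.git   # placeholder repository
```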

1.3.5. Security known issues

1.3.5.1. Internal error 500 during login to the console

When Red Hat Advanced Cluster Management for Kubernetes is installed and the OpenShift Container Platform is customized with a custom ingress certificate, a 500 Internal Error message appears. You are unable to access the console because the OpenShift Container Platform certificate is not included in the Red Hat Advanced Cluster Management for Kubernetes management ingress. Add the OpenShift Container Platform certificate by completing the following steps:

  1. Create a ConfigMap that includes the certificate authority used to sign the new certificate. Your ConfigMap must be identical to the one you created in the openshift-config namespace. Run the following command:

    oc create configmap custom-ca \
         --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
         -n open-cluster-management
  2. Edit your multiclusterhub YAML file by running the following command:

    oc edit multiclusterhub multiclusterhub
    1. Update the spec section by editing the parameter value for customCAConfigmap. The parameter might resemble the following content:

      customCAConfigmap: custom-ca
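In context, the edited resource might resemble the following excerpt; the metadata values reflect a default installation and might differ in your environment:

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  customCAConfigmap: custom-ca
```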

After you complete the steps, wait a few minutes for the changes to propagate to the charts and log in again. The OpenShift Container Platform certificate is added.

1.3.5.2. Cluster names are hidden in the policy violation details

All cluster violations from specific policies are listed in the policy detail panel. If a user does not have role access to a cluster, the cluster name is not visible and is displayed as the following symbol: -

1.3.5.3. Empty status in policies

The policies that are applied to the cluster are considered NonCompliant when clusters are not running. When you view violation details, the status parameter is empty.

1.3.5.4. Placement rule and policy binding empty

After creating or modifying a policy, the placement rule and the policy binding might be empty in the policy details of the Red Hat Advanced Cluster Management console. This is generally because the policy is disabled or because other updates were made to the policy. Ensure that the settings for the policy are correct in the YAML view.

1.3.5.5. Helm releases redeploy after cert-manager removal

If you remove the cert-manager and cert-manager-webhook Helm releases, the Helm releases are triggered to automatically redeploy the charts and generate a new certificate. The new certificate must be synced to the other Helm charts that create other Red Hat Advanced Cluster Management components. To recover the certificate components from the hub cluster, complete the following steps:

  1. Remove the helm release for cert-manager by running the following commands:

    oc delete helmrelease cert-manager-5ffd5
    oc delete helmrelease cert-manager-webhook-5ca82
  2. Verify that the helm release is recreated and the pods are running.
  3. Make sure the certificate is generated by running the following command:

    oc get certificates.certmanager.k8s.io

    You might receive the following response:

    NAME                                            READY   SECRET                                          AGE   EXPIRATION
    multicloud-ca-cert                              True    multicloud-ca-cert                              61m   2025-09-27T17:10:47Z
  4. Update the other components with this certificate by downloading and running the generate-update-issuer-cert-manifest.sh script.
  5. Verify that all of the secrets from oc get certificates.certmanager.k8s.io have the READY state True.
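To scan that output for certificates that are not ready, a small filter can help. The following is a hypothetical sketch, not part of the product; it reads the oc get certificates.certmanager.k8s.io output on standard input:

```shell
# Hypothetical filter: prints the names of certificates whose READY column
# is not True, given `oc get certificates.certmanager.k8s.io` output on stdin.
not_ready() {
  awk 'NR > 1 && $2 != "True" { print $1 }'
}

# Usage sketch: oc get certificates.certmanager.k8s.io | not_ready
```

No output means that all certificates report a READY state of True.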