Secure clusters


Red Hat Advanced Cluster Management for Kubernetes 2.14

Secure your clusters with role-based access and certificates.

Abstract

Ensure users have access to resources that are required to perform specific roles.

Chapter 1. Securing clusters

You might need to manually create and manage access control on your cluster. To do this, configure the authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM).

Use role-based access control and authentication to identify users, their associated roles, and cluster credentials. To create and manage your cluster credentials, access the Kubernetes secrets where the credentials are stored. See the following documentation for information about access and credentials:

Required access: Cluster administrator

1.1. Role-based access control

Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation.

Note: Action buttons are disabled in the console if the user's role does not permit the action.

1.1.1. Overview of roles

Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access control. The following table lists the role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes:

Table 1.1. Role definition table


cluster-admin

This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access.

open-cluster-management:cluster-manager-admin

A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a Red Hat Advanced Cluster Management for Kubernetes super user, who has all access. This role allows the user to create a ManagedCluster resource.

open-cluster-management:admin:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named <managed_cluster_name>. This role is automatically created when a user has a managed cluster.

open-cluster-management:view:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named <managed_cluster_name>.

open-cluster-management:managedclusterset:admin:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has administrator access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources that have the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. A role binding is automatically generated when you use a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource.

open-cluster-management:managedclusterset:view:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has view access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources that have the managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet.

open-cluster-management:subscription-admin

A user with the open-cluster-management:subscription-admin role can create Git subscriptions that deploy resources to multiple namespaces. The resources are specified in Kubernetes resource YAML files in the subscribed Git repository. Note: When a non-subscription-admin user creates a subscription, all resources are deployed into the subscription namespace regardless of specified namespaces in the resources. For more information, see the Application lifecycle RBAC section.

admin, edit, view

Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide.

open-cluster-management:managedclusterset:bind:<managed_clusterset_name>

A user with the open-cluster-management:managedclusterset:bind:<managed_clusterset_name> role has view access to the managed cluster set resource named <managed_clusterset_name>. The user can bind <managed_clusterset_name> to a namespace. The user also has view access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources that have the following managed cluster set label: cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. See Creating a ManagedClusterSet to learn how to manage the resource.

Important:

  • Any user can create projects from OpenShift Container Platform, which grants the user administrator role permissions for that namespace.
  • If a user does not have role access to a cluster, the cluster name is not displayed. The cluster name might be displayed with the following symbol: -.

See Implementing role-based access control for more details.

1.2. Implementing role-based access control

Red Hat Advanced Cluster Management for Kubernetes RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions.

The multicluster engine operator is a prerequisite for, and provides the cluster lifecycle function of, Red Hat Advanced Cluster Management. To manage RBAC for clusters with the multicluster engine operator, use the RBAC guidance from the cluster lifecycle multicluster engine for Kubernetes operator Role-based access control documentation.

View the following sections for more information on RBAC for specific lifecycles for Red Hat Advanced Cluster Management:

1.2.1. Application lifecycle RBAC

When you create an application, the subscription namespace is created, and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. To apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating an allow and deny list as subscription administrator.
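Before you create a subscription, you can confirm that a user has access to the channel namespace. The following check is a sketch; the namespace and user names are placeholders:

    # Placeholders: replace <channel-namespace> and <username> with your values
    oc auth can-i get channels.apps.open-cluster-management.io -n <channel-namespace> --as=<username>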

View the following application lifecycle RBAC operations:

  • Create and administer applications on all managed clusters with a user named username. You must create a cluster role binding and bind it to username. Run the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>

    This role is a super user role, which has access to all resources and actions. You can create the namespace for the application and all application resources in that namespace with this role.

  • Create applications that deploy resources to multiple namespaces. You must create a cluster role binding to the open-cluster-management:subscription-admin cluster role and bind it to a user named username. Run the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>

  • Create and administer applications on the cluster-name managed cluster with the username user. You must create a cluster role binding to the open-cluster-management:admin:<cluster-name> cluster role and bind it to username by entering the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>

    This role has read and write access to all application resources on the cluster-name managed cluster. Repeat this step if access to other managed clusters is required.

  • Create a namespace role binding to the application namespace by using the admin role and bind it to username by entering the following command:

    oc create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=admin --user=<username>

    This role has read and write access to all application resources in the application namespace. Repeat this step if access to other applications is required or if the application deploys to multiple namespaces.

  • To view an application on a managed cluster named cluster-name with the user named username, create a cluster role binding to the open-cluster-management:view:<cluster-name> cluster role and bind it to username. Enter the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>

    This role has read access to all application resources on the cluster-name managed cluster. Repeat this step if access to other managed clusters is required.

  • Create a namespace role binding to the application namespace by using the view role and bind it to username. Enter the following command:

    oc create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=view --user=<username>

    This role has read access to all application resources in the application namespace. Repeat this step if access to other applications is required.
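After you create the bindings in this list, you can verify a user's effective permissions without signing in as that user. For example, the following sketch checks whether a user can create subscriptions in an application namespace; both names are placeholders:

    # Placeholders: replace <application-namespace> and <username> with your values
    oc auth can-i create subscriptions.apps.open-cluster-management.io -n <application-namespace> --as=<username>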

View the following console and API RBAC tables for Application lifecycle:

Table 1.2. Console RBAC table for application lifecycle

Resource     | Admin                        | Edit                         | View
Application  | create, read, update, delete | create, read, update, delete | read
Channel      | create, read, update, delete | create, read, update, delete | read
Subscription | create, read, update, delete | create, read, update, delete | read

Table 1.3. API RBAC table for application lifecycle

API | Admin | Edit | View
applications.app.k8s.io | create, read, update, delete | create, read, update, delete | read
channels.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
deployables.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
helmreleases.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
placements.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
placementrules.apps.open-cluster-management.io (Deprecated) | create, read, update, delete | create, read, update, delete | read
subscriptions.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
configmaps | create, read, update, delete | create, read, update, delete | read
secrets | create, read, update, delete | create, read, update, delete | read
namespaces | create, read, update, delete | create, read, update, delete | read

1.2.2. Governance lifecycle RBAC

To perform governance lifecycle operations, you need access to the namespace where the policy is created, along with access to the managed cluster where the policy is applied. The managed cluster must also be part of a ManagedClusterSet that is bound to the namespace. To continue to learn about ManagedClusterSet, see ManagedClusterSets Introduction.
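The binding between a ManagedClusterSet and a namespace is itself a resource that a cluster administrator creates. The following is a minimal sketch, assuming a cluster set named <managed_clusterset_name> and the rhacm-policies namespace that is used in the steps that follow:

    apiVersion: cluster.open-cluster-management.io/v1beta2
    kind: ManagedClusterSetBinding
    metadata:
      name: <managed_clusterset_name>  # the name must match spec.clusterSet
      namespace: rhacm-policies
    spec:
      clusterSet: <managed_clusterset_name>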

After you select a namespace, such as rhacm-policies, that has one or more bound ManagedClusterSets, and after you have access to create Placement objects in the namespace, view the following operations:

  • To create a ClusterRole named rhacm-edit-policy with edit access to Policy, PlacementBinding, PolicyAutomation, and PolicySet resources, run the following command:

    oc create clusterrole rhacm-edit-policy --resource=policies.policy.open-cluster-management.io,placementbindings.policy.open-cluster-management.io,policyautomations.policy.open-cluster-management.io,policysets.policy.open-cluster-management.io --verb=create,delete,get,list,patch,update,watch

  • To create a policy in the rhacm-policies namespace, create a namespace RoleBinding, such as rhacm-edit-policy, in the rhacm-policies namespace by using the ClusterRole that you created previously. Run the following command:

    oc create rolebinding rhacm-edit-policy -n rhacm-policies --clusterrole=rhacm-edit-policy --user=<username>

  • To view the policy status of a managed cluster, you need permission to view policies in the managed cluster namespace on the hub cluster. If you do not have view access, such as through the OpenShift view ClusterRole, create a ClusterRole, such as rhacm-view-policy, with view access to policies by running the following command:

    oc create clusterrole rhacm-view-policy --resource=policies.policy.open-cluster-management.io --verb=get,list,watch

  • To bind the new ClusterRole to the managed cluster namespace, run the following command to create a namespace RoleBinding:

    oc create rolebinding rhacm-view-policy -n <cluster-name> --clusterrole=rhacm-view-policy --user=<username>
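You can confirm the results of the previous bindings before a user signs in. The following checks are a sketch with placeholder names:

    # Placeholders: replace <username> and <cluster-name> with your values
    oc auth can-i create policies.policy.open-cluster-management.io -n rhacm-policies --as=<username>
    oc auth can-i list policies.policy.open-cluster-management.io -n <cluster-name> --as=<username>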

View the following console and API RBAC tables for governance lifecycle:

Table 1.4. Console RBAC table for governance lifecycle

Resource                    | Admin                        | Edit         | View
Policies                    | create, read, update, delete | read, update | read
PlacementBindings           | create, read, update, delete | read, update | read
Placements                  | create, read, update, delete | read, update | read
PlacementRules (deprecated) | create, read, update, delete | read, update | read
PolicyAutomations           | create, read, update, delete | read, update | read

Table 1.5. API RBAC table for governance lifecycle

API | Admin | Edit | View
policies.policy.open-cluster-management.io | create, read, update, delete | read, update | read
placementbindings.policy.open-cluster-management.io | create, read, update, delete | read, update | read
policyautomations.policy.open-cluster-management.io | create, read, update, delete | read, update | read

1.2.3. Observability RBAC

To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. View the following list of observability features:

  • Access managed cluster metrics.

    Users are denied access to managed cluster metrics if they are not assigned the view role for the managed cluster on the hub cluster. Run the following command to verify whether a user has the authority to create a ManagedClusterView resource in the managed cluster namespace:

    oc auth can-i create ManagedClusterView -n <managedClusterName> --as=<user>

    As a cluster administrator, create a create-managedclusterview role in the managed cluster namespace. Run the following command:

    oc create role create-managedclusterview --verb=create --resource=managedclusterviews -n <managedClusterName>

    Then apply and bind the role to a user by creating a role binding. Run the following command:

    oc create rolebinding user-create-managedclusterview-binding --role=create-managedclusterview --user=<user> -n <managedClusterName>

  • Search for resources.

    To verify whether a user has access to a resource type, use the following command:

    oc auth can-i list <resource-type> -n <namespace> --as=<rbac-user>

    Note: <resource-type> must be plural.

  • To view observability data in Grafana, you must have a RoleBinding resource in the same namespace of the managed cluster.

    View the following RoleBinding example:

    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: <replace-with-name-of-rolebinding>
      namespace: <replace-with-name-of-managedcluster-namespace>
    subjects:
      - kind: <replace with User|Group|ServiceAccount>
        apiGroup: rbac.authorization.k8s.io
        name: <replace with name of User|Group|ServiceAccount>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
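A minimal way to apply and confirm the binding, assuming a hypothetical file name of rolebinding.yaml and a User subject:

    # rolebinding.yaml is an example file name; the namespace and user are placeholders
    oc apply -f rolebinding.yaml
    oc auth can-i list pods -n <replace-with-name-of-managedcluster-namespace> --as=<user>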

See Role binding policy for more information. See Observability advanced configuration to configure observability.

To manage components of observability, view the following API RBAC table:

Table 1.6. API RBAC table for observability

API | Admin | Edit | View
multiclusterobservabilities.observability.open-cluster-management.io | create, read, update, delete | read, update | read
searchcustomizations.search.open-cluster-management.io | create, get, list, watch, update, delete, patch | - | -
policyreports.wgpolicyk8s.io | get, list, watch | get, list, watch | get, list, watch

1.3. Setting up fine-grained role-based access control from the console

Technology Preview: Red Hat Advanced Cluster Management for Kubernetes supports fine-grained role-based access control (RBAC). As a cluster administrator, you can manage and control permissions with the ClusterPermission resource, which controls permissions at the namespace level on managed clusters, as well as at the cluster level. Grant permissions to a virtual machine namespace within a cluster without granting permission to the entire managed cluster.

Learn how to set up fine-grained role-based access control (RBAC) and the ClusterPermission resource from the console.

Required access: Cluster administrator

To learn about OpenShift Container Platform default and virtualization roles and permissions, see Authorization in the OpenShift Container Platform documentation.

See Implementing role-based access control for more details about Red Hat Advanced Cluster Management role-based access.

Prerequisites

See the following requirements to begin using fine-grained role-based access control:

  • The search component in your MultiClusterHub custom resource spec.overrides.components field must be enabled to retrieve a list of managed cluster namespaces that can represent virtual machines that are used for access control.
  • You need virtual machines.

You can assign users to manage virtual machines with fine-grained role-based access control. Actions are disabled in the console if the user's role does not permit them. Slide the YAML option on to see the data that you enter populate the YAML editor.
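Both the search prerequisite and the fine-grained RBAC feature are toggled in the MultiClusterHub spec.overrides.components list. The following is a minimal sketch of the relevant fields; the resource name and namespace reflect a default installation and might differ in your environment:

    # Default name/namespace assumed; verify with: oc get mch -A
    apiVersion: operator.open-cluster-management.io/v1
    kind: MultiClusterHub
    metadata:
      name: multiclusterhub
      namespace: open-cluster-management
    spec:
      overrides:
        components:
          - name: search
            enabled: true
          - name: fine-grained-rbac-preview
            enabled: true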

You can grant access to the following roles for OpenShift Virtualization, which are extensions of the default roles:

  • kubevirt.io:view: only view resources
  • kubevirt.io:edit: modify resources
  • kubevirt.io:admin: view, modify, delete resources; grant permissions

Important: As an administrator, you must add either RoleBinding or ClusterRoleBinding resources for a valid ClusterPermission resource. You can also add both resources.

  1. Navigate to your MultiClusterHub custom resource to edit the resource and enable the feature.

    1. From the local-cluster view, click Operators > Installed Operators > Advanced Cluster Management for Kubernetes.
    2. Click the MultiClusterHub tab to edit the resource.
    3. Slide the YAML option on to see the data in the YAML editor.
    4. In your MultiClusterHub custom resource spec.overrides.components field, set the fine-grained-rbac-preview component to enabled: true in the YAML editor and save your changes. See the following example with fine-grained-rbac-preview enabled:

       - configOverrides: {}
         enabled: true
         name: fine-grained-rbac-preview

  2. Label your local-cluster with environment=virtualization.

    1. From the All Clusters view, click Infrastructure > Clusters.
    2. Find your local-cluster and click Actions to edit.
    3. Add the environment=virtualization label and save your changes.

  3. Change the policy-virt-clusterroles value for the remediationAction to enforce, which adds the kubevirt ClusterRoles to the hub cluster.

    1. Click Governance > Policies.
    2. Find the policy-virt-clusterroles policy and click Actions to change the remediationAction value to enforce. Important: Two remediationAction specifications are in the policy, but you only need to change the second remediationAction. This does not apply to the first template in the YAML file. See the following sample:

       remediationAction: enforce

    3. Slide the YAML option on to see the data in the YAML editor and save your changes.
  4. Create a ClusterRole resource and name your file.

    1. From the local-cluster view, click User Management > Roles > Create Role.
    2. Add the following ClusterRole resource information to the YAML editor:

       apiVersion: rbac.authorization.k8s.io/v1
       kind: ClusterRole
       metadata:
         name: <cluster-role-name> # 1
       rules:
         - apiGroups: ["clusterview.open-cluster-management.io"]
           resources: ["kubevirtprojects"]
           verbs: ["list"]
         - apiGroups: ["clusterview.open-cluster-management.io"]
           resources: ["managedclusters"]
           verbs: ["list","get","watch"]
         - apiGroups: ["cluster.open-cluster-management.io"]
           resources: ["managedclusters"]
           verbs: ["get"]
           resourceNames: ["<cluster-name-01>", "<cluster-name-02>", "<cluster-name-03>"] # 2

       1. You can use a different name, but you must continue to use that name when you create the ClusterRoleBinding resource.
       2. Add the names of the managed clusters that you want your User or Group to access. Each name must be unique.
  5. Create a ClusterRoleBinding resource.

    1. From the local-cluster view, click User Management > RoleBindings > Create bindings.
    2. Choose Cluster-wide role binding for the Binding type.
    3. Add a RoleBinding name that matches the name of the ClusterRole, which is the <cluster-role-name> that you chose previously.
    4. Add the matching role name.
    5. For the Subject, select User or Group, enter the User or Group name, and save your changes.

  6. Create a ClusterPermission resource to grant permissions at the namespace level.

    1. Click Access control > Create permission.
    2. In the Basic information window, add the cluster name and the User or Group that is granted permission.
    3. Choose the cluster for that permission.

  7. Add the Role bindings information, which sets permissions at the namespace level.

    1. Add virtual machine namespaces in the cluster.
    2. Add Users or Groups.
    3. Add roles, such as kubevirt.io:view, for fine-grained role-based access control. You can choose a combination of RoleBindings.

  8. Add the ClusterRoleBinding resource with the same information to set permissions at the cluster level.
  9. Review and click Create permission to create the ClusterPermission resource, as shown in the following example:

    apiVersion: rbac.open-cluster-management.io/v1alpha1
    kind: ClusterPermission
    metadata:
      name: <cluster-permission-name> # 1
      namespace: <cluster-name-01> # 2
    spec:
      roleBindings:
        - name: <role-binding-name> # 3
          namespace: <your-namespace> # 4
          roleRef:
            name: kubevirt.io:admin
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
          subjects:
            - kind: User # 5
              apiGroup: rbac.authorization.k8s.io
              name: <user-name> # 6

    1. Specify a name for the permissions.
    2. Add the cluster name, which is also a namespace for permissions.
    3. Specify a name for <role-binding-name>, which assigns roles to Users or Groups.
    4. Specify the namespace in the managed cluster to which the User or Group is granted access.
    5. Choose User or Group.
    6. Specify the User or Group name.
  10. Check for a Ready status in the console.
  11. You can click Edit permission to edit the Role bindings and Cluster role binding.
  12. Optional: Click Export YAML to use the resources for GitOps or in the terminal.
  13. You can delete the ClusterPermission resource when you are ready.
  14. Optional: If the observability service is enabled, create an additional RoleBinding resource on the hub cluster so that users can view virtual machine details in Grafana.

    1. From the local-cluster view, click User Management > RoleBindings > Create bindings.
    2. Choose Namespace role binding for the Binding type.
    3. Specify a name for the RoleBinding, which assigns roles to Users or Groups.
    4. Add the cluster name, which is also the namespace on the hub cluster.
    5. Choose view for the Role name.
    6. For the Subject, select User or Group, enter the User or Group name, and save your changes.

1.4. Setting up fine-grained role-based access control from the terminal

Technology Preview: Red Hat Advanced Cluster Management for Kubernetes supports fine-grained role-based access control (RBAC). As a cluster administrator, you can manage and control permissions with the ClusterPermission resource, which controls permissions at the namespace level on managed clusters, as well as at the cluster level. Grant permissions to a virtual machine namespace within a cluster without granting permission to the entire managed cluster.

Learn how to set up fine-grained role-based access control (RBAC) and the ClusterPermission resource from the terminal.

Required access: Cluster administrator

To learn about OpenShift Container Platform default and virtualization roles and permissions, see Authorization in the OpenShift Container Platform documentation.

See Implementing role-based access control for more details about Red Hat Advanced Cluster Management role-based access.

Prerequisites

See the following requirements to begin using fine-grained role-based access control:

  • The search component in your MultiClusterHub custom resource spec.overrides.components field must be enabled to retrieve a list of managed cluster namespaces that can represent virtual machines that are used for access control.
  • You need virtual machines.

You can grant access to the following roles for OpenShift Virtualization, which are extensions of the default roles:

  • kubevirt.io:view: only view resources
  • kubevirt.io:edit: modify resources
  • kubevirt.io:admin: view, modify, delete resources; grant permissions

Complete the following steps:

  1. Enable fine-grained-rbac-preview in the MultiClusterHub resource.

    1. Run the following command:

      oc edit mch -n open-cluster-management multiclusterhub

    2. Edit the fine-grained-rbac-preview component to change enabled: false to enabled: true. See the following example with the feature enabled:

          - configOverrides: {}
            enabled: true
            name: fine-grained-rbac-preview

    Note: Run oc get mch -A to get the name and namespace of the MultiClusterHub resource if you do not use the open-cluster-management namespace.

  2. Label your local-cluster with environment=virtualization. Run the following command:

    oc label managedclusters local-cluster environment=virtualization

  3. Change the policy-virt-clusterroles remediationAction value to enforce, which adds the kubevirt ClusterRoles to the hub cluster.

    1. Run the following command to edit the policy:

      oc edit policy -n open-cluster-management-global-set policy-virt-clusterroles

    2. Edit the remediationAction value from inform to enforce. Important: Two remediationAction specifications are in the policy, but you only need to change the second remediationAction. This does not apply to the first template in the YAML file. See the following sample:

      remediationAction: enforce
  4. Create the following YAML file and name the file:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <cluster-role-name> # 1
    rules:
      - apiGroups: ["clusterview.open-cluster-management.io"]
        resources: ["kubevirtprojects"]
        verbs: ["list"]
      - apiGroups: ["clusterview.open-cluster-management.io"]
        resources: ["managedclusters"]
        verbs: ["list","get","watch"]
      - apiGroups: ["cluster.open-cluster-management.io"]
        resources: ["managedclusters"]
        verbs: ["get"]
        resourceNames: ["<cluster-name-01>", "<cluster-name-02>", "<cluster-name-03>"] # 2
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: <role-binding-name> # 3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: <cluster-role-name> # 4
    subjects:
      - kind: User # 5
        apiGroup: rbac.authorization.k8s.io
        name: <user-name> # 6

    1. You can use any name, but you must continue to use that name.
    2. Add the names of the managed clusters that you want your User or Group to access. Each name must be unique.
    3. Specify a name for the ClusterRoleBinding, which assigns roles to Users or Groups.
    4. Ensure the name matches the ClusterRole name.
    5. Choose User or Group.
    6. Specify a User or Group name.
  5. Apply the ClusterRole and ClusterRoleBinding resources. Run the following command, and change the file name only if you changed it earlier in the process:

    oc apply -f <filename>.yml
  6. Create a YAML file for the ClusterPermission resource and name the file.
  7. Assign fine-grained role-based access from the ClusterPermission resource by specifying the cluster name, managed cluster namespace, and Users or Group name:

    apiVersion: rbac.open-cluster-management.io/v1alpha1
    kind: ClusterPermission
    metadata:
      name: <cluster-permission-name> # 1
      namespace: <cluster-name-01> # 2
    spec:
      roleBindings:
        - name: <role-binding-name> # 3
          namespace: <your-namespace> # 4
          roleRef:
            name: kubevirt.io:admin
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
          subjects:
            - kind: User # 5
              apiGroup: rbac.authorization.k8s.io
              name: <user-name> # 6

    1. Specify a name for the permissions.
    2. Add the cluster name, which is also a namespace for permissions.
    3. Specify a name for the role binding, which assigns roles to Users or Groups.
    4. Specify the namespace in the managed cluster to which the User or Group is granted access.
    5. Choose User or Group.
    6. Specify the User or Group name.
  8. Run the following command to apply the file, and change the name of the file if you changed it previously:

    oc apply -f <filename>.yml
  9. Optional: If observability is enabled, create an additional RoleBinding on the hub cluster so that users can view virtual machine details in Grafana.

    1. Create the RoleBinding resource for Grafana access. See the following sample YAML file:

      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: <role-binding-name> # 1
        namespace: <cluster-name-01> # 2
      subjects:
        - kind: User # 3
          apiGroup: rbac.authorization.k8s.io
          name: <user-name> # 4
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: view

      1. Specify a name for <role-binding-name>, which assigns roles to Users or Groups.
      2. Add the cluster name, which is also the namespace on the hub cluster.
      3. Choose User or Group.
      4. Choose the User or Group name.

    2. Apply the RoleBinding resource with the following command:

      oc apply -f <filename>.yml
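To confirm that the hub cluster processed the permissions, you can inspect the ClusterPermission resource in the managed cluster namespace. The following is a quick check, using the placeholder names from the previous steps:

    # Placeholders: use the names from your ClusterPermission resource
    oc get clusterpermission <cluster-permission-name> -n <cluster-name-01> -o yaml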

1.5. Certificates

All certificates that are required by services that run on Red Hat Advanced Cluster Management are created when you install Red Hat Advanced Cluster Management. View the following list of certificates, which are created and managed by the following components:

  • OpenShift Service Serving Certificates
  • Red Hat Advanced Cluster Management webhook controllers
  • Kubernetes Certificates API
  • OpenShift default ingress

Required access: Cluster administrator

Continue reading to learn more about certificate management:

Note: Users are responsible for certificate rotations and updates.

1.5.1. Observability certificates

The OpenShift Container Platform default ingress certificate is a type of hub cluster certificate. After the Red Hat Advanced Cluster Management installation, observability certificates are created and used by the observability components to provide mutual TLS on the traffic between the hub cluster and managed cluster. Access the observability namespaces to retrieve and implement the different observability certificates, depending on your needs.

  • The open-cluster-management-observability namespace has the following certificates:

    • observability-server-ca-certs: Has the CA certificate to sign server-side certificates
    • observability-client-ca-certs: Has the CA certificate to sign client-side certificates
    • observability-server-certs: Has the server certificate used by the observability-observatorium-api deployment
    • observability-grafana-certs: Has the client certificate used by the observability-rbac-query-proxy deployment
  • The open-cluster-management-addon-observability namespace has the following certificates on managed clusters:

    • observability-managed-cluster-certs: Has the same server CA certificate as observability-server-ca-certs in the hub server
    • observability-controller-open-cluster-management.io-observability-signer-client-cert: Has the client certificate used by the metrics-collector-deployment
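You can list these secrets directly to confirm that they exist. Run the first command on the hub cluster and the second command on a managed cluster:

    # First command runs on the hub cluster, second on a managed cluster
    oc get secrets -n open-cluster-management-observability
    oc get secrets -n open-cluster-management-addon-observability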

The CA certificates are valid for five years and other certificates are valid for one year. All observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed:

  • Non-CA certificates are renewed automatically when the remaining valid time is no more than 73 days. After the certificate is renewed, the pods in the related deployments restart automatically to use the renewed certificates.
  • CA certificates are renewed automatically when the remaining valid time is no more than one year. After the certificate is renewed, the old CA certificate is not deleted but coexists with the renewed one. Both old and renewed certificates are used by related deployments and continue to work. The old CA certificates are deleted when they expire.
  • When a certificate is renewed, the traffic between the hub cluster and managed cluster is not interrupted.
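To check how much validity remains on a certificate, you can decode the secret. The following sketch prints the expiration date of the observability server certificate:

    oc get secret observability-server-certs -n open-cluster-management-observability -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -enddate -noout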

1.5.2. Hub cluster certificates

View the following Red Hat Advanced Cluster Management hub cluster certificates table:

Table 1.7. Red Hat Advanced Cluster Management hub cluster certificates

Namespace | Secret name | Pod label
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-ca | app=multicluster-operators-channel
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-signed-ca | app=multicluster-operators-channel
open-cluster-management | multicluster-operators-application-svc-ca | app=multicluster-operators-application
open-cluster-management | multicluster-operators-application-svc-signed-ca | app=multicluster-operators-application
open-cluster-management-hub | registration-webhook-serving-cert, signer-secret | Not required

View the following table for a summarized list of the component pods that contain Red Hat Advanced Cluster Management managed certificates and the related secrets:

Table 1.8. Pods that contain Red Hat Advanced Cluster Management managed certificates

Namespace | Secret name (if applicable)
open-cluster-management-agent-addon | cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert
open-cluster-management-agent-addon | cluster-proxy-service-proxy-server-certificates

Use these Red Hat Advanced Cluster Management managed certificates to authenticate managed clusters within your hub cluster. These managed cluster certificates are managed and refreshed automatically. If you customize the hub cluster API server certificate, the managed cluster automatically updates its certificates.

1.5.3. Additional resources

1.6. Managing certificates

Continue reading for information about how to refresh, replace, rotate, and list certificates.

1.6.1. Refreshing a Red Hat Advanced Cluster Management managed certificate

You can refresh Red Hat Advanced Cluster Management managed certificates, which are certificates that are created and managed by Red Hat Advanced Cluster Management services.

Complete the following steps to refresh certificates managed by Red Hat Advanced Cluster Management:

  1. Delete the secret that is associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

    oc delete secret -n <namespace> <secret>

    Replace <namespace> and <secret> with the values that you want to use.

  2. Restart the services that are associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

    oc delete pod -n <namespace> -l <pod-label>

    Replace <namespace> and <pod-label> with the values from the Red Hat Advanced Cluster Management hub cluster certificates table.

    Note: If a pod-label is not specified, there is no service that must be restarted. The secret is recreated and used automatically.
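For example, to refresh the application channel webhook certificate by using the values from the hub cluster certificates table, run the following commands:

    oc delete secret -n open-cluster-management channels-apps-open-cluster-management-webhook-svc-ca
    oc delete pod -n open-cluster-management -l app=multicluster-operators-channel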

1.6.2. Replacing certificates for the alertmanager route

If you do not want to use the OpenShift Container Platform default ingress certificate, replace the observability alertmanager certificates by updating the alertmanager route. Complete the following steps:

  1. Examine the observability certificate with the following command:

    openssl x509 -noout -text -in ./observability.crt

  2. Change the common name (CN) on the certificate to alertmanager.
  3. Change the SAN in the csr.cnf configuration file with the host name for your alertmanager route, as shown in the sample that follows these steps.
  4. Create the following two secrets in the open-cluster-management-observability namespace. Run the following commands:

    oc -n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key

    oc -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key
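The following is a minimal sketch of the SAN portion of a csr.cnf OpenSSL configuration file; the host name is a placeholder that you replace with your alertmanager route host name:

    [ req_ext ]
    subjectAltName = @alt_names

    [ alt_names ]
    # Placeholder: replace with the host name of your alertmanager route
    DNS.1 = <alertmanager-route-hostname>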

1.6.3. Rotating the gatekeeper webhook certificate

Complete the following steps to rotate the gatekeeper webhook certificate:

  1. Edit the secret that contains the certificate with the following command:

    oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert
  2. Delete the following content in the data section: ca.crt, ca.key, tls.crt, and tls.key.
  3. Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command:

    oc delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager

The gatekeeper webhook certificate is rotated.

1.6.4. Verifying certificate rotation

Verify that your certificates are rotated using the following steps:

  1. Identify the secret that you want to check.
  2. Check the tls.crt key to verify that a certificate is available.
  3. Display the certificate information by using the following command:

    oc get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout

Replace <your-secret-name> with the name of the secret that you are verifying. If necessary, also update the namespace and JSON path.

  4. Check the Validity details in the output. View the following Validity example:

    Validity
                Not Before: Jul 13 15:17:50 2023 GMT (1)
                Not After : Jul 12 15:17:50 2024 GMT (2)

    1. The Not Before value is the date and time that you rotated your certificate.
    2. The Not After value is the date and time for the certificate expiration.

1.6.5. Listing hub cluster managed certificates

You can view a list of hub cluster managed certificates that internally use the OpenShift Service Serving Certificates service. Run the following command to list the certificates:

for ns in multicluster-engine open-cluster-management ; do echo "$ns:" ; oc get secret -n $ns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\.beta\\.openshift\\.io/expiry | grep -v '<none>' ; echo ""; done

For more information, see OpenShift Service Serving Certificates in the Additional resources section.

Note: If observability is enabled, there are additional namespaces where certificates are created.

1.6.6. Additional resources

1.7. Bringing your own observability Certificate Authority (CA) certificates

When you install Red Hat Advanced Cluster Management for Kubernetes, only Certificate Authority (CA) certificates for observability are provided by default. If you do not want to use the default observability CA certificates generated by Red Hat Advanced Cluster Management, you can choose to bring your own observability CA certificates before you enable observability.

Observability requires two CA certificates: one for the server side and one for the client side.

1.7.1. Generating CA certificates by using OpenSSL commands

  • Generate your CA RSA private keys with the following commands:

    openssl genrsa -out serverCAKey.pem 2048
    openssl genrsa -out clientCAKey.pem 2048

  • Generate the self-signed CA certificates by using the private keys. Run the following commands:

    openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem
    openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem
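You can confirm what you generated before you create the secrets. For example, the following commands print the subject and expiration date of each CA certificate:

    openssl x509 -in serverCACert.pem -noout -subject -enddate
    openssl x509 -in clientCACert.pem -noout -subject -enddate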

1.7.2. Creating the CA certificate secrets

Complete the following steps to create the secrets:

  1. Create the observability-server-ca-certs secret by using your certificate and private key. Run the following command:

    oc -n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem
  2. Create the observability-client-ca-certs secret by using your certificate and private key. Run the following command:

    oc -n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem

1.7.3. Replacing certificates for the rbac-query-proxy route

You can replace certificates for the rbac-query-proxy route. See Generating CA certificates by using OpenSSL commands to create certificates.

When you create a Certificate Signing Request (CSR) by using the csr.cnf file, update the DNS.1 field in the subjectAltName section to match the host name of the rbac-query-proxy route.

Complete the following steps:

  1. Retrieve the host name by running the following command:

    oc get route rbac-query-proxy -n open-cluster-management-observability -o jsonpath="{.spec.host}"

  2. Create a proxy-byo-ca secret by using the generated certificates. Run the following command:

    oc -n open-cluster-management-observability create secret tls proxy-byo-ca --cert ./ca.crt --key ./ca.key

  3. Run the following command to create a proxy-byo-cert secret by using the generated certificates:

    oc -n open-cluster-management-observability create secret tls proxy-byo-cert --cert ./ingress.crt --key ./ingress.key

1.7.4. Additional resources

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.