Secure clusters
Abstract
Secure your clusters with role-based access and certificates.
Chapter 1. Securing clusters
You might need to manually create and manage access control on your cluster. To do this, configure the authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM).
Use role-based access control and authentication to identify users and their associated roles and cluster credentials. To create and manage your cluster credentials, access them in the Kubernetes secrets where they are stored. See the following documentation for information about access and credentials:
Required access: Cluster administrator
- Role-based access control
- Implementing role-based access control
- Implementing fine-grained role-based access control in the terminal (Technology Preview)
- Implementing fine-grained role-based access control in the console (Technology Preview)
- Certificates
- Managing certificates
- Bringing your own observability Certificate Authority (CA) certificates
1.1. Role-based access control
Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation.
Note: Action buttons are disabled in the console if your role does not permit the action.
1.1.1. Overview of roles
Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. The following table lists the role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes:
Role | Definition |
---|---|
cluster-admin | This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access. |
open-cluster-management:cluster-manager-admin | A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a Red Hat Advanced Cluster Management for Kubernetes super user, who has all access. |
open-cluster-management:admin:<cluster-name> | A user with cluster binding to the open-cluster-management:admin:<cluster-name> role has administrator access to the managed cluster named <cluster-name>. |
open-cluster-management:view:<cluster-name> | A user with cluster binding to the open-cluster-management:view:<cluster-name> role has view access to the managed cluster named <cluster-name>. |
open-cluster-management:subscription-admin | A user with the open-cluster-management:subscription-admin role can create and manage application subscriptions that deploy resources to multiple namespaces. |
admin, edit, view | Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace. |
Important:
- Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace.
- If a user does not have role access to a cluster, the cluster name is not displayed. The cluster name might be displayed with the following symbol: -.
See Implementing role-based access control for more details.
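As a sketch of the namespace-scoped bindings described above, the following manifest grants the OpenShift default view role to a user in a single namespace. The user alice and namespace my-app are placeholder names:

```yaml
# Namespace-scoped binding: grants the default "view" role to user "alice"
# only within the "my-app" namespace. Both names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-view
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
```

A cluster-wide binding uses the same roleRef and subjects fields on a ClusterRoleBinding resource, without a namespace.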
1.2. Implementing role-based access control
Red Hat Advanced Cluster Management for Kubernetes RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions.
The multicluster engine operator is a prerequisite for Red Hat Advanced Cluster Management and provides its cluster lifecycle function. To manage RBAC for clusters with the multicluster engine operator, use the RBAC guidance in the cluster lifecycle multicluster engine for Kubernetes operator Role-based access control documentation.
View the following sections for more information on RBAC for specific lifecycles for Red Hat Advanced Cluster Management:
1.2.1. Application lifecycle RBAC
When you create an application, the subscription namespace is created and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. When you want to apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating an allow and deny list as subscription administrator.
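To illustrate the subscription and channel namespaces that this paragraph refers to, the following is a minimal sketch of a Git channel and a subscription that references it. All names and the repository URL are placeholders, and only the fields relevant to the namespace layout are shown:

```yaml
# A channel in its own namespace...
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-channel
  namespace: channel-ns        # the channel namespace you need access to
spec:
  type: Git
  pathname: https://github.com/example/repo.git
---
# ...and a subscription in the subscription namespace that points at it.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-subscription
  namespace: subscription-ns   # the subscription namespace
spec:
  channel: channel-ns/sample-channel
```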
View the following application lifecycle RBAC operations:
- Create and administer applications on all managed clusters with a user named username. You must create a cluster role binding and bind it to username. Run the following command:

  oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>

  This role is a super user, which has access to all resources and actions. You can create the namespace for the application and all application resources in the namespace with this role.

- Create applications that deploy resources to multiple namespaces. You must create a cluster role binding to the open-cluster-management:subscription-admin cluster role, and bind it to a user named username. Run the following command:

  oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>

- Create and administer applications in the cluster-name managed cluster with the username user. You must create a cluster role binding to the open-cluster-management:admin:<cluster-name> cluster role and bind it to username by entering the following command:

  oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>

  This role has read and write access to all application resources on the managed cluster, cluster-name. Repeat this if access for other managed clusters is required.

- Create a namespace role binding to the application namespace using the admin role and bind it to username by entering the following command:

  oc create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=admin --user=<username>

  This role has read and write access to all application resources in the application namespace. Repeat this if access for other applications is required or if the application deploys to multiple namespaces.

- You can create applications that deploy resources to multiple namespaces. Create a cluster role binding to the open-cluster-management:subscription-admin cluster role and bind it to username by entering the following command:

  oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:subscription-admin --user=<username>

- To view an application on a managed cluster named cluster-name with the user named username, create a cluster role binding to the open-cluster-management:view:<cluster-name> cluster role and bind it to username. Enter the following command:

  oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>

  This role has read access to all application resources on the managed cluster, cluster-name. Repeat this if access for other managed clusters is required.

- Create a namespace role binding to the application namespace using the view role and bind it to username. Enter the following command:

  oc create rolebinding <role-binding-name> -n <application-namespace> --clusterrole=view --user=<username>

  This role has read access to all application resources in the application namespace. Repeat this if access for other applications is required.
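The oc create clusterrolebinding commands in the steps above can also be written as manifests, which is useful if you manage access with GitOps. A minimal sketch for the subscription administrator binding, with app-admin as a placeholder user name:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-admin-subscription-admin   # any unique binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: open-cluster-management:subscription-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: app-admin                      # placeholder user name
```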
1.2.1.1. Console and API RBAC table for application lifecycle
View the following console and API RBAC tables for Application lifecycle:
Resource | Admin | Edit | View |
---|---|---|---|
Application | create, read, update, delete | create, read, update, delete | read |
Channel | create, read, update, delete | create, read, update, delete | read |
Subscription | create, read, update, delete | create, read, update, delete | read |
API | Admin | Edit | View |
---|---|---|---|
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
| create, read, update, delete | create, read, update, delete | read |
1.2.2. Governance lifecycle RBAC
To perform governance lifecycle operations, you need access to the namespace where the policy is created, along with access to the managed cluster where the policy is applied. The managed cluster must also be part of a ManagedClusterSet that is bound to the namespace. To continue to learn about ManagedClusterSet, see ManagedClusterSets Introduction.

After you select a namespace, such as rhacm-policies, with one or more bound ManagedClusterSets, and after you have access to create Placement objects in the namespace, view the following operations:
- To create a ClusterRole named rhacm-edit-policy with Policy, PlacementBinding, PolicyAutomation, and PolicySet edit access, run the following command:

  oc create clusterrole rhacm-edit-policy --resource=policies.policy.open-cluster-management.io,placementbindings.policy.open-cluster-management.io,policyautomations.policy.open-cluster-management.io,policysets.policy.open-cluster-management.io --verb=create,delete,get,list,patch,update,watch

- To create a policy in the rhacm-policies namespace, create a namespace RoleBinding, such as rhacm-edit-policy, to the rhacm-policies namespace using the ClusterRole created previously. Run the following command:

  oc create rolebinding rhacm-edit-policy -n rhacm-policies --clusterrole=rhacm-edit-policy --user=<username>

- To view policy status of a managed cluster, you need permission to view policies in the managed cluster namespace on the hub cluster. If you do not have view access, such as through the OpenShift view ClusterRole, create a ClusterRole, such as rhacm-view-policy, with view access to policies with the following command:

  oc create clusterrole rhacm-view-policy --resource=policies.policy.open-cluster-management.io --verb=get,list,watch

- To bind the new ClusterRole to the managed cluster namespace, run the following command to create a namespace RoleBinding:

  oc create rolebinding rhacm-view-policy -n <cluster-name> --clusterrole=rhacm-view-policy --user=<username>
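The requirement that the managed cluster belongs to a ManagedClusterSet bound to the policy namespace can be satisfied declaratively. A minimal sketch, assuming a cluster set named default and the rhacm-policies namespace from the earlier example:

```yaml
# Binds the "default" ManagedClusterSet into the "rhacm-policies" namespace
# so that Placement objects in that namespace can select its clusters.
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: default              # must match spec.clusterSet
  namespace: rhacm-policies
spec:
  clusterSet: default
```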
1.2.2.1. Console and API RBAC table for governance lifecycle
View the following console and API RBAC tables for governance lifecycle:
Resource | Admin | Edit | View |
---|---|---|---|
Policies | create, read, update, delete | read, update | read |
PlacementBindings | create, read, update, delete | read, update | read |
Placements | create, read, update, delete | read, update | read |
PlacementRules (deprecated) | create, read, update, delete | read, update | read |
PolicyAutomations | create, read, update, delete | read, update | read |
API | Admin | Edit | View |
---|---|---|---|
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
1.2.3. Observability RBAC
To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. View the following list of observability features:
- Access managed cluster metrics. Users are denied access to managed cluster metrics if they are not assigned the view role for the managed cluster on the hub cluster. Run the following command to verify whether a user has the authority to create a ManagedClusterView resource in the managed cluster namespace:

  oc auth can-i create ManagedClusterView -n <managedClusterName> --as=<user>

  As a cluster administrator, create a role for creating ManagedClusterView resources in the managed cluster namespace. Run the following command:

  oc create role create-managedclusterview --verb=create --resource=managedclusterviews -n <managedClusterName>

  Then apply and bind the role to a user by creating a role binding. Run the following command:

  oc create rolebinding user-create-managedclusterview-binding --role=create-managedclusterview --user=<user> -n <managedClusterName>

- Search for resources. To verify whether a user has access to a resource type, use the following command:

  oc auth can-i list <resource-type> -n <namespace> --as=<rbac-user>

  Note: <resource-type> must be plural.

- To view observability data in Grafana, you must have a RoleBinding resource in the namespace of the managed cluster on the hub cluster.
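A minimal sketch of such a RoleBinding, assuming a managed cluster named cluster1 (so the hub namespace is also cluster1) and a placeholder user grafana-viewer:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: grafana-viewer-binding   # placeholder binding name
  namespace: cluster1            # the managed cluster namespace on the hub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: grafana-viewer           # placeholder user name
```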
See Role binding policy for more information. See Observability advanced configuration to configure observability.
1.2.3.1. Console and API RBAC table for observability lifecycle
To manage components of observability, view the following API RBAC table:
API | Admin | Edit | View |
---|---|---|---|
| create, read, update, delete | read, update | read |
| create, get, list, watch, update, delete, patch | - | - |
| get, list, watch | get, list, watch | get, list, watch |
1.3. Implementing fine-grained role-based access control with the console (Technology Preview)
Technology Preview: Red Hat Advanced Cluster Management for Kubernetes supports fine-grained role-based access control (RBAC). As a cluster administrator, you can manage and control permissions with the ClusterPermission
resource, which controls permissions at the namespace level on managed clusters, as well as at the cluster level. Grant permissions to a virtual machine namespace within a cluster without granting permission to the entire managed cluster.
Learn how to set up fine-grained role-based access control (RBAC) and the ClusterPermission resource from the console.
Required access: Cluster administrator
To learn about OpenShift Container Platform default and virtualization roles and permissions, see Authorization in the OpenShift Container Platform documentation.
See Implementing role-based access control for more details about Red Hat Advanced Cluster Management role-based access.
Prerequisites
See the following requirements to begin using fine-grained role-based access control:
- Your MultiClusterHub custom resource spec.overrides.components field for search must be set to enabled so that you can retrieve a list of managed cluster namespaces that can represent virtual machines that are used for access control.
- You need virtual machines.
1.3.1. Assigning fine-grained role-based access control in the console
You can assign users to manage virtual machines with fine-grained role-based access control. Actions are disabled in the console if the user-role access is not permitted. Slide the YAML option on to see the data that you enter populate the YAML editor.
You can grant access to the following roles for OpenShift Virtualization, which are extensions of the default roles:
- kubevirt.io:view: only view resources
- kubevirt.io:edit: modify resources
- kubevirt.io:admin: view, modify, and delete resources; grant permissions
Important: As an administrator, you need to add either RoleBinding or ClusterRoleBinding resources for a valid ClusterPermission resource. You can also choose to add both resources.
- Navigate to your MultiClusterHub custom resource to edit the resource and enable the feature.
  - From the local-cluster view, click Operators > Installed Operators > Advanced Cluster Management for Kubernetes.
  - Click the MultiClusterHub tab to edit the resource.
  - Slide the YAML option on to see the data in the YAML editor.
  - In your MultiClusterHub custom resource spec.overrides.components field, set fine-grained-rbac-preview to true to enable the feature. Change the configOverrides specification to enabled: true in the YAML editor and save your changes. See the following example with fine-grained-rbac-preview enabled:

    - configOverrides: {}
      enabled: true
      name: fine-grained-rbac-preview
- Label your local-cluster with environment=virtualization.
  - From the All Clusters view, click Infrastructure > Clusters.
  - Find your local-cluster and click Actions to edit.
  - Add the environment=virtualization label and save your changes. See the following example:

    environment=virtualization

- Change the policy-virt-clusterroles value for the remediationAction to enforce, which adds the kubevirt cluster roles to the hub cluster.
  - Click Governance > Policies.
  - Find the policy-virt-clusterroles policy and click Actions to change the remediationAction value to enforce. Important: Two remediationAction specifications are in the policy, but you only need to change the later remediationAction. This does not apply to the first template in the YAML file. See the following sample:

    remediationAction: enforce

  - Slide the YAML option on to see the data in the YAML editor and save your changes.
- Create a ClusterRole resource and name your file.
  - From the local-cluster view, click User Management > Roles > Create Role.
  - Add the ClusterRole resource information to the YAML editor.

- Create a ClusterRoleBinding resource.
  - From the local-cluster view, click User Management > RoleBindings > Create bindings.
  - Choose Cluster-wide role binding for the Binding type.
  - Add a RoleBinding name that matches the name of the ClusterRole, which is the <cluster-role-name> that you chose previously.
  - Add the matching role name.
  - For the Subject, select User or Group, enter the User or Group name, and save your changes.

- Create a ClusterPermission resource to grant permissions at the namespace level.
  - Click Access control > Create permission.
  - In the Basic information window, add the cluster name and the User or Group that is granted permission.
  - Choose the cluster for that permission.
  - Add the Role bindings information, which sets permissions at the namespace level:
    - Add virtual machine namespaces in the cluster.
    - Add Users or Groups.
    - Add roles, such as kubevirt.io:view, for fine-grained role-based access control. You can choose a combination of RoleBindings.
  - Add the ClusterRoleBinding resource with the same information to set permissions at the cluster level.
  - Review and click Create permission to create the ClusterPermission resource. In the resource:
    1. Specify a name for the permissions.
    2. Add the cluster name that is also a namespace for permissions.
    3. Specify a name for <role-binding-name>, which assigns roles to Users or Groups.
    4. Specify the namespace in the managed cluster to which the User or Group is granted access.
    5. Choose User or Group.
    6. Specify the User or Group name.

- Check for a Ready status in the console.
- You can click Edit permission to edit the Role bindings and Cluster role binding.
- Optional: Click Export YAML to use the resources for GitOps or in the terminal.
- You can delete the ClusterPermissions resource when you are ready.
- Optional: If the observability service is enabled, create an additional RoleBinding resource on the hub cluster so that users can view virtual machine details in Grafana.
  - From the local-cluster view, click User Management > RoleBindings > RoleBindings.
  - Choose Namespace role binding for the Binding type.
  - Specify a name for the RoleBinding, which assigns roles to Users or Groups.
  - Add the cluster name, which is also the namespace on the hub cluster.
  - Choose view for the Role name.
  - For the Subject, select User or Group, enter the User or Group name, and save your changes.
1.4. Implementing fine-grained role-based access control in the terminal (Technology Preview)
Technology Preview: Red Hat Advanced Cluster Management for Kubernetes supports fine-grained role-based access control (RBAC). As a cluster administrator, you can manage and control permissions with the ClusterPermission
resource, which controls permissions at the namespace level on managed clusters, as well as at the cluster level. Grant permissions to a virtual machine namespace within a cluster without granting permission to the entire managed cluster.
Learn how to set up fine-grained role-based access control (RBAC) and the ClusterPermission resource from the terminal.
Required access: Cluster administrator
To learn about OpenShift Container Platform default and virtualization roles and permissions, see Authorization in the OpenShift Container Platform documentation.
See Implementing role-based access control for more details about Red Hat Advanced Cluster Management role-based access.
Prerequisites
See the following requirements to begin using fine-grained role-based access control:
- Your MultiClusterHub custom resource spec.overrides.components field for search must be set to enabled so that you can retrieve a list of managed cluster namespaces that can represent virtual machines that are used for access control.
- You need virtual machines.
1.4.1. Assigning fine-grained role-based access control in the terminal
You can grant access to the following roles for OpenShift Virtualization, which are extensions of the default roles:
- kubevirt.io:view: only view resources
- kubevirt.io:edit: modify resources
- kubevirt.io:admin: view, modify, and delete resources; grant permissions
Complete the following steps:
- Enable fine-grained-rbac-preview in the MultiClusterHub resource. Run the following command:

  oc edit mch -n open-cluster-management multiclusterhub

  Edit to change the configOverrides specification from enabled: false to enabled: true. See the following example with the feature enabled:

  - configOverrides: {}
    enabled: true
    name: fine-grained-rbac-preview

  Note: Run oc get mch -A to get the name and namespace of the MultiClusterHub resource if you do not use the open-cluster-management namespace.

- Label your local-cluster with environment=virtualization. Run the following command:

  oc label managedclusters local-cluster environment=virtualization

- Change the policy-virt-clusterroles value to enforce, which adds the kubevirt cluster roles to the hub cluster.
  - Run the following command to edit the policy:

    oc edit policy -n open-cluster-management-global-set policy-virt-clusterroles

  - Edit the remediationAction value from inform to enforce. Important: Two remediationAction specifications are in the policy, but you only need to change the later remediationAction. This does not apply to the first template in the YAML file. See the following sample:

    remediationAction: enforce
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the following YAML file and name the file:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- You can use any name, but you must continue to use that name.
- 2
- Add the names of the managed clusters that you want your User or Group to access. Each name must be unique.
- 3
- Specify a name for
RoleBindings
, which assigns roles to Users or Groups. - 4
- Ensure the name matches the
ClusterRole
name. - 5
- Choose User or Group.
- 6
- Specify a User or Group name.
- Apply the ClusterRole resource. Run the following command and change the file name only if you changed it earlier in the process:

  oc apply -f <filename>.yml

- Create a YAML file for the ClusterPermission resource and name the file. Assign fine-grained role-based access from the ClusterPermission resource by specifying the cluster name, managed cluster namespace, and Users or Group name. In the file:
  1. Specify a name for the permissions.
  2. Add the cluster name that is also a namespace for permissions.
  3. Specify a name for the RoleBindings, which assigns roles to Users or Groups.
  4. Specify the namespace in the managed cluster to which the User or Group is granted access.
  5. Choose User or Group.
  6. Specify the User or Group name.

- Run the following command to apply the file and change the name of the file if you changed the name previously:

  oc apply -f <filename>.yml

- Optional: If observability is enabled, create an additional RoleBinding on the hub cluster so that users can view virtual machine details in Grafana. Create the RoleBinding resource for Grafana access, then apply it with the following command:

  oc apply -f <filename>.yml
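For the cluster-level permissions mentioned earlier, a minimal ClusterRoleBinding sketch that grants kubevirt.io:admin to a placeholder group named vm-admins:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vm-admins-kubevirt-admin   # placeholder binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubevirt.io:admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: vm-admins                  # placeholder group name
```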
1.5. Certificates
All certificates that are required by services that run on Red Hat Advanced Cluster Management are created when you install Red Hat Advanced Cluster Management. The certificates are created and managed by the following components:
- OpenShift Service Serving Certificates
- Red Hat Advanced Cluster Management webhook controllers
- Kubernetes Certificates API
- OpenShift default ingress
Required access: Cluster administrator
Continue reading to learn more about certificate management:
Note: Users are responsible for certificate rotations and updates.
1.5.1. Red Hat Advanced Cluster Management hub cluster certificates
The OpenShift Container Platform default ingress certificate is a type of hub cluster certificate. After the Red Hat Advanced Cluster Management installation, observability certificates are created and used by the observability components to provide mutual TLS on the traffic between the hub cluster and managed clusters. Access the observability namespaces to retrieve and implement the different observability certificates, depending on your needs.
The open-cluster-management-observability namespace has the following certificates:

- observability-server-ca-certs: Has the CA certificate to sign server-side certificates
- observability-client-ca-certs: Has the CA certificate to sign client-side certificates
- observability-server-certs: Has the server certificate used by the observability-observatorium-api deployment
- observability-grafana-certs: Has the client certificate used by the observability-rbac-query-proxy deployment

The open-cluster-management-addon-observability namespace has the following certificates on managed clusters:

- observability-managed-cluster-certs: Has the same server CA certificate as observability-server-ca-certs in the hub server
- observability-controller-open-cluster-management.io-observability-signer-client-cert: Has the client certificate used by the metrics-collector-deployment
The CA certificates are valid for five years and other certificates are valid for one year. All observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed:
- Non-CA certificates are renewed automatically when the remaining valid time is no more than 73 days. After the certificate is renewed, the pods in the related deployments restart automatically to use the renewed certificates.
- CA certificates are renewed automatically when the remaining valid time is no more than one year. After a certificate is renewed, the old CA certificate is not deleted but co-exists with the renewed one. Both old and renewed certificates are used by related deployments and continue to work. The old CA certificates are deleted when they expire.
- When a certificate is renewed, the traffic between the hub cluster and managed cluster is not interrupted.
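Because you are responsible for rotations and updates, it helps to check how long a certificate remains valid. The following is a sketch using openssl; on a hub cluster you would first extract the certificate from its secret with oc (shown in the comments), but here a throwaway self-signed certificate keeps the commands runnable anywhere:

```shell
# On a hub cluster you would first extract the certificate, for example:
#   oc get secret observability-server-certs \
#     -n open-cluster-management-observability \
#     -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# Generate a throwaway self-signed certificate instead, so the inspection
# commands below run without a cluster.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 365 -subj "/CN=observability-demo" 2>/dev/null

# Print the expiration date of the certificate.
openssl x509 -noout -enddate -in tls.crt

# Exit code 0 means the certificate is still valid in 10 days (864000 s).
openssl x509 -noout -checkend 864000 -in tls.crt && echo "certificate ok"
```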
View the following Red Hat Advanced Cluster Management hub cluster certificates table:
Namespace | Secret name | Pod label |
---|---|---|
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-ca | app=multicluster-operators-channel |
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-signed-ca | app=multicluster-operators-channel |
open-cluster-management | multicluster-operators-application-svc-ca | app=multicluster-operators-application |
open-cluster-management | multicluster-operators-application-svc-signed-ca | app=multicluster-operators-application |
open-cluster-management-hub | registration-webhook-serving-cert signer-secret | Not required |
1.5.2. Red Hat Advanced Cluster Management managed certificates
View the following table for a summarized list of the component pods that contain Red Hat Advanced Cluster Management managed certificates and the related secrets:
Namespace | Secret name (if applicable)
---|---
open-cluster-management-agent-addon | cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert
open-cluster-management-agent-addon | cluster-proxy-service-proxy-server-certificates
Use these Red Hat Advanced Cluster Management managed certificates to authenticate managed clusters with your hub cluster. These managed cluster certificates are managed and refreshed automatically. If you customize the hub cluster API server certificate, the managed cluster automatically updates its certificates.
1.5.3. Additional resources
- If you want to reconnect a managed hub cluster that is not automatically reconnected, see Troubleshooting imported clusters offline after certificate change.
- Use the certificate policy controller to create and manage certificate policies on managed clusters. See Certificate policy controller for more details.
- See Using custom CA certificates for a secure HTTPS connection for more details about securely connecting to a privately-hosted Git server with SSL/TLS certificates.
- See OpenShift Service Serving Certificates for more details.
- The OpenShift Container Platform default ingress certificate is a hub cluster certificate. See Replacing the default ingress certificate for more details.
1.6. Managing certificates
Continue reading for information about how to refresh, replace, rotate, and list certificates.
1.6.1. Refreshing a Red Hat Advanced Cluster Management webhook certificate
You can refresh Red Hat Advanced Cluster Management managed certificates, which are certificates that are created and managed by Red Hat Advanced Cluster Management services.
Complete the following steps to refresh certificates managed by Red Hat Advanced Cluster Management:
1. Delete the secret that is associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

        oc delete secret -n <namespace> <secret>

    Replace <namespace> and <secret> with the values that you want to use.

2. Restart the services that are associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

        oc delete pod -n <namespace> -l <pod-label>

    Replace <namespace> and <pod-label> with the values from the Red Hat Advanced Cluster Management hub cluster certificates table.

    Note: If a pod-label is not specified, there is no service that must be restarted. The secret is recreated and used automatically.
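As a sketch, the two steps can be wrapped in a small shell function. The function name is hypothetical, and the example values in the comment are taken from the hub cluster certificates table:

```shell
#!/usr/bin/env bash
# Hypothetical helper that refreshes one Red Hat Advanced Cluster Management
# managed certificate: delete its secret, then restart the pods that use it.
refresh_rhacm_cert() {
  local namespace=$1 secret=$2 pod_label=$3

  # The secret is recreated automatically after it is deleted.
  oc delete secret -n "$namespace" "$secret"

  # Restart pods only when a pod label is listed for the certificate;
  # otherwise no service needs to be restarted.
  if [ -n "$pod_label" ]; then
    oc delete pod -n "$namespace" -l "$pod_label"
  fi
}

# Example (values from the hub cluster certificates table):
#   refresh_rhacm_cert open-cluster-management \
#     channels-apps-open-cluster-management-webhook-svc-ca \
#     app=multicluster-operators-channel
```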
1.6.2. Replacing certificates for alertmanager route

If you do not want to use the OpenShift Container Platform default ingress certificate, replace the observability alertmanager certificates by updating the Alertmanager route. Complete the following steps:
1. Examine the observability certificate with the following command:

        openssl x509 -noout -text -in ./observability.crt

2. Change the common name (CN) on the certificate to alertmanager.

3. Change the SAN in the csr.cnf configuration file to the hostname for your alertmanager route.

4. Create the following two secrets in the open-cluster-management-observability namespace. Run the following commands:

        oc -n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key
        oc -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key
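The ca.crt, ca.key, ingress.crt, and ingress.key files referenced by the secrets are not created by the steps above. The following is a minimal OpenSSL sketch that produces them; the CA subject and the alertmanager.example.com hostname are placeholders, so substitute the hostname of your alertmanager route:

```shell
# Placeholder hostname -- replace with your alertmanager route host.
HOST=alertmanager.example.com

# Throwaway CA (ca.crt / ca.key).
openssl genrsa -out ca.key 2048
openssl req -x509 -sha256 -new -nodes -key ca.key -days 1825 \
  -subj "/CN=alertmanager-byo-ca" -out ca.crt

# Server key and CSR with the common name set to alertmanager.
openssl genrsa -out ingress.key 2048
openssl req -new -key ingress.key -subj "/CN=alertmanager" -out ingress.csr

# Sign the CSR, adding the route hostname as the SAN.
printf "subjectAltName=DNS:%s\n" "$HOST" > san.cnf
openssl x509 -req -sha256 -in ingress.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -extfile san.cnf -out ingress.crt

# Confirm the SAN on the issued certificate.
openssl x509 -noout -ext subjectAltName -in ingress.crt
```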
1.6.3. Rotating the gatekeeper webhook certificate
Complete the following steps to rotate the gatekeeper webhook certificate:
1. Edit the secret that contains the certificate with the following command:

        oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert

2. Delete the following content in the data section: ca.crt, ca.key, tls.crt, and tls.key.

3. Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command:

        oc delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager
The gatekeeper webhook certificate is rotated.
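The interactive edit can also be scripted. The following sketch, with a hypothetical function name, removes the four keys with a JSON patch and then restarts the controller pods; note that a JSON patch remove fails if a key is already missing, so confirm the secret contains all four keys first:

```shell
#!/usr/bin/env bash
# Hypothetical helper: rotate the gatekeeper webhook certificate by clearing
# the certificate keys from the secret and restarting the controller pods.
rotate_gatekeeper_webhook_cert() {
  # Remove ca.crt, ca.key, tls.crt, and tls.key from the data section.
  oc patch secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert \
    --type=json -p '[
      {"op":"remove","path":"/data/ca.crt"},
      {"op":"remove","path":"/data/ca.key"},
      {"op":"remove","path":"/data/tls.crt"},
      {"op":"remove","path":"/data/tls.key"}
    ]'

  # Restart the pods so the certificate material is regenerated.
  oc delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager
}
```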
1.6.4. Verifying certificate rotation
Verify that your certificates are rotated using the following steps:

1. Identify the secret that you want to check.

2. Check the tls.crt key to verify that a certificate is available.

3. Display the certificate information by using the following command:

        oc get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout

    Replace <your-secret-name> with the name of the secret that you are verifying. If necessary, also update the namespace and JSON path.

4. Check the Validity details in the output. View the following Validity example:

        Validity
            Not Before: Jul 13 15:17:50 2023 GMT
            Not After : Jul 12 15:17:50 2024 GMT
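The same Validity check can be scripted without reading the text output: openssl x509 -checkend exits with status 0 only if the certificate is still valid after the given number of seconds. The certificate below is a throwaway self-signed example generated just for illustration:

```shell
# Generate a throwaway self-signed certificate valid for 90 days.
openssl req -x509 -sha256 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo" -days 90 -out demo.crt

# Print the expiration date, as shown in the Validity details.
openssl x509 -noout -enddate -in demo.crt

# Exit status 0 means the certificate is still valid 30 days from now.
openssl x509 -checkend $((30 * 24 * 3600)) -in demo.crt \
  && echo "renewal not yet due"
```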
1.6.5. Listing hub cluster managed certificates
You can view a list of hub cluster managed certificates that use the OpenShift Service Serving Certificates service internally. Run the following command to list the certificates:

    for ns in multicluster-engine open-cluster-management ; do echo "$ns:" ; oc get secret -n $ns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\.beta\\.openshift\\.io/expiry | grep -v '<none>' ; echo ""; done
For more information, see OpenShift Service Serving Certificates in the Additional resources section.
Note: If observability is enabled, there are additional namespaces where certificates are created.
1.6.6. Additional resources
1.7. Bringing your own observability Certificate Authority (CA) certificates
When you install Red Hat Advanced Cluster Management for Kubernetes, only Certificate Authority (CA) certificates for observability are provided by default. If you do not want to use the default observability CA certificates generated by Red Hat Advanced Cluster Management, you can choose to bring your own observability CA certificates before you enable observability.
1.7.1. Generating CA certificates by using OpenSSL commands
Observability requires two CA certificates: one for the server side and one for the client side.
1. Generate your CA RSA private keys with the following commands:

        openssl genrsa -out serverCAKey.pem 2048
        openssl genrsa -out clientCAKey.pem 2048

2. Generate the self-signed CA certificates by using the private keys. Run the following commands:

        openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 -out serverCACert.pem
        openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 -out clientCACert.pem
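The openssl req commands above prompt for the subject fields interactively. As a sketch, the same certificates can be generated non-interactively by passing -subj on the command line; the CN values here are only illustrative:

```shell
# Generate the two CA private keys.
openssl genrsa -out serverCAKey.pem 2048
openssl genrsa -out clientCAKey.pem 2048

# Generate the self-signed CA certificates without interactive prompts.
# The CN values are illustrative placeholders.
openssl req -x509 -sha256 -new -nodes -key serverCAKey.pem -days 1825 \
  -subj "/CN=observability-server-ca" -out serverCACert.pem
openssl req -x509 -sha256 -new -nodes -key clientCAKey.pem -days 1825 \
  -subj "/CN=observability-client-ca" -out clientCACert.pem
```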
1.7.2. Creating the secrets associated with your own observability CA certificates
Complete the following steps to create the secrets:
1. Create the observability-server-ca-certs secret by using your certificate and private key. Run the following command:

        oc -n open-cluster-management-observability create secret tls observability-server-ca-certs --cert ./serverCACert.pem --key ./serverCAKey.pem

2. Create the observability-client-ca-certs secret by using your certificate and private key. Run the following command:

        oc -n open-cluster-management-observability create secret tls observability-client-ca-certs --cert ./clientCACert.pem --key ./clientCAKey.pem
1.7.3. Replacing certificates for rbac-query-proxy route

You can replace certificates for the rbac-query-proxy route. See Generating CA certificates by using OpenSSL commands to create certificates. When you create a Certificate Signing Request (CSR) by using the csr.cnf file, update the DNS.1 field in the subjectAltName section to match the host name of the rbac-query-proxy route.
Complete the following steps:
1. Retrieve the host name by running the following command:

        oc get route rbac-query-proxy -n open-cluster-management-observability -o jsonpath="{.spec.host}"

2. Create a proxy-byo-ca secret by using the generated certificates. Run the following command:

        oc -n open-cluster-management-observability create secret tls proxy-byo-ca --cert ./ca.crt --key ./ca.key

3. Run the following command to create a proxy-byo-cert secret by using the generated certificates:

        oc -n open-cluster-management-observability create secret tls proxy-byo-cert --cert ./ingress.crt --key ./ingress.key