Chapter 1. Security
Manage your security and role-based access control (RBAC) of Red Hat Advanced Cluster Management for Kubernetes components. Govern your cluster with defined policies and processes to identify and minimize risks. Use policies to define rules and set controls.
Prerequisite: You must configure authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM). For more information, see Understanding authentication in the OpenShift Container Platform documentation.
Review the following topics to learn more about securing your cluster:
1.1. Role-based access control
Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation.
Note: Action buttons are disabled in the console if your role does not permit the action.
View the following sections for details of supported RBAC by component:
1.1.1. Overview of roles
Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the following table of role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes:
| Role | Definition |
|---|---|
| cluster-admin | A user with cluster-wide binding to the cluster-admin role. |
| open-cluster-management:cluster-manager-admin | A user with cluster-wide binding to the open-cluster-management:cluster-manager-admin role. |
| open-cluster-management:managed-cluster-x (admin) | A user with cluster binding to the open-cluster-management:admin:managed-cluster-x role. |
| open-cluster-management:managed-cluster-x (viewer) | A user with cluster-wide binding to the open-cluster-management:view:managed-cluster-x role. |
| open-cluster-management:subscription-admin | A user with the open-cluster-management:subscription-admin role. |
| admin, edit, view | Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in the bound namespace. |
Important:

- Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace.
- If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -
1.1.2. RBAC implementation
RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product.
1.1.2.1. Cluster lifecycle RBAC
View the following cluster lifecycle RBAC operations.
To create and administer all managed clusters, create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin. This role is a super user, which has access to all resources and actions. The role allows you to create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in the namespace. This role also allows access to provider connections and to bare metal assets that are used to create managed clusters.

oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin

To administer a managed cluster named cluster-name:

Create a cluster role binding to the cluster role open-cluster-management:admin:<cluster-name>. This role allows read and write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name>

Create a namespace role binding to the cluster role admin. This role allows read and write access to the resources in the namespace of the managed cluster.

oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin

To view a managed cluster named cluster-name:

Create a cluster role binding to the cluster role open-cluster-management:view:<cluster-name>. This role allows read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name>

Create a namespace role binding to the cluster role view. This role allows read-only access to the resources in the namespace of the managed cluster.

oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view
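The oc create clusterrolebinding commands shown above are equivalent to applying a manifest. The following sketch binds an illustrative user to the cluster-manager-admin cluster role; the binding and user names are placeholders, not values from your environment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-manager-admin-binding      # illustrative binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: open-cluster-management:cluster-manager-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice                              # illustrative user
```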
View the following console and API RBAC tables for cluster lifecycle:
| Action | Admin | Edit | View |
|---|---|---|---|
| Clusters | read, update, delete | read, update | read |
| Provider connections | create, read, update, delete | create, read, update, delete | read |
| Bare metal asset | create, read, update, delete | read, update | read |
| API | Admin | Edit | View |
|---|---|---|---|
| managedclusters.cluster.open-cluster-management.io | create, read, update, delete | read, update | read |
| baremetalassets.inventory.open-cluster-management.io | create, read, update, delete | read, update | read |
| klusterletaddonconfigs.agent.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusteractions.action.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusterviews.view.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusterinfos.internal.open-cluster-management.io | create, read, update, delete | read, update | read |
| manifestworks.work.open-cluster-management.io | create, read, update, delete | read, update | read |
1.1.2.2. Application lifecycle RBAC
When you create an application, the subscription namespace is created and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. When you want to apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating and managing subscriptions.
To perform application lifecycle tasks, users with the admin role must have access to the application namespace where the application is created, and to the managed cluster namespace. For example, the required access to create applications in namespace "N" is a namespace-scoped binding to the admin role for namespace "N".
View the following console and API RBAC tables for Application lifecycle:
| Action | Admin | Edit | View |
|---|---|---|---|
| Application | create, read, update, delete | create, read, update, delete | read |
| Channel | create, read, update, delete | create, read, update, delete | read |
| Subscription | create, read, update, delete | create, read, update, delete | read |
| Placement rule | create, read, update, delete | create, read, update, delete | read |
| API | Admin | Edit | View |
|---|---|---|---|
| applications.app.k8s.io | create, read, update, delete | create, read, update, delete | read |
| channels.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read |
| deployables.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read |
| helmreleases.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read |
| placementrules.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read |
| subscriptions.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read |
| configmaps | create, read, update, delete | create, read, update, delete | read |
| secrets | create, read, update, delete | create, read, update, delete | read |
| namespaces | create, read, update, delete | create, read, update, delete | read |
1.1.2.3. Governance lifecycle RBAC
To perform governance lifecycle operations, users must have access to the namespace where the policy is created, along with access to the managedcluster namespace where the policy is applied.
View the following examples:
To view policies in namespace "N", the following role is required:

- A namespace-scoped binding to the view role for namespace "N".

To create a policy in namespace "N" and apply it on managedcluster "X", the following roles are required:

- A namespace-scoped binding to the admin role for namespace "N".
- A namespace-scoped binding to the admin role for namespace "X".
View the following console and API RBAC tables for Governance lifecycle:
| Action | Admin | Edit | View |
|---|---|---|---|
| Policies | create, read, update, delete | read, update | read |
| PlacementBindings | create, read, update, delete | read, update | read |
| PlacementRules | create, read, update, delete | read, update | read |
| API | Admin | Edit | View |
|---|---|---|---|
| policies.policy.open-cluster-management.io | create, read, update, delete | read, update | read |
| placementbindings.policy.open-cluster-management.io | create, read, update, delete | read, update | read |
1.1.2.4. Observability RBAC
To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. View the following list of observability features:
- Access managed cluster metrics. Users are denied access to managed cluster metrics if they are not assigned to the view role for the managed cluster on the hub cluster. To view observability data in Grafana, you must have a RoleBinding resource in the same namespace as the managed cluster. See Role binding policy for more information. See Customizing observability to configure observability.
- Search for resources.
- Use the Visual Web Terminal if you have access to the managed cluster.
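The RoleBinding that grants Grafana access to a managed cluster's observability data might resemble the following sketch; the binding name, user, and cluster namespace are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: observability-view     # illustrative name
  namespace: cluster-x         # namespace of the managed cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice                  # illustrative user
```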
To create, update, and delete the MultiClusterObservability custom resource, view the following RBAC table:
| API | Admin | Edit | View |
|---|---|---|---|
| multiclusterobservabilities.observability.open-cluster-management.io | create, read, update, delete | - | - |
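A minimal MultiClusterObservability resource that an administrator might create is sketched below. The API version and storage fields vary by release, so verify them against the CRD installed on your hub cluster; the object storage secret name and key are assumptions, not values from this document:

```yaml
apiVersion: observability.open-cluster-management.io/v1beta1
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  storageConfigObject:
    metricObjectStorage:
      name: thanos-object-storage   # assumed secret holding object store config
      key: thanos.yaml              # assumed key within that secret
```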
To continue to learn more about securing your cluster, see Security.
1.2. Credentials
You can rotate your credentials for your Red Hat Advanced Cluster Management for Kubernetes clusters when your cloud provider access credentials have changed. Continue reading for the procedure to manually propagate your updated cloud provider credentials.
Required access: Cluster administrator
1.2.1. Provider credentials
Connection secrets for a cloud provider can be rotated. See the following list of provider credentials:
1.2.1.1. Amazon Web Services
- aws_access_key_id: Your provisioned cluster access key.
- aws_secret_access_key: Your provisioned secret access key.

To update the credentials:

- View the resources in the namespace that has the same name as the cluster with the expired credential.
- Find the secret named <cluster_name>-<cloud_provider>-creds. For example: my_cluster-aws-creds1.
- Edit the secret to replace the existing value with the updated value.
1.2.2. Agents
Agents are responsible for connections. See how you can rotate the following credentials:
- registration-agent: Connects the registration agent to the hub cluster.
- work-agent: Connects the work agent to the hub cluster.

To rotate these credentials, delete the hub-kubeconfig secret to restart the registration pods.

- APIServer: Connects agents and add-ons to the hub cluster.

To rotate this credential, complete the following steps:

On the hub cluster, display the import command by entering the following command:

oc get secret -n ${CLUSTER_NAME} ${CLUSTER_NAME}-import -ojsonpath='{.data.import\.yaml}' | base64 --decode > import.yaml

On the managed cluster, apply the import.yaml file by running the following command: oc apply -f import.yaml
1.3. Certificates
Various certificates are created and used throughout Red Hat Advanced Cluster Management for Kubernetes.
You can bring your own certificates. You must create a Kubernetes TLS Secret for your certificate. After you create your certificates, you can replace certain certificates that are created by the Red Hat Advanced Cluster Management installer.
Required access: Cluster administrator or team administrator.
Note: Replacing certificates is supported only on native Red Hat Advanced Cluster Management installations.
All certificates required by services that run on Red Hat Advanced Cluster Management are created during the installation of Red Hat Advanced Cluster Management. Certificates are created and managed by the Red Hat Advanced Cluster Management Certificate manager (cert-manager) service. The Red Hat Advanced Cluster Management Root Certificate Authority (CA) certificate is stored within the Kubernetes Secret multicloud-ca-cert in the hub cluster namespace. The certificate can be imported into your client truststores to access Red Hat Advanced Cluster Management Platform APIs.
See the following topics to replace certificates:
1.3.1. List managed certificates
You can view a list of managed certificates that use cert-manager internally by running the following command:
oc get certificates.certmanager.k8s.io -n open-cluster-management
Note: If observability is enabled, there are additional namespaces where certificates are created.
1.3.2. Refresh a managed certificate
You can refresh a managed certificate by running the command in the List managed certificates section. When you identify the certificate that you need to refresh, delete the secret that is associated with the certificate. For example, you can delete a secret by running the following command:
oc delete secret grc-0c925-grc-secrets -n open-cluster-management
1.3.3. Refresh managed certificates for Red Hat Advanced Cluster Management for Kubernetes
You can refresh all managed certificates that are issued by the Red Hat Advanced Cluster Management CA. During the refresh, the Kubernetes secret that is associated with each cert-manager certificate is deleted. The service restarts automatically to use the certificate. Run the following command:
oc delete secret -n open-cluster-management $(oc get certificates.certmanager.k8s.io -n open-cluster-management -o wide | grep multicloud-ca-issuer | awk '{print $3}')
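To see what the pipeline above selects, here is an offline illustration with hypothetical output from the oc get certificates command (column layout and values are illustrative, not from a real cluster). The grep keeps only rows for certificates issued by multicloud-ca-issuer, and awk prints the third column, which is the secret name passed to oc delete secret:

```shell
# Hypothetical listing; real output comes from:
#   oc get certificates.certmanager.k8s.io -n open-cluster-management -o wide
cat > certs.txt <<'EOF'
NAME        READY   SECRET              ISSUER                 STATUS   AGE
grc-0c925   True    grc-0c925-secrets   multicloud-ca-issuer   Ready    5d
byo-cert    True    byo-cert-secrets    other-issuer           Ready    5d
EOF

# Keep only certificates signed by the RHACM root CA issuer and print
# the SECRET column, which is what gets passed to `oc delete secret`.
grep multicloud-ca-issuer certs.txt | awk '{print $3}'
# prints: grc-0c925-secrets
```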
The Red Hat OpenShift Container Platform certificate is not included in the Red Hat Advanced Cluster Management for Kubernetes management ingress. For more information, see the Security known issues.
1.3.4. Refresh internal certificates
You can refresh internal certificates, which are certificates that are used by Red Hat Advanced Cluster Management webhooks and the proxy server.
Complete the following steps to refresh internal certificates:
Delete the secret that is associated with the internal certificate by running the following command:
oc delete secret -n open-cluster-management ocm-webhook-secret
Note: Some services might not have a secret that needs to be deleted.
Restart the services that are associated with the internal certificate(s) by running the following command:
oc delete po -n open-cluster-management ocm-webhook-679444669c-5cg76
Remember: There are replicas of many services; each service must be restarted.
View the following table for a summarized list of the pods that contain certificates and whether a secret needs to be deleted prior to restarting the pod:
| Service name | Namespace | Sample pod name | Secret name (if applicable) |
|---|---|---|---|
| channels-apps-open-cluster-management-webhook-svc | open-cluster-management | multicluster-operators-application-8c446664c-5lbfk | - |
| multicluster-operators-application-svc | open-cluster-management | multicluster-operators-application-8c446664c-5lbfk | - |
| multiclusterhub-operator-webhook | open-cluster-management | multiclusterhub-operator-bfd948595-mnhjc | - |
| ocm-webhook | open-cluster-management | ocm-webhook-679444669c-5cg76 | ocm-webhook-secret |
| cluster-manager-registration-webhook | open-cluster-management-hub | cluster-manager-registration-webhook-fb7b99c-d8wfc | registration-webhook-serving-cert |
| cluster-manager-work-webhook | open-cluster-management-hub | cluster-manager-work-webhook-89b8d7fc-f4pv8 | work-webhook-serving-cert |
1.3.4.1. Rotating the gatekeeper webhook certificate
Complete the following steps to rotate the gatekeeper webhook certificate:
Edit the secret that contains the certificate with the following command:
oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert
Delete the following content in the data section: ca.crt, ca.key, tls.crt, and tls.key.

Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command:

oc delete po -n openshift-gatekeeper-system -l control-plane=controller-manager
The gatekeeper webhook certificate is rotated.
1.3.4.2. Rotating the integrity shield webhook certificate (Technology preview)
Complete the following steps to rotate the integrity shield webhook certificate:
Edit the IntegrityShield custom resource and add the integrity-shield-operator-system namespace to the excluded list of namespaces in the inScopeNamespaceSelector setting. Run the following command to edit the resource:

oc edit integrityshield integrity-shield-server -n integrity-shield-operator-system

Delete the secret that contains the integrity shield certificate by running the following command:

oc delete secret -n integrity-shield-operator-system ishield-server-tls

Delete the operator so that the secret is recreated. Be sure that the operator pod name matches the pod name on your system. Run the following command:

oc delete po -n integrity-shield-operator-system integrity-shield-operator-controller-manager-64549569f8-v4pz6

Delete the integrity shield server pod to begin using the new certificate with the following command:

oc delete po -n integrity-shield-operator-system integrity-shield-server-5fbdfbbbd4-bbfbz
1.3.4.3. Observability certificates
When Red Hat Advanced Cluster Management is installed, there are additional namespaces where certificates are managed. The open-cluster-management-observability namespace and the managed cluster namespaces contain certificates managed by cert-manager for the observability service.
Observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed:
- Components on your hub cluster automatically restart to retrieve the refreshed certificate.
- Red Hat Advanced Cluster Management sends the refreshed certificates to managed clusters.
- The metrics-collector restarts to mount the renewed certificates.

Note: The metrics-collector can push metrics to the hub cluster before and after certificates are removed. For more information about refreshing certificates across your clusters, see the Refresh internal certificates section. Be sure to specify the appropriate namespace when you refresh a certificate.
1.3.4.4. Channel certificates
CA certificates can be associated with Git channels that are part of Red Hat Advanced Cluster Management application management. See Using custom CA certificates for a secure HTTPS connection for more details.
Helm channels allow you to disable certificate validation. Configure Helm channels with certificate validation disabled only in development environments, because disabling certificate validation introduces security risks.
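For example, a Helm channel that skips certificate validation might look like the following sketch. The channel name, namespace, and repository URL are illustrative, and the insecureSkipVerify field should be verified against the Channel CRD of your installed version:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: dev-helm-charts          # illustrative name
  namespace: dev-channels        # illustrative namespace
spec:
  type: HelmRepo
  pathname: https://charts.example.com/repo
  insecureSkipVerify: true       # development environments only
```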
1.3.4.5. Managed cluster certificates
Certificates are used to authenticate managed clusters with the hub. Therefore, it is important to be aware of troubleshooting scenarios associated with these certificates. View Troubleshooting imported clusters offline after certificate change for more details.
The managed cluster certificates are refreshed automatically.
Use the certificate policy controller to create and manage certificate policies on managed clusters. See Policy controllers to learn more about controllers. Return to the Security page for more information.
1.3.5. Replacing the root CA certificate
You can replace the root CA certificate.
1.3.5.1. Prerequisites for root CA certificate
Verify that your Red Hat Advanced Cluster Management for Kubernetes cluster is running.
Back up the existing Red Hat Advanced Cluster Management for Kubernetes certificate resource by running the following command:
oc get cert multicloud-ca-cert -n open-cluster-management -o yaml > multicloud-ca-cert-backup.yaml
1.3.5.2. Creating the root CA certificate with OpenSSL
Complete the following steps to create a root CA certificate with OpenSSL:
Generate your certificate authority (CA) RSA private key by running the following command:
openssl genrsa -out ca.key 4096
Generate a self-signed CA certificate by using your CA key. Run the following command:
openssl req -x509 -new -nodes -key ca.key -days 400 -out ca.crt -config req.cnf
Your req.cnf file might resemble the following file:
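The original example file is not reproduced here. A minimal req.cnf for a self-signed CA might resemble the following sketch; all distinguished-name values are illustrative and should be replaced with your own:

```
[ req ]
default_bits = 4096
prompt = no
distinguished_name = dn
x509_extensions = v3_ca

[ dn ]
C = US
ST = North Carolina
L = Raleigh
O = Red Hat OpenShift       # illustrative organization
CN = example-root-ca        # illustrative common name

[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, digitalSignature, keyCertSign, cRLSign
```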
1.3.5.3. Replacing root CA certificates
Create a new secret with the CA certificate by running the following command:
kubectl -n open-cluster-management create secret tls byo-ca-cert --cert ./ca.crt --key ./ca.key
Edit the CA issuer to point to the BYO certificate. Run the following command:
oc edit issuer -n open-cluster-management multicloud-ca-issuer
Replace the string multicloud-ca-cert with byo-ca-cert. Save the resource and quit the editor.

Edit the management ingress deployment to reference the Bring Your Own (BYO) CA certificate. Run the following command:
oc edit deployment management-ingress-435ab
Replace the multicloud-ca-cert string with byo-ca-cert. Save your deployment and quit the editor.

- Validate that the custom CA is in use by logging in to the console and viewing the details of the certificate being used.
1.3.5.4. Refreshing cert-manager certificates
After the root CA is replaced, all certificates that are signed by the root CA must be refreshed, and the services that use those certificates must be restarted. Cert-manager creates the default Issuer from the root CA, so all of the certificates that are issued by cert-manager and signed by the default Issuer must also be refreshed.
Delete the Kubernetes secrets associated with each cert-manager certificate to refresh the certificate and restart the services that use the certificate. Run the following command:
oc delete secret -n open-cluster-management $(oc get cert -n open-cluster-management -o wide | grep multicloud-ca-issuer | awk '{print $3}')
1.3.5.5. Restoring root CA certificates
To restore the root CA certificate, update the CA issuer by completing the following steps:
Edit the CA issuer. Run the following command:
oc edit issuer -n open-cluster-management multicloud-ca-issuer
Replace the byo-ca-cert string with multicloud-ca-cert in the editor. Save the issuer and quit the editor.

Edit the management ingress deployment to reference the original CA certificate. Run the following command:
oc edit deployment management-ingress-435ab
Replace the byo-ca-cert string with the multicloud-ca-cert string. Save your deployment and quit the editor.

Delete the BYO CA certificate. Run the following command:
oc delete secret -n open-cluster-management byo-ca-cert
Refresh all cert-manager certificates that use the CA. For more information, see the aforementioned section, Refreshing cert-manager certificates.
See Certificates for more information about certificates that are created and managed by Red Hat Advanced Cluster Management for Kubernetes.
1.3.6. Replacing the management ingress certificates
You can replace management ingress certificates.
1.3.6.1. Prerequisites to replace management ingress certificate
Prepare your management-ingress certificates and private keys. If needed, you can generate a TLS certificate by using OpenSSL. Set the common name parameter, CN, on the certificate to management-ingress. If you are generating the certificate, include the following settings:
Include the following entries in your certificate Subject Alternative Name (SAN) list:

- The service name for the management ingress: management-ingress.
- The route name for Red Hat Advanced Cluster Management for Kubernetes. Receive the route name by running the following command:

oc get route -n open-cluster-management

You might receive the following response:

multicloud-console.apps.grchub2.dev08.red-chesterfield.com

- The localhost IP address: 127.0.0.1.
- The localhost entry: localhost.
1.3.6.1.1. Example configuration file for generating a certificate
The following example configuration file and OpenSSL commands provide an example of how to generate a TLS certificate by using OpenSSL. View the following csr.cnf configuration file, which defines the configuration settings for generating certificates with OpenSSL.
Note: Be sure to update the SAN labeled DNS.2 with the correct hostname for your management ingress.
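The original csr.cnf file is not reproduced here. A csr.cnf that satisfies the SAN requirements from the prerequisites section might resemble the following sketch; the distinguished-name values and the DNS.2 hostname are illustrative:

```
[ req ]
default_bits = 4096
prompt = no
distinguished_name = dn
req_extensions = v3_ext

[ dn ]
C = US
ST = North Carolina
L = Raleigh
O = Red Hat OpenShift       # illustrative organization
CN = management-ingress

[ v3_ext ]
subjectAltName = @alt_names
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ alt_names ]
DNS.1 = management-ingress
DNS.2 = multicloud-console.apps.grchub2.dev08.red-chesterfield.com
DNS.3 = localhost
IP.1 = 127.0.0.1
```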
1.3.6.1.2. OpenSSL commands for generating a certificate
The following OpenSSL commands are used with the preceding configuration file to generate the required TLS certificate.
Generate your certificate authority (CA) RSA private key:
openssl genrsa -out ca.key 4096
Generate a self-signed CA certificate by using your CA key:
openssl req -x509 -new -nodes -key ca.key -subj "/C=US/ST=North Carolina/L=Raleigh/O=Red Hat OpenShift" -days 400 -out ca.crt
Generate the RSA private key for your certificate:
openssl genrsa -out ingress.key 4096
Generate the Certificate Signing Request (CSR) by using the private key:
openssl req -new -key ingress.key -out ingress.csr -config csr.cnf
Generate a signed certificate by using your CA certificate, key, and CSR:
openssl x509 -req -in ingress.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ingress.crt -sha256 -days 300 -extensions v3_ext -extfile csr.cnf
Examine the certificate contents:
openssl x509 -noout -text -in ./ingress.crt
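The commands above can be exercised end to end with a throwaway configuration. The following sketch uses an illustrative subject name and smaller 2048-bit keys to keep it fast; it generates a CA, signs an ingress certificate with the v3_ext section, and confirms that the certificate chains back to the CA:

```shell
# Minimal stand-in for csr.cnf; your real file sets the management-ingress
# CN and the full SAN list described in the prerequisites section.
cat > csr.cnf <<'EOF'
[ req ]
default_bits = 2048
prompt = no
distinguished_name = dn
req_extensions = v3_ext

[ dn ]
CN = management-ingress

[ v3_ext ]
subjectAltName = DNS:management-ingress,DNS:localhost,IP:127.0.0.1
EOF

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/O=Example CA" -days 400 -out ca.crt
openssl genrsa -out ingress.key 2048
openssl req -new -key ingress.key -out ingress.csr -config csr.cnf
openssl x509 -req -in ingress.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out ingress.crt -sha256 -days 300 -extensions v3_ext -extfile csr.cnf

# The signed certificate must chain back to the CA:
openssl verify -CAfile ca.crt ingress.crt
```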
1.3.6.2. Replace the Bring Your Own (BYO) ingress certificate
Complete the following steps to replace your BYO ingress certificate:
Create the byo-ingress-tls-secret by using your certificate and private key. Run the following command:

kubectl -n open-cluster-management create secret tls byo-ingress-tls-secret --cert ./ingress.crt --key ./ingress.key
Verify that the secret is created in the correct namespace with the following command:
kubectl get secret -n open-cluster-management | grep -e byo-ingress-tls-secret -e byo-ca-cert
Create a secret containing the CA certificate by running the following command:
kubectl -n open-cluster-management create secret tls byo-ca-cert --cert ./ca.crt --key ./ca.key
Edit the management ingress deployment and get the name of the deployment with the following commands:
export MANAGEMENT_INGRESS=`oc get deployment -o custom-columns=:.metadata.name | grep management-ingress`
oc edit deployment $MANAGEMENT_INGRESS -n open-cluster-management

- Replace the multicloud-ca-cert string with byo-ca-cert.
- Replace the $MANAGEMENT_INGRESS-tls-secret string with byo-ingress-tls-secret.
- Save your deployment and close the editor. The management ingress automatically restarts.
- Verify that the current certificate is your certificate, and that all console access and login functionality remain the same.
1.3.6.3. Restore the default self-signed certificate for management ingress
Edit the management ingress deployment and get the name of the deployment with the following commands:

export MANAGEMENT_INGRESS=`oc get deployment -o custom-columns=:.metadata.name | grep management-ingress`
oc edit deployment $MANAGEMENT_INGRESS -n open-cluster-management
Replace the
byo-ca-certstring withmulticloud-ca-cert. -
Replace the
byo-ingress-tls-secretstring with the$MANAGEMENT_INGRESS-tls-secret. - Save your deployment and close the editor. The management ingress automatically restarts.
-
Replace the
- After all pods are restarted, navigate to the Red Hat Advanced Cluster Management for Kubernetes console from your browser.
- Verify that the current certificate is your certificate, and that all console access and login functionality remain the same.
Delete the Bring Your Own (BYO) ingress secret and ingress CA certificate by running the following commands:
oc delete secret -n open-cluster-management byo-ingress-tls-secret
oc delete secret -n open-cluster-management byo-ca-cert
See Certificates for more information about certificates that are created and managed by Red Hat Advanced Cluster Management. Return to the Security page for more information on securing your cluster.