Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances
For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level.
If you do not want the cluster-scoped instance to have all of the permissions that the Operator grants, and you instead want to add or remove permissions for cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. You can then customize permissions for the following cluster-scoped instances:
- Default ArgoCD instance (default cluster-scoped instance)
- User-defined cluster-scoped Argo CD instance
This guide provides instructions with examples to help you complete the following tasks:
- Create a user-defined cluster-scoped Argo CD instance.
- Deploy, in your defined namespace, an Argo CD application that contains custom configurations for your cluster.
- Disable the creation of the default cluster roles for the cluster-scoped instance.
- Customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
As a developer, if you are creating an Argo CD application that deploys cluster-wide resources, ensure that your cluster administrator has granted the instance's control plane components the necessary permissions for those resources.
Otherwise, after the Argo CD reconciliation, you will see an authorization error message in the application’s Status field similar to the following example:
Example authorization error message
persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope.
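As a quick check before or after granting permissions, a cluster administrator can test whether a service account is allowed to perform a cluster-scoped action by using oc auth can-i with impersonation. The following command is a sketch that reuses the service account from the example error message; it prints yes or no:
$ oc auth can-i create persistentvolumes --as=system:serviceaccount:gitops-demo:argocd-argocd-application-controller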
2.1. Prerequisites
- You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace, for example, the spring-petclinic namespace.
- You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created)
- You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources.
  Note: The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set (see the example Application manifest after this list):
  - The selfHeal field value set to true
  - The syncPolicy field value set to automated
  - The Label field set to the app.kubernetes.io/part-of=argocd value
  - The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace
  - The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value
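The following is a minimal sketch of what such an Application manifest might look like. Only the sync policy and labels mirror the parameters listed above; the repository URL, target revision, project, and destination are placeholder assumptions, and example stands in for your user-defined Argo CD instance name.
Example Argo CD Application manifest (illustrative sketch)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configs
  namespace: spring-petclinic
  labels:
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/name: example # <user_defined_argocd_instance>
    argocd.argoproj.io/managed-by: spring-petclinic # <user_defined_namespace>
spec:
  project: default # assumption: adjust to your Argo CD project
  source:
    repoURL: https://github.com/<your_org>/<your_repo>.git # assumption: your Git repository
    targetRevision: main # assumption
    path: customclusterrole
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true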
2.2. Disabling the creation of the default cluster roles for the cluster-scoped instance
To add or remove permissions for cluster-wide resources as needed, you must first disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR).
Procedure
In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true:
Example Argo CD CR
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example 1
  namespace: spring-petclinic 2
# ...
spec:
  defaultClusterScopedRoleDisabled: true 3
# ...
- 1
- The name of the cluster-scoped instance.
- 2
- The namespace where you want to run the cluster-scoped instance.
- 3
- The flag value that disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false.
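You can edit the CR in place with oc edit argocd example -n spring-petclinic or, if you keep the CR in a local file, apply the change with oc apply. The file name argocd.yaml below is an assumption:
$ oc apply -f argocd.yaml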
Sample output
argocd.argoproj.io/example configured
Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands:
$ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>
$ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>
Sample output
No resources found
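For example, for the user-defined instance named example in the spring-petclinic namespace, the substituted commands for the Argo CD Application Controller look like the following; the names assume the <argocd_name>-<argocd_namespace>-<control_plane_component> convention used throughout this chapter:
$ oc get ClusterRoles/example-spring-petclinic-argocd-application-controller
$ oc get ClusterRoleBindings/example-spring-petclinic-argocd-application-controller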
The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
2.3. Customizing permissions for cluster-scoped instances
As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components.
For example purposes, the following instructions focus only on user-defined cluster-scoped instances.
Procedure
- Open the Administrator perspective of the web console and go to User Management → Roles → Create Role. Use the following ClusterRole YAML template to add rules to specify the additional permissions.
  Example cluster role YAML template
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-spring-petclinic-argocd-application-controller 1
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - '*'
  - verbs:
      - '*'
    apiGroups:
      - ''
    resources: 2
      - namespaces
      - persistentvolumes
- 1
- The name of the cluster role, which follows the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
- 2
- The cluster-scoped resources for which you are adding permissions.
- Click Create to add the cluster role.
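If you prefer the CLI, you can also create the same cluster role by saving the YAML template to a file and running oc apply. The file name clusterrole.yaml is an assumption:
$ oc apply -f clusterrole.yaml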
To find the service account used by the control plane component that you are customizing permissions for, perform the following steps:
- Go to Workloads → Pods.
- From the Project list, select the project where the user-defined cluster-scoped instance is installed.
- Click the pod of the control plane component and go to the YAML tab.
- Find the spec.serviceAccountName field and note the service account.
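Alternatively, you can read the service account directly from the pod specification with the CLI. The pod name below is a placeholder, and spring-petclinic is the example namespace used in this chapter:
$ oc get pod <control_plane_pod_name> -n spring-petclinic -o jsonpath='{.spec.serviceAccountName}'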
- Go to User Management → RoleBindings and click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
- Select the newly created cluster role from the drop-down list for Role name.
Select the Subject as ServiceAccount and provide the Subject namespace and name.
- Subject namespace: spring-petclinic
- Subject name: example-argocd-application-controller
  Note: For Subject name, ensure that the value you configure is the same as the value of the spec.serviceAccountName field of the control plane component you are customizing permissions for.
Click Create.
You have created the required permissions for the control plane component’s service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example:
Example YAML file for a cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-spring-petclinic-argocd-application-controller
subjects:
  - kind: ServiceAccount
    name: example-argocd-application-controller
    namespace: spring-petclinic
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-spring-petclinic-argocd-application-controller
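Equivalently, you can create the same binding from the CLI; this sketch assumes the cluster role and service account names shown in the previous example:
$ oc create clusterrolebinding example-spring-petclinic-argocd-application-controller --clusterrole=example-spring-petclinic-argocd-application-controller --serviceaccount=spring-petclinic:example-argocd-application-controller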