Declarative cluster configuration
Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps, and creating and synchronizing applications in the default and core modes by using the GitOps CLI
Abstract
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations
With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster.
1.1. Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
1.2. Using an Argo CD instance to manage cluster-scoped resources
Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster to become a cluster administrator themselves.
To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators → Red Hat OpenShift GitOps → Subscription.
- Click the Actions list and then click Edit Subscription.
- On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section, as shown in the following example.
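For reference, a Subscription updated this way looks similar to the following sketch; the metadata values depend on how the Operator was installed, and <namespace> stands for the namespace of your Argo CD instance:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-gitops-operator
spec:
  config:
    env:
    - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
      value: openshift-gitops, <namespace>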
- Click Save and Reload.
To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps:
- Navigate to User Management → Roles and from the Filter list select Cluster-wide Roles.
- Search for argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role.
Tip: Alternatively, in the OpenShift CLI, run the following command:
$ oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller
The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configurations and take the necessary steps as required.
1.3. Default permissions of an Argo CD instance
By default, the Argo CD instance has the following permissions:
- The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace.
- Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide read privileges on resources to function appropriately:
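For illustration, such cluster-wide read rules typically look like the following excerpt; the exact rules that the Operator creates on your cluster might differ:
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list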
You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running such that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage:
$ oc edit clusterrole argocd-server
$ oc edit clusterrole argocd-application-controller
1.4. Running the Argo CD instance at the cluster-level
The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
Procedure
- Label the existing nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
- Optional: If required, you can also apply taints to isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes:
$ oc adm taint nodes -l node-role.kubernetes.io/infra \
  infra=reserved:NoSchedule infra=reserved:NoExecute
- Add the runOnInfra toggle in the GitOpsService custom resource.
- Optional: If taints have been added to the nodes, then add tolerations to the GitOpsService custom resource, as shown in the following example.
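A GitopsService custom resource with both settings looks similar to the following sketch; the apiVersion, the resource name, and the infra=reserved toleration values are assumptions based on the taint command shown above, so adjust them for your cluster:
apiVersion: pipelines.openshift.io/v1alpha1
kind: GitopsService
metadata:
  name: cluster
spec:
  runOnInfra: true
  tolerations:
  - effect: NoSchedule
    key: infra
    value: reserved
  - effect: NoExecute
    key: infra
    value: reserved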
- Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods → Pod details for any pod in the console UI.
Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and tolerations in the GitOpsService custom resource.
1.5. Creating an application by using the Argo CD dashboard
Argo CD provides a dashboard which allows you to create applications.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes to a menu in the web console, and defines a spring-petclinic namespace on the cluster.
Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
Procedure
- In the Argo CD dashboard, click NEW APP to add a new Argo CD application.
For this workflow, create a cluster-configs application with the following configurations:
- Application Name: cluster-configs
- Project: default
- Sync Policy: Manual
- Repository URL: https://github.com/redhat-developer/openshift-gitops-getting-started
- Revision: HEAD
- Path: cluster
- Destination: https://kubernetes.default.svc
- Namespace: spring-petclinic
- Directory Recurse: checked
- Click CREATE to create your application.
- Open the Administrator perspective of the web console and expand Administration → Namespaces.
- Search for and select the spring-petclinic namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace.
1.6. Creating an application by using the oc tool
You can create Argo CD applications in your terminal by using the oc tool.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to an Argo CD instance.
Procedure
- Download the sample application:
$ git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git
- Create the application:
$ oc create -f openshift-gitops-getting-started/argo/app.yaml
- Run the oc get command to review the created application:
$ oc get application -n openshift-gitops
- Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:
$ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops
1.7. Creating an application in the default mode by using the GitOps CLI
You can create applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have logged in to the Argo CD instance.
Procedure
- Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
- Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
- Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
- Verify that you are able to run argocd commands in the default mode by listing all applications:
$ argocd app list
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
- Create an application in the default mode, as shown in the following example.
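A minimal sketch of the create command, assuming the app-cluster-configs application name that the sync procedures later in this guide use; adjust the flag values for your environment:
$ argocd app create app-cluster-configs \
    --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
    --path cluster \
    --revision HEAD \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace spring-petclinic \
    --directory-recurse \
    --sync-policy none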
- Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
- List the available applications to confirm that the application is created successfully:
$ argocd app list
Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
1.8. Creating an application in core mode by using the GitOps CLI
You can create applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
- Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
- Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
- Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
- Verify that you are able to run argocd commands in core mode by listing all applications:
$ argocd app list --core
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
- Create an application in core mode, as shown in the following example.
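A minimal sketch of the create command in core mode, assuming the same app-cluster-configs application name used later in this guide; the --core flag routes the command through the Kubernetes API server instead of the Argo CD API server, and the remaining flag values are illustrative:
$ argocd app create app-cluster-configs --core \
    --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
    --path cluster \
    --revision HEAD \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace spring-petclinic \
    --directory-recurse \
    --sync-policy none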
- Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
- List the available applications to confirm that the application is created successfully:
$ argocd app list --core
Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
1.9. Synchronizing your application with your Git repository
You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster.
Procedure
- In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync. Because the application was configured with a manual sync policy, Argo CD does not sync it automatically.
- Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE. Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync. You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster.
- Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced. Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster.
- Navigate to the OpenShift Container Platform web console and verify that a link to the Red Hat Developer Blog - Kubernetes is now present in the menu.
- Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster.
Your cluster configurations have been successfully synchronized to the cluster.
1.10. Synchronizing an application in the default mode by using the GitOps CLI
You can synchronize applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
- Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
- Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
- Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync openshift-gitops/app-cluster-configs
- List the application to confirm that the application has the Healthy and Synced statuses:
$ argocd app list
1.11. Synchronizing an application in core mode by using the GitOps CLI
You can synchronize applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
- Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
- Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
- Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
- Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync --core openshift-gitops/app-cluster-configs
- List the application to confirm that the application has the Healthy and Synced statuses:
$ argocd app list --core
1.12. In-built permissions for cluster configuration
By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management.
- Argo CD does not have cluster-admin permissions.
- You can extend the permissions bound to any Argo CD instances managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses.
Permissions for the Argo CD instance:
| Resources | Descriptions |
|---|---|
| Resource Groups | Configure the user or administrator |
| operators.coreos.com | Optional Operators managed by OLM |
| user.openshift.io, rbac.authorization.k8s.io | Groups, Users and their permissions |
| config.openshift.io | Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies |
| storage.k8s.io | Storage |
| console.openshift.io | Console customization |
1.13. Adding permissions for cluster configuration
You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account.
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
- In the web console, select User Management → Roles → Create Role. Use the following ClusterRole YAML template to add rules to specify the additional permissions.
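A minimal template of this kind looks like the following sketch; the name, API groups, resources, and verbs are placeholders that you replace with the permissions you want to grant:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <cluster_role_name>
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete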
- Click Create to add the cluster role.
- To create the cluster role binding, select User Management → Role Bindings → Create Binding.
- Select All Projects from the Project list.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for the RoleBinding name.
- Select the newly created cluster role or an existing cluster role from the drop-down list.
- Select the Subject as ServiceAccount and then provide the Subject namespace and name:
  - Subject namespace: openshift-gitops
  - Subject name: openshift-gitops-argocd-application-controller
Note: The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings.
- Click Create. The YAML file for the ClusterRoleBinding object is as follows:
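Assuming the subject values shown above and a cluster role named <cluster_role_name>, the resulting binding looks similar to the following sketch:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <cluster_role_binding_name>
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role_name>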
1.14. Installing OLM Operators using Red Hat OpenShift GitOps
Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators.
Consider a case where, as a cluster administrator, you have to install an OLM Operator such as Tekton. You can use the OpenShift Container Platform web console to manually install the Tekton Operator, or use the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster.
Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository, Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster.
1.14.1. Installing cluster-scoped Operators
Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace.
To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository.
Example: Grafana Operator subscription
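A Subscription for such an Operator looks similar to the following sketch; the channel, source, and package name are illustrative values, so check OperatorHub for the values that apply to your cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: openshift-operators
spec:
  channel: v4
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace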
1.14.2. Installing namespace-scoped Operators
To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository.
Example: Ansible Automation Platform Resource Operator
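For a namespace-scoped Operator, you pair the Subscription with an OperatorGroup in the target namespace. The following sketch shows the general shape; the namespace, package name, and channel are assumptions, so check OperatorHub for the actual values before committing them to your Git repository:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
  - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  channel: stable-2.4
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace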
When deploying multiple Operators by using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in their corresponding namespaces reaches one, all the previous failure-state CSVs transition to the pending state. You must manually approve the pending install plan to complete the Operator installation.
1.15. Configuring respectRBAC using Red Hat OpenShift GitOps
The respectRBAC feature in Argo CD controls how Argo CD watches resources on a cluster. By default, Argo CD attempts to watch all Kubernetes resources (CRDs) on a cluster at the cluster scope. With the respectRBAC feature, you can restrict the Argo CD controller from discovering or syncing specific resources by using only the controller's RBAC, without manually configuring resource exclusions.
To enable this feature, set the .spec.controller.respectRBAC key in the Argo CD resource. After you set this key, the controller automatically stops watching resources it cannot list or access. For example, this prevents a situation where the Argo CD cluster role restricts Argo CD from watching OpenShift Routes, which would otherwise result in an error during synchronization, stating that it cannot watch the resource.
You can enable the respectRBAC feature by creating an Argo CD instance through the command-line interface (CLI) or the web console.
Prerequisites
- Ensure that you have created and updated a namespace in the Subscription resource so that it can host a cluster-scoped Argo CD instance. For more information, see "Using an Argo CD instance to manage cluster-scoped resources".
1.15.1. Configuring respectRBAC using the CLI
You can configure the respectRBAC feature by using the CLI.
Procedure
- Create a YAML object file, for example, argo-cd-resource.yaml, to configure the respectRBAC feature:
Example ArgoCD YAML to configure respectRBAC
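As a sketch, such a file looks similar to the following; the instance name is illustrative and the apiVersion depends on your Operator version:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-argocd
spec:
  controller:
    respectRBAC: normal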
In the example, the metadata specifies the name of the Argo CD instance. You can set the value of the .spec.controller.respectRBAC key in the ArgoCD resource to normal or strict. Consider setting the value to normal to balance accuracy and speed, because resource listing is a lightweight operation. Set the value to strict if Argo CD reports errors indicating that it cannot access resources when the value is normal. Setting strict increases the number of API calls to the server and is more accurate than normal, because Argo CD performs additional validations of RBAC resources to determine permissions.
- Apply the changes to the YAML file by running the following command. Specify the name of the YAML file that includes the ArgoCD resource and the namespace that hosts the Argo CD instance:
$ oc apply -f argo-cd-resource.yaml -n argo-cd-instance
- Verify that the value of the .status.phase field is Available by running the following command. Replace <argocd_instance_name> with the name of your Argo CD instance, for example, example-argocd:
$ oc get argocd <argocd_instance_name> -n <argocd_namespace> -o jsonpath='{.status.phase}'
- Verify that the resource.respectRBAC parameter in the ConfigMap resource is updated successfully:
- To retrieve the contents of the argocd-cm config map, run the following command:
$ oc get cm argocd-cm -n <argocd_namespace> -o yaml
- Verify that the argocd-cm ConfigMap contains the resource.respectRBAC parameter and ensure its value is set to either strict or normal.
1.15.2. Configuring respectRBAC by using the web console
You can configure respectRBAC in the web console.
Procedure
- Log in to the OpenShift Container Platform web console.
- In the Administrator perspective of the web console, click Operators → Installed Operators.
- Create or select the project where you want to install the user-defined Argo CD instance from the Project list.
- Select Red Hat OpenShift GitOps from the installed Operators list and click the Argo CD tab.
- Configure the respectRBAC parameter in the Argo CD tab:
spec:
  controller:
    respectRBAC: strict
- Click Create.
After successful installation, verify that the Argo CD instance is listed under the Argo CD tab and the Status is Available.
- After the Argo CD instance is created, verify that the resource.respectRBAC parameter in the ConfigMap resource is updated successfully by completing the following steps:
- In the Administrator perspective, go to Workloads → ConfigMaps.
- In the Project option, select the Argo CD namespace.
- Select the argocd-cm config map.
- Select the YAML tab to view the resource.respectRBAC parameter.
Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances
For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level.
If you do not want the cluster-scoped instance to have all of the Operator-given permissions and choose to add or remove permissions to cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. Then, you can customize permissions for the following cluster-scoped instances:
- Default ArgoCD instance (default cluster-scoped instance)
- User-defined cluster-scoped Argo CD instance
This guide provides instructions with examples to help you create a user-defined cluster-scoped Argo CD instance, deploy an Argo CD application in your defined namespace that contains custom configurations for your cluster, disable the creation of the default cluster roles for the cluster-scoped instance, and customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
As a developer, if you are creating an Argo CD application and deploying cluster-wide resources, ensure that your cluster administrator grants the necessary permissions for those resources.
Otherwise, after the Argo CD reconciliation, you will see an authorization error message in the application’s Status field similar to the following example:
Example authentication error message
persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope.
2.1. Prerequisites
- You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace, for example, the spring-petclinic namespace.
- You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created)
- You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources.
Note: The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set:
  - The selfHeal field value set to true
  - The syncPolicy field value set to automated
  - The Label field set to the app.kubernetes.io/part-of=argocd value
  - The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace
  - The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value
2.2. Disabling the creation of the default cluster roles for the cluster-scoped instance
To add or remove permissions to cluster-wide resources, as needed, you must disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR).
Procedure
- In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true, as shown in the following example. In the example, the metadata specifies the name of the cluster-scoped instance and the namespace where you want to run it, and the flag value disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false.
Example Argo CD CR
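As a sketch, using the instance name and namespace that appear in the verification steps of this chapter; the apiVersion depends on your Operator version:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example                # name of the cluster-scoped instance
  namespace: spring-petclinic  # namespace where the instance runs
spec:
  defaultClusterScopedRoleDisabled: true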
Sample output
argocd.argoproj.io/example configured
- Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands:
$ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>
$ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>
Sample output
No resources found
The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
2.3. Customizing permissions for cluster-scoped instances
As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components.
For example purposes, the following instructions focus only on user-defined cluster-scoped instances.
Procedure
- Open the Administrator perspective of the web console and go to User Management → Roles → Create Role.
- Use the following ClusterRole YAML template to add rules to specify the additional permissions.
Example cluster role YAML template
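A template of this kind looks like the following sketch; the name and the rules are placeholders. As an illustration, the rules grant the persistentvolumes permissions mentioned in the error message earlier in this chapter:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <cluster_role_name>
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete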
- Click Create to add the cluster role.
Find the service account used by the control plane component you are customizing permissions for, by performing the following steps:
- Go to Workloads → Pods.
- From the Project list, select the project where the user-defined cluster-scoped instance is installed.
- Click the pod of the control plane component and go to the YAML tab.
- Find the spec.ServiceAccount field and note the service account.
- Go to User Management → RoleBindings → Create binding.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
- Select the newly created cluster role from the drop-down list for Role name.
- Select the Subject as ServiceAccount and then provide the Subject namespace and name:
  - Subject namespace: spring-petclinic
  - Subject name: example-argocd-application-controller
Note: For Subject name, ensure that the value you configure is the same as the value of the spec.ServiceAccount field of the control plane component you are customizing permissions for.
- Click Create.
You have created the required permissions for the control plane component’s service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example:
Example YAML file for a cluster role binding
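A sketch of the binding, using the subject values from this procedure and the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention described above; the referenced cluster role is whichever role you selected:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-spring-petclinic-argocd-application-controller
subjects:
- kind: ServiceAccount
  name: example-argocd-application-controller
  namespace: spring-petclinic
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role_name>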
Chapter 3. Customizing permissions by creating aggregated cluster roles
The default cluster role for the Argo CD Application Controller has a specific set of hard-coded permissions. The Red Hat OpenShift GitOps Operator manages this cluster role, so you cannot modify it. As a cluster administrator, you can customize the permissions by using the methods described in the following sections.
3.1. Aggregated cluster roles
By using aggregated cluster roles, you do not have to define permissions by creating new cluster roles from scratch. Instead, you can combine several cluster roles into a single one.
With Red Hat OpenShift GitOps 1.14 and later, as a cluster administrator, you can use aggregated cluster roles and enable users to easily add user-defined permissions for Argo CD Application Controller.
- The aggregated cluster roles functionality is optional and disabled by default. You can create aggregated cluster roles only for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
- Deleting the aggregatedClusterRoles field from the Argo CD custom resource (CR) does not delete the user-defined cluster role. You must manually delete the user-defined cluster role using the CLI or UI.
3.2. Prerequisites
- You understand aggregated cluster roles.
- You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace.
You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller, if ApplicationSet Controller is created
- You have disabled the creation of the default cluster roles for the cluster-scoped instance.
3.3. Creating aggregated cluster roles
The process of creating aggregated cluster roles consists of the following procedures:
- Enabling the creation of aggregated cluster roles
- Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
3.3.1. Enable the creation of aggregated cluster roles
You can enable the creation of aggregated cluster roles by setting the value of the .spec.aggregatedClusterRoles field to true in the Argo CD custom resource (CR). When you enable the creation of aggregated cluster roles, the Red Hat OpenShift GitOps Operator takes the following actions:
- Creates an <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field by default.
- Creates a corresponding cluster role binding and manages it.
- Creates and manages view and admin cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role.
3.3.2. Create user-defined cluster roles and configure user-defined permissions
To configure user-defined permissions into the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role and aggregated cluster role, you must create one or more user-defined cluster roles with the argocd/aggregate-to-admin: 'true' label and then configure the user-defined permissions for Application Controller.
- The aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles.
- The <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role.
3.4. Enabling the creation of aggregated cluster roles
To enable the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance, you must configure the corresponding field by editing the YAML file of the Argo CD custom resource (CR).
Procedure
- In the Argo CD CR, set the value of the .spec.aggregatedClusterRoles field to true, as shown in the following example. In the example, the metadata specifies the name of the cluster-scoped instance and the namespace where you want to run it, and the value set to true enables the creation of aggregated cluster roles. If you do not want to enable the creation of aggregated cluster roles, either do not include this line or set the value to false.
Example Argo CD CR
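As a sketch, using the instance name and namespace from the verification steps below; the apiVersion depends on your Operator version:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example                # name of the cluster-scoped instance
  namespace: spring-petclinic  # namespace where the instance runs
spec:
  aggregatedClusterRoles: true # enables the creation of aggregated cluster roles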
Example output
argocd.argoproj.io/example configured
- Verify that the Status field of the cluster-scoped Argo CD instance shows Phase: Available by running the following command. The Available status indicates that the cluster-scoped Argo CD instance is healthy and available:
$ oc describe argocd.argoproj.io/example -n spring-petclinic
Note: The Red Hat OpenShift GitOps Operator creates the following default cluster roles and manages them:
- <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role
- <argocd_name>-<argocd_namespace>-argocd-application-controller-view
- <argocd_name>-<argocd_namespace>-argocd-application-controller-admin
- Verify that the Operator has created the default cluster roles and cluster role bindings for the Argo CD Application Controller and Argo CD server components by running the following commands:
$ oc get ClusterRoles -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                            CREATED AT
example-spring-petclinic-argocd-application-controller         2024-08-14T08:20:58Z
example-spring-petclinic-argocd-application-controller-admin   2024-08-14T09:08:38Z
example-spring-petclinic-argocd-application-controller-view    2024-08-14T09:08:38Z
example-spring-petclinic-argocd-server                          2024-08-14T08:20:59Z
$ oc get ClusterRoleBindings -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                      ROLE                                                                  AGE
example-spring-petclinic-argocd-application-controller   ClusterRole/example-spring-petclinic-argocd-application-controller   54m
example-spring-petclinic-argocd-server                    ClusterRole/example-spring-petclinic-argocd-server                    54m
The cluster role bindings for the view and admin cluster roles are not created. This is because the view and admin cluster roles only add permissions to the aggregated cluster role and do not directly configure permissions to the Argo CD Application Controller.
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles and User Management → RoleBindings, respectively. You can search for the cluster roles and cluster role bindings that have the app.kubernetes.io/part-of: argocd label.
- Verify that the aggregated cluster role is created by checking the permissions in the output of the created roles by running the following command:
$ oc get ClusterRole/<cluster_role_name> -o yaml
Replace <cluster_role_name> with the name of the created role.
Example output of the aggregated cluster role
In the output, the metadata specifies the name of the aggregated cluster role, and the predefined list of labels in the aggregationRule field indicates that the aggregated cluster role can inherit permissions from the other user-defined cluster roles. No predefined permissions are set. However, when the Operator immediately creates a <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster role, the corresponding predefined view permissions are added into the aggregated cluster role.
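As a generic illustration of the mechanism only, an aggregated cluster role pairs an aggregationRule with rules that the Kubernetes controller manager fills in from matching cluster roles; the label selector shown here is a placeholder, not necessarily the exact label the Operator sets:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-spring-petclinic-argocd-application-controller
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      <aggregation_label>: "true"
rules: []   # filled in automatically from roles that match the selector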
Example output of the view cluster role
Example output of the admin cluster role
In the output of the admin cluster role, the labels match the predefined list of an existing aggregated cluster role, the metadata specifies the name of the admin cluster role, the predefined list of labels in its aggregationRule field indicates that the existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role can inherit permissions from the other user-defined cluster roles, and it specifies that no permissions are defined yet in one or more user-defined cluster roles.
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role, view, and admin cluster roles. You must open the cluster role to check the details and configurations.
As a cluster administrator, you can now create one or more user-defined cluster roles and configure user-defined permissions for Argo CD Application Controller.
3.5. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
As a cluster administrator, to add user-defined permissions to your aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
Prerequisites
- You have enabled the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
- You have the following default cluster roles that are created and managed by the Red Hat OpenShift GitOps Operator:
  - <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field
  - <argocd_name>-<argocd_namespace>-argocd-application-controller-view with predefined view permissions
  - <argocd_name>-<argocd_namespace>-argocd-application-controller-admin with no predefined permissions
Procedure
- Create a new cluster role with the required labels and permissions by using the following command:
$ oc apply -n <namespace> -f <cluster_role_name>.yaml
where:
<namespace>: Specifies the name of your defined namespace.
<cluster_role_name>: Specifies the name of your defined cluster role YAML file.
Example user-defined cluster role YAML
In the example, the metadata specifies the name of the user-defined cluster role, the labels match the predefined list of the existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role, and the rules define the user-defined permissions that are to be added into the aggregated cluster role through the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
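Given the user-application-controller name shown in the output below and the argocd/aggregate-to-admin: 'true' label described earlier in this chapter, a user-defined cluster role of this kind looks similar to the following sketch; the rules are illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user-application-controller
  labels:
    argocd/aggregate-to-admin: "true"
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - persistentvolumes
  verbs:
  - get
  - list
  - watch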
Tip: Alternatively, you can use the web console to create a user-defined cluster role from the Administrator perspective. You can go to User Management → Roles → Create Role, use the preceding YAML template to add permissions, and click Create.
Example output
clusterrole.rbac.authorization.k8s.io/user-application-controller created
A user-defined cluster role is created.
- Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role by running the following command:
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml
where:
<argocd_name>: Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>: Specifies the namespace where Argo CD is installed.
Example output
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role. You must open the cluster role to check the details and configurations.
- Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles by running the following command:
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller -o yaml
where:
<argocd_name>: Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>: Specifies the namespace where Argo CD is installed.
Example output of the aggregated cluster role
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role. You must open the cluster role to check the details and configurations.
Chapter 4. Sharding clusters across Argo CD Application Controller replicas
You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory.
4.1. Enabling the round-robin sharding algorithm
The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards.
Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits:
- Ensure more balanced workload distribution
- Prevent shards from being overloaded or underutilized
- Optimize the efficiency of computing resources
- Reduce the risk of bottlenecks
- Improve overall performance and reliability of the Argo CD system
The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios.
To leverage the benefits of alternative sharding algorithms in GitOps, it is crucial to enable sharding during deployment.
4.1.1. Enabling the round-robin sharding algorithm in the web console
You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the OpenShift Container Platform web console.
- You have access to the cluster with cluster-admin privileges.
Procedure
- In the Administrator perspective of the web console, go to Operators → Installed Operators.
- Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab.
- Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops.
- Click the YAML tab and edit the YAML file as shown in the following example:
Example Argo CD instance with round-robin sharding algorithm enabled
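The following is a minimal sketch of the relevant fields, assuming the default openshift-gitops instance. It mirrors the spec.controller.sharding settings and the ARGOCD_CONTROLLER_SHARDING_ALGORITHM environment variable that the CLI procedure in the next section patches; the replica count of 3 is illustrative, and the apiVersion can differ depending on your Operator version:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      enabled: true   # enable sharding for the Application Controller
      replicas: 3     # number of controller shards (illustrative)
    env:
      - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM
        value: round-robin   # switch from the legacy hash-based algorithm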
Click Save.
A success notification alert, openshift-gitops has been updated to version <version>, appears.

Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.

Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps:

- Go to Workloads → StatefulSets.
- Select the namespace where you installed the Argo CD instance from the Project drop-down list.
- Click <instance_name>-application-controller, for example, openshift-gitops-application-controller, and go to the Pods tab.
- Observe the number of application controller pods that are created. It should correspond to the number of configured replicas.
- Click the controller pod that you want to examine and go to the Logs tab to view the pod logs.
Example controller pod logs snippet
time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"

Look for the "Using filter function: round-robin" message.
In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example.

Important: Ensure that you set the log level to debug to observe these logs.

Example controller pod logs snippet
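The snippet below matches the debug output shown in the CLI procedure later in this chapter; the cluster IDs are examples:

time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2"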
Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
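For example, if the number of clusters is not a multiple of the number of replicas, say 4 clusters and 3 replicas, round-robin distribution results in one shard with 2 assigned clusters and two shards with 1 assigned cluster each.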
4.1.2. Enabling the round-robin sharding algorithm by using the CLI
You can enable the round-robin sharding algorithm by using the command-line interface.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
Enable sharding and set the number of replicas to the desired value by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge

Example output
argocd.argoproj.io/<argocd_instance> patched
Configure the sharding algorithm to round-robin by running the following command:

$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge

Example output
argocd.argoproj.io/<argocd_instance> patched
Verify that the number of Argo CD Application Controller pods corresponds to the number of configured replicas by running the following command:
$ oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>

Example output
NAME                                         READY   STATUS    RESTARTS   AGE
<argocd_instance>-application-controller-0   1/1     Running   0          11s
<argocd_instance>-application-controller-1   1/1     Running   0          32s
<argocd_instance>-application-controller-2   1/1     Running   0          22s

Verify that the sharding is enabled with round-robin as the sharding algorithm by running the following command:

$ oc logs <argocd_application_controller_pod> -n <namespace>

Example output snippet
time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"

Look for the "Using filter function: round-robin" message.
Verify that the cluster distribution across shards is even by performing the following steps:
Set the log level to debug by running the following command:

$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge

Example output
argocd.argoproj.io/<argocd_instance> patched
View the logs and search for processed by shard to observe which shard each cluster is assigned to by running the following command:

$ oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard"

Example output snippet
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2"

Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
4.2. Enabling dynamic scaling of shards of the Argo CD Application Controller
Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller assigns clusters to shards statically. If you are using the round-robin sharding algorithm, this static assignment can result in uneven distribution of clusters across shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well balanced and optimizes the use of compute resources.
After you enable dynamic scaling, you cannot manually modify the shard count. The system automatically adjusts the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time.
4.2.1. Enabling dynamic scaling of shards in the web console
You can enable dynamic scaling of shards by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, go to Operators → Installed Operators.
- From the list of Installed Operators, select the Red Hat OpenShift GitOps Operator, and then click the Argo CD tab.
- Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops.
- Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows:

Example Argo CD YAML file with dynamic scaling enabled
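The following minimal sketch shows only the relevant spec.controller.sharding fields, using illustrative values; the numbered callouts below describe each field, and the apiVersion can differ depending on your Operator version:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      dynamicScalingEnabled: true   # (1)
      minShards: 1                  # (2)
      maxShards: 3                  # (3)
      clustersPerShard: 1           # (4)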
1. Set dynamicScalingEnabled to true to enable dynamic scaling.
2. Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater.
3. Set maxShards to the maximum number of shards that you want to have. The value must be greater than the value of minShards.
4. Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater.
Click Save.
A success notification alert, openshift-gitops has been updated to version <version>, appears.

Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.
Verification
Verify that sharding is enabled by checking the number of pods in the namespace:
- Go to Workloads → StatefulSets.
- Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops.
- Click the name of the StatefulSet object that has the name of the Argo CD instance, for example, openshift-gitops-application-controller.
- Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you have set in the Argo CD YAML file.
4.2.2. Enabling dynamic scaling of shards by using the CLI
You can enable dynamic scaling of shards by using the OpenShift CLI (oc).
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
- Log in to the cluster by using the oc tool as a user with cluster-admin privileges.
- Enable dynamic scaling by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}'

Example command
$ oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}'

The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1, the maximum number of shards to 3, and the number of clusters per shard to 1. The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards.
Example output
argocd.argoproj.io/openshift-gitops patched
Verification
Check the spec.controller.sharding properties of the Argo CD instance:

$ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'

Example command

$ oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'

Example output when dynamic scaling of shards is enabled
{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}

- Optional: Verify that dynamic scaling is enabled by checking the configured spec.controller.sharding properties in the configuration YAML file of the Argo CD instance in the OpenShift Container Platform web console.
- Check the number of Argo CD Application Controller pods:
$ oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller

Example command
$ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller

Example output
NAME                                        READY   STATUS    RESTARTS   AGE
openshift-gitops-application-controller-0   1/1     Running   0          2m

The number of Argo CD Application Controller pods must be greater than or equal to the value of minShards.