Declarative cluster configuration
Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps, and creating and synchronizing applications in the default and core modes by using the GitOps CLI
Abstract
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations
With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster.
1.1. Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
1.2. Using an Argo CD instance to manage cluster-scoped resources
Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster to become a cluster administrator themselves.
To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators → Red Hat OpenShift GitOps → Subscription.
- Click the Actions list and then click Edit Subscription.
- On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section, as in the example that follows this list.
- Click Save and Reload.
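A hedged sketch of the edited Subscription; the subscription name and namespace reflect a default Operator installation and may differ in your cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-gitops-operator
spec:
  config:
    env:
    - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
      value: openshift-gitops, <namespace_of_your_argo_cd_instance>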
To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps:
- Navigate to User Management → Roles and from the Filter list select Cluster-wide Roles.
Search for the argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role.
Tip: Alternatively, in the OpenShift CLI, run the following command:
$ oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller
The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configurations and take the necessary steps.
1.3. Default permissions of an Argo CD instance
By default, an Argo CD instance has the following permissions:
- The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace.
- Argo CD has the following cluster-scoped permissions, because Argo CD requires cluster-wide read privileges on resources to function appropriately:
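The Operator-created rules are not reproduced here; as an illustration only, cluster-wide read access of this kind typically looks like the following sketch (the actual API groups and resources in your cluster role may differ):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-argocd-read      # illustrative name, not the Operator-generated one
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch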
You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running, so that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage:
$ oc edit clusterrole argocd-server
$ oc edit clusterrole argocd-application-controller
1.4. Running the Argo CD instance on infrastructure nodes
The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
Procedure
Label the existing nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Optional: If required, you can also apply taints and isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes:
$ oc adm taint nodes -l node-role.kubernetes.io/infra \
  infra=reserved:NoSchedule infra=reserved:NoExecute
Add the runOnInfra toggle in the GitOpsService custom resource.
Optional: If taints have been added to the nodes, then add tolerations to the GitOpsService custom resource.
Example
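A combined sketch of the GitOpsService custom resource with both settings, assuming the default instance named cluster; the taint key and value match the earlier taint command:
apiVersion: pipelines.openshift.io/v1alpha1
kind: GitopsService
metadata:
  name: cluster
spec:
  runOnInfra: true
  tolerations:
  - effect: NoSchedule
    key: infra
    value: reserved
  - effect: NoExecute
    key: infra
    value: reserved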
- Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods → Pod details for any pod in the console UI.
Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and the tolerations in the GitOpsService custom resource.
1.5. Creating an application by using the Argo CD dashboard
Argo CD provides a dashboard which allows you to create applications.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes to a menu in the web console, and defines a spring-petclinic namespace on the cluster.
Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
Procedure
- In the Argo CD dashboard, click NEW APP to add a new Argo CD application.
For this workflow, create a cluster-configs application with the following configurations:
- Application Name: cluster-configs
- Project: default
- Sync Policy: Manual
- Repository URL: https://github.com/redhat-developer/openshift-gitops-getting-started
- Revision: HEAD
- Path: cluster
- Destination: https://kubernetes.default.svc
- Namespace: spring-petclinic
- Directory Recurse: checked
- Click CREATE to create your application.
- Open the Administrator perspective of the web console and expand Administration → Namespaces.
-
Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace.
1.6. Creating an application by using the oc tool
You can create Argo CD applications in your terminal by using the oc tool.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to an Argo CD instance.
Procedure
Download the sample application:
$ git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git
Create the application:
$ oc create -f openshift-gitops-getting-started/argo/app.yaml
Run the oc get command to review the created application:
$ oc get application -n openshift-gitops
Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:
$ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops
1.7. Creating an application in the default mode by using the GitOps CLI
You can create applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have logged in to the Argo CD instance.
Procedure
Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
Log in to the Argo CD server by using the admin account password and enclosing it in single quotes.
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
Verify that you are able to run argocd commands in the default mode by listing all applications:
$ argocd app list
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
Create an application in the default mode:
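A minimal sketch of the create command, assuming the app-cluster-configs application name used in the sync procedure later in this guide and the repository details from the earlier dashboard workflow:
$ argocd app create app-cluster-configs \
    --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
    --path cluster \
    --revision HEAD \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace spring-petclinic \
    --directory-recurse \
    --sync-policy none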
Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
List the available applications to confirm that the application is created successfully:
$ argocd app list
Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
1.8. Creating an application in core mode by using the GitOps CLI
You can create applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
Verify that you are able to run argocd commands in core mode by listing all applications:
$ argocd app list --core
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
Create an application in core mode:
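A minimal sketch of the create command in core mode, under the same assumptions as the default-mode example (application name and repository details inferred from this guide):
$ argocd app create app-cluster-configs --core \
    --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
    --path cluster \
    --revision HEAD \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace spring-petclinic \
    --directory-recurse \
    --sync-policy none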
Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
List the available applications to confirm that the application is created successfully:
$ argocd app list --core
Even though the cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
1.9. Synchronizing your application with your Git repository
You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster.
Procedure
- In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync. Because the application was configured with a manual sync policy, Argo CD does not sync it automatically.
- Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE. Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync. You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster.
- Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced. Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster.
-
Navigate to the OpenShift Container Platform web console and verify that a link to the Red Hat Developer Blog - Kubernetes is now present in the menu.
- Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster.
Your cluster configurations have been successfully synchronized to the cluster.
1.10. Synchronizing an application in the default mode by using the GitOps CLI
You can synchronize applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
Log in to the Argo CD server by using the admin account password and enclosing it in single quotes.
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync openshift-gitops/app-cluster-configs
List the application to confirm that the application has the Healthy and Synced statuses:
$ argocd app list
1.11. Synchronizing an application in core mode by using the GitOps CLI
You can synchronize applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync --core openshift-gitops/app-cluster-configs
List the application to confirm that the application has the Healthy and Synced statuses:
$ argocd app list --core
1.12. In-built permissions for cluster configuration
By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management.
- Argo CD does not have cluster-admin permissions.
- You can extend the permissions bound to any Argo CD instances managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses.
Permissions for the Argo CD instance:

| Resources | Descriptions |
| --- | --- |
| Resource Groups | Configure the user or administrator |
| operators.coreos.com | Optional Operators managed by OLM |
| user.openshift.io, rbac.authorization.k8s.io | Groups, Users and their permissions |
| config.openshift.io | Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies |
| storage.k8s.io | Storage |
| console.openshift.io | Console customization |
1.13. Adding permissions for cluster configuration
You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account.
Prerequisites
-
You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
In the web console, select User Management → Roles → Create Role. Use a ClusterRole YAML template, such as the example after this step, to add rules that specify the additional permissions.
- Click Create to add the cluster role.
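A hedged sketch of such a template; the role name, resources, and verbs shown here are placeholders that you would replace with the permissions you need:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <cluster_role_name>    # for example, secrets-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - '*'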
- To create the cluster role binding, select User Management → Role Bindings → Create Binding.
- Select All Projects from the Project list.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for the RoleBinding name.
- Select the newly created cluster role or an existing cluster role from the drop-down list.
Select the Subject as ServiceAccount, and then provide the Subject namespace and name:
- Subject namespace: openshift-gitops
- Subject name: openshift-gitops-argocd-application-controller
Note: The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings.
Click Create. The YAML file for the ClusterRoleBinding object is as follows:
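A representative sketch of the resulting ClusterRoleBinding; the binding name and cluster role name are assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-role-binding               # assumed binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role_name>                # the cluster role you created earlier
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops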
1.14. Installing OLM Operators using Red Hat OpenShift GitOps
Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators.
Consider a case where as a cluster administrator, you have to install an OLM Operator such as Tekton. You use the OpenShift Container Platform web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster.
Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository, Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster.
1.14.1. Installing cluster-scoped Operators
Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace.
To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository.
Example: Grafana Operator subscription
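A hedged sketch of such a Subscription; the channel and catalog source are assumptions and depend on the Operator version available in your cluster:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: openshift-operators
spec:
  channel: v4                       # assumed channel
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators       # assumed catalog source
  sourceNamespace: openshift-marketplace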
1.14.2. Installing namespace-scoped Operators
To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository.
Example: Ansible Automation Platform Resource Operator
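A hedged sketch of the paired resources; the namespace, channel, and package name are assumptions and depend on the Operator you install:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
  - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  channel: stable-2.4               # assumed channel
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace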
When deploying multiple Operators using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in their corresponding namespaces reaches one, all the previous failure state CSVs transition to the pending state. You must manually approve the pending install plan to complete the Operator installation.
1.15. Configuring respectRBAC using Red Hat OpenShift GitOps
The respectRBAC feature in Argo CD controls how Argo CD watches resources on a cluster. By default, Argo CD attempts to watch all Kubernetes resources (CRDs) on a cluster at the cluster scope. With the respectRBAC feature, you can restrict the Argo CD controller from discovering or syncing specific resources using only controller RBAC, without manually configuring resource exclusions.
To enable this feature, set the .spec.controller.respectRBAC key in the Argo CD resource. After you set this key, the controller automatically stops watching resources it cannot list or access. For example, this prevents a situation where the Argo CD cluster role restricts Argo CD from watching OpenShift Routes, which would otherwise result in an error during synchronization, stating that it cannot watch the resource.
You can enable the respectRBAC feature by creating an Argo CD instance through the command-line interface (CLI) or the web console.
Prerequisites
Ensure that you created and updated a namespace in the Subscription resource, so the Subscription can host a cluster-scoped Argo CD instance. For more information, see "Using an Argo CD instance to manage cluster-scoped resources".
1.15.1. Configuring respectRBAC using the CLI
You can configure the respectRBAC feature by using the CLI.
Procedure
Create a YAML object file, for example, argo-cd-resource.yaml, to configure the respectRBAC feature.
Example ArgoCD YAML to configure respectRBAC
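A minimal sketch, assuming an instance named example-argocd; the respectRBAC value can be normal or strict, as described in the notes that follow:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-argocd          # name of the Argo CD instance
spec:
  controller:
    respectRBAC: normal         # or strict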
- Specify the name of the Argo CD instance.
- You can specify the value of the .spec.controller.respectRBAC key in the ArgoCD resource as normal or strict. Consider setting the value to normal to balance accuracy and speed, because resource listing is a lightweight operation. Set the value to strict if Argo CD reports errors indicating that it cannot access resources when the value is normal. Setting strict increases the number of API calls to the server, and it is more accurate than normal because Argo CD performs additional validations of RBAC resources to determine permissions.
Apply the changes to the YAML file by running the following command, specifying the name of the YAML file that includes the ArgoCD resource and the namespace that hosts the Argo CD instance:
$ oc apply -f argo-cd-resource.yaml -n argo-cd-instance
Verify that the status of the .status.phase field is Available by running the following command:
$ oc get argocd <argocd_instance_name> -n <argocd_namespace> -o jsonpath='{.status.phase}'
Replace <argocd_instance_name> with the name of your Argo CD instance, for example, example-argocd.
Verify that the resource.respectRBAC parameter in the ConfigMap resource is updated successfully:
- To retrieve the contents of the argocd-cm config map, run the following command:
$ oc get cm argocd-cm -n <argocd_namespace> -o yaml
- Verify that the argocd-cm ConfigMap contains the resource.respectRBAC parameter and ensure its value is set to either strict or normal.
1.15.2. Configuring respectRBAC by using the web console
You can configure respectRBAC in the web console.
Procedure
- Log in to the OpenShift Container Platform web console.
- In the Administrator perspective of the web console, click Operators → Installed Operators.
- Create or select the project where you want to install the user-defined Argo CD instance from the Project list.
- Select Red Hat OpenShift GitOps from the installed Operators list and click the Argo CD tab.
Configure the respectRBAC parameter in the Argo CD tab:
spec:
  controller:
    respectRBAC: strict
Click Create.
After successful installation, verify that the Argo CD instance is listed under the Argo CD tab and the Status is Available.
After the Argo CD instance is created, verify that the resource.respectRBAC parameter in the ConfigMap resource is updated successfully by completing the following steps:
- In the Administrator perspective, go to Workloads → ConfigMaps.
- In the Project option, select the Argo CD namespace.
-
Select the argocd-cm config map.
- Select the YAML tab to view the resource.respectRBAC parameter.
Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances
For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level.
If you do not want the cluster-scoped instance to have all of the Operator-given permissions and choose to add or remove permissions to cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. Then, you can customize permissions for the following cluster-scoped instances:
- Default ArgoCD instance (default cluster-scoped instance)
- User-defined cluster-scoped Argo CD instance
This guide provides instructions with examples to help you create a user-defined cluster-scoped Argo CD instance, deploy an Argo CD application in your defined namespace that contains custom configurations for your cluster, disable the creation of the default cluster roles for the cluster-scoped instance, and customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
As a developer, if you are creating an Argo CD application and deploying cluster-wide resources, ensure that your cluster administrator grants the necessary permissions to them.
Otherwise, after the Argo CD reconciliation, you will see an authentication error message in the application’s Status field similar to the following example:
Example authentication error message
persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope.
2.1. Prerequisites
- You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace, for example, the spring-petclinic namespace.
- You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
  - Argo CD Application Controller
  - Argo CD server
  - Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created)
- You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources.
Note: The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set:
  - The selfHeal field value set to true
  - The syncPolicy field value set to automated
  - The Label field set to the app.kubernetes.io/part-of=argocd value
  - The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace
  - The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value
2.2. Disabling the creation of the default cluster roles for the cluster-scoped instance
To add or remove permissions to cluster-wide resources, as needed, you must disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR).
Procedure
In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true.
Example Argo CD CR
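A minimal sketch, assuming a cluster-scoped instance named example in the spring-petclinic namespace, as used elsewhere in this guide:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example                              # name of the cluster-scoped instance
  namespace: spring-petclinic                # namespace where the instance runs
spec:
  defaultClusterScopedRoleDisabled: true     # disables creation of the default cluster roles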
- The name of the cluster-scoped instance.
- The namespace where you want to run the cluster-scoped instance.
- The flag value that disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false.
Sample output
argocd.argoproj.io/example configured
Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands:
$ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>
$ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>
Sample output
No resources found
The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
2.3. Customizing permissions for cluster-scoped instances
As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components.
For example purposes, the following instructions focus only on user-defined cluster-scoped instances.
Procedure
- Open the Administrator perspective of the web console and go to User Management → Roles → Create Role.
Use the following ClusterRole YAML template to add rules that specify the additional permissions; see the example after this step.
- Click Create to add the cluster role.
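Example cluster role YAML template. The following is a hedged sketch that grants the cluster-wide permissions referenced later in this chapter (persistent volumes and namespaces); the role name and rules are illustrative assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-argocd-cluster-role      # illustrative name
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  - namespaces
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete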
Find the service account used by the control plane component you are customizing permissions for, by performing the following steps:
- Go to Workloads → Pods.
- From the Project list, select the project where the user-defined cluster-scoped instance is installed.
- Click the pod of the control plane component and go to the YAML tab.
-
Find the spec.ServiceAccount field and note the service account.
- Go to User Management → RoleBindings → Create binding.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
- Select the newly created cluster role from the drop-down list for Role name.
Select the Subject as ServiceAccount, and then provide the Subject namespace and name:
- Subject namespace: spring-petclinic
- Subject name: example-argocd-application-controller
Note: For Subject name, ensure that the value you configure is the same as the value of the spec.ServiceAccount field of the control plane component you are customizing permissions for.
Click Create.
You have created the required permissions for the control plane component’s service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example:
Example YAML file for a cluster role binding
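A hedged sketch, assuming the naming convention and subject values used in this procedure:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-spring-petclinic-argocd-application-controller   # follows <argocd_name>-<argocd_namespace>-<control_plane_component>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-argocd-cluster-role      # the cluster role created earlier (illustrative name)
subjects:
- kind: ServiceAccount
  name: example-argocd-application-controller
  namespace: spring-petclinic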
Chapter 3. Customizing permissions by creating aggregated cluster roles
The default cluster role for the Argo CD Application Controller has a specific set of hard-coded permissions. The Red Hat OpenShift GitOps Operator manages this cluster role, so you cannot modify it. As a cluster administrator, you can customize the permissions by using the methods described in the following sections.
3.1. Aggregated cluster roles
By using aggregated cluster roles, you do not have to define permissions by creating new cluster roles from scratch. Instead, you can combine several cluster roles into a single one.
With Red Hat OpenShift GitOps 1.14 and later, as a cluster administrator, you can use aggregated cluster roles and enable users to easily add user-defined permissions for Argo CD Application Controller.
- The aggregated cluster roles functionality is optional and disabled by default. You can create aggregated cluster roles only for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
- Deleting the aggregatedClusterRoles field from the Argo CD custom resource (CR) does not delete the user-defined cluster role. You must manually delete the user-defined cluster role using the CLI or UI.
3.2. Prerequisites
- You understand aggregated cluster roles.
- You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace.
You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller, if ApplicationSet Controller is created
- You have disabled the creation of the default cluster roles for the cluster-scoped instance.
3.3. Creating aggregated cluster roles
The process of creating aggregated cluster roles consists of the following procedures:
- Enabling the creation of aggregated cluster roles
- Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
3.3.1. Enable the creation of aggregated cluster roles
You can enable the creation of aggregated cluster roles by setting the value of the .spec.aggregatedClusterRoles field to true in the Argo CD custom resource (CR). When you enable the creation of aggregated cluster roles, the Red Hat OpenShift GitOps Operator takes the following actions:
- Creates an <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field by default.
- Creates a corresponding cluster role binding and manages it.
- Creates and manages view and admin cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role.
3.3.2. Create user-defined cluster roles and configure user-defined permissions
To configure user-defined permissions into the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role and the aggregated cluster role, you must create one or more user-defined cluster roles with the argocd/aggregate-to-admin: 'true' label and then configure the user-defined permissions for Application Controller.
- The aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles.
- The <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role.
3.4. Enabling the creation of aggregated cluster roles
To enable the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance, you must configure the corresponding field by editing the YAML file of the Argo CD custom resource (CR).
Procedure
In the Argo CD CR, set the value of the .spec.aggregatedClusterRoles field to true.
Example Argo CD CR
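A minimal sketch, assuming the example instance in the spring-petclinic namespace used in this chapter:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example                     # name of the cluster-scoped instance
  namespace: spring-petclinic       # namespace where the instance runs
spec:
  aggregatedClusterRoles: true      # enables creation of aggregated cluster roles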
- The name of the cluster-scoped instance.
- The namespace where you want to run the cluster-scoped instance.
- The value set to true enables the creation of aggregated cluster roles. If you do not want to enable the creation of aggregated cluster roles, either do not include this line or set the value to false.
Example output
argocd.argoproj.io/example configured
Verify that the Status field of the cluster-scoped Argo CD instance shows as Phase: Available by running the following command:
$ oc describe argocd.argoproj.io/example -n spring-petclinic
In the command output, the Available status indicates that the cluster-scoped Argo CD instance is healthy and available.
Note: The Red Hat OpenShift GitOps Operator creates the following default cluster roles and manages them:
- <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role
- <argocd_name>-<argocd_namespace>-argocd-application-controller-view
- <argocd_name>-<argocd_namespace>-argocd-application-controller-admin
Verify that the Operator has created the default cluster roles and cluster role bindings for the Argo CD Application Controller and Argo CD server components by running the following commands:
$ oc get ClusterRoles -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                            CREATED AT
example-spring-petclinic-argocd-application-controller         2024-08-14T08:20:58Z
example-spring-petclinic-argocd-application-controller-admin   2024-08-14T09:08:38Z
example-spring-petclinic-argocd-application-controller-view    2024-08-14T09:08:38Z
example-spring-petclinic-argocd-server                          2024-08-14T08:20:59Z
$ oc get ClusterRoleBindings -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                      ROLE                                                                  AGE
example-spring-petclinic-argocd-application-controller   ClusterRole/example-spring-petclinic-argocd-application-controller   54m
example-spring-petclinic-argocd-server                    ClusterRole/example-spring-petclinic-argocd-server                    54m
The cluster role bindings for the view and admin cluster roles are not created. This is because the view and admin cluster roles only add permissions to the aggregated cluster role and do not directly configure permissions to the Argo CD Application Controller.
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles and User Management → RoleBindings, respectively. You can search for the cluster roles and cluster role bindings that have the app.kubernetes.io/part-of=argocd label.
Verify that the aggregated cluster role is created by checking the permissions in the output of the created roles. Run the following command:
$ oc get ClusterRole/<cluster_role_name> -o yaml
Replace <cluster_role_name> with the name of the role created.
Example output of the aggregated cluster role
- The name of the aggregated cluster role.
- 2
- The predefined list of labels indicates that the aggregated cluster role can inherit permissions from the other user-defined cluster roles.
- 3
- No predefined permissions are set. However, when the Operator immediately creates a
<argocd_name>-<argocd_namespace>-argocd-application-controller-view
cluster role, the corresponding predefinedview
permissions are added into the aggregated cluster role.
Example output of the
view
cluster roleCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output of the
admin
cluster roleCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The labels match the predefined list of an existing aggregated cluster role.
- 2
- The name of the
admin
cluster role. - 3
- The predefined list of labels indicates that the existing
<argocd_name>-<argocd_namespace>-argocd-application-controller-admin
cluster role can inherit permissions from the other user-defined cluster roles. - 4
- Specifies that no permissions are defined yet in one or more user-defined cluster roles.
TipAlternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role,
view
, andadmin
cluster roles. You must open the cluster role to check the details and configurations.As a cluster administrator, you can now create one or more user-defined cluster roles and configure user-defined permissions for Argo CD Application Controller.
3.5. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
As a cluster administrator, to add user-defined permissions to your aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
Prerequisites
- You have enabled the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
You have the following default cluster roles that are created and managed by the Red Hat OpenShift GitOps Operator:
- <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field
- <argocd_name>-<argocd_namespace>-argocd-application-controller-view with predefined view permissions
- <argocd_name>-<argocd_namespace>-argocd-application-controller-admin with no predefined permissions
-
Procedure
Create a new cluster role with the required labels and permissions by using the following command:
$ oc apply -n <namespace> -f <cluster_role_name>.yaml
where:
<namespace>: Specifies the name of your defined namespace.
<cluster_role_name>: Specifies the name of your defined cluster role YAML file.
Example user-defined cluster role YAML
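A hedged sketch of a user-defined cluster role with the aggregation label; the rules are illustrative assumptions, and additional labels may be required to match the aggregation rule of your instance:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user-application-controller       # name of the user-defined cluster role
  labels:
    argocd/aggregate-to-admin: 'true'     # label that aggregates this role into the -admin cluster role
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete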
- The name of the user-defined cluster role.
- The labels match the predefined list of an existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
- The user-defined permissions that are to be added into the aggregated cluster role through the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
Example output
clusterrole.rbac.authorization.k8s.io/user-application-controller created
clusterrole.rbac.authorization.k8s.io/user-application-controller created
Verify that the
<argocd_name>-<argocd_namespace>-argocd-application-controller-admin
cluster role inherits permissions from the user-defined cluster role by running the following command:oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
<argocd_name>
- Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>
Specifies the namespace where Argo CD is installed.
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipAlternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the
<argocd_name>-<argocd_namespace>-argocd-application-controller-admin
cluster role. You must open the cluster role to check the details and configurations.
Verify that the
<argocd_name>-<argocd_namespace>-argocd-application-controller
aggregated cluster role inherits permissions from the<argocd_name>-<argocd_namespace>-argocd-application-controller-admin
and<argocd_name>-<argocd_namespace>-argocd-application-controller-view
cluster roles by running the following command:
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
<argocd_name>
- Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>
Specifies the namespace where Argo CD is installed.
Example output of the aggregated cluster role
Copy to Clipboard Copied! Toggle word wrap Toggle overflow TipAlternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role. You must open the cluster role to check the details and configurations.
Chapter 4. Sharding clusters across Argo CD Application Controller replicas
You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory.
4.1. Enabling the round-robin sharding algorithm
The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards.
Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits:
- Ensure more balanced workload distribution
- Prevent shards from being overloaded or underutilized
- Optimize the efficiency of computing resources
- Reduce the risk of bottlenecks
- Improve overall performance and reliability of the Argo CD system
The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios.
To leverage the benefits of alternative sharding algorithms in GitOps, it is crucial to enable sharding during deployment.
4.1.1. Enabling the round-robin sharding algorithm in the web console
You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the OpenShift Container Platform web console.
- You have access to the cluster with cluster-admin privileges.
Procedure
- In the Administrator perspective of the web console, go to Operators → Installed Operators.
- Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab.
- Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops.
- Click the YAML tab and edit the YAML file as shown in the following example:
Example Argo CD instance with round-robin sharding algorithm enabled
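The original YAML example is not reproduced here. The following is a minimal sketch assembled from the spec.controller.sharding fields and the ARGOCD_CONTROLLER_SHARDING_ALGORITHM environment variable used in the CLI procedure later in this chapter; the apiVersion and the replica count are illustrative and may differ in your environment:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      # enable sharding and set the number of Application Controller replicas
      enabled: true
      replicas: 3
    env:
    # select the round-robin sharding algorithm
    - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM
      value: round-robin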
Click Save.
A success notification alert, openshift-gitops has been updated to version <version>, appears.
Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.
Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps:
- Go to Workloads → StatefulSets.
- Select the namespace where you installed the Argo CD instance from the Project drop-down list.
- Click <instance_name>-application-controller, for example, openshift-gitops-application-controller, and go to the Pods tab.
- Observe the number of created Application Controller pods. It must correspond to the number of set replicas.
- Click the controller pod that you want to examine and go to the Logs tab to view the pod logs.
Example controller pod logs snippet
time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"
Look for the "Using filter function: round-robin" message.
- In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example.
Important: Ensure that you set the log level to debug to observe these logs.
Example controller pod logs snippet
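The original snippet is not reproduced here. The following lines, taken from the equivalent CLI verification later in this chapter, show the expected pattern when three clusters are spread evenly across three shards:
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2"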
Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
4.1.2. Enabling the round-robin sharding algorithm by using the CLI
You can enable the round-robin sharding algorithm by using the command-line interface.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
Enable sharding and set the number of replicas to the desired value by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
Configure the sharding algorithm to round-robin by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
Verify that the number of Argo CD Application Controller pods corresponds with the number of set replicas by running the following command:
$ oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>
Example output
NAME                                         READY   STATUS    RESTARTS   AGE
<argocd_instance>-application-controller-0   1/1     Running   0          11s
<argocd_instance>-application-controller-1   1/1     Running   0          32s
<argocd_instance>-application-controller-2   1/1     Running   0          22s
Verify that the sharding is enabled with round-robin as the sharding algorithm by running the following command:
$ oc logs <argocd_application_controller_pod> -n <namespace>
Example output snippet
time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"
Look for the "Using filter function: round-robin" message.
Verify that the cluster distribution across shards is even by performing the following steps:
Set the log level to debug by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
View the logs and search for processed by shard to observe the shard to which each cluster is assigned by running the following command:
$ oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard"
Example output snippet
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2"
Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
4.2. Enabling dynamic scaling of shards of the Argo CD Application Controller
Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller assigns clusters to shards indefinitely. If you are using the round-robin sharding algorithm, this static assignment can result in uneven distribution of shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well-balanced and optimizes the use of compute resources.
After you enable dynamic scaling, you cannot manually modify the shard count. The system automatically adjusts the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time.
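As a worked example, assuming the controller derives the shard count from the number of managed clusters divided by clustersPerShard (rounded up) and then clamps the result between minShards and maxShards: with clustersPerShard set to 2, minShards set to 1, and maxShards set to 3, managing 5 clusters yields ceil(5/2) = 3 shards, while managing a single cluster yields 1 shard.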
4.2.1. Enabling dynamic scaling of shards in the web console
You can enable dynamic scaling of shards by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, go to Operators → Installed Operators.
- From the list of Installed Operators, select the Red Hat OpenShift GitOps Operator, and then click the Argo CD tab.
- Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops.
- Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows:
Example Argo CD YAML file with dynamic scaling enabled
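The original YAML example is not reproduced here. The following is a minimal sketch based on the fields described in the callouts below and on the CLI patch shown later in this chapter; the apiVersion and the example values are illustrative:
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      dynamicScalingEnabled: true   # 1
      minShards: 1                  # 2
      maxShards: 3                  # 3
      clustersPerShard: 1           # 4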
1 Set dynamicScalingEnabled to true to enable dynamic scaling.
2 Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater.
3 Set maxShards to the maximum number of shards that you want to have. The value must be greater than the value of minShards.
4 Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater.
Click Save.
A success notification alert, openshift-gitops has been updated to version <version>, appears.
Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.
Verification
Verify that sharding is enabled by checking the number of pods in the namespace:
- Go to Workloads → StatefulSets.
- Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops.
- Click the name of the StatefulSet object that has the name of the Argo CD instance, for example, openshift-gitops-application-controller.
- Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you have set in the Argo CD YAML file.
4.2.2. Enabling dynamic scaling of shards by using the CLI
You can enable dynamic scaling of shards by using the OpenShift CLI (oc).
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
- Log in to the cluster by using the oc tool as a user with cluster-admin privileges.
- Enable dynamic scaling by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}'
Example command
$ oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}'
The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1, the maximum number of shards to 3, and the number of clusters per shard to 1. The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards.
Example output
argocd.argoproj.io/openshift-gitops patched
Verification
- Check the spec.controller.sharding properties of the Argo CD instance by running the following command:
$ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'
Example command
$ oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'
Example output when dynamic scaling of shards is enabled
{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}
{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Optional: Verify that dynamic scaling is enabled by checking the configured
spec.controller.sharding
properties in the configurationYAML
file of the Argo CD instance in the OpenShift Container Platform web console. Check the number of Argo CD Application Controller pods:
oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller
$ oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example command
oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller
$ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m
NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The number of Argo CD Application Controller pods must be greater than or equal to the value of
minShard
.