Declarative cluster configuration
Abstract
Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps, and creating and synchronizing applications in the default and core modes by using the GitOps CLI.
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations
With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster.
1.1. Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
1.2. Using an Argo CD instance to manage cluster-scoped resources
Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster to become a cluster administrator themselves.
To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section.
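If you prefer the CLI, the following sketch applies the same change with oc patch, assuming the Subscription name and namespace shown in the procedure below. Note that a merge patch replaces the entire spec.config.env list, so include any other environment variables you already rely on:

$ oc patch subscription openshift-gitops-operator -n openshift-gitops-operator \
  --type merge \
  --patch '{"spec":{"config":{"env":[{"name":"ARGOCD_CLUSTER_CONFIG_NAMESPACES","value":"openshift-gitops,<namespace>"}]}}}'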
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators → Red Hat OpenShift GitOps → Subscription.
- Click the Actions list and then click Edit Subscription.
- On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-gitops-operator
# ...
spec:
  config:
    env:
    - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
      value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>
# ...
- Click Save and Reload.
To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps:
- Navigate to User Management → Roles and from the Filter list select Cluster-wide Roles.
- Search for the argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role.
Tip: Alternatively, in the OpenShift CLI, run the following command:
$ oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller
The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configuration and take the necessary corrective steps.
1.3. Default permissions of an Argo CD instance
By default, the Argo CD instance has the following permissions:
- The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace.
- Argo CD has the following cluster-scoped permissions because it requires cluster-wide read privileges on resources to function appropriately:

- verbs:
  - get
  - list
  - watch
  apiGroups:
  - '*'
  resources:
  - '*'
- verbs:
  - get
  - list
  nonResourceURLs:
  - '*'
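To review the rules currently granted, you can print the Argo CD cluster roles referenced in the edit commands that follow, for example:

$ oc get clusterrole argocd-application-controller -o yaml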
You can edit the cluster roles used by the argocd-server and argocd-application-controller components of Argo CD so that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage:

$ oc edit clusterrole argocd-server
$ oc edit clusterrole argocd-application-controller
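As an illustration only, a hypothetical narrowed rules block might keep cluster-wide read access while granting write verbs solely on the resource types you want Argo CD to manage; adjust the apiGroups and resources to your own needs:

rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - update
  - patch
  - delete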
1.4. Running the Argo CD instance on infrastructure nodes
The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
Procedure
Label the existing nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Optional: If required, you can also apply taints and isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes:
$ oc adm taint nodes -l node-role.kubernetes.io/infra \
  infra=reserved:NoSchedule infra=reserved:NoExecute
Add the runOnInfra toggle in the GitopsService custom resource:

apiVersion: pipelines.openshift.io/v1alpha1
kind: GitopsService
metadata:
  name: cluster
spec:
  runOnInfra: true
Optional: If taints have been added to the nodes, then add tolerations to the GitopsService custom resource.
Example

apiVersion: pipelines.openshift.io/v1alpha1
kind: GitopsService
metadata:
  name: cluster
spec:
  runOnInfra: true
  tolerations:
  - effect: NoSchedule
    key: infra
    value: reserved
  - effect: NoExecute
    key: infra
    value: reserved
- Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods → Pod details for any pod in the console UI.
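Alternatively, you can check the placement from the CLI; the NODE column should list the infrastructure nodes:

$ oc get pods -n openshift-gitops -o wide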
Note: Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and the tolerations in the GitopsService custom resource.
Additional resources
- To learn more about taints and tolerations, see Controlling pod placement using node taints.
- For more information on infrastructure machine sets, see Creating infrastructure machine sets.
1.5. Creating an application by using the Argo CD dashboard
Argo CD provides a dashboard which allows you to create applications.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the menu in the web console, and defines a spring-petclinic namespace on the cluster.
Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
Procedure
- In the Argo CD dashboard, click NEW APP to add a new Argo CD application.
For this workflow, create a cluster-configs application with the following configurations:
- Application Name: cluster-configs
- Project: default
- Sync Policy: Manual
- Repository URL: https://github.com/redhat-developer/openshift-gitops-getting-started
- Revision: HEAD
- Path: cluster
- Destination: https://kubernetes.default.svc
- Namespace: spring-petclinic
- Directory Recurse: checked
- Click CREATE to create your application.
- Open the Administrator perspective of the web console and expand Administration → Namespaces.
- Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace.
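Alternatively, you can apply the same label from the CLI, as shown in the next section:

$ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops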
1.6. Creating an application by using the oc tool
You can create Argo CD applications in your terminal by using the oc tool.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to an Argo CD instance.
Procedure
Download the sample application:
$ git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git
Create the application:
$ oc create -f openshift-gitops-getting-started/argo/app.yaml
Run the oc get command to review the created application:
$ oc get application -n openshift-gitops
Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:
$ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops
1.7. Creating an application in the default mode by using the GitOps CLI
You can create applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have logged in to the Argo CD instance.
Procedure
Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
Log in to the Argo CD server by using the admin account password, enclosing it in single quotes:
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
Verify that you are able to run argocd commands in the default mode by listing all applications:
$ argocd app list
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
Create an application in the default mode:
$ argocd app create app-cluster-configs \
  --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
  --path cluster \
  --revision main \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace spring-petclinic \
  --directory-recurse \
  --sync-policy none \
  --sync-option Prune=true \
  --sync-option CreateNamespace=true
Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
List the available applications to confirm that the application is created successfully:
$ argocd app list
Note: Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
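When you are ready to sync, you can trigger the operation manually, as described in "Synchronizing an application in the default mode by using the GitOps CLI":

$ argocd app sync openshift-gitops/app-cluster-configs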
1.8. Creating an application in core mode by using the GitOps CLI
You can create applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
Verify that you are able to run argocd commands in core mode by listing all applications:
$ argocd app list --core
If the configuration is correct, then existing applications will be listed with the following header:
Sample output
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
Create an application in core mode:
$ argocd app create app-cluster-configs --core \
  --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
  --path cluster \
  --revision main \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace spring-petclinic \
  --directory-recurse \
  --sync-policy none \
  --sync-option Prune=true \
  --sync-option CreateNamespace=true
Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:
$ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
List the available applications to confirm that the application is created successfully:
$ argocd app list --core
Note: Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, causing it to remain in the OutOfSync status.
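When you are ready to sync, you can trigger the operation manually, as described in "Synchronizing an application in core mode by using the GitOps CLI":

$ argocd app sync --core openshift-gitops/app-cluster-configs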
1.9. Synchronizing your application with your Git repository
You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster.
Procedure
- In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync. Because the application was configured with a manual sync policy, Argo CD does not sync it automatically.
- Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE. Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync. You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster.
- Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced. Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster.
- Navigate to the OpenShift Container Platform web console and verify that a link to the Red Hat Developer Blog - Kubernetes is now present in the menu.
Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster.
Your cluster configurations have been successfully synchronized to the cluster.
1.10. Synchronizing an application in the default mode by using the GitOps CLI
You can synchronize applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Get the admin account password for the Argo CD server:
$ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
Get the Argo CD server URL:
$ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
Log in to the Argo CD server by using the admin account password, enclosing it in single quotes:
Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.
$ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}
Example
$ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing
Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync openshift-gitops/app-cluster-configs
List the application to confirm that it has the Healthy and Synced statuses:
$ argocd app list
1.11. Synchronizing an application in core mode by using the GitOps CLI
You can synchronize applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
Log in to the OpenShift Container Platform cluster by using the oc CLI tool:
$ oc login -u <username> -p <password> <server_url>
Example
$ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443
Check whether the context is set correctly in the kubeconfig file:
$ oc config current-context
Set the default namespace of the current context to openshift-gitops:
$ oc config set-context --current --namespace openshift-gitops
Set the following environment variable to override the Argo CD component names:
$ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
Because the application is configured with the none sync policy, you must manually trigger the sync operation:
$ argocd app sync --core openshift-gitops/app-cluster-configs
List the application to confirm that it has the Healthy and Synced statuses:
$ argocd app list --core
1.12. In-built permissions for cluster configuration
By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management.
- Argo CD does not have cluster-admin permissions.
- You can extend the permissions bound to any Argo CD instance managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses.
Permissions for the Argo CD instance:

| Resource group | What it configures for the user or administrator |
|---|---|
| operators.coreos.com | Optional Operators managed by OLM |
| user.openshift.io, rbac.authorization.k8s.io | Groups, Users and their permissions |
| config.openshift.io | Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies |
| storage.k8s.io | Storage |
| console.openshift.io | Console customization |
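To see the exact rules behind this table, you can print the cluster role that the Operator creates for the Application Controller. The following sketch assumes the default openshift-gitops instance and the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention described later in this guide:

$ oc get clusterrole openshift-gitops-openshift-gitops-argocd-application-controller -o yaml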
1.13. Adding permissions for cluster configuration
You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account.
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
In the web console, select User Management → Roles → Create Role. Use the following ClusterRole YAML template to add rules to specify the additional permissions.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secrets-cluster-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["*"]
- Click Create to add the cluster role.
- To create the cluster role binding, select User Management → Role Bindings → Create Binding.
- Select All Projects from the Project list.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for the RoleBinding name.
- Select the newly created cluster role or an existing cluster role from the drop-down list.
Select the Subject as ServiceAccount and provide the Subject namespace and name:
- Subject namespace: openshift-gitops
- Subject name: openshift-gitops-argocd-application-controller
Note: The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings.
Click Create. The YAML file for the ClusterRoleBinding object is as follows:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-role-binding
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secrets-cluster-role
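To confirm that the new binding took effect, you can run an impersonated access check, following the same pattern as the earlier oc auth can-i example:

$ oc auth can-i create secrets --all-namespaces \
  --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller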
1.14. Installing OLM Operators using Red Hat OpenShift GitOps
Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators.
Consider a case where as a cluster administrator, you have to install an OLM Operator such as Tekton. You use the OpenShift Container Platform web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster.
Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository, Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster.
1.14.1. Installing cluster-scoped Operators
Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace.
To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository.
Example: Grafana Operator subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana
spec:
  channel: v4
  installPlanApproval: Automatic
  name: grafana-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
1.14.2. Installing namespace-scoped Operators
To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository.
Example: Ansible Automation Platform Resource Operator
# ...
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: ansible-automation-platform
# ...
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
  - ansible-automation-platform
# ...
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform
  namespace: ansible-automation-platform
spec:
  channel: patch-me
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
# ...
Note: When deploying multiple Operators by using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in the corresponding namespace reaches one, all the previous failure state CSVs transition to the pending state. You must manually approve the pending install plan to complete the Operator installation.
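The approval can also be done from the CLI. A sketch, assuming you first look up the name of the pending install plan in the Operator's namespace:

$ oc get installplan -n <namespace>
$ oc patch installplan <install_plan_name> -n <namespace> \
  --type merge --patch '{"spec":{"approved":true}}'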
1.15. Additional resources
Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances
For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level.
If you do not want the cluster-scoped instance to have all of the Operator-given permissions and choose to add or remove permissions to cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. Then, you can customize permissions for the following cluster-scoped instances:
- Default ArgoCD instance (default cluster-scoped instance)
- User-defined cluster-scoped Argo CD instance
This guide provides instructions with examples to help you create a user-defined cluster-scoped Argo CD instance, deploy an Argo CD application in your defined namespace that contains custom configurations for your cluster, disable the creation of the default cluster roles for the cluster-scoped instance, and customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
As a developer, if you are creating an Argo CD application and deploying cluster-wide resources, ensure that your cluster administrator grants the necessary permissions to them.
Otherwise, after the Argo CD reconciliation, you will see an authentication error message in the application's Status field similar to the following example:
Example authentication error message
persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope.
2.1. Prerequisites
- You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace, for example, the spring-petclinic namespace.
- You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created)
- You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources.
Note: The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set:
  - The selfHeal field value set to true
  - The syncPolicy field value set to automated
  - The Label field set to the app.kubernetes.io/part-of=argocd value
  - The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace
  - The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value
2.2. Disabling the creation of the default cluster roles for the cluster-scoped instance
To add or remove permissions to cluster-wide resources, as needed, you must disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR).
Procedure
In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true:
Example Argo CD CR

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example 1
  namespace: spring-petclinic 2
# ...
spec:
  defaultClusterScopedRoleDisabled: true 3
# ...
1 The name of the cluster-scoped instance.
2 The namespace where you want to run the cluster-scoped instance.
3 The flag value that disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false.
Sample output
argocd.argoproj.io/example configured
Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands:
$ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>
$ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>
Sample output
No resources found
The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.
Additional resources
2.3. Customizing permissions for cluster-scoped instances
As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components.
For example purposes, the following instructions focus only on user-defined cluster-scoped instances.
Procedure
- Open the Administrator perspective of the web console and go to User Management → Roles → Create Role.
Use the following ClusterRole YAML template to add rules to specify the additional permissions.
Example cluster role YAML template

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-spring-petclinic-argocd-application-controller 1
rules:
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - '*'
  resources:
  - '*'
- verbs:
  - '*'
  apiGroups:
  - ''
  resources: 2
  - namespaces
  - persistentvolumes
- Click Create to add the cluster role.
Find the service account used by the control plane component you are customizing permissions for, by performing the following steps:
- Go to Workloads → Pods.
- From the Project list, select the project where the user-defined cluster-scoped instance is installed.
- Click the pod of the control plane component and go to the YAML tab.
- Find the spec.serviceAccountName field and note the service account.
- Go to User Management → RoleBindings → Create binding.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
- Select the newly created cluster role from the drop-down list for Role name.
Select the Subject as ServiceAccount and provide the Subject namespace and name:
- Subject namespace: spring-petclinic
- Subject name: example-argocd-application-controller
Note: For Subject name, ensure that the value you configure is the same as the value of the spec.serviceAccountName field of the control plane component you are customizing permissions for.
Click Create.
You have created the required permissions for the control plane component's service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example:
Example YAML file for a cluster role binding

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-spring-petclinic-argocd-application-controller
subjects:
- kind: ServiceAccount
  name: example-argocd-application-controller
  namespace: spring-petclinic
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-spring-petclinic-argocd-application-controller
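To confirm that the binding resolves the persistentvolumes error shown at the beginning of this chapter, you can run an impersonated access check against the component's service account:

$ oc auth can-i create persistentvolumes \
  --as system:serviceaccount:spring-petclinic:example-argocd-application-controller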
2.4. Additional resources
Chapter 3. Customizing permissions by creating aggregated cluster roles
The default cluster role for the Argo CD Application Controller has a specific set of hard-coded permissions. The Red Hat OpenShift GitOps Operator manages this cluster role, so you cannot modify it. As a cluster administrator, you can customize the permissions by using any one of the following methods:
3.1. Aggregated cluster roles
By using aggregated cluster roles, you do not have to define permissions by creating new cluster roles from scratch. Instead, you can combine several cluster roles into a single one.
With Red Hat OpenShift GitOps 1.14 and later, as a cluster administrator, you can use aggregated cluster roles and enable users to easily add user-defined permissions for Argo CD Application Controller.
- You can create aggregated cluster roles only for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
- Deleting the aggregatedClusterRoles field from the Argo CD custom resource (CR) does not delete the user-defined cluster role. You must manually delete the user-defined cluster role by using the CLI or UI.
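For example, to delete a user-defined cluster role from the CLI (the role name here is the hypothetical one used in the examples later in this chapter):

$ oc delete clusterrole user-application-controller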
3.2. Prerequisites
- You understand aggregated cluster roles.
- You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have installed a cluster-scoped Argo CD instance in your defined namespace.
You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:
- Argo CD Application Controller
- Argo CD server
- Argo CD ApplicationSet Controller, if ApplicationSet Controller is created
- You have disabled the creation of the default cluster roles for the cluster-scoped instance.
3.3. Creating aggregated cluster roles
The process of creating aggregated cluster roles consists of the following procedures:
- Enabling the creation of aggregated cluster roles
- Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
3.3.1. Enable the creation of aggregated cluster roles
You can enable the creation of aggregated cluster roles by setting the value of the .spec.aggregatedClusterRoles field to true in the Argo CD custom resource (CR). When you enable the creation of aggregated cluster roles, the Red Hat OpenShift GitOps Operator takes the following actions:
- Creates an <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field by default.
- Creates a corresponding cluster role binding and manages it.
- Creates and manages view and admin cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role.
3.3.2. Create user-defined cluster roles and configure user-defined permissions
To configure user-defined permissions into the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role and the aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for Application Controller.
- The aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles.
- The <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role.
3.4. Enabling the creation of aggregated cluster roles
To enable the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance, you must configure the corresponding field by editing the YAML file of the Argo CD custom resource (CR).
Procedure
In the Argo CD CR, set the value of the .spec.aggregatedClusterRoles field to true:
Example Argo CD CR

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example 1
  namespace: spring-petclinic 2
# ...
spec:
  aggregatedClusterRoles: true 3
# ...
1 The name of the cluster-scoped instance.
2 The namespace where you want to run the cluster-scoped instance.
3 The value set to true enables the creation of aggregated cluster roles. If you do not want to enable the creation of aggregated cluster roles, either do not include this line or set the value to false.
Example output
argocd.argoproj.io/example configured
Verify that the Status field of the cluster-scoped Argo CD instance shows Phase: Available by running the following command:
$ oc describe argocd.argoproj.io/example -n spring-petclinic
Example output
Name:         example
Namespace:    spring-petclinic
Labels:       <none>
Annotations:  <none>
API Version:  argoproj.io/v1beta1
Kind:         ArgoCD
Metadata:
  Creation Timestamp:  2024-08-14T08:20:53Z
  Finalizers:
    argoproj.io/finalizer
  Generation:        3
  Resource Version:  60437
  UID:               57940e54-d60b-4c1a-bc4a-85c81c63ab69
Spec:
  Aggregated Cluster Roles:  true
  ...
Status:
  Application Controller:      Running
  Application Set Controller:  Unknown
  Phase:                       Available 1
  Redis:                       Running
  Repo:                        Running
  Server:                      Running
  Sso:                         Unknown
Events:  <none>
1 The Available status indicates that the cluster-scoped Argo CD instance is healthy and available.
Note: The Red Hat OpenShift GitOps Operator creates the following default cluster roles and manages them:
- <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role
- <argocd_name>-<argocd_namespace>-argocd-application-controller-view
- <argocd_name>-<argocd_namespace>-argocd-application-controller-admin
Verify that the Operator has created the default cluster roles and cluster role bindings for the Argo CD Application Controller and Argo CD server components by running the following commands:
$ oc get ClusterRoles -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                            CREATED AT
example-spring-petclinic-argocd-application-controller         2024-08-14T08:20:58Z
example-spring-petclinic-argocd-application-controller-admin   2024-08-14T09:08:38Z
example-spring-petclinic-argocd-application-controller-view    2024-08-14T09:08:38Z
example-spring-petclinic-argocd-server                         2024-08-14T08:20:59Z
$ oc get ClusterRoleBindings -l app.kubernetes.io/part-of=argocd
Example output
NAME                                                      ROLE                                                                  AGE
example-spring-petclinic-argocd-application-controller   ClusterRole/example-spring-petclinic-argocd-application-controller   54m
example-spring-petclinic-argocd-server                   ClusterRole/example-spring-petclinic-argocd-server                    54m
The cluster role bindings for the view and admin cluster roles are not created. This is because the view and admin cluster roles only add permissions to the aggregated cluster role and do not directly configure permissions to the Argo CD Application Controller.
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. Go to User Management → Roles and User Management → RoleBindings, respectively, and search for the cluster roles and cluster role bindings that have the app.kubernetes.io/part-of=argocd label.
Verify that the aggregated cluster role is created by checking the permissions in the output of the following command:
$ oc get ClusterRole/<cluster_role_name> -o yaml 1
1 Replace <cluster_role_name> with the name of the role created.
Example output of the aggregated cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocds.argoproj.io/name: example
    argocds.argoproj.io/namespace: spring-petclinic
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2024-08-14T08:20:58Z"
  labels:
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
  name: example-spring-petclinic-argocd-application-controller 1
  resourceVersion: "78640"
  uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e
aggregationRule: 2
  clusterRoleSelectors:
  - matchLabels:
      app.kubernetes.io/managed-by: spring-petclinic
      argocd/aggregate-to-controller: "true"
rules: [] 3
1 The name of the aggregated cluster role.
2 The predefined list of labels indicates that the aggregated cluster role can inherit permissions from the other user-defined cluster roles.
3 No predefined permissions are set. However, when the Operator immediately creates a <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster role, the corresponding predefined view permissions are added into the aggregated cluster role.
Example output of the view cluster role

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocds.argoproj.io/name: example
    argocds.argoproj.io/namespace: spring-petclinic
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
  creationTimestamp: "2024-08-14T09:59:14Z"
  labels: 1
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
    argocd/aggregate-to-controller: "true"
  name: example-spring-petclinic-argocd-application-controller-view 2
  resourceVersion: "78639"
  uid: 068b8867-7a0c-4af3-a17a-0560a00eba41
rules: 3
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list
Example output of the admin cluster role

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocds.argoproj.io/name: example
    argocds.argoproj.io/namespace: spring-petclinic
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2024-08-14T09:59:15Z"
  labels: 1
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
    argocd/aggregate-to-controller: "true"
  name: example-spring-petclinic-argocd-application-controller-admin 2
  resourceVersion: "78642"
  uid: e2d35b6f-0832-4993-8b24-915a725454f9
aggregationRule: 3
  clusterRoleSelectors:
  - matchLabels:
      app.kubernetes.io/managed-by: spring-petclinic
      argocd/aggregate-to-admin: "true"
rules: null 4
1 The labels match the predefined list of an existing aggregated cluster role.
2 The name of the admin cluster role.
3 The predefined list of labels indicates that the existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role can inherit permissions from the other user-defined cluster roles.
4 Specifies that no permissions are defined yet in one or more user-defined cluster roles.
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. Go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role and the view and admin cluster roles. Open a cluster role to check its details and configuration.
As a cluster administrator, you can now create one or more user-defined cluster roles and configure user-defined permissions for Argo CD Application Controller.
Additional resources
3.5. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller
As a cluster administrator, to add user-defined permissions to your aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
Prerequisites
- You have enabled the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
You have the following default cluster roles that are created and managed by the Red Hat OpenShift GitOps Operator:
- <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field
- <argocd_name>-<argocd_namespace>-argocd-application-controller-view with predefined view permissions
- <argocd_name>-<argocd_namespace>-argocd-application-controller-admin with no predefined permissions
Procedure
Create a new cluster role with the required labels and permissions by using the following command:
$ oc apply -n <namespace> -f <cluster_role_name>.yaml
where:
<namespace>: Specifies the name of your defined namespace.
<cluster_role_name>: Specifies the name of your defined cluster role YAML file.
Example user-defined cluster role YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user-application-controller 1
  labels: 2
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
    argocd/aggregate-to-admin: 'true'
rules: 3
- verbs:
  - '*'
  apiGroups:
  - ''
  resources:
  - namespaces
  - persistentvolumeclaims
  - persistentvolumes
  - configmaps
- verbs:
  - '*'
  apiGroups:
  - compliance.openshift.io
  resources:
  - scansettingbindings
1 The name of the user-defined cluster role.
2 The labels match the predefined list of an existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
3 The user-defined permissions that are to be added into the aggregated cluster role through the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
Tip: Alternatively, you can use the web console to create a user-defined cluster role from the Administrator perspective. Go to User Management → Roles → Create Role, use the preceding YAML template to add permissions, and click Create.
Example output
clusterrole.rbac.authorization.k8s.io/user-application-controller created
A user-defined cluster role is created.
Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role by running the following command:
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml
where:
<argocd_name>: Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>: Specifies the namespace where Argo CD is installed.
Example output

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      app.kubernetes.io/managed-by: spring-petclinic
      argocd/aggregate-to-admin: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocds.argoproj.io/name: example
    argocds.argoproj.io/namespace: spring-petclinic
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
  creationTimestamp: "2024-08-14T09:59:15Z"
  labels:
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
    argocd/aggregate-to-controller: "true"
  name: example-spring-petclinic-argocd-application-controller-admin
  resourceVersion: "79202"
  uid: e2d35b6f-0832-4993-8b24-915a725454f9
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - persistentvolumeclaims
  - persistentvolumes
  - configmaps
  verbs:
  - '*'
- apiGroups:
  - compliance.openshift.io
  resources:
  - scansettingbindings
  verbs:
  - '*'
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. Go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role. Open the cluster role to check its details and configuration.
Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles by running the following command:
$ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller -o yaml
where:
<argocd_name>: Specifies the name of your user-defined cluster-scoped Argo CD instance.
<argocd_namespace>: Specifies the namespace where Argo CD is installed.
Example output of the aggregated cluster role
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      app.kubernetes.io/managed-by: spring-petclinic
      argocd/aggregate-to-controller: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocds.argoproj.io/name: example
    argocds.argoproj.io/namespace: spring-petclinic
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2024-08-14T08:20:58Z"
  labels:
    app.kubernetes.io/managed-by: spring-petclinic
    app.kubernetes.io/name: example
    app.kubernetes.io/part-of: argocd
  name: example-spring-petclinic-argocd-application-controller
  resourceVersion: "79203"
  uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - persistentvolumeclaims
  - persistentvolumes
  - configmaps
  verbs:
  - '*'
- apiGroups:
  - compliance.openshift.io
  resources:
  - scansettingbindings
  verbs:
  - '*'
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - '*'
  verbs:
  - get
  - list
Tip: Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. Go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role. Open the cluster role to check its details and configuration.
3.6. Additional resources
Chapter 4. Sharding clusters across Argo CD Application Controller replicas
You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory.
4.1. Enabling the round-robin sharding algorithm
The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards.
Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits:
- Ensure more balanced workload distribution
- Prevent shards from being overloaded or underutilized
- Optimize the efficiency of computing resources
- Reduce the risk of bottlenecks
- Improve overall performance and reliability of the Argo CD system
The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios.
To leverage the benefits of alternative sharding algorithms in GitOps, it is crucial to enable sharding during deployment.
4.1.1. Enabling the round-robin sharding algorithm in the web console
You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the OpenShift Container Platform web console.
- You have access to the cluster with cluster-admin privileges.
Procedure
- In the Administrator perspective of the web console, go to Operators → Installed Operators.
- Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab.
- Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops.
- Click the YAML tab and edit the YAML file as shown in the following example:
Example Argo CD instance with round-robin sharding algorithm enabled
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      enabled: true 1
      replicas: 3 2
    env: 3
      - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM
        value: round-robin
    logLevel: debug 4

1 Set enabled to true to enable sharding.
2 Set the number of replicas to the wanted value.
3 The environment variable that sets the sharding algorithm to round-robin.
4 Set the log level to debug so that you can observe the cluster distribution logs.
- Click Save. A success notification alert, openshift-gitops has been updated to version <version>, appears.
Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.
Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps:
- Go to Workloads → StatefulSets.
- Select the namespace where you installed the Argo CD instance from the Project drop-down list.
- Click <instance_name>-application-controller, for example, openshift-gitops-application-controller, and go to the Pods tab.
- Observe the number of created application controller pods. It should correspond with the number of set replicas.
Click the controller pod you want to examine and go to the Logs tab to view the pod logs.
Example controller pod logs snippet

time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"
1 Look for the "Using filter function: round-robin" message.
In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example.
Important: Ensure that you set the log level to debug to observe these logs.
Example controller pod logs snippet

time="2023-12-13T09:05:34Z" level=debug msg="ClustersList has 3 items"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id= and name=in-cluster to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3
Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
4.1.2. Enabling the round-robin sharding algorithm by using the CLI
You can enable the round-robin sharding algorithm by using the command-line interface.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
Enable sharding and set the number of replicas to the wanted value by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
Configure the sharding algorithm to round-robin by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
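Tip: Alternatively, you can apply both changes in a single merge patch. The following command is a sketch that combines the two previous patches, using the same placeholders:
$ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>},"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}'
Note that a merge patch replaces the entire env array, so include any other controller environment variables that you have set.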
Verify that the number of Argo CD Application Controller pods corresponds to the number of replicas that you set by running the following command:
$ oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>
Example output
NAME                                         READY   STATUS    RESTARTS   AGE
<argocd_instance>-application-controller-0   1/1     Running   0          11s
<argocd_instance>-application-controller-1   1/1     Running   0          32s
<argocd_instance>-application-controller-2   1/1     Running   0          22s
Verify that sharding is enabled with round-robin as the sharding algorithm by running the following command:
$ oc logs <argocd_application_controller_pod> -n <namespace>
Example output snippet
time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1" time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1 time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"
1 Look for the "Using filter function: round-robin" message.
Verify that the cluster distribution across shards is even by performing the following steps:
Set the log level to debug by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge
Example output
argocd.argoproj.io/<argocd_instance> patched
View the logs and search for processed by shard to see which shard each cluster is assigned to by running the following command:
$ oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard"
Example output snippet
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3
Note: If the number of clusters "C" is a multiple of the number of shard replicas "R", each shard is assigned the same number of clusters "N", which equals "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.
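Tip: To count how many clusters each shard processes, you can aggregate the matching log lines with standard shell tools, for example:
$ oc logs <argocd_application_controller_pod> -n <namespace> | grep -o "processed by shard [0-9]*" | sort | uniq -c
An even distribution shows approximately the same count for each shard number.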
4.2. Enabling dynamic scaling of shards of the Argo CD Application Controller
Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
By default, the Argo CD Application Controller assigns clusters to shards statically. If you are using the round-robin sharding algorithm, this static assignment can result in an uneven distribution of clusters across shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well balanced and optimizes the use of compute resources.
After you enable dynamic scaling, you cannot modify the shard count manually; the system adjusts it automatically based on the number of managed clusters.
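Although the exact reconciliation logic belongs to the Operator, the documented bounds imply that the effective shard count tracks the number of managed clusters divided by clustersPerShard, clamped between minShards and maxShards. The following shell sketch illustrates that arithmetic only; it assumes ceiling division and is not Operator source code:
# Illustration of the documented bounds, not Operator code
clusters=5; clustersPerShard=2; minShards=1; maxShards=3
shards=$(( (clusters + clustersPerShard - 1) / clustersPerShard ))  # assumed ceiling division
(( shards < minShards )) && shards=$minShards   # never fewer than minShards
(( shards > maxShards )) && shards=$maxShards   # never more than maxShards
echo "$shards shards"  # prints "3 shards" for 5 clusters at 2 clusters per shard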
4.2.1. Enabling dynamic scaling of shards in the web console
You can enable dynamic scaling of shards by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, go to Operators → Installed Operators.
- From the list of Installed Operators, select the Red Hat OpenShift GitOps Operator, and then click the ArgoCD tab.
- Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops.
- Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows:
Example Argo CD YAML file with dynamic scaling enabled
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      dynamicScalingEnabled: true 1
      minShards: 1 2
      maxShards: 3 3
      clustersPerShard: 1 4
1 Set dynamicScalingEnabled to true to enable dynamic scaling.
2 Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater.
3 Set maxShards to the maximum number of shards that you want to have. The value must be equal to or greater than the value of minShards.
4 Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater.
Click Save.
A success notification alert, openshift-gitops has been updated to version <version>, appears.
Note: If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.
Verification
Verify that sharding is enabled by checking the number of pods in the namespace:
- Go to Workloads → StatefulSets.
- Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops.
- Click the name of the StatefulSet object that has the name of the Argo CD instance, for example, openshift-gitops-application-controller.
- Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you set in the Argo CD YAML file.
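Tip: As an optional cross-check from the CLI (assuming the default openshift-gitops instance and namespace), you can count the controller pods directly:
$ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller --no-headers | wc -l
The printed count should be equal to or greater than the value of minShards.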
4.2.2. Enabling dynamic scaling of shards by using the CLI
You can enable dynamic scaling of shards by using the OpenShift CLI (oc).
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have access to the cluster with cluster-admin privileges.
Procedure
- Log in to the cluster by using the oc tool as a user with cluster-admin privileges.
- Enable dynamic scaling by running the following command:
$ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}'
Example command
$ oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}' 1
1 The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1, the maximum number of shards to 3, and the number of clusters per shard to 1. The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards.
Example output
argocd.argoproj.io/openshift-gitops patched
Verification
Check the spec.controller.sharding properties of the Argo CD instance by running the following command:
$ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'
Example command
$ oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'
Example output when dynamic scaling of shards is enabled
{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}
- Optional: Verify that dynamic scaling is enabled by checking the configured spec.controller.sharding properties in the configuration YAML file of the Argo CD instance in the OpenShift Container Platform web console.
- Check the number of Argo CD Application Controller pods by running the following command:
$ oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller
Example command
$ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller
Example output
NAME                                        READY   STATUS    RESTARTS   AGE
openshift-gitops-application-controller-0   1/1     Running   0          2m 1
1 The number of Argo CD Application Controller pods must be greater than or equal to the value of minShards.
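Tip: To observe the controller scaling shards as clusters are added or removed, you can watch the pods (assuming the default instance and namespace):
$ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller -w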