Declarative cluster configuration


Red Hat OpenShift GitOps 1.15

Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps and creating and synchronizing applications in the default and core modes by using the GitOps CLI

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for configuring Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster. It also describes how to create and synchronize applications in the default and core modes by using the GitOps CLI.

Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations

With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster.

1.1. Prerequisites

  • You have logged in to the OpenShift Container Platform cluster as an administrator.
  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.

1.2. Using an Argo CD instance to manage cluster-scoped resources

Warning

Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster to become a cluster administrator themselves.

To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section.

Procedure

  1. In the Administrator perspective of the web console, navigate to Operators → Installed Operators → Red Hat OpenShift GitOps → Subscription.
  2. Click the Actions list and then click Edit Subscription.
  3. On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-gitops-operator
      namespace: openshift-gitops-operator
    # ...
    spec:
      config:
        env:
        - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
          value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>
    # ...
  4. Click Save and Reload.
  5. To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps:

    1. Navigate to User Management → Roles and from the Filter list select Cluster-wide Roles.
    2. Search for the argocd-application-controller by using the Search by name field.

      The Roles page displays the created cluster role.

      Tip

      Alternatively, in the OpenShift CLI, run the following command:

      oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller

      The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configurations and take the necessary corrective steps.

1.3. Default permissions of an Argo CD instance

By default, the Argo CD instance has the following permissions:

  • The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For example, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace.
  • Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide read privileges on resources to function appropriately:

    - verbs:
        - get
        - list
        - watch
      apiGroups:
        - '*'
      resources:
        - '*'
    - verbs:
        - get
        - list
      nonResourceURLs:
        - '*'
Note
  • You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running such that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage.

    $ oc edit clusterrole argocd-server
    $ oc edit clusterrole argocd-application-controller
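
    For example, a narrowed rules section that keeps cluster-wide read access but limits write access to ConfigMaps might look like the following sketch (purely illustrative; adapt the API groups, resources, and verbs to what you want Argo CD to manage):

    rules:
    - verbs:
        - get
        - list
        - watch
      apiGroups:
        - '*'
      resources:
        - '*'
    - verbs:
        - create
        - update
        - patch
        - delete
      apiGroups:
        - ''
      resources:
        - configmaps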

1.4. Running the Argo CD instance at the cluster-level

The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.

Procedure

  1. Label the existing nodes:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""
  2. Optional: If required, you can also apply taints to isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes:

    $ oc adm taint nodes -l node-role.kubernetes.io/infra \
    infra=reserved:NoSchedule infra=reserved:NoExecute
  3. Add the runOnInfra toggle in the GitOpsService custom resource:

    apiVersion: pipelines.openshift.io/v1alpha1
    kind: GitopsService
    metadata:
      name: cluster
    spec:
      runOnInfra: true
  4. Optional: If taints have been added to the nodes, then add tolerations to the GitOpsService custom resource.

    Example

    apiVersion: pipelines.openshift.io/v1alpha1
    kind: GitopsService
    metadata:
      name: cluster
    spec:
      runOnInfra: true
      tolerations:
      - effect: NoSchedule
        key: infra
        value: reserved
      - effect: NoExecute
        key: infra
        value: reserved

  5. Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods → Pod details for any pod in the console UI.
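
    Alternatively, you can check pod placement from the CLI; for example, list the pods together with the nodes they are scheduled on:

    $ oc get pods -n openshift-gitops -o wide
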
Note

Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and tolerations in the GitOpsService custom resource.

Additional resources

1.5. Creating an application by using the Argo CD dashboard

Argo CD provides a dashboard which allows you to create applications.

This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the Red Hat applications menu in the web console, and defines the spring-petclinic namespace on the cluster.

Prerequisites

  • You have logged in to the OpenShift Container Platform cluster as an administrator.
  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have logged in to the Argo CD instance.

Procedure

  1. In the Argo CD dashboard, click NEW APP to add a new Argo CD application.
  2. For this workflow, create a cluster-configs application with the following configurations:

    Application Name: cluster-configs
    Project: default
    Sync Policy: Manual
    Repository URL: https://github.com/redhat-developer/openshift-gitops-getting-started
    Revision: HEAD
    Path: cluster
    Destination: https://kubernetes.default.svc
    Namespace: spring-petclinic
    Directory Recurse: checked
  3. Click CREATE to create your application.
  4. Open the Administrator perspective of the web console and expand Administration → Namespaces.
  5. Search for and select the spring-petclinic namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace.
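
For reference, you can express the same application declaratively. A minimal Application manifest equivalent to the dashboard settings above might look like the following sketch (the metadata name and the openshift-gitops namespace are assumptions; omitting the syncPolicy field corresponds to the Manual sync policy):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configs
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/redhat-developer/openshift-gitops-getting-started
    targetRevision: HEAD
    path: cluster
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic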

1.6. Creating an application by using the oc tool

You can create Argo CD applications in your terminal by using the oc tool.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have logged in to an Argo CD instance.

Procedure

  1. Download the sample application:

    $ git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git
  2. Create the application:

    $ oc create -f openshift-gitops-getting-started/argo/app.yaml
  3. Run the oc get command to review the created application:

    $ oc get application -n openshift-gitops
  4. Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:

    $ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops

1.7. Creating an application in the default mode by using the GitOps CLI

You can create applications in the default mode by using the GitOps argocd CLI.

This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.
  • You have logged in to the Argo CD instance.

Procedure

  1. Get the admin account password for the Argo CD server:

    $ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
  2. Get the Argo CD server URL:

    $ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
  3. Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:

    Important

    Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.

    $ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}

    Example

    $ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing

  4. Verify that you are able to run argocd commands in the default mode by listing all applications:

    $ argocd app list

    If the configuration is correct, then existing applications will be listed with the following header:

    Sample output

    NAME CLUSTER NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO PATH TARGET

  5. Create an application in the default mode:

    $ argocd app create app-cluster-configs \
        --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
        --path cluster \
        --revision main \
        --dest-server  https://kubernetes.default.svc \
        --dest-namespace spring-petclinic \
        --directory-recurse \
        --sync-policy none \
        --sync-option Prune=true \
        --sync-option CreateNamespace=true
  6. Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:

    $ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
  7. List the available applications to confirm that the application is created successfully:

    $ argocd app list

    Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced because of its none sync policy, and it remains in the OutOfSync status.
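
    To inspect the application's sync and health status in more detail, you can also run, for example:

    $ argocd app get app-cluster-configs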

1.8. Creating an application in core mode by using the GitOps CLI

You can create applications in core mode by using the GitOps argocd CLI.

This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.

Procedure

  1. Log in to the OpenShift Container Platform cluster by using the oc CLI tool:

    $ oc login -u <username> -p <password> <server_url>

    Example

    $ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443

  2. Check whether the context is set correctly in the kubeconfig file:

    $ oc config current-context
  3. Set the default namespace of the current context to openshift-gitops:

    $ oc config set-context --current --namespace openshift-gitops
  4. Set the following environment variable to override the Argo CD component names:

    $ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
  5. Verify that you are able to run argocd commands in core mode by listing all applications:

    $ argocd app list --core

    If the configuration is correct, then existing applications will be listed with the following header:

    Sample output

    NAME CLUSTER NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO PATH TARGET

  6. Create an application in core mode:

    $ argocd app create app-cluster-configs --core \
        --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
        --path cluster \
        --revision main \
        --dest-server  https://kubernetes.default.svc \
        --dest-namespace spring-petclinic \
        --directory-recurse \
        --sync-policy none \
        --sync-option Prune=true \
        --sync-option CreateNamespace=true
  7. Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:

    $ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"
  8. List the available applications to confirm that the application is created successfully:

    $ argocd app list --core

    Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced because of its none sync policy, and it remains in the OutOfSync status.
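
    To inspect the application's sync and health status in more detail in core mode, you can also run, for example:

    $ argocd app get app-cluster-configs --core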

1.9. Synchronizing your application with your Git repository

You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster.

Procedure

  1. In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync. Because the application was configured with a manual sync policy, Argo CD does not sync it automatically.
  2. Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE. Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync. You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster.
  3. Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced. Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster.
  4. Navigate to the OpenShift Container Platform web console and click red hat applications menu icon to verify that a link to the Red Hat Developer Blog - Kubernetes is now present there.
  5. Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster.

    Your cluster configurations have been successfully synchronized to the cluster.

1.10. Synchronizing an application in the default mode by using the GitOps CLI

You can synchronize applications in the default mode by using the GitOps argocd CLI.

This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have logged in to the Argo CD instance.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.

Procedure

  1. Get the admin account password for the Argo CD server:

    $ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
  2. Get the Argo CD server URL:

    $ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')
  3. Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:

    Important

    Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.

    $ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}

    Example

    $ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing

  4. Because the application is configured with the none sync policy, you must manually trigger the sync operation:

    $ argocd app sync openshift-gitops/app-cluster-configs
  5. List the application to confirm that the application has the Healthy and Synced statuses:

    $ argocd app list

1.11. Synchronizing an application in core mode by using the GitOps CLI

You can synchronize applications in core mode by using the GitOps argocd CLI.

This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.

Procedure

  1. Log in to the OpenShift Container Platform cluster by using the oc CLI tool:

    $ oc login -u <username> -p <password> <server_url>

    Example

    $ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443

  2. Check whether the context is set correctly in the kubeconfig file:

    $ oc config current-context
  3. Set the default namespace of the current context to openshift-gitops:

    $ oc config set-context --current --namespace openshift-gitops
  4. Set the following environment variable to override the Argo CD component names:

    $ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server
  5. Because the application is configured with the none sync policy, you must manually trigger the sync operation:

    $ argocd app sync --core openshift-gitops/app-cluster-configs
  6. List the application to confirm that the application has the Healthy and Synced statuses:

    $ argocd app list --core

1.12. In-built permissions for cluster configuration

By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management.

Note
  • Argo CD does not have cluster-admin permissions.
  • You can extend the permissions bound to any Argo CD instances managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses.

Permissions for the Argo CD instance (resource groups that configure the user or administrator):

  • operators.coreos.com: Optional Operators managed by OLM
  • user.openshift.io, rbac.authorization.k8s.io: Groups, Users, and their permissions
  • config.openshift.io: Control plane Operators managed by CVO, used to configure cluster-wide build configuration, registry configuration, and scheduler policies
  • storage.k8s.io: Storage
  • console.openshift.io: Console customization

1.13. Adding permissions for cluster configuration

You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account.

Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console.
  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.

Procedure

  1. In the web console, select User Management → Roles → Create Role. Use the following ClusterRole YAML template to add rules to specify the additional permissions.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: secrets-cluster-role
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["*"]
  2. Click Create to add the cluster role.
  3. To create the cluster role binding, select User Management → RoleBindings → Create Binding.
  4. Select All Projects from the Project list.
  5. Click Create binding.
  6. Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
  7. Enter a unique value for the RoleBinding name.
  8. Select the newly created cluster role or an existing cluster role from the drop-down list.
  9. Select the Subject as ServiceAccount and provide the Subject namespace and name.

    1. Subject namespace: openshift-gitops
    2. Subject name: openshift-gitops-argocd-application-controller

      Note

      The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings.

  10. Click Create. The YAML file for the ClusterRoleBinding object is as follows:

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cluster-role-binding
    subjects:
      - kind: ServiceAccount
        name: openshift-gitops-argocd-application-controller
        namespace: openshift-gitops
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: secrets-cluster-role
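
    Alternatively, assuming the same role and service account names as in the preceding examples, you can create the binding from the CLI:

    $ oc create clusterrolebinding cluster-role-binding \
        --clusterrole=secrets-cluster-role \
        --serviceaccount=openshift-gitops:openshift-gitops-argocd-application-controller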

1.14. Installing OLM Operators using Red Hat OpenShift GitOps

Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators.

Consider a case where, as a cluster administrator, you have to install an OLM Operator such as Tekton. You can use the OpenShift Container Platform web console to manually install the Tekton Operator, or use the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster.

Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository, Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster.
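
For example, a Subscription for the OpenShift Pipelines (Tekton) Operator placed in your Git repository might look like the following sketch (the package and channel names here are assumptions; verify them against the Operator catalog of your cluster):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace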

1.14.1. Installing cluster-scoped Operators

Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace.

To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository.

Example: Grafana Operator subscription

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana
spec:
  channel: v4
  installPlanApproval: Automatic
  name: grafana-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

1.14.2. Installing namespace-scoped Operators

To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository.

Example: Ansible Automation Platform Resource Operator

# ...
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: ansible-automation-platform
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
    - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform
  namespace: ansible-automation-platform
spec:
  channel: patch-me
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
# ...

Important

When deploying multiple Operators using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in the corresponding namespace reaches one, all the previous failure-state CSVs transition to the pending state. You must manually approve the pending install plan to complete the Operator installation.
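
To check whether a CSV is stuck because of multiple Operator groups, you can inspect the Operator groups and CSVs in the affected namespace; for example, for the Ansible Automation Platform namespace used above:

$ oc get operatorgroups -n ansible-automation-platform
$ oc get csv -n ansible-automation-platform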

1.15. Additional resources

Chapter 2. Customizing permissions by creating user-defined cluster roles for cluster-scoped instances

For the default cluster-scoped instance, the Red Hat OpenShift GitOps Operator grants additional permissions for managing certain cluster-scoped resources. Consequently, as a cluster administrator, when you deploy an Argo CD as a cluster-scoped instance, the Operator creates additional cluster roles and cluster role bindings for the GitOps control plane components. These cluster roles and cluster role bindings provide the additional permissions that Argo CD requires to operate at the cluster level.

If you do not want the cluster-scoped instance to have all of the Operator-given permissions and choose to add or remove permissions to cluster-wide resources, you must first disable the creation of the default cluster roles for the cluster-scoped instance. Then, you can customize permissions for the following cluster-scoped instances:

  • Default ArgoCD instance (default cluster-scoped instance)
  • User-defined cluster-scoped Argo CD instance

This guide provides instructions with examples to help you create a user-defined cluster-scoped Argo CD instance, deploy an Argo CD application in your defined namespace that contains custom configurations for your cluster, disable the creation of the default cluster roles for the cluster-scoped instance, and customize permissions for user-defined cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.

Note

As a developer, if you are creating an Argo CD application and deploying cluster-wide resources, ensure that your cluster administrator grants the necessary permissions to the Argo CD instance.

Otherwise, after the Argo CD reconciliation, you will see an authorization error message in the application’s Status field similar to the following example:

Example authentication error message

persistentvolumes is forbidden: User "system:serviceaccount:gitops-demo:argocd-argocd-application-controller" cannot create resource "persistentvolumes" in API group "" at the cluster scope.

2.1. Prerequisites

  • You have installed Red Hat OpenShift GitOps 1.13.0 or a later version on your OpenShift Container Platform cluster.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.
  • You have installed a cluster-scoped Argo CD instance in your defined namespace, for example, the spring-petclinic namespace.
  • You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:

    • Argo CD Application Controller
    • Argo CD server
    • Argo CD ApplicationSet Controller (provided the ApplicationSet Controller is created)
  • You have deployed a cluster-configs Argo CD application with the customclusterrole path in the spring-petclinic namespace and created the test-gitops-ns namespace and test-gitops-pv persistent volume resources.

    Note

    The cluster-configs Argo CD application must be managed by a user-defined cluster-scoped instance with the following parameters set:

    • The selfHeal field value set to true
    • The syncPolicy field value set to automated
    • The Label field set to the app.kubernetes.io/part-of=argocd value
    • The Label field set to the argocd.argoproj.io/managed-by=<user_defined_namespace> value so that the Argo CD instance in your defined namespace can manage your namespace
    • The Label field set to the app.kubernetes.io/name=<user_defined_argocd_instance> value
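
    For illustration, a minimal sketch of such an Application, assuming a user-defined cluster-scoped instance named example running in the spring-petclinic namespace, might look like the following (the repository URL is a placeholder; the app.kubernetes.io labels mirror the parameters listed above, and the argocd.argoproj.io/managed-by label is applied to the namespace rather than to the application):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cluster-configs
      namespace: spring-petclinic
      labels:
        app.kubernetes.io/part-of: argocd
        app.kubernetes.io/name: example
    spec:
      project: default
      source:
        repoURL: <your_git_repository_url>
        targetRevision: HEAD
        path: customclusterrole
      destination:
        server: https://kubernetes.default.svc
        namespace: spring-petclinic
      syncPolicy:
        automated:
          selfHeal: true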

2.2. Disabling the creation of the default cluster roles for the cluster-scoped instance

To add or remove permissions to cluster-wide resources, as needed, you must disable the creation of the default cluster roles for the cluster-scoped instance by editing the YAML file of the Argo CD custom resource (CR).

Procedure

  1. In the Argo CD CR, set the value of the .spec.defaultClusterScopedRoleDisabled field to true:

    Example Argo CD CR

    apiVersion: argoproj.io/v1beta1
    kind: ArgoCD
    metadata:
      name: example 1
      namespace: spring-petclinic 2
    # ...
    spec:
      defaultClusterScopedRoleDisabled: true 3
    # ...

    1
    The name of the cluster-scoped instance.
    2
    The namespace where you want to run the cluster-scoped instance.
    3
    The flag value that disables the creation of the default cluster roles for the cluster-scoped instance. If you want the Operator to recreate the default cluster roles and cluster role bindings for the cluster-scoped instance, set the field value to false.

    Sample output

    argocd.argoproj.io/example configured

  2. Verify that the Red Hat OpenShift GitOps Operator has deleted the default cluster roles and cluster role bindings for the GitOps control plane components by running the following commands:

    $ oc get ClusterRoles/<argocd_name>-<argocd_namespace>-<control_plane_component>
    $ oc get ClusterRoleBindings/<argocd_name>-<argocd_namespace>-<control_plane_component>

    Sample output

    No resources found

    The default cluster roles and cluster role bindings for the cluster-scoped instance are not created. As a cluster administrator, you can now create and customize permissions for cluster-scoped instances by creating new cluster roles and cluster role bindings for the GitOps control plane components.

2.3. Customizing permissions for cluster-scoped instances

As a cluster administrator, to customize permissions for cluster-scoped instances, you must create new cluster roles and cluster role bindings for the GitOps control plane components.

For example purposes, the following instructions focus only on user-defined cluster-scoped instances.

Procedure

  1. Open the Administrator perspective of the web console and go to User Management → Roles → Create Role.
  2. Use the following ClusterRole YAML template to add rules to specify the additional permissions.

    Example cluster role YAML template

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: example-spring-petclinic-argocd-application-controller 1
    rules:
      - verbs:
          - get
          - list
          - watch
        apiGroups:
          - '*'
        resources:
          - '*'
      - verbs:
          - '*'
        apiGroups:
          - ''
        resources: 2
          - namespaces
          - persistentvolumes

    1
    The name of the cluster role according to the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
    2
    The resources to which you want to grant permissions at the cluster level.
  3. Click Create to add the cluster role.
  4. Find the service account used by the control plane component you are customizing permissions for, by performing the following steps:

    1. Go to Workloads → Pods.
    2. From the Project list, select the project where the user-defined cluster-scoped instance is installed.
    3. Click the pod of the control plane component and go to the YAML tab.
    4. Find the spec.serviceAccountName field and note the service account.
  5. Go to User Management → RoleBindings → Create binding.
  6. Click Create binding.
  7. Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
  8. Enter a unique value for RoleBinding name by following the <argocd_name>-<argocd_namespace>-<control_plane_component> naming convention.
  9. Select the newly created cluster role from the drop-down list for Role name.
  10. Select the Subject as ServiceAccount and provide the Subject namespace and name.

    1. Subject namespace: spring-petclinic
    2. Subject name: example-argocd-application-controller

      Note

      For Subject name, ensure that the value you configure is the same as the value of the spec.serviceAccountName field of the control plane component you are customizing permissions for.

  11. Click Create.

    You have created the required permissions for the control plane component’s service account and namespace. The YAML file for the ClusterRoleBinding object looks similar to the following example:

    Example YAML file for a cluster role binding

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: example-spring-petclinic-argocd-application-controller
    subjects:
      - kind: ServiceAccount
        name: example-argocd-application-controller
        namespace: spring-petclinic
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: example-spring-petclinic-argocd-application-controller
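
    To confirm that the service account now has the new permissions, you can run a check similar to the following (the service account name matches the example above); if the binding is effective, the output is yes:

    $ oc auth can-i create persistentvolumes \
        --as=system:serviceaccount:spring-petclinic:example-argocd-application-controller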

2.4. Additional resources

Chapter 3. Customizing permissions by creating aggregated cluster roles

The default cluster role for the Argo CD Application Controller has a specific set of hard-coded permissions. The Red Hat OpenShift GitOps Operator manages this cluster role, so you cannot modify it. As a cluster administrator, you can customize the permissions either by creating user-defined cluster roles for cluster-scoped instances, as described in the previous chapter, or by creating aggregated cluster roles, as described in this chapter.

3.1. Aggregated cluster roles

By using aggregated cluster roles, you do not have to define permissions by creating new cluster roles from scratch. Instead, you can combine several cluster roles into a single one.

With Red Hat OpenShift GitOps 1.14 and later, as a cluster administrator, you can use aggregated cluster roles and enable users to easily add user-defined permissions for Argo CD Application Controller.

Important
  • You can create aggregated cluster roles only for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
  • Deleting the aggregatedClusterRoles field from the Argo CD custom resource (CR) does not delete the user-defined cluster role. You must manually delete the user-defined cluster role using the CLI or UI.

3.2. Prerequisites

  • You understand aggregated cluster roles.
  • You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift GitOps argocd CLI.
  • You have installed a cluster-scoped Argo CD instance in your defined namespace.
  • You have validated that the user-defined cluster-scoped instance is configured with the cluster roles and cluster role bindings for the following components:

    • Argo CD Application Controller
    • Argo CD server
    • Argo CD ApplicationSet Controller, if ApplicationSet Controller is created
  • You have disabled the creation of the default cluster roles for the cluster-scoped instance.

3.3. Creating aggregated cluster roles

The process of creating aggregated cluster roles consists of the following procedures:

  1. Enabling the creation of aggregated cluster roles
  2. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller

3.3.1. Enable the creation of aggregated cluster roles

You can enable the creation of aggregated cluster roles by setting the value of the .spec.aggregatedClusterRoles field to true in the Argo CD custom resource (CR). When you enable the creation of aggregated cluster roles, the Red Hat OpenShift GitOps Operator takes the following actions:

  • Creates an <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field by default.
  • Creates a corresponding cluster role binding and manages it.
  • Creates and manages view and admin cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role.

3.3.2. Create user-defined cluster roles and configure user-defined permissions

To configure user-defined permissions into the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role and aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for Application Controller.

Note
  • The aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles.
  • The <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role.

3.4. Enabling the creation of aggregated cluster roles

To enable the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance, you must configure the corresponding field by editing the YAML file of the Argo CD custom resource (CR).

Procedure

  1. In the Argo CD CR, set the value of the .spec.aggregatedClusterRoles field to true:

    Example Argo CD CR

    apiVersion: argoproj.io/v1beta1
    kind: ArgoCD
    metadata:
      name: example 1
      namespace: spring-petclinic 2
    # ...
    spec:
      aggregatedClusterRoles: true 3
    # ...

    1
    The name of the cluster-scoped instance.
    2
    The namespace where you want to run the cluster-scoped instance.
    3
    The value set to true enables the creation of aggregated cluster roles. If you do not want to enable the creation of aggregated cluster roles, either do not include this line or set the value to false.

    Example output

    argocd.argoproj.io/example configured

  2. Verify that the Status field of the cluster-scoped Argo CD instance shows as Phase: Available by running the following command:

    $ oc describe argocd.argoproj.io/example -n spring-petclinic

    Example output

    Name:         example
    Namespace:    spring-petclinic
    Labels:       <none>
    Annotations:  <none>
    API Version:  argoproj.io/v1beta1
    Kind:         ArgoCD
    Metadata:
      Creation Timestamp:  2024-08-14T08:20:53Z
      Finalizers:
        argoproj.io/finalizer
      Generation:        3
      Resource Version:  60437
      UID:               57940e54-d60b-4c1a-bc4a-85c81c63ab69
    Spec:
      Aggregated Cluster Roles:  true
    ...
    Status:
      Application Controller:      Running
      Application Set Controller:  Unknown
      Phase:                       Available 1
      Redis:                       Running
      Repo:                        Running
      Server:                      Running
      Sso:                         Unknown
    Events:                        <none>

    1
    The Available status indicates that the cluster-scoped Argo CD instance is healthy and available.
    Note

    The Red Hat OpenShift GitOps Operator creates the following default cluster roles and manages them:

    • <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role
    • <argocd_name>-<argocd_namespace>-argocd-application-controller-view
    • <argocd_name>-<argocd_namespace>-argocd-application-controller-admin
  3. Verify that the Operator has created the default cluster roles and cluster role bindings for the Argo CD Application Controller and Argo CD server components by running the following commands:

    $ oc get ClusterRoles -l app.kubernetes.io/part-of=argocd

    Example output

    NAME                                                           CREATED AT
    example-spring-petclinic-argocd-application-controller         2024-08-14T08:20:58Z
    example-spring-petclinic-argocd-application-controller-admin   2024-08-14T09:08:38Z
    example-spring-petclinic-argocd-application-controller-view    2024-08-14T09:08:38Z
    example-spring-petclinic-argocd-server                         2024-08-14T08:20:59Z

    $ oc get ClusterRoleBindings -l app.kubernetes.io/part-of=argocd

    Example output

    NAME                                                     ROLE                                                                 AGE
    example-spring-petclinic-argocd-application-controller   ClusterRole/example-spring-petclinic-argocd-application-controller   54m
    example-spring-petclinic-argocd-server                   ClusterRole/example-spring-petclinic-argocd-server                   54m

    The cluster role bindings for the view and admin cluster roles are not created. This is because the view and admin cluster roles only add permissions to the aggregated cluster role and do not directly configure permissions to the Argo CD Application Controller.

    Tip

    Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles and User Management → RoleBindings, respectively. You can search for the cluster roles and cluster role bindings that have the app.kubernetes.io/part-of=argocd label.

  4. Verify that the aggregated cluster role is created and check the permissions in the output of the created roles by running the following command:

    $ oc get ClusterRole/<cluster_role_name> -o yaml 1
    1
    Replace <cluster_role_name> with the name of the role created.

    Example output of the aggregated cluster role

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        argocds.argoproj.io/name: example
        argocds.argoproj.io/namespace: spring-petclinic
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
        rbac.authorization.kubernetes.io/autoupdate: "true"
      creationTimestamp: "2024-08-14T08:20:58Z"
      labels:
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
      name: example-spring-petclinic-argocd-application-controller 1
      resourceVersion: "78640"
      uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e
    aggregationRule: 2
      clusterRoleSelectors:
      - matchLabels:
          app.kubernetes.io/managed-by: spring-petclinic
          argocd/aggregate-to-controller: "true"
    rules: [] 3

    1
    The name of the aggregated cluster role.
    2
    The predefined list of labels indicates that the aggregated cluster role can inherit permissions from the other user-defined cluster roles.
    3
    No predefined permissions are set. However, because the Operator immediately creates the <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster role, the corresponding predefined view permissions are added to the aggregated cluster role.

    Example output of the view cluster role

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        argocds.argoproj.io/name: example
        argocds.argoproj.io/namespace: spring-petclinic
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
      creationTimestamp: "2024-08-14T09:59:14Z"
      labels: 1
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
        argocd/aggregate-to-controller: "true"
      name: example-spring-petclinic-argocd-application-controller-view 2
      resourceVersion: "78639"
      uid: 068b8867-7a0c-4af3-a17a-0560a00eba41
    rules: 3
    - apiGroups:
      - '*'
      resources:
      - '*'
      verbs:
      - get
      - list
      - watch
    - nonResourceURLs:
      - '*'
      verbs:
      - get
      - list

    1
    The labels match the predefined list of an existing aggregated cluster role.
    2
    The name of the view cluster role.
    3
    The predefined view permissions. These permissions are added into the existing aggregated cluster role.

    Example output of the admin cluster role

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        argocds.argoproj.io/name: example
        argocds.argoproj.io/namespace: spring-petclinic
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
        rbac.authorization.kubernetes.io/autoupdate: "true"
      creationTimestamp: "2024-08-14T09:59:15Z"
      labels: 1
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
        argocd/aggregate-to-controller: "true"
      name: example-spring-petclinic-argocd-application-controller-admin 2
      resourceVersion: "78642"
      uid: e2d35b6f-0832-4993-8b24-915a725454f9
    aggregationRule: 3
      clusterRoleSelectors:
      - matchLabels:
          app.kubernetes.io/managed-by: spring-petclinic
          argocd/aggregate-to-admin: "true"
    rules: null 4

    1
    The labels match the predefined list of an existing aggregated cluster role.
    2
    The name of the admin cluster role.
    3
    The predefined list of labels indicates that the existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role can inherit permissions from the other user-defined cluster roles.
    4
    Specifies that no permissions are defined yet in one or more user-defined cluster roles.
    Tip

    Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role, view, and admin cluster roles. You must open the cluster role to check the details and configurations.

    As a cluster administrator, you can now create one or more user-defined cluster roles and configure user-defined permissions for Argo CD Application Controller.

3.5. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller

As a cluster administrator, to add user-defined permissions to your aggregated cluster role, you must create one or more user-defined cluster roles and then configure the user-defined permissions for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.

Prerequisites

  • You have enabled the creation of aggregated cluster roles for the Argo CD Application Controller component of a cluster-scoped Argo CD instance.
  • You have the following default cluster roles that are created and managed by the Red Hat OpenShift GitOps Operator:

    • <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role with a predefined aggregationRule field
    • <argocd_name>-<argocd_namespace>-argocd-application-controller-view with predefined view permissions
    • <argocd_name>-<argocd_namespace>-argocd-application-controller-admin with no predefined permissions

Procedure

  1. Create a new cluster role with the required labels and permissions by using the following command:

    $ oc apply -n <namespace> -f <cluster_role_name>.yaml

    where:

    <namespace>
    Specifies the name of your defined namespace.
    <cluster_role_name>

    Specifies the name of your defined cluster role YAML file.

    Example user-defined cluster role YAML

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: user-application-controller 1
      labels: 2
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
        argocd/aggregate-to-admin: 'true'
    rules: 3
      - verbs:
          - '*'
        apiGroups:
          - ''
        resources:
          - namespaces
          - persistentvolumeclaims
          - persistentvolumes
          - configmaps
      - verbs:
          - '*'
        apiGroups:
          - compliance.openshift.io
        resources:
          - scansettingbindings

    1
    The name of the user-defined cluster role.
    2
    The labels match the predefined list of an existing <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
    3
    The user-defined permissions that are to be added into the aggregated cluster role through the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role.
    Tip

    Alternatively, you can use the web console to create a user-defined cluster role from the Administrator perspective. You can go to User Management → Roles → Create Role, use the preceding YAML template to add permissions, and click Create.

    Example output

    clusterrole.rbac.authorization.k8s.io/user-application-controller created

    A user-defined cluster role is created.

  2. Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role inherits permissions from the user-defined cluster role by running the following command:

    $ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller-admin -o yaml

    where:

    <argocd_name>
    Specifies the name of your user-defined cluster-scoped Argo CD instance.
    <argocd_namespace>

    Specifies the namespace where Argo CD is installed.

    Example output

    aggregationRule:
      clusterRoleSelectors:
      - matchLabels:
          app.kubernetes.io/managed-by: spring-petclinic
          argocd/aggregate-to-admin: "true"
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        argocds.argoproj.io/name: example
        argocds.argoproj.io/namespace: spring-petclinic
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
      creationTimestamp: "2024-08-14T09:59:15Z"
      labels:
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
        argocd/aggregate-to-controller: "true"
      name: example-spring-petclinic-argocd-application-controller-admin
      resourceVersion: "79202"
      uid: e2d35b6f-0832-4993-8b24-915a725454f9
    rules:
    - apiGroups:
      - ""
      resources:
      - namespaces
      - persistentvolumeclaims
      - persistentvolumes
      - configmaps
      verbs:
      - '*'
    - apiGroups:
      - compliance.openshift.io
      resources:
      - scansettingbindings
      verbs:
      - '*'

    Tip

    Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin cluster role. You must open the cluster role to check the details and configurations.

  3. Verify that the <argocd_name>-<argocd_namespace>-argocd-application-controller aggregated cluster role inherits permissions from the <argocd_name>-<argocd_namespace>-argocd-application-controller-admin and <argocd_name>-<argocd_namespace>-argocd-application-controller-view cluster roles by running the following command:

    $ oc get ClusterRole/<argocd_name>-<argocd_namespace>-argocd-application-controller -o yaml

    where:

    <argocd_name>
    Specifies the name of your user-defined cluster-scoped Argo CD instance.
    <argocd_namespace>

    Specifies the namespace where Argo CD is installed.

    Example output of the aggregated cluster role

    aggregationRule:
      clusterRoleSelectors:
      - matchLabels:
          app.kubernetes.io/managed-by: spring-petclinic
          argocd/aggregate-to-controller: "true"
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        argocds.argoproj.io/name: example
        argocds.argoproj.io/namespace: spring-petclinic
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"argoproj.io/v1beta1","kind":"ArgoCD","metadata":{"annotations":{},"name":"example","namespace":"spring-petclinic"},"spec":{"aggregatedClusterRoles":true}}
        rbac.authorization.kubernetes.io/autoupdate: "true"
      creationTimestamp: "2024-08-14T08:20:58Z"
      labels:
        app.kubernetes.io/managed-by: spring-petclinic
        app.kubernetes.io/name: example
        app.kubernetes.io/part-of: argocd
      name: example-spring-petclinic-argocd-application-controller
      resourceVersion: "79203"
      uid: aeeb2ef5-b531-4fe3-a61a-b5ad8dd8ca6e
    rules:
    - apiGroups:
      - ""
      resources:
      - namespaces
      - persistentvolumeclaims
      - persistentvolumes
      - configmaps
      verbs:
      - '*'
    - apiGroups:
      - compliance.openshift.io
      resources:
      - scansettingbindings
      verbs:
      - '*'
    - apiGroups:
      - '*'
      resources:
      - '*'
      verbs:
      - get
      - list
      - watch
    - nonResourceURLs:
      - '*'
      verbs:
      - get
      - list

    Tip

    Alternatively, you can use the OpenShift Container Platform web console to verify from the Administrator perspective. You can go to User Management → Roles, use the Filter option, select Cluster-wide Roles, and search for the aggregated cluster role. You must open the cluster role to check the details and configurations.

3.6. Additional resources

Chapter 4. Sharding clusters across Argo CD Application Controller replicas

You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory.

4.1. Enabling the round-robin sharding algorithm

Important

The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards.

Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits:

  • Ensures more balanced workload distribution
  • Prevents shards from being overloaded or underutilized
  • Optimizes the efficiency of computing resources
  • Reduces the risk of bottlenecks
  • Improves the overall performance and reliability of the Argo CD system

The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios.

Tip

To benefit from an alternative sharding algorithm in GitOps, you must enable sharding during deployment.

4.1.1. Enabling the round-robin sharding algorithm in the web console

You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have access to the OpenShift Container Platform web console.
  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. In the Administrator perspective of the web console, go to Operators → Installed Operators.
  2. Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab.
  3. Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops.
  4. Click the YAML tab and edit the YAML file as shown in the following example:

    Example Argo CD instance with round-robin sharding algorithm enabled

    apiVersion: argoproj.io/v1beta1
    kind: ArgoCD
    metadata:
      name: openshift-gitops
      namespace: openshift-gitops
    spec:
      controller:
        sharding:
          enabled: true 1
          replicas: 3 2
        env: 3
          - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM
            value: round-robin
        logLevel: debug 4

    1
    Set the sharding.enabled parameter to true to enable sharding.
    2
    Set the number of replicas to the desired value, for example, 3.
    3
    Set the sharding algorithm to round-robin.
    4
    Set the log level to debug so that you can verify to which shard each cluster is attached.
  5. Click Save.

    A success notification alert, openshift-gitops has been updated to version <version>, appears.

    Note

    If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.

  6. Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps:

    1. Go to Workloads → StatefulSets.
    2. Select the namespace where you installed the Argo CD instance from the Project drop-down list.
    3. Click <instance_name>-application-controller, for example, openshift-gitops-application-controller, and go to the Pods tab.
    4. Observe the number of application controller pods that are created. It must correspond to the number of configured replicas.
    5. Click on the controller pod you want to examine and go to the Logs tab to view the pod logs.

      Example controller pod logs snippet

      time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4
      time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
      time="2023-12-13T09:05:34Z" level=info msg="Using filter function:  round-robin" 1
      time="2023-12-13T09:05:34Z" level=info msg="Using filter function:  round-robin"
      time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"

      1
      Look for the "Using filter function: round-robin" message.
    6. In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example.

      Important

      Ensure that you set the log level to debug to observe these logs.

      Example controller pod logs snippet

      time="2023-12-13T09:05:34Z" level=debug msg="ClustersList has 3 items"
      time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id= and name=in-cluster to cluster's map"
      time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map"
      time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map"
      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1
      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2
      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3

      1 2 3
      In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2.
      Note

      If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard has the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.

4.1.2. Enabling the round-robin sharding algorithm by using the CLI

You can enable the round-robin sharding algorithm by using the command-line interface.

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. Enable sharding and set the number of replicas to the desired value by running the following command:

    $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge
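
    For example, the following command enables sharding with 3 replicas on the default openshift-gitops instance in the openshift-gitops namespace (illustrative values; adjust them to your environment):

    $ oc patch argocd openshift-gitops -n openshift-gitops --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":3}}}}' --type=merge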

    Example output

    argocd.argoproj.io/<argocd_instance> patched

  2. Configure the sharding algorithm to round-robin by running the following command:

    $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge
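
    For example, the following command sets the round-robin algorithm for the openshift-gitops instance (illustrative values):

    $ oc patch argocd openshift-gitops -n openshift-gitops --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge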

    Example output

    argocd.argoproj.io/<argocd_instance> patched

  3. Verify that the number of Argo CD Application Controller pods corresponds with the number of set replicas by running the following command:

    $ oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>
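
    For example, for the openshift-gitops instance in the openshift-gitops namespace (illustrative values):

    $ oc get pods -l app.kubernetes.io/name=openshift-gitops-application-controller -n openshift-gitops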

    Example output

    NAME                                        READY   STATUS    RESTARTS   AGE
    <argocd_instance>-application-controller-0   1/1     Running   0          11s
    <argocd_instance>-application-controller-1   1/1     Running   0          32s
    <argocd_instance>-application-controller-2   1/1     Running   0          22s

  4. Verify that the sharding is enabled with round-robin as the sharding algorithm by running the following command:

    $ oc logs <argocd_application_controller_pod> -n <namespace>
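
    For example, the following command shows the logs of the first controller pod of the openshift-gitops instance (illustrative pod name, based on the naming pattern shown in the previous step):

    $ oc logs openshift-gitops-application-controller-0 -n openshift-gitops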

    Example output snippet

    time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4
    time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
    time="2023-12-13T09:05:34Z" level=info msg="Using filter function:  round-robin" 1
    time="2023-12-13T09:05:34Z" level=info msg="Using filter function:  round-robin"
    time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"

    1
    Look for the "Using filter function: round-robin" message.
  5. Verify that the cluster distribution across shards is even by performing the following steps:

    1. Set the log level to debug by running the following command:

      $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge
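
      For example, for the openshift-gitops instance (illustrative values):

      $ oc patch argocd openshift-gitops -n openshift-gitops --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge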

      Example output

      argocd.argoproj.io/<argocd_instance> patched

    2. View the logs and search for processed by shard to observe to which shard each cluster is attached by running the following command:

      $ oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard"

      Example output snippet

      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1
      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2
      time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3

      1 2 3
      In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2.
      Note

      If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard has the same number of assigned clusters "N", which is equal to "C" divided by "R". The previous example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.

4.2. Enabling dynamic scaling of shards of the Argo CD Application Controller

Important

Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

By default, the Argo CD Application Controller assigns clusters to shards indefinitely. If you are using the round-robin sharding algorithm, this static assignment can result in uneven distribution of clusters across shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well-balanced and optimizes the use of compute resources.

Note

After you enable dynamic scaling, you cannot manually modify the shard count. The system automatically adjusts the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time.

4.2.1. Enabling dynamic scaling of shards in the web console

You can enable dynamic scaling of shards by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, go to Operators → Installed Operators.
  2. From the list of Installed Operators, select the Red Hat OpenShift GitOps Operator, and then click the Argo CD tab.
  3. Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops.
  4. Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows:

    Example Argo CD YAML file with dynamic scaling enabled

    apiVersion: argoproj.io/v1beta1
    kind: ArgoCD
    metadata:
      name: openshift-gitops
      namespace: openshift-gitops
    spec:
      controller:
        sharding:
          dynamicScalingEnabled: true 1
          minShards: 1 2
          maxShards: 3 3
          clustersPerShard: 1 4

    1
    Set dynamicScalingEnabled to true to enable dynamic scaling.
    2
    Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater.
    3
    Set maxShards to the maximum number of shards that you want to have. The value must be greater than the value of minShards.
    4
    Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater.
  5. Click Save.

    A success notification alert, openshift-gitops has been updated to version <version>, appears.

    Note

    If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.

Verification

Verify that sharding is enabled by checking the number of pods in the namespace:

  1. Go to Workloads → StatefulSets.
  2. Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops.
  3. Click the name of the StatefulSet object that has the name of the Argo CD instance, for example, openshift-gitops-application-controller.
  4. Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you have set in the Argo CD YAML file.
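
Tip

Alternatively, you can check the controller StatefulSet replica count from the CLI. The following command is an illustrative sketch that assumes the default openshift-gitops instance and namespace; the returned readyReplicas value must be equal to or greater than the value of minShards:

$ oc get statefulset openshift-gitops-application-controller -n openshift-gitops -o jsonpath='{.status.readyReplicas}'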

4.2.2. Enabling dynamic scaling of shards by using the CLI

You can enable dynamic scaling of shards by using the OpenShift CLI (oc).

Prerequisites

  • You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. Log in to the cluster by using the oc tool as a user with cluster-admin privileges.
  2. Enable dynamic scaling by running the following command:

    $ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}'

    Example command

    $ oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}' 1

    1
    The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1, the maximum number of shards to 3, and the number of clusters per shard to 1. The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards.

    Example output

    argocd.argoproj.io/openshift-gitops patched

Verification

  1. Check the spec.controller.sharding properties of the Argo CD instance:

    $ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'

    Example command

    $ oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'

    Example output when dynamic scaling of shards is enabled

    {"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}

  2. Optional: Verify that dynamic scaling is enabled by checking the configured spec.controller.sharding properties in the configuration YAML file of the Argo CD instance in the OpenShift Container Platform web console.
  3. Check the number of Argo CD Application Controller pods:

    $ oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller

    Example command

    $ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller

    Example output

    NAME                                           READY   STATUS    RESTARTS   AGE
    openshift-gitops-application-controller-0      1/1     Running   0          2m  1

    1
    The number of Argo CD Application Controller pods must be greater than or equal to the value of minShards.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.