Chapter 1. Argo CD Agent architecture overview


This guide provides a complete reference for the Argo CD Agent architecture, including its hub-and-spoke model, synchronization mechanisms, operational modes, security considerations, namespace management, and known limitations.

The Argo CD Agent introduces an opinionated mechanism for managing multiple clusters with Argo CD in a hub-and-spoke configuration. Configured alongside Argo CD, the Agent retains the familiar Argo CD web console while extending its capabilities so that you can configure and monitor multiple separate Argo CD instances simultaneously from a single location.

Unlike a traditional single-instance Argo CD deployment, the Argo CD Agent architecture provides a single pane of glass for monitoring and managing Argo CD Applications across multiple clusters.

Important

The Argo CD Agent configuration is designed for advanced users who already understand Argo CD and Red Hat OpenShift GitOps concepts. If you are new to Argo CD, start with the traditional push-based hub-and-spoke model before adopting the Agent-based approach.

1.2. Architecture and Terminology

Argo CD Agent works alongside Argo CD to manage multi-cluster deployments in a hub-and-spoke architecture using a standalone agent process, enabling pull-based change management.

In this architecture, a single hub (referred to as the control plane cluster) manages the configuration for multiple spokes (known as workload clusters), each of which runs its own Argo CD instance.

  • control plane cluster (hub) - In a hub-and-spoke architecture, the control plane functions as the hub. It provides a single pane of glass to monitor the status of all Argo CD Applications and their resources. In managed mode, it also serves as the single source of truth for Argo CD Application definitions. An Argo CD agent configuration includes only one control plane cluster. On this cluster, a single Argo CD instance runs alongside the Argo CD agent and is responsible for monitoring all Applications, even if they are deployed across multiple workload clusters.
  • workload cluster (spoke) - In a hub-and-spoke architecture, each workload cluster is a spoke that runs application workloads deployed by Argo CD. Every workload cluster hosts a lightweight Argo CD instance for local reconciliation, along with a single Argo CD agent that monitors it. In autonomous mode, the workload cluster also holds the single source of truth for Application definitions.
Note

The Argo CD Agent does not replace existing Argo CD functionality. It extends multi-cluster management capabilities by enabling pull-based synchronization and centralized observability.

The Argo CD Agent architecture offers several advantages and tradeoffs compared to the traditional push-based hub-and-spoke model.

The following list compares the two models, capability by capability:

  • Single pane of glass

    • Push-based: Does not provide a single pane of glass for monitoring Argo CD application resources across multiple Argo CD instances. A traditional Argo CD instance shows only the applications that it manages.
    • Pull-based: Provides a single pane of glass for monitoring all Argo CD application resources across all managed Argo CD instances.
  • Network connectivity

    • Push-based: The hub (Argo CD) must connect directly to the Kubernetes API of each destination cluster, so firewalled clusters cannot be managed.
    • Pull-based: The hub (Argo CD) enables deployments to firewalled clusters through pull-based synchronization.
  • Scalability

    • Push-based: A single Argo CD instance manages multiple destination clusters, which can lead to CPU and memory bottlenecks when scaling to a large number of deployed resources.
    • Pull-based: The hub does not manage clusters centrally; each workload cluster is managed by its own dedicated Argo CD deployment. Because a single Argo CD instance deploying to a single cluster is much easier to scale, this avoids the scalability challenges of the traditional configuration.
  • Security

    • Push-based: The hub (Argo CD) stores credentials for all workload clusters, and each workload cluster must expose its Kubernetes API endpoint to the control plane, increasing the attack surface on both sides.
    • Pull-based: The control plane does not require workload cluster credentials or API access, reducing risk.
  • Local reconciliation

    • Push-based: The traditional hub-and-spoke configuration modifies and deletes resources across cluster boundaries, introducing the potential for network latency, bottlenecks, egress charges, and instability.
    • Pull-based: Each cluster deploys only to itself using its local Argo CD instance, avoiding network-related restrictions, costs, and instability.
  • Single point of failure

    • Push-based: An outage of the single hub (Argo CD) instance prevents synchronization to all spoke clusters.
    • Pull-based: Each cluster operates independently, so a failure affects only that cluster.
  • Complexity

    • Push-based: Is easier to configure initially but becomes more complex at scale.
    • Pull-based: Is slightly more complex to set up but scales consistently.
  • Maturity

    • Push-based: Is fully mature and supported.
    • Pull-based: Is an emerging configuration; not all features are currently supported.

Each Argo CD Agent on a cluster manages the local Argo CD instance and ensures that applications, AppProjects, and secrets remain synchronized with their source of truth.

The Agent performs the following key functions:

Resource synchronization

  • Synchronizes Argo CD resources such as Application, AppProject, and related Secret objects between the control plane and workload clusters.
  • When you create or modify an application on the control plane, the change propagates to each workload cluster whose agent is configured in managed mode.
  • When an application status changes on a workload cluster, the change reports back to the control plane.

Unified observability

  • Enables the control plane Argo CD web console to display the real-time state of applications across all clusters.
  • Enables independent synchronization by the Agent container, ensuring consistent two-way updates outside the Argo CD internal reconciliation loop.

1.5. Argo CD Agent modes

The Argo CD Agent supports two operational modes that determine where the authoritative source of truth for the Application .spec field resides:

  • Managed mode — the control plane defines application specifications.
  • Autonomous mode — each workload cluster defines its own application specifications.

You can also use a mixed mode configuration, where different clusters operate under different modes.
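As a sketch, the mode is set per agent. In the upstream argocd-agent project the agent reads its mode from a parameters ConfigMap; the ConfigMap name, namespace, and key shown here follow upstream defaults and are assumptions, so verify them against your installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name and key follow upstream argocd-agent defaults; verify for your installation.
  name: argocd-agent-params
  namespace: argocd
data:
  agent.mode: "managed"   # set to "autonomous" on clusters that own their own specs
```

Because the mode is a per-agent setting, a mixed fleet simply deploys some agents with `managed` and others with `autonomous`.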

Note

Regardless of the mode, the control plane Argo CD instance always displays up-to-date application status across all clusters. The operational mode determines only where the authoritative specification resides, the direction of synchronization, and the ability to interact with that specification from the control plane.

1.5.1. Argo CD Agent Managed mode

In the Argo CD Agent Managed mode, application resources originate on the control plane and are distributed to managed mode clusters.

In managed mode, changes on the control plane propagate to workload clusters, and direct changes on workload clusters are reverted to match the control plane definition. With this mode, the control plane is the source of truth for Argo CD application definitions.

The following list shows the direction of synchronization between the control plane and workload clusters:

  • .spec - from control plane to workload cluster
  • .status - from workload cluster to control plane

The following list highlights the key advantages of using this configuration:

  • Provides a familiar, centralized Argo CD experience.
  • Allows creation and management of applications using the control plane Argo CD console, CLI, or API.
  • Improves on traditional Argo CD security by eliminating the need for centralized cluster credentials and cluster API access.

Consider the following limitations when using this configuration:

  • Provides limited support for the app-of-apps pattern.
  • Exposes workload clusters to risk if the control plane cluster is compromised, because changes sync downstream.
  • Creates a single point of failure in the control plane, preventing synchronization of application changes to workload clusters if the control plane is non-functional.

1.5.2. Argo CD Agent Autonomous mode

In autonomous mode, Argo CD Application resources are defined locally on workload clusters and synchronized back to the control plane cluster for observability.

In autonomous mode, changes made on workload clusters appear on the control plane. Also, the control plane cannot modify applications directly. With this mode, the workload cluster is the source of truth for Argo CD application state.

The following list describes the direction of synchronization between the workload and control plane clusters:

  • .spec - from workload cluster to control plane
  • .status - from workload cluster to control plane

The following list highlights the key advantages of using this configuration:

  • Ensures that Argo CD Application definitions are stored authoritatively in Git to deliver a key GitOps advantage. Git serves as the single source of truth, while you also gain the benefit of centralized observability.
  • Removes the control plane as a single point of failure or compromise.
  • Supports complex deployment patterns such as app-of-apps.

Consider the following limitations when using this configuration:

  • Prevents modification of applications from the control plane interface.
  • Requires external management of application definitions in Git repositories.
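As a sketch of the autonomous workflow (the application name, repository URL, and paths are illustrative), the Application manifest is committed to Git and applied in the workload cluster's local Argo CD namespace, from where its spec and status flow back to the control plane:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: w1-a
  namespace: argocd          # local Argo CD namespace on the workload cluster
spec:
  project: default
  source:
    repoURL: https://github.com/example/workload-apps.git   # illustrative repository
    path: apps/w1-a
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc   # the cluster deploys only to itself
    namespace: w1-a
  syncPolicy:
    automated: {}            # local reconciliation, no control plane involvement
```

Because Git is the source of truth, changing this application means changing the manifest in the repository, not editing it from the control plane console.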

1.6. Security and Authentication

The Argo CD Agent architecture uses mutual TLS (mTLS) certificates to secure communication between the principal Agent (hub) and workload Agents (spokes). mTLS certificates are more secure than traditional password-based authentication: unlike passwords, they support expiration and rotation, and subject verification prevents man-in-the-middle (MITM) attacks.

A root CA certificate signs both the principal and agent certificates.

The following points describe how certificate-based authentication and trust are established between the control plane (principal) and the agents in the Argo CD Agent architecture:

  • The public root CA certificate must be available on both the control plane and workload clusters for certificate validation.
  • The principal Agent certificate, signed by the root CA, is used to authenticate the principal to agents.
  • Each agent has its own root CA-signed certificate, used to authenticate with the principal.
  • This approach provides secure, verifiable communication with certificate rotation and expiration policies, reducing the risk of credential exposure.
Note

You are responsible for generating and managing mTLS certificates.
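A minimal openssl sketch of this trust setup follows. File names and subject CNs are illustrative; production setups typically add subject alternative names, stronger key policies, and rotation automation:

```shell
# Create a root CA (illustrative names and subjects).
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=argocd-agent-ca"

# Principal (control plane) key and CSR, signed by the root CA.
openssl req -newkey rsa:4096 -nodes -keyout principal.key -out principal.csr \
  -subj "/CN=principal"
openssl x509 -req -in principal.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out principal.crt -days 365

# Per-agent key and CSR, also signed by the same root CA.
openssl req -newkey rsa:4096 -nodes -keyout agent-w1.key -out agent-w1.csr \
  -subj "/CN=agent-w1"
openssl x509 -req -in agent-w1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out agent-w1.crt -days 365

# Each side validates the peer certificate against the shared root CA.
openssl verify -CAfile ca.crt principal.crt agent-w1.crt
```

The public `ca.crt` is distributed to both the control plane and every workload cluster; the private keys stay with the component they identify.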

1.7. Application Management Across Namespaces

The Argo CD Agent architecture on the control plane cluster uses the Applications in any namespace feature to separate the Application resources that originate from individual workload clusters. This feature enables the control plane Argo CD instance to reconcile Applications that are defined outside its own namespace while maintaining strict isolation between workload clusters.

Each workload cluster is assigned a dedicated namespace on the control plane cluster. These namespaces act as source namespaces for the control plane Argo CD instance, and each contains only the Applications for a single workload cluster.

For a control plane cluster with three workload clusters (w1, w2, and w3), the namespaces are organized as follows:

  • Namespace argocd:

    • Argo CD and Argo CD Agent instances
  • Namespace argocd-w1:

    • Argo CD Application w1-a
    • Argo CD Application w1-b
  • Namespace argocd-w2:

    • Argo CD Application w2-a
    • Argo CD Application w2-b
  • Namespace argocd-w3:

    • Argo CD Application w3-a
    • Argo CD Application w3-b

The argocd-w1, argocd-w2, and argocd-w3 namespaces are treated as source namespaces. Although these namespaces are outside the main Argo CD installation namespace (argocd), the control plane Argo CD instance still discovers and reconciles the Applications stored within them.

Applications in each argocd-w* namespace are scoped to a single cluster. For example, no Applications from workload cluster w2 are defined in the argocd-w3 namespace.

  • Workload cluster w1, namespace argocd:

    • Argo CD and Argo CD Agent instances
    • Argo CD Application w1-a
    • Argo CD Application w1-b
  • Workload cluster w2, namespace argocd:

    • Argo CD and Argo CD Agent instances
    • Argo CD Application w2-a
    • Argo CD Application w2-b
  • Workload cluster w3, namespace argocd:

    • Argo CD and Argo CD Agent instances
    • Argo CD Application w3-a
    • Argo CD Application w3-b

Each workload cluster maintains its own argocd namespace that hosts its local Argo CD and Argo CD Agent instances. The Applications defined in the control plane namespace argocd-w1 are synchronized to the argocd namespace on workload cluster w1, and the same pattern applies to workload clusters w2 and w3.

Note

The namespace names used in this layout are user-defined. The Argo CD Agent architecture does not require a specific naming convention.

To enable the control plane Argo CD instance to discover and reconcile Applications from these workload-specific namespaces, configure the sourceNamespaces field in the Argo CD CR:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: argo-cd
spec:
  sourceNamespaces:
    - argocd-w1
    - argocd-w2
    - argocd-w3
  # (...)

With this configuration, the control plane Argo CD instance monitors and reconciles the Applications from each source namespace as part of the overall multi-cluster environment.
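As an illustrative sketch (the application name, repository URL, and path are assumptions), an Application for workload cluster w1 is created in the argocd-w1 source namespace on the control plane:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: w1-a
  namespace: argocd-w1       # source namespace dedicated to workload cluster w1
spec:
  project: default
  source:
    repoURL: https://github.com/example/apps.git   # illustrative repository
    path: w1-a
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc   # deployed locally by the agent on w1
    namespace: w1-a
```

In managed mode, the agent on w1 synchronizes this definition into its local argocd namespace, and the local Argo CD instance reconciles it against its own in-cluster API.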

1.8. Known Limitations

The Argo CD Agent feature is under active development. The following limitations currently apply:

  • Limited or no support for Applications in any namespace on workload clusters
  • Partial support for ApplicationSets
  • Limited App-of-Apps pattern support in managed mode
  • Pod log streaming and terminal access from the control plane are not available
  • The principal Agent does not support high availability
  • Advanced RBAC and multi-tenancy are under development