Argo CD Agent installation
Abstract
Install and deploy the Argo CD Agent, and enable the Agent in managed or autonomous mode by using a Helm chart.
Chapter 1. Installing the Argo CD Agent
The Argo CD Agent consists of two components, Principal and Agent, that work together to synchronize applications between the control plane and workload clusters in a hub-and-spoke configuration. Install the Principal component by using Argo CD resources and the Agent component by using Helm charts to complete the Argo CD Agent setup.
For more information, see the Additional Resources section, which includes an overview of the Argo CD Agent architecture.
1.1. Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on two OpenShift Container Platform clusters.
- You have installed the OpenShift CLI (oc).
- You have installed a cluster-scoped Argo CD instance on the hub cluster.
- You have configured the Apps in any namespace feature on the hub cluster.
- You have installed the argocd-agentctl CLI tool.
- You have installed the helm CLI for your Agent setup. Ensure that the helm CLI version is greater than v3.8.0.
- You have read the key Argo CD terms required to understand the Principal and Agent components in the Argo CD Agent Terminologies section.
1.2. Argo CD Agent Terminologies
The following definitions help you understand the required namespaces, contexts, and command-line parameters used when configuring Argo CD Agent across hub and spoke clusters.
- Principal namespace
Specifies the namespace where you install the Principal component. This namespace is not created by default; you must create it before adding resources to it. In Argo CD Agent CLI commands, this value is provided by using the --principal-namespace flag.
- Agent namespace
Specifies the namespace that hosts the Agent component. This namespace is not created by default; you must create it before adding resources to it. In Argo CD Agent CLI commands, this value is provided by using the --agent-namespace flag.
- Context
A context is a named configuration in the oc CLI that allows you to switch between clusters. You must be logged in to all clusters and assign distinct context names to the hub and spoke clusters. Example context names include principal-cluster, hub-cluster, managed-agent-cluster, and autonomous-agent-cluster.
- Principal context
The context name that you assign to the hub (control plane) cluster. For example, if you log in to the hub cluster and rename its context to principal-cluster, you specify it in Argo CD Agent CLI commands as --principal-context principal-cluster.
- Agent context
The context name that you assign to the spoke (workload) cluster. For example, if you log in to a spoke cluster and rename its context to autonomous-agent-cluster, you specify it in Argo CD Agent CLI commands as --agent-context autonomous-agent-cluster.
1.3. Installing the Principal component
You can install the Principal component on the hub cluster by using the Argo CD custom resource (CR) to manage communication with Agent components. Deploy the Principal component, provide the resource details to create the required secrets, and then restart the Principal component. The principal component uses a public key infrastructure (PKI) to secure all communications.
Procedure
Deploy the Argo CD instance with the Principal component enabled. The following example shows the configuration for enabling the Principal component in the Argo CD CR:
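The exact CR schema is defined by the Red Hat OpenShift GitOps Operator; the following is a minimal sketch that assumes the field names described in the parameter list after the example. The auth value and, in particular, the placement of the namespace and sourceNamespaces settings are illustrative and must be verified against the ArgoCD CRD shipped with your Operator version.
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: <principal_namespace>
spec:
  argoCDAgent:
    principal:
      enabled: true
      auth: "mtls:CN=([^,]+)"   # assumed mTLS subject-matching rule; adjust to your authentication method
      tls:
        insecureGenerate: false  # do not generate insecure certificates in production
  sourceNamespaces:              # must match the Agent names
  - agent-managed
  - agent-autonomous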
where:
<argoCDAgent> - Specifies the Argo CD Agent configuration.
<principal> - Specifies the Principal component configuration.
<auth> - Specifies the authentication method used in the Principal component.
<namespace> - Specifies the namespace configuration used in the Principal component.
<tls> - Specifies the configuration used for secure communication.
<sourceNamespaces> - Specifies the sourceNamespaces configuration.
Note
- The namespace names specified in sourceNamespaces must exactly match the corresponding Agent names. If they do not match, the Principal cannot deploy or monitor applications on the spoke cluster in managed mode, or monitor applications on the spoke cluster in autonomous mode.
- The Argo CD CR must be installed as a cluster-scoped Argo CD instance. See the Additional Resources section for how to install a cluster-scoped Argo CD instance.
Apply the Argo CD resource to the hub cluster:
$ oc apply -f argocd.yaml -n "<principal_namespace>" --context "<principal_context>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed on the hub cluster.
principal_context - Specifies the context of the hub cluster where the Principal component is running.
Verify the Argo CD instance deployment. Check the pods in the Principal namespace by running the following command:
$ oc get pod -n "<principal_namespace>" --context "<principal_context>"
Example output:
NAME                                               READY   STATUS
openshift-gitops-agent-principal-xxxxxxxxx-xxxxx   0/1     Running
openshift-gitops-redis-xxxxxxxxx-xxxxx             1/1     Running
openshift-gitops-repo-server-xxxxxxxxx-xxxxx       1/1     Running
openshift-gitops-server-xxxxxxxxx-xxxxx            1/1     Running
Important
At this stage, the Argo CD pods start successfully, but the Principal pod remains in a Running (not Ready) state because the required secrets have not yet been created. The secrets must be created later because some configurations, such as the Principal hostname and the resource proxy service names, are available only after the Red Hat OpenShift GitOps Operator enables the Principal component.
Verify the services created by the Operator by running the following command:
$ oc get svc -n "<principal_namespace>" --context "<principal_context>"
By default, the Operator creates a route for the Principal component, which is used to access Principal services. Verify the default route for the Principal component by running the following command:
$ oc get route -n "<principal_namespace>" --context "<principal_context>"
Example output:
NAME                               HOST/PORT
openshift-gitops-agent-principal   example-host-name
If you want to disable the route or use a LoadBalancer service, update the Principal service configuration in the Argo CD resource accordingly.
Create the required namespaces. You must create one namespace for each Agent that you plan to connect to the Principal. The namespace names must match the Agent names. Run the following commands:
$ oc create namespace agent-managed --context "<principal_context>"
$ oc create namespace agent-autonomous --context "<principal_context>"
Note
Each Agent requires a dedicated namespace because the Argo CD Agent architecture uses a namespace-to-agent mapping model. For managed Agents, the Principal sends to each Agent only the applications created in that Agent's namespace on the hub cluster. Similarly, for autonomous Agents, applications are created in the respective Agent namespace on the hub.
Ensure that the AppProject resource used for your Argo CD applications on the hub includes all Agent namespaces under the .spec.sourceNamespaces field, for example:
spec:
  sourceNamespaces:
  - "agent-managed"
  - "agent-autonomous"
If you modify the AppProject resource, restart the Argo CD pods to apply the changes.
Create the required secrets.
Important
The CLI-generated PKI is intended for development and testing purposes only. For production environments, use certificates issued by your organization's PKI or a trusted certificate authority.
Initialize the PKI
To initialize the certificate authority (CA) that signs other certificates, run the following command:
$ argocd-agentctl pki init \
    --principal-namespace "<principal_namespace>" \
    --principal-context "<principal_context>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed.
principal_context - Specifies the context of the Principal component.
This command generates a self-signed CA certificate and private key, and creates the argocd-agent-ca secret that contains the CA credentials. The CA is valid for signing certificates and includes a default expiration period.
To generate the server certificate for the Principal’s gRPC service, run the following command:
$ argocd-agentctl pki issue principal \
    --principal-namespace "<principal_namespace>" \
    --principal-context "<principal_context>" \
    --dns "<principal_dns_name>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed.
principal_context - Specifies the context of the Principal component.
principal_dns_name - Specifies the hostname of the Principal service. Agents use this value to connect to the Principal's gRPC service. This value should match the .spec.host value of the Principal's route, or .status.loadBalancer.ingress.hostname when the Principal service is of type LoadBalancer.
ip (optional) - Specifies a comma-separated list of IP addresses where the Principal component is accessible.
u, upsert (optional) - Updates the existing certificate if one already exists.
This command issues a TLS certificate for the Principal to enable secure gRPC communication between the Principal and Agent components. It also creates the argocd-agent-principal-tls secret that contains the generated certificate and key.
Generate the server certificate for the resource proxy service by running the following command:
$ argocd-agentctl pki issue resource-proxy \
    --principal-namespace "<principal_namespace>" \
    --principal-context "<principal_context>" \
    --dns "<resource_proxy_dns>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed.
principal_context - Specifies the context of the Principal component.
resource_proxy_dns - Specifies the service name for the resource-proxy in the hub cluster, for example, <ArgoCD instance name>-agent-principal-resource-proxy. Argo CD uses this service to connect to the resource-proxy. Because Argo CD and the resource-proxy run in the same cluster, Argo CD can connect by using the service name directly.
ip (optional) - Specifies a comma-separated list of IP addresses for the resource proxy that are reachable by the Argo CD API server.
u, upsert (optional) - Updates the existing certificate if one already exists.
This command creates the argocd-agent-resource-proxy-tls secret that contains the TLS certificate for the resource proxy component. The certificate includes the required subject alternative names (SANs) for the resource proxy service and is signed by the same CA that was created during PKI initialization.
Generate the RSA private key by running the following command:
$ argocd-agentctl jwt create-key \
    --principal-namespace "<principal_namespace>" \
    --principal-context "<principal_context>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed.
principal_context - Specifies the context of the Principal component.
u, upsert (optional) - Updates the existing JWT key if one already exists.
This command generates a 4096-bit RSA private key in PKCS#8 PEM format and stores it in the argocd-agent-jwt secret.
Verify that the corresponding secrets are created in the Principal namespace by running the following command:
$ oc get secrets -n "<principal_namespace>" --context "<principal_context>" | grep agent
Example output:
NAME                              TYPE
argocd-agent-ca                   kubernetes.io/tls
argocd-agent-principal-tls        kubernetes.io/tls
argocd-agent-resource-proxy-tls   kubernetes.io/tls
argocd-agent-jwt                  Opaque
Verify that the Principal component has started successfully by running the following command:
$ oc get pod -n "<principal_namespace>" --context "<principal_context>"
Example output:
NAME                                               READY   STATUS
openshift-gitops-agent-principal-xxxxxxxxx-xxxxx   1/1     Running
openshift-gitops-redis-xxxxxxxxx-xxxxx             1/1     Running
openshift-gitops-repo-server-xxxxxxxxx-xxxxx       1/1     Running
openshift-gitops-server-xxxxxxxxx-xxxxx            1/1     Running
Verify the pod logs in the Principal namespace by running the following command:
$ oc logs "<principal_pod>" -n "<principal_namespace>" --context "<principal_context>"
where:
principal_pod - Specifies the name of the Principal pod running in the Principal namespace. This pod hosts the Principal component, which manages communication and certificate exchange with connected Agents.
Example output:
level=info msg="Starting metrics server on http://0.0.0.0:8000/metrics"
level=info msg="Redis proxy started on 0.0.0.0:6379"
level=info msg="Resource proxy started"
level=info msg="Namespace informer synced and ready"
level=info msg="Starting healthz server on :8003"
Verification
- Ensure that all Principal pods show a Running status and that the corresponding services and route are created. The Principal component is now ready to communicate securely with the Agent components on workload clusters. For more information about metrics exposed by the Principal component, see Principal metrics in the Argo CD Agent upstream documentation.
1.4. Setting up a spoke cluster environment
After you set up the Principal component, you can connect it with one or more Agents. This connection allows the principal to securely manage Argo CD Applications on each Agent cluster.
Prerequisites
- The Principal setup is complete and running.
- You have access to both the Principal and Agent clusters.
- The argocd-agentctl CLI tool is installed and accessible from your environment.
- The helm CLI is installed and configured. Ensure that the helm CLI version is later than v3.8.0.
Procedure
Create the Agent secret on the Principal cluster by running the following command:
$ argocd-agentctl agent create "<agent_name>" \
    --principal-context "<principal_context>" \
    --principal-namespace "<principal_namespace>" \
    --resource-proxy-server "<principal_resource_proxy_url>"
where:
agent_name - Specifies a unique name used to identify each Agent managed by the Principal. The Agent CLI uses this value to create the Argo CD cluster secret (cluster-<agent_name>) and to construct the server URL.
principal_namespace - Specifies the namespace where the Principal component is installed.
principal_context - Specifies the cluster context of the Principal component.
principal_resource_proxy_url - Specifies the URL of the Principal resource-proxy, for example, <ArgoCD Instance Name>-agent-principal-resource-proxy:9090, which Argo CD uses to access live resources in the Agent's cluster. In this model, Argo CD does not connect directly to the Agent cluster's API server; instead, it sends requests to the Principal's resource-proxy, which forwards them to the appropriate Agent over gRPC.
label, l (optional) - Adds optional metadata labels for the Agent.
This command generates a client certificate signed by the Principal CA, creates an Argo CD cluster secret with the certificate data and sets up credentials for resource proxy access.
Copy the CA certificate to the Agent cluster by running the following command:
$ argocd-agentctl pki propagate \
    --agent-context "<agent_context>" \
    --principal-context "<principal_context>" \
    --principal-namespace "<principal_namespace>" \
    --agent-namespace "<agent_namespace>"
where:
principal_namespace - Specifies the namespace where the Principal component is installed.
agent_namespace - Specifies the namespace where the Agent component is installed.
principal_context - Specifies the kubeconfig context of the Principal (hub) cluster.
agent_context - Specifies the kubeconfig context of the Agent (workload) cluster.
force, f (optional) - Forces regeneration of the PKI if it already exists on the target cluster.
This command copies the CA certificate from the Principal cluster and creates the argocd-agent-ca secret in the Agent cluster.
Generate a client certificate for the Agent by running the following command:
$ argocd-agentctl pki issue agent "<agent_name>" \
    --principal-context "<principal_context>" \
    --agent-context "<agent_context>" \
    --agent-namespace "<agent_namespace>" \
    --principal-namespace "<principal_namespace>"
where:
principal_context - Specifies the kubeconfig context of the Principal (hub) cluster.
agent_context - Specifies the kubeconfig context of the Agent (workload) cluster.
agent_namespace - Specifies the namespace where the Agent component is installed.
principal_namespace - Specifies the namespace where the Principal component is installed.
upsert, u (optional) - Updates the existing client certificate if one already exists in the Agent namespace.
This command creates the argocd-agent-client-tls secret, generates the Agent's client certificate and private key, and signs the certificate by using the Principal CA.
Verify that all required secrets are created in the Agent namespace by running the following command:
$ oc get secrets -n "<agent_namespace>" --context "<agent_context>" | grep agent
Example output:
NAME                      TYPE
argocd-agent-ca           kubernetes.io/tls
argocd-agent-client-tls   kubernetes.io/tls
Deploy an Argo CD instance on the Agent cluster by creating an Argo CD CR similar to the following example:
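A minimal sketch of such an Argo CD CR; the instance name openshift-gitops is illustrative, and you can extend the spec so that only the components the Agent requires (application controller, repo server, and Redis) run on the workload cluster.
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: <agent_namespace>
spec: {}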
Apply the configuration by running the following command:
$ oc apply -f argocd.yaml -n "<agent_namespace>" --context "<agent_context>"
Verification
To verify that the Argo CD instance is created, run the following command:
$ oc get pod -n "<agent_namespace>" --context "<agent_context>"
Example output:
NAME                                           READY   STATUS
openshift-gitops-application-controller-0      1/1     Running
openshift-gitops-redis-5b6668544-sxtth         1/1     Running
openshift-gitops-repo-server-8cb87b698-cnfgl   1/1     Running
1.5. Installing the Argo CD Agent in managed mode by using a Helm chart
Use the Helm chart provided by the Argo CD Agent project to install the managed mode Agent component on the workload cluster. In managed mode, the Agent must communicate with the spoke Redis server. However, the default network policy created by the Red Hat OpenShift GitOps Operator does not allow this communication.
Procedure
To enable communication, you must create a custom network policy by using the following YAML configuration.
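A minimal sketch of such a policy follows; the policy name is illustrative, and the from selector shown here admits all pods in the Agent namespace, so tighten it to match your Agent pod labels if required.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agent-to-redis
  namespace: <agent_namespace>
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: <argocd_name>-redis
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # allows all pods in this namespace; restrict to the Agent pod labels if needed
    ports:
    - protocol: TCP
      port: 6379        # Redis port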
where:
<podSelector> - Selects the pods that this policy applies to, for example, the Redis pod labeled app.kubernetes.io/name: <ArgoCD Instance Name>-redis.
<ingress> - Defines the allowed inbound traffic rules for the selected pods.
<policyTypes> - Indicates that this policy is an Ingress policy, meaning it only checks incoming connections.
Apply the network policy by using the following command:
$ oc apply -f network-policy.yaml -n "<agent_namespace>" --context "<agent_context>"
where:
agent_namespace - Specifies the namespace where the network policy is created.
agent_context - Ensures that the command runs against the Agent cluster and not the hub cluster.
Add the OpenShift Container Platform Helm chart repository by running the following command:
$ helm repo add openshift-helm-charts https://charts.openshift.io/
Install the managed Agent by using Helm.
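A minimal sketch of the install command, assuming the redhat-argocd-agent chart from the repository added above; the release name argocd-agent is illustrative, and the parameters are described in the list that follows.
$ helm install argocd-agent openshift-helm-charts/redhat-argocd-agent \
    --version "<chart_version>" \
    --namespace "<agent_namespace>" \
    --kube-context "<agent_context>" \
    --set namespaceOverride="<agent_namespace>" \
    --set agentMode=managed \
    --set server="<principal_route_dns>" \
    --set argoCdRedisSecretName="<argocd_name>-redis-initial-password" \
    --set argoCdRedisPasswordKey=admin.password \
    --set redisAddress="<argocd_name>-redis:6379"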
where:
namespaceOverride - Overrides the default namespace in which the Agent is installed. Set this to the namespace created for the Agent on the managed (spoke) cluster. Ensure that the namespace exists before installation unless you use the Helm namespace creation flags.
agentMode - Defines how the Agent operates:
- managed: The Principal manages the Agent configuration and lifecycle.
- autonomous: The Agent functions independently of the Principal.
server - Specifies the hostname of the Principal service. Agents use this value to connect to the Principal's gRPC service. This value should match either:
- The .spec.host value of the Principal route (when using OpenShift Routes), or
- The .status.loadBalancer.ingress.hostname of the Principal service (when using a LoadBalancer service type).
This value must be reachable from the Agent cluster, and DNS resolution must work across clusters.
argoCdRedisSecretName - The name of the Kubernetes secret in the Agent namespace that contains the Redis password for the Argo CD instance running on the Agent cluster. This must match the secret created by the Argo CD installation.
argoCdRedisPasswordKey - The key within the secret specified in argoCdRedisSecretName that stores the Redis password. This is usually admin.password for Red Hat OpenShift GitOps deployments.
redisAddress - The host and port of the Redis instance associated with the Agent's Argo CD installation, typically formatted as <argocd_name>-redis:6379. Ensure that the Redis service name and port match the installed Argo CD instance.
kube-context - The kubeconfig context of the Agent (managed) cluster. The Helm installation runs against this context, so it must point to the correct remote cluster configured for the Agent.
1.6. Installing the Argo CD Agent in autonomous mode by using a Helm chart
Use the Helm chart provided by the Argo CD Agent project to install the autonomous mode Agent component on the spoke (workload) cluster. In autonomous mode, the Agent must communicate with the spoke Redis server. However, the default network policy created by the Red Hat OpenShift GitOps Operator does not allow this.
Procedure
To enable communication, you must create a custom network policy by using the following YAML configuration.
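The same minimal sketch used for managed mode applies here, with the same caveats: the policy name is illustrative and the from selector admits all pods in the Agent namespace, so tighten it to match your Agent pod labels if required.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agent-to-redis
  namespace: <agent_namespace>
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: <argocd_name>-redis
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # allows all pods in this namespace; restrict to the Agent pod labels if needed
    ports:
    - protocol: TCP
      port: 6379        # Redis port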
where:
<podSelector> - Selects the pods that this policy applies to, for example, the Redis pod labeled app.kubernetes.io/name: <ArgoCD Instance Name>-redis.
<ingress> - Defines the allowed inbound traffic rules for the selected pods.
<policyTypes> - Indicates that this policy is an Ingress policy, meaning it only checks incoming connections.
Apply the network policy by using the following command:
$ oc apply -f network-policy.yaml -n "<agent_namespace>" --context "<agent_context>"
where:
agent_namespace - Specifies the namespace where the Agent instance is deployed. Use this value to specify the target namespace when you apply the network policy.
agent_context - Specifies the context for the cluster that runs the Agent instance. Use this value to ensure that the command is executed on the correct cluster when multiple contexts are configured.
Add the OpenShift Container Platform Helm chart repository by running the following command:
$ helm repo add openshift-helm-charts https://charts.openshift.io/
Install the autonomous Agent by using Helm.
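A minimal sketch of the install command, assuming the redhat-argocd-agent chart and the same Redis-related parameters as in managed mode; the release name argocd-agent is illustrative, and the remaining parameters are described in the list that follows.
$ helm install argocd-agent openshift-helm-charts/redhat-argocd-agent \
    --version "<chart_version>" \
    --namespace "<agent_namespace>" \
    --kube-context "<agent_context>" \
    --set namespaceOverride="<agent_namespace>" \
    --set agentMode=autonomous \
    --set server="<principal_route_dns>" \
    --set argoCdRedisSecretName="<argocd_name>-redis-initial-password" \
    --set argoCdRedisPasswordKey=admin.password \
    --set redisAddress="<argocd_name>-redis:6379"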
where:
chart_version - Specifies the version of the redhat-argocd-agent Helm chart that you want to install.
agent_namespace - Defines the namespace where the Argo CD Agent component is installed on the spoke cluster.
agent_context - Defines the kube-context for the spoke cluster where the Agent is deployed.
principal_route_dns - Specifies the hostname of the Principal service. Agents use this value to connect to the Principal's gRPC service. This value should match the .spec.host value of the Principal's route, or .status.loadBalancer.ingress.hostname when the Principal service is of type LoadBalancer.
argocd_name - Specifies the name of the Argo CD instance installed alongside the Agent. This value is used to construct Redis-related resource names.
<argocd_name>-redis-initial-password - Defines the name of the Kubernetes secret that stores the initial Redis password for the Argo CD instance.
Create an AppProject resource in the Agent namespace.
For more information, see the Additional Resources section, which includes the upstream documentation link on creating AppProject resources in Agents.
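For example, the following minimal AppProject sketch uses deliberately permissive destinations and source repositories, which are suitable for testing only; restrict them for production use.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: <agent_namespace>
spec:
  sourceRepos:
  - '*'
  destinations:
  - namespace: '*'
    server: '*'
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'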
1.7. Verifying the Argo CD Agent installation in managed mode or autonomous mode
To verify that the Argo CD Agent is correctly installed and connected to the Principal cluster in managed or autonomous mode, complete the following steps.
Procedure
Verify that the Agent deployment is created by running the following command:
$ oc get pod -n "<agent_namespace>" --context "<agent_context>"
Example output:
NAME                                 READY   STATUS
argocd-agent-agent-6489fc5dd-48whw   1/1     Running
To verify that the Agent component is successfully connected with the Principal, run the following command:
$ oc logs "<agent_pod>" -n "<agent_namespace>" --context "<agent_context>"
As a result, the following messages are displayed in the pod logs:
level=info msg="Authentication successful"
level=info msg="Connected to argocd-agent-0.0.1-alpha"
level=info msg="Starting event writer"
level=info msg="Starting to send events to event stream"
level=info msg="Starting to receive events from event stream"
(Optional) Verify that the Agent metrics service is available. The Agent starts a metrics server exposed through a ClusterIP service, for example:
openshift-gitops-agent-agent-metrics. You can create a route to expose this service externally. Verify the connection from the Principal cluster.
On the Principal cluster, check the logs of the Principal pod by running the following command:
$ oc logs "<principal_pod>" --tail 10 -n "<principal_namespace>" --context "<principal_context>"
Confirm that the following log messages are displayed:
level=info msg="An agent connected to the subscription stream"
level=info msg="Updated connection status to 'Successful' in Cluster: '<agent_name>' mapped with Agent: '<agent_name>'"
level=info msg="Starting event writer"
(Optional) Verify the Principal metrics endpoint, a successful connection increases the metric
agent_connected_with_principalby number of agents connected to the hub. To list connected Agents, run the following command:
$ argocd-agentctl agent list --principal-namespace "<principal_namespace>" --principal-context "<principal_context>"
Verification
- Verify that all Agent pods are in the Running state and that the secrets and Helm deployments have been successfully created before proceeding with synchronization.
1.8. Deploying Argo CD applications
You can deploy Argo CD applications by using either the managed agent mode or the autonomous agent mode. The deployment mode determines which cluster is the source of truth and where the Argo CD application must be created.
- In managed agent mode, the hub cluster is the source of truth.
- In autonomous agent mode, the spoke cluster is the source of truth.
1.8.1. Deploying an Argo CD application in managed mode
In managed agent mode, the hub acts as the source of truth. Argo CD applications must be created in the hub cluster. The agent creates and manages the application in the spoke cluster based on the configuration defined in the hub.
Prerequisites
- You have deployed and configured the Principal and Agent components.
- You have the oc CLI installed and are logged in with sufficient permissions.
- You know the context names for both the hub (control plane) and spoke (workload) clusters.
Procedure
Create the Argo CD application manifest file, application.yaml, with the following configuration:
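A minimal sketch, using the upstream Argo CD guestbook sample repository; the application name, target namespace, and sync policy are illustrative, and the placeholders map to the parameters described below.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-demo-app
  namespace: <namespace>
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    name: <name>          # Argo CD cluster secret that represents the Agent
    namespace: guestbook
  syncPolicy:
    automated: {}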
where:
<namespace> - Specifies the namespace on the Principal cluster where the Agent's managed applications are stored.
<name> - Specifies the name of the Agent cluster on the hub.
Apply the configuration in the hub cluster by running the following command:
$ oc apply -f application.yaml -n agent-managed --context "<principal_context>"
where:
principal_context - Specifies the context that points to the hub cluster where the Principal component is running.
Verification
To verify that the application is created in the spoke cluster, run the following command:
$ oc get application.argoproj.io/my-demo-app -n openshift-gitops --context "<agent_context>"
where:
agent_context - Specifies the context that points to the Agent cluster where the Agent component is running.
To verify that the application is synced and healthy from the hub, run the following command:
$ oc get application.argoproj.io/my-demo-app -n agent-managed --context "<principal_context>"
where:
principal_context - Specifies the context that points to the hub cluster where the Principal component is running.
- If the configuration is correct, the Agent creates the application and its associated resources in the spoke cluster. Because the hub is the source of truth, any direct changes made in the spoke cluster are automatically reverted by the agent.
1.8.2. Deploying an Argo CD application in autonomous mode
In autonomous agent mode, the spoke acts as the source of truth. The Argo CD applications must be created directly in the spoke cluster, and the agent reports the application state back to the Principal component in the hub.
Prerequisites
- You have deployed and configured the Principal and Agent components.
- You have the oc CLI installed and are logged in with sufficient permissions.
- You know the context names for both the hub (control plane) and spoke (workload) clusters.
Procedure
Create the Argo CD application manifest file, application.yaml, with the following configuration:
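A minimal sketch, using the upstream Argo CD guestbook sample repository; the repository, path, target namespace, and label value are illustrative and map to the parameters described below.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-demo-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true   # lets Argo CD create the target namespace if it does not exist
    managedNamespaceMetadata:
      labels:
        argocd.argoproj.io/managed-by: openshift-gitops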
where:
namespace: openshift-gitops - Specifies the namespace where the Argo CD instance (Agent) is running on the spoke cluster. This controller manages the Application resource.
source.repoURL - Specifies the Git repository that stores the application manifests.
source.targetRevision - Specifies the Git revision Argo CD should track.
source.path - Specifies the directory inside the repository containing the application manifests.
destination.server - Specifies the Kubernetes API server where the application will be deployed.
destination.namespace: my-app - Specifies the target namespace for the deployed application. If the namespace does not exist, it can be automatically created.
syncPolicy.automated.prune - Enables automated pruning. Argo CD deletes resources that were previously applied but no longer appear in Git.
syncPolicy.automated.selfHeal - Enables automatic self-healing. Argo CD corrects any drift between live resources and the Git source.
managedNamespaceMetadata.labels - Labels that Argo CD applies to the destination namespace when it creates or manages it. The label in this example indicates that the namespace is managed by the openshift-gitops instance.
Apply the configuration in the spoke cluster by running the following command:
$ oc apply -f application.yaml -n openshift-gitops --context "<agent_context>"
where:
agent_context - Specifies the context name for the Agent cluster in an Agent-Principal setup.
Verification
To verify that the application is created in the hub cluster, run the following command:
$ oc get application.argoproj.io/my-demo-app -n agent-autonomous --context "<principal_context>"
where:
principal_context - Specifies the context name for the Principal cluster in an Agent-Principal setup.
To verify that the application is synced and healthy from the spoke, run the following command:
$ oc get application.argoproj.io/my-demo-app -n openshift-gitops --context "<agent_context>"
- If the configuration is correct, the Agent creates all application resources in the spoke cluster. Because the spoke cluster is the source of truth, direct changes in the hub cluster are not allowed.
1.9. Troubleshooting Principal-Agent communication and deployment issues
Use the following troubleshooting steps if you encounter issues during Principal-Agent communication or application deployment.
- Agent cannot connect to the Principal
This issue can occur for one or more of the following reasons:
- The Principal or Agent pods are not healthy.
- The hostname of the Principal, exposed through the route and set in the Agent Helm chart (the server parameter in the values.yaml file or the --set server flag), is incorrect.
- Required certificates or secrets were not created correctly.
To resolve this issue:
- Verify that the Principal and Agent pods are running and healthy.
- Confirm that the server parameter in the Agent Helm chart matches the hostname of the Principal route.
- Check that the certificates and secrets required for Principal-Agent communication exist in the correct namespaces.
- Agent cannot connect to Redis
- You see the following connection error messages in agent pod logs:
redis: connection pool: failed to dial after 2 attempts: dial tcp xxx.xx.xx.xxx:6379: i/o timeout
level=error msg="Failed to get cluster info from cache" error="dial tcp 172.30.60.217:6379: i/o timeout" event=addClusterCacheInfoUpdateToQueue module=Agent
level=error msg="Failed to get cluster info from cache" error="NOAUTH Authentication required." event=addClusterCacheInfoUpdateToQueue module=Agent
This issue can occur if one or more of the following conditions are true:
- The Redis pod is not healthy.
- The Redis address specified in the Agent Helm chart (redisAddress in values.yaml or the --set redisAddress flag) is incorrect.
- The Redis secret name (argoCdRedisSecretName) is incorrect or the secret does not exist in the Agent's namespace.
- The key name containing the Redis password (argoCdRedisPasswordKey) is incorrect or missing from the secret.
- The Redis service is not accessible due to namespace or networking restrictions.
- A NetworkPolicy is blocking connections from the Agent to Redis.
To resolve this issue:
- Verify that the Redis pod is running and healthy.
- Check that the Redis service and secret configuration in the Helm chart match the expected values.
- Ensure that no NetworkPolicy objects are preventing communication between the Agent and Redis.
- Agent handshake with Principal fails
- You might see the following warning in the Agent pod logs:
level=warning msg="Auth failure: rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate is valid for openshift-gitops-agent-principal--generated, not openshift-gitops-agent-principal-argocd.apps.dcrs-h.ocp-dhte-citc.com\" (retrying in 1.44s)"
This failure usually occurs when:
- The Principal component automatically generates insecure TLS certificates (.spec.argoCDAgent.principal.tls.insecureGenerate: true).
- The Agent is configured to validate certificates (tlsClientInSecure is set to false in the Helm install command).
To resolve this issue, perform one of the following actions:
- Disable certificate validation at the Agent by setting tlsClientInSecure=true.
Warning
The tlsClientInSecure=true setting disables TLS verification and is insecure. Do not use it in production environments.
- Disable automatic certificate generation at the Principal component by setting insecureGenerate: false.
- Agent cannot view workload cluster resources
You might see the following error in the Argo CD web UI:
Resource not found in cluster: apps/v1/Deployment:(resource-name) Please update your resource specification to use the latest Kubernetes API resources supported by the target cluster. The recommended syntax is apps/v1/Deployment:guestbook-ui
In addition, you might see the following error in the managed or autonomous Agent pod logs:
time="(...)" level=error msg="could not request resource: pods \"(...)\" is forbidden: User \"system:serviceaccount:argocd:argocd-agent-agent\" cannot get resource \"pods\" in API group \"\" in the namespace \"guestbook\"" method=processIncomingResourceRequest
These errors occur when the Argo CD Agent does not have the necessary permissions to view resources outside the namespace in which it is installed. To resolve this issue, ensure that the Agent has appropriate access to the required namespace.