Chapter 1. GitOps overview
Red Hat OpenShift Container Platform GitOps and Argo CD are integrated with Red Hat Advanced Cluster Management for Kubernetes, and provide advanced features compared to the earlier Application Lifecycle Channel and Subscription model.
Argo CD development is active, with a large community contributing feature enhancements and updates. By using the OpenShift Container Platform GitOps operator, you can use the latest advancements in Argo CD development and receive support through the GitOps operator subscription.
See the following topics to learn more about Red Hat Advanced Cluster Management for Kubernetes integration with OpenShift Container Platform GitOps and Argo CD:
- GitOps console
- Deploying the Argo CD ApplicationSet resource in any namespace for pull model (Technology Preview)
- Enabling the ApplicationSet resource in any namespace
- Registering managed clusters to Red Hat OpenShift GitOps operator
- Configuring application placement tolerations for GitOps
- Deploying Argo CD with the Push and Pull model
- Generating a policy to install GitOps Operator
- Managing policy definitions with OpenShift Container Platform GitOps (Argo CD)
- Managing the Red Hat OpenShift GitOps add-on
- Enabling Red Hat OpenShift GitOps add-on with ArgoCD agent
- Enabling Red Hat OpenShift GitOps add-on without the ArgoCD agent
- Skipping the OpenShift GitOps add-on enforcement
- Uninstalling the OpenShift GitOps add-on
- Verifying the OpenShift GitOps add-on functions
- Verifying the ArgoCD agent function
- Implementing progressive rollout strategy by using ApplicationSet resource (Technology Preview)
1.1. GitOps console
Learn more about integrated OpenShift Container Platform GitOps console features. Create and view applications, such as ApplicationSet, and Argo CD types. An ApplicationSet represents Argo applications that are generated from the controller.
- Click Launch resource in Search to search for related resources.
- Use Search to find application resources by the component kind for each resource.

Important: Available actions are based on your assigned role. Learn about access requirements from the Role-based access control documentation.
Prerequisites
See the following prerequisites and requirements to access the Argo server drop-down menu in the console and create an ApplicationSet resource:
- Create an Argo CD ApplicationSet resource. Click the Automatically sync when cluster state changes option in the Sync policy step.
- You need the GitOps cluster resources and the GitOps operator installed to create an ApplicationSet resource.
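As a reference, an ApplicationSet created with the Automatically sync when cluster state changes option typically carries an automated sync policy similar to the following sketch. The resource name, placement label, and repository URL here are illustrative placeholders, not values from this documentation:

```yaml
# Minimal ApplicationSet sketch with automated sync enabled.
# The names, placement label, and repository URL are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-app-set            # hypothetical name
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: sample-placement  # hypothetical placement
        requeueAfterSeconds: 30
  template:
    metadata:
      name: '{{name}}-sample-app'
    spec:
      destination:
        namespace: sample-app
        server: '{{server}}'
      project: default
      source:
        path: app
        repoURL: https://github.com/example/repo.git   # placeholder repository
        targetRevision: HEAD
      syncPolicy:
        automated: {}   # corresponds to "Automatically sync when cluster state changes"
```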
1.1.1. Querying Argo CD applications
When you search for an Argo CD application, you are directed to the Applications page. Complete the following steps to access the Argo CD application from the Search page:
- Log in to your Red Hat Advanced Cluster Management hub cluster.
- From the console header, select the Search icon.
- Filter your query with the following values: kind:application and apigroup:argoproj.io.
- Select an application to view. The Application page displays an overview of information for the application.
For more information about search, see Search service.
1.2. Deploying the Argo CD ApplicationSet resource in any namespace for pull model (Technology Preview)
With the Argo CD pull model, you can create ApplicationSet resources in any namespace on your hub clusters.
To fully manage your Argo CD ApplicationSet resources, complete the following sections:
Required access: Cluster administrator
Prerequisites
- Complete the procedures to register your managed clusters. For directions, see Registering managed clusters to Red Hat OpenShift GitOps operator.
- Complete the procedures to enable the ApplicationSet and Application resources in any custom namespace. For directions, see Enabling the ApplicationSet resource in any namespace.
1.2.1. Deploying the ApplicationSet resource for standard configurations
If your role-based access control (RBAC) support is limited, you can deploy the ApplicationSet resource for standard configurations. Deploying for standard configurations simplifies RBAC management and gives you the following benefits:
- Namespaces are not specified in the GitHub repository resources.
- The destination of the workload namespace is specified in the Application template.
- The ApplicationSet resource uses the default AppProject resource.
To deploy the ApplicationSet resource for standard configurations, complete the following steps:
1. In the openshift-gitops namespace, create a Placement resource. See the following sample:

   apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: all-openshift-clusters-except-local
     namespace: openshift-gitops
   spec:
     tolerations:
       - key: cluster.open-cluster-management.io/unreachable
         operator: Exists
       - key: cluster.open-cluster-management.io/unavailable
         operator: Exists
     predicates:
       - requiredClusterSelector:
           labelSelector:
             matchExpressions:
               - key: vendor
                 operator: "In"
                 values:
                   - OpenShift
               - key: local-cluster
                 operator: DoesNotExist

2. Create the appset-2 namespace by adding the following YAML file sample:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: appset-2

3. Apply the YAML file sample by running the following command:

   oc apply -f namespace-example.yaml

4. Create the ApplicationSet resource in the appset-2 namespace with the default AppProject resource by adding the following YAML file sample:

   apiVersion: argoproj.io/v1alpha1
   kind: ApplicationSet
   metadata:
     name: helloworld
     namespace: appset-2
   spec:
     generators:
       - clusterDecisionResource:
           configMapRef: acm-placement
           labelSelector:
             matchLabels:
               cluster.open-cluster-management.io/placement: all-openshift-clusters
           requeueAfterSeconds: 30
     template:
       metadata:
         annotations:
           apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
           argocd.argoproj.io/skip-reconcile: "true"
         labels:
           apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true"
         name: '{{name}}-helloworld'
       spec:
         destination:
           namespace: helloworld
           server: https://kubernetes.default.svc
         project: default
         source:
           path: helloworld
           repoURL: https://github.com/stolostron/application-samples.git
           targetRevision: HEAD
         syncPolicy:
           automated: {}

5. Apply the YAML file sample by running the following command:

   oc apply -f applicationset-example.yaml

After you complete the steps, verify the following results:

- The ApplicationSet resource is created in the appset-2 namespace on your hub cluster.
- The Application resources are deployed to the appset-2 namespace on your managed clusters.
- The Application resource deploys its workloads to the helloworld namespace on your managed clusters.
- The default Argo CD AppProject resource configuration is applied.
- None of the Application resources defined in the specified path of the GitHub repository are namespace specific.
1.2.2. Deploying the ApplicationSet resource for advanced configurations
If you have support for role-based access control (RBAC), you can deploy the ApplicationSet resource for advanced configurations. Deploying for advanced configurations gives you more RBAC management and the following benefits:
- The Application resource workload namespaces are specified in the GitHub repository resources.
- The destination of the workload namespace that is specified in the ApplicationSet resource matches the GitHub repository.
- The ApplicationSet resource uses a custom Argo CD AppProject resource for RBAC control.
To deploy the ApplicationSet resource for advanced configurations, complete the following steps:
1. In the openshift-gitops namespace, create a Placement resource.

2. Create the bgdk namespace by adding the following YAML file sample:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: bgdk

3. Apply the YAML file sample by running the following command:

   oc apply -f namespace-example.yaml

4. Set the bgdk AppProject resource configuration in the OpenShift GitOps namespace by adding the following YAML file sample:

   apiVersion: argoproj.io/v1alpha1
   kind: AppProject
   metadata:
     name: bgdk
     namespace: openshift-gitops
   spec:
     sourceNamespaces:
       - bgdk
     sourceRepos:
       - https://github.com/redhat-developer-demos/openshift-gitops-examples.git
     destinations:
       - namespace: bgdk
         server: https://kubernetes.default.svc
     clusterResourceWhitelist:
       - group: ''
         kind: Namespace

   - sourceNamespaces is the namespace where the Application itself gets created.
   - sourceRepos is the repository that the Application template uses.
   - destinations is the namespace where the Application deploys its workloads.
   - clusterResourceWhitelist is a list of cluster-scoped resources that the Application is allowed to deploy. In this scenario, the Namespace kind is mandatory because the Application must create a new namespace.

5. Apply the YAML file sample by running the following command:

   oc apply -f appproject-example.yaml

6. Apply the customized Argo CD AppProject resource configuration to the ApplicationSet resource by adding the following YAML file sample:

   apiVersion: argoproj.io/v1alpha1
   kind: ApplicationSet
   metadata:
     name: bgdk-2
     namespace: bgdk
   spec:
     generators:
       - clusterDecisionResource:
           configMapRef: acm-placement
           labelSelector:
             matchLabels:
               cluster.open-cluster-management.io/placement: all-openshift-clusters
           requeueAfterSeconds: 30
     template:
       metadata:
         annotations:
           apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
           argocd.argoproj.io/skip-reconcile: "true"
         labels:
           apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true"
         name: '{{name}}-bgdk'
       spec:
         destination:
           namespace: bgdk
           server: https://kubernetes.default.svc
         project: bgdk
         source:
           path: apps/bgd/overlays/bgdk
           repoURL: https://github.com/redhat-developer-demos/openshift-gitops-examples.git
           targetRevision: HEAD
         syncPolicy:
           automated: {}

7. Apply the YAML file sample by running the following command:

   oc apply -f applicationset-example.yaml
Additional resources
To learn more about the Argo CD ApplicationSet resource, see the following resources:
1.3. Enabling the ApplicationSet resource in any namespace
You can enable the ApplicationSet resources in any namespace on your hub clusters.
To enable your Argo CD ApplicationSet resources, complete the following sections:
Required access: Cluster administrator
1.3.1. Enabling the ApplicationSet resource in any namespace on the hub cluster
To enable the Argo CD ApplicationSet resource in any namespace on your hub cluster, complete the following steps:
1. From your command-line interface, clone the GitHub repository by running the following command:

   git clone https://github.com/stolostron/multicloud-integrations

2. Go to the directory of the repository that you cloned by running the following command:

   cd multicloud-integrations/deploy/appset-any-namespace

3. Enable the ApplicationSet resource in any namespace by running the following script:

   ./setup-appset-any-namespace.sh --namespace openshift-gitops --argocd-name openshift-gitops

4. Verify that the OpenShift GitOps instance restarted and is running on your hub cluster by running the following command on your hub cluster:

   oc get pods -n openshift-gitops
1.3.2. Enabling the Application resource in any namespace on the managed clusters
The Red Hat Advanced Cluster Management OpenShift GitOps add-on launches an OpenShift GitOps instance that you can use to enable the Application resource in any namespace on your managed clusters. To enable the Argo CD Application resource in any namespace on the managed clusters, complete the following steps:
1. Create a global ManagedClusterSetBinding resource by adding the following YAML file sample:

   apiVersion: cluster.open-cluster-management.io/v1beta2
   kind: ManagedClusterSetBinding
   metadata:
     name: global
     namespace: openshift-gitops
   spec:
     clusterSet: global

2. Apply the YAML file sample by running the following command:

   oc apply -f managedclustersetbinding-example.yaml

3. Create a Placement custom resource for selecting the managed clusters where the OpenShift GitOps add-on gets enabled. Add the following YAML file sample:

   apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: all-openshift-clusters
     namespace: openshift-gitops
   spec:
     tolerations:
       - key: cluster.open-cluster-management.io/unreachable
         operator: Exists
       - key: cluster.open-cluster-management.io/unavailable
         operator: Exists
     predicates:
       - requiredClusterSelector:
           labelSelector:
             matchExpressions:
               - key: vendor
                 operator: "In"
                 values:
                   - OpenShift

4. Apply the YAML file sample by running the following command:

   oc apply -f placement-example.yaml

5. Create the GitOpsCluster resource and add the gitopsAddon specification. Your YAML file might resemble the following sample:

   apiVersion: apps.open-cluster-management.io/v1beta1
   kind: GitOpsCluster
   metadata:
     name: argo-acm-importer
     namespace: openshift-gitops
   spec:
     argoServer:
       cluster: notused
       argoNamespace: openshift-gitops
     placementRef:
       kind: Placement
       apiVersion: cluster.open-cluster-management.io/v1beta1
       name: all-openshift-clusters
       namespace: openshift-gitops
     gitopsAddon:
       enabled: true
       overrideExistingConfigs: true

6. Apply the YAML file sample by running the following command:

   oc apply -f gitopscluster-example.yaml

7. Verify that the OpenShift GitOps instance restarted and is running on your managed clusters by running the following command on your managed clusters:

   oc get pods -n openshift-gitops
Additional resources
Continue to fully manage your Argo CD ApplicationSet resources by deploying them. For directions, see Deploying the Argo CD ApplicationSet resource in any namespace for pull model (Technology Preview).
To learn more about the Argo CD ApplicationSet resources, see the following resources:
1.4. Registering managed clusters to Red Hat OpenShift GitOps operator
To configure OpenShift GitOps with the Push model, you can register one or more Red Hat Advanced Cluster Management for Kubernetes managed clusters to an instance of the OpenShift GitOps operator. After registering, you can deploy applications to those clusters. Set up a continuous OpenShift GitOps environment to automate application consistency across clusters in development, staging, and production environments.
1.4.1. Prerequisites
- You need to install the Red Hat OpenShift GitOps operator on your Red Hat Advanced Cluster Management for Kubernetes hub cluster.
- Import one or more managed clusters.
- To register managed clusters to OpenShift GitOps, complete the Creating a ManagedClusterSet documentation.
- Enable the ManagedServiceAccount add-on to rotate the token that is used by the Argo CD push and pull models to connect to the managed cluster. For help enabling the add-on, see Configuring klusterlet add-ons.
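If you prefer the command line, the add-on can typically be enabled per managed cluster with a ManagedClusterAddOn resource similar to the following sketch. The cluster namespace cluster1 is a placeholder assumption; refer to the klusterlet add-on documentation for the supported procedure:

```yaml
# Hedged sketch: enable the ManagedServiceAccount add-on for one managed cluster.
# "cluster1" is a placeholder for your managed cluster namespace.
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: cluster1
spec:
  installNamespace: open-cluster-management-agent-addon
```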
1.4.2. Registering managed clusters to Red Hat OpenShift GitOps
Complete the following steps to register managed clusters to OpenShift GitOps:
- Create a managed cluster set and bind it to the namespace where OpenShift GitOps is deployed.
- See Creating a ManagedClusterSetBinding resource.
- For placement information, see Filtering with ManagedCluster objects.
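For reference, binding a cluster set to the OpenShift GitOps namespace might look like the following sketch. The cluster set name all-clusters is a placeholder assumption:

```yaml
# Hedged sketch: bind a ManagedClusterSet to the openshift-gitops namespace.
# "all-clusters" is a placeholder cluster set name.
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: all-clusters
  namespace: openshift-gitops
spec:
  clusterSet: all-clusters
```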
- In the namespace that is used in the managed cluster set binding, create a Placement custom resource to select a set of managed clusters to register to an OpenShift GitOps operator instance. Use the multicloud-integrations placement example as a template. See Using ManagedClusterSets with Placement for placement information.

  Notes:

  - Only OpenShift Container Platform clusters are registered to an OpenShift GitOps operator instance, not other Kubernetes clusters.
  - In some unstable network scenarios, the managed clusters might be temporarily placed in the unavailable or unreachable state. See Configuring placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps for more details.

- Create a GitOpsCluster custom resource to register the set of managed clusters from the placement decision to the specified instance of OpenShift GitOps. This enables the OpenShift GitOps instance to deploy applications to any of those Red Hat Advanced Cluster Management managed clusters. Use the multicloud-integrations OpenShift GitOps cluster example.

  Note: The referenced Placement resource must be in the same namespace as the GitOpsCluster resource. See the following example:

  apiVersion: apps.open-cluster-management.io/v1beta1
  kind: GitOpsCluster
  metadata:
    name: gitops-cluster-sample
    namespace: dev
  spec:
    argoServer:
      cluster: <your-local-cluster-name>
      argoNamespace: openshift-gitops
    placementRef:
      kind: Placement
      apiVersion: cluster.open-cluster-management.io/v1beta1
      name: all-openshift-clusters 1

  1: The placementRef.name value is all-openshift-clusters, and it specifies the target clusters for the OpenShift GitOps instance that is installed in argoNamespace: openshift-gitops. The argoServer.cluster specification requires the <your-local-cluster-name> value.

- Save your changes. You can now follow the OpenShift GitOps workflow to manage your applications.
1.4.3. Registering non-OpenShift Container Platform clusters to Red Hat OpenShift GitOps
You can use the Red Hat Advanced Cluster Management GitOpsCluster resource to register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster. With this capability, you can deploy application resources to the non-OpenShift Container Platform cluster by using the OpenShift GitOps console. To register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster, complete the following steps:
1. Check the API server URL in the non-OpenShift Container Platform ManagedCluster resource spec and validate it by running the following command:

   oc get managedclusters eks-1

   Verify that your output resembles the following information:

   NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                        JOINED   AVAILABLE   AGE
   eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com    True     True        37m

2. If the API server URL in the non-OpenShift Container Platform ManagedCluster resource spec is empty, update it manually by completing the following steps:

   a. To complete the API server URL, edit the ManagedCluster resource spec by running the following command:

      oc edit managedclusters eks-1

   b. Verify that your YAML resembles the following file:

      spec:
        managedClusterClientConfigs:
        - caBundle: ZW1wdHlDQWJ1bmRsZQo=
          url: https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com

   c. Save the changes, then validate that the API server URL is completed by running the following command:

      oc get managedclusters eks-1

   d. Verify that your output resembles the following information:

      NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                        JOINED   AVAILABLE   AGE
      eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com    True     True        37m

3. To verify that the cluster secret is generated, go to the openshift-gitops namespace and confirm that the GitOpsCluster resource status reports successful.

Notes:

The API server URL for all types of non-OpenShift Container Platform ManagedCluster resources renders automatically if you use the following import modes:

- Entering your server URL and API token for the existing cluster.
- Entering the kubeconfig file for the existing cluster.

The following cases can make the API server URL empty for one of the ManagedCluster resources:

- The non-OpenShift Container Platform cluster was imported to the Red Hat Advanced Cluster Management hub cluster before version 2.12.
- The non-OpenShift Container Platform cluster was manually imported to the Red Hat Advanced Cluster Management hub cluster through the import mode, Run import commands.
1.4.4. Red Hat OpenShift GitOps token
When you integrate with the OpenShift GitOps operator, for every managed cluster that is bound to the OpenShift GitOps namespace through the placement and ManagedClusterSetBinding custom resources, a secret with a token to access the ManagedCluster is created in the OpenShift GitOps instance server namespace.
The OpenShift GitOps controller needs the secret to sync resources to the managed cluster. By default, the application-manager service account works with cluster administrator permissions on the managed cluster to generate the secure OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is openshift-gitops.
The OpenShift GitOps instance server namespace renders secure cluster secrets based on the following priority order:
- Cluster proxy service
The cluster proxy service is the default option because the cluster-proxy add-on is always enabled in multicluster engine operator. You can use the secure Argo CD cluster secrets that are rendered by the cluster proxy service in all Kubernetes clusters. The secrets include the following components:

- Server URL: The cluster proxy URL for the managed cluster.
- caData: The service Certificate Authority (CA) from the openshift-service-ca config map in the managed cluster namespace for Transport Layer Security (TLS).
- Bearer token: The ManagedServiceAccount application-manager token.
- Managed cluster client configurations
If the cluster proxy is unavailable, for example, if the cluster-proxy add-on is disabled in multicluster engine operator, the system automatically detects the API server endpoint and CA of the managed cluster. You can use the secure Argo CD cluster secrets that are rendered by the managed cluster client configurations only in OpenShift Container Platform clusters. The secrets include the following components:

- Server URL and caData: Derived from the client configuration in the managed cluster specification.
- Bearer token: Derived from the same ManagedServiceAccount application-manager token.
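For orientation, a rendered cluster secret generally follows the standard Argo CD declarative cluster secret format, similar to the following sketch. The secret name, cluster name, and server URL are placeholder assumptions, and the exact labels the controller adds can differ by version:

```yaml
# Hedged sketch of an Argo CD cluster secret in the openshift-gitops namespace.
# "cluster1" and the server URL are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-cluster-secret
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks the secret as an Argo CD cluster secret
type: Opaque
stringData:
  name: cluster1
  server: https://cluster-proxy-user.example.com/cluster1   # placeholder server URL
  config: |
    {
      "bearerToken": "<token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded-ca>"
      }
    }
```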
If you do not want this default behavior, create a service account with customized permissions on the managed cluster for generating the OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is still openshift-gitops. For more information, see Creating a customized service account for Argo CD push model.
1.4.5. Additional resources
For more information, see the following resources and examples:
1.5. Configuring application placement tolerations for GitOps
Red Hat Advanced Cluster Management provides a way for you to register managed clusters that deploy applications to Red Hat OpenShift GitOps.
In some unstable network scenarios, the managed clusters might be temporarily placed in an unavailable state. If a Placement resource is used to facilitate the deployment of applications, add the following tolerations so that the Placement resource continues to include unavailable clusters. The following example shows a Placement resource with tolerations:
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
name: placement
namespace: ns1
spec:
tolerations:
- key: cluster.open-cluster-management.io/unreachable
operator: Exists
- key: cluster.open-cluster-management.io/unavailable
operator: Exists
1.6. Deploying Argo CD with the Push and Pull model
With the Push model, the Argo CD server on the hub cluster deploys the application resources on the managed clusters. With the Pull model, the application resources are propagated by the Propagation controller to the managed clusters by using manifestWork.
For both models, the same ApplicationSet CRD is used to deploy the application to the managed cluster.
Required access: Cluster administrator
1.6.1. Prerequisites
View the following prerequisites for the Argo CD Pull model:
Important:
- If your openshift-gitops-argocd-application-controller service account is not assigned as a cluster administrator, the Red Hat OpenShift GitOps application controller might not deploy resources. The application status might send an error similar to the following error:

  cannot create resource "services" in API group "" in the namespace
  "mortgage",deployments.apps is forbidden: User
  "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller"

- After you install the OpenShift GitOps operator on the managed clusters, you must create the ClusterRoleBinding cluster administrator privileges on the same managed clusters. To add the ClusterRoleBinding cluster administrator privileges to your managed clusters, see the following example YAML:

  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: argo-admin
  subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin

  If you are not a cluster administrator and need to resolve this issue, complete the following steps:

  1. Create all namespaces on each managed cluster where the Argo CD application will be deployed.
  2. Add the managed-by label to each namespace. If an Argo CD application is deployed to multiple namespaces, each namespace must be managed by Argo CD. See the following example with the managed-by label:

     apiVersion: v1
     kind: Namespace
     metadata:
       name: mortgage2
       labels:
         argocd.argoproj.io/managed-by: openshift-gitops

- You must declare all application destination namespaces in the repository for the application and include the managed-by label in the namespaces. Refer to Additional resources to learn how to declare a namespace.
See the following requirements to use the Argo CD Pull model:
- The Red Hat OpenShift GitOps operator must be installed on the hub cluster and the target managed clusters in the openshift-gitops namespace.
- The hub cluster OpenShift GitOps operator must be version 1.9.0 or later.
- The version of the Red Hat OpenShift GitOps operator on the hub cluster must be equal to or more recent than the version of the operator on the managed clusters.
- You need the ApplicationSet controller to propagate the Argo CD application template for a managed cluster.
- Every managed cluster must have a cluster secret in the Argo CD server namespace on the hub cluster, which is required by the Argo CD ApplicationSet controller to propagate the Argo CD application template for a managed cluster.

  To create the cluster secret, create a GitOpsCluster resource that contains a reference to a Placement resource. The Placement resource selects all the managed clusters that need to support the Pull model. When the OpenShift GitOps cluster controller reconciles, it creates the cluster secrets for the managed clusters in the Argo CD server namespace.
1.6.2. Architecture
For both the Push and Pull models, the Argo CD ApplicationSet controller on the hub cluster reconciles to create application resources for each target managed cluster. See the following information about the architecture of both models:
1.6.2.1. Push model architecture
- With Push model, OpenShift Container Platform GitOps applies resources directly from a centralized hub cluster to the managed clusters.
- An Argo CD application that is running on the hub cluster communicates with the GitHub repository and deploys the manifests directly to the managed clusters.
- Push model implementation only contains the Argo CD application on the hub cluster, which has credentials for managed clusters. The Argo CD application on the hub cluster can deploy the applications to the managed clusters.
- Important: With a large number of managed clusters that require resource application, consider potential strain on the OpenShift GitOps controller memory and CPU usage. To optimize resource management, see Configuring resource quota or requests in the Red Hat OpenShift GitOps documentation.
- By default, the Push model is used to deploy the application unless you add the apps.open-cluster-management.io/ocm-managed-cluster and apps.open-cluster-management.io/pull-to-ocm-managed-cluster annotations to the template section of the ApplicationSet.
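The annotations from the previous item sit in the template metadata of the ApplicationSet. A minimal fragment might look like the following sketch, which also carries the skip-reconcile annotation that the Pull model requires:

```yaml
# Fragment of an ApplicationSet spec showing the Pull model opt-in annotations and label.
template:
  metadata:
    annotations:
      apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
      argocd.argoproj.io/skip-reconcile: "true"
    labels:
      apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true"
```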
1.6.2.2. Pull model architecture
- The Pull model can provide scalability relief compared to the Push model by reducing stress on the controller on the hub cluster, but it requires more requests and status reporting.
- With the Pull model, OpenShift GitOps does not apply resources directly from a centralized hub cluster to the managed clusters. Instead, the Argo CD Application is propagated from the hub cluster to the managed clusters.
- The Pull model implementation applies the OpenShift Cluster Manager registration, placement, and manifestWork APIs so that the hub cluster can use the secure communication channel between the hub cluster and the managed cluster to deploy resources.
- Each managed cluster individually communicates with the GitHub repository to deploy the resource manifests locally, so you must install and configure the OpenShift GitOps operator on each managed cluster.
- An Argo CD server must be running on each target managed cluster. The Argo CD application resources are replicated on the managed clusters, which are then deployed by the local Argo CD server. The distributed Argo CD applications on the managed clusters are created with a single Argo CD ApplicationSet resource on the hub cluster.
- The managed cluster is determined by the value of the ocm-managed-cluster annotation.
- For successful implementation of the Pull model, the Argo CD application controller must ignore Push model application resources with the argocd.argoproj.io/skip-reconcile annotation in the template section of the ApplicationSet.
- For the Pull model, the Argo CD application controller on the managed cluster reconciles to deploy the application.
- The Pull model Resource sync controller on the hub cluster queries the OpenShift Cluster Manager search V2 component on each managed cluster periodically to retrieve the resource list and error messages for each Argo CD application.
- The Aggregation controller on the hub cluster creates and updates the MulticlusterApplicationSetReport from across clusters by using the data from the Resource sync controller and the status information from manifestWork.
- The status of the deployments is gathered back to the hub cluster, but not all the detailed information is transmitted. Additional status updates are periodically scraped to provide an overview. The status feedback is not real-time, and each managed cluster OpenShift GitOps operator needs to communicate with the Git repository, which results in multiple requests.
1.6.3. Creating the ApplicationSet custom resource
The Argo CD ApplicationSet resource is used to deploy applications on the managed clusters with the Push or Pull model. The resource uses a placement resource in the generator field to get a list of managed clusters.
- For the Pull model, set the destination for the application to the default local Kubernetes server, as displayed in the following example. The application is deployed locally by the application controller on the managed cluster.
- Add the annotations that are required to override the default Push model, as displayed in the following example ApplicationSet YAML, which uses the Pull model with template annotations:

  apiVersion: argoproj.io/v1alpha1
  kind: ApplicationSet
  metadata:
    name: guestbook-allclusters-app-set
    namespace: openshift-gitops
  spec:
    generators:
    - clusterDecisionResource:
        configMapRef: ocm-placement-generator
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: aws-app-placement
        requeueAfterSeconds: 30
    template:
      metadata:
        annotations:
          apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}' 1
          apps.open-cluster-management.io/ocm-managed-cluster-app-namespace: openshift-gitops
          argocd.argoproj.io/skip-reconcile: "true" 2
        labels:
          apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true" 3
        name: '{{name}}-guestbook-app'
      spec:
        destination:
          namespace: guestbook
          server: https://kubernetes.default.svc
        project: default
        sources:
        - repoURL: https://github.com/argoproj/argocd-example-apps.git
          targetRevision: main
          path: guestbook
        syncPolicy:
          automated: {}
          syncOptions:
          - CreateNamespace=true

  1: The apps.open-cluster-management.io/ocm-managed-cluster annotation identifies the target managed cluster.
  2: The argocd.argoproj.io/skip-reconcile annotation makes the hub cluster Argo CD application controller ignore the generated Push model application resource.
  3: The apps.open-cluster-management.io/pull-to-ocm-managed-cluster label enables Pull model propagation for the generated application.
1.6.4. MulticlusterApplicationSetReport
- For the Pull model, the MulticlusterApplicationSetReport aggregates application status from across your managed clusters.
- The report includes the list of resources and the overall status of the application from each managed cluster.
- A separate report resource is created for each Argo CD ApplicationSet resource. The report is created in the same namespace as the ApplicationSet. The report includes the following items:
- A list of resources for the Argo CD application
- The overall sync and health status for each Argo CD application
- An error message for each cluster where the overall status is out of sync or unhealthy
- A summary status of all the states of your managed clusters
- The Resource sync controller and the Aggregation controller both run every 10 seconds to create the report.
The two controllers, along with the Propagation controller, run in separate containers in the same multicluster-integrations pod, as shown in the following example output:

NAMESPACE                 NAME                                        READY   STATUS
open-cluster-management   multicluster-integrations-7c46498d9-fqbq4   3/3     Running
The following is an example MulticlusterApplicationSetReport YAML file for the guestbook application:
apiVersion: apps.open-cluster-management.io/v1alpha1
kind: MulticlusterApplicationSetReport
metadata:
  labels:
    apps.open-cluster-management.io/hosting-applicationset: openshift-gitops.guestbook-allclusters-app-set
  name: guestbook-allclusters-app-set
  namespace: openshift-gitops
statuses:
  clusterConditions:
  - cluster: cluster1
    conditions:
    - message: 'Failed sync attempt: one or more objects failed to apply, reason: services is forbidden: User "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller" cannot create resource "services" in API group "" in the namespace "guestbook",deployments.apps is forbidden: User <name> cannot create resource "deployments" in API group "apps" in the namespace "guestboo...'
      type: SyncError
    healthStatus: Missing
    syncStatus: OutOfSync
  - cluster: pcluster1
    healthStatus: Progressing
    syncStatus: Synced
  - cluster: pcluster2
    healthStatus: Progressing
    syncStatus: Synced
  summary:
    clusters: "3"
    healthy: "0"
    inProgress: "2"
    notHealthy: "3"
    notSynced: "1"
    synced: "2"
Note: If a resource fails to deploy, the resource is not included in the resource list. See error messages for information.
1.6.5. Additional resources
- See Configuring an OpenShift cluster by deploying an application with cluster configurations in the Red Hat OpenShift GitOps documentation.
- See Setting up an Argo CD instance in the Red Hat OpenShift GitOps documentation.
1.7. Managing policy definitions with Red Hat OpenShift GitOps
With the ArgoCD resource, you can use Red Hat OpenShift GitOps to manage policy definitions by granting OpenShift GitOps access to create policies on the Red Hat Advanced Cluster Management hub cluster.
1.7.1. Prerequisites
You must log in to your hub cluster.
Required access: Cluster administrator
Deprecated: PlacementRule
1.7.2. Creating a ClusterRole resource for OpenShift GitOps
To create a ClusterRole resource for OpenShift GitOps with access to create, read, update, and delete policies and placements, complete the following steps:
Create a ClusterRole from the console. Your ClusterRole might resemble the following example:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-policy-admin
rules:
- verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
  apiGroups:
  - policy.open-cluster-management.io
  resources:
  - policies
  - configurationpolicies
  - certificatepolicies
  - operatorpolicies
  - policysets
  - placementbindings
- verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
  apiGroups:
  - apps.open-cluster-management.io
  resources:
  - placementrules
- verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
  apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - placements
  - placements/status
  - placementdecisions
  - placementdecisions/status

Create a ClusterRoleBinding object to grant the OpenShift GitOps service account access to the openshift-gitops-policy-admin ClusterRole object. Your ClusterRoleBinding might resemble the following example:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-policy-admin
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-policy-admin

Notes:
- When a Red Hat Advanced Cluster Management policy definition is deployed with OpenShift GitOps, a copy of the policy is created in each managed cluster namespace for resolving hub template differences. These copies are called replicated policies.
- To prevent OpenShift GitOps from repeatedly deleting this replicated policy or showing that the Argo CD Application is out of sync, the argocd.argoproj.io/compare-options: IgnoreExtraneous annotation is automatically set on each replicated policy by the Red Hat Advanced Cluster Management policy framework.
- There are labels and annotations used by Argo CD to track objects. For replicated policies to not show up at all in Argo CD, disable the Argo CD tracking labels and annotations by setting spec.copyPolicyMetadata to false on the Red Hat Advanced Cluster Management policy definition.
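The setting from the last note can be sketched as the following minimal fragment of a Red Hat Advanced Cluster Management policy definition; the policy name is illustrative:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: example-policy        # illustrative name
  namespace: policies
spec:
  copyPolicyMetadata: false   # do not copy Argo CD tracking labels/annotations to replicated policies
  disabled: false
  policy-templates: []        # your ConfigurationPolicy templates go here
```

With copyPolicyMetadata set to false, the replicated policies no longer carry the Argo CD tracking metadata, so they do not appear in the Argo CD resource view at all.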
1.7.3. Integrating the Policy Generator with OpenShift GitOps
You can use OpenShift GitOps to generate policies by using the Policy Generator through GitOps. Since the Policy Generator does not come preinstalled in the OpenShift GitOps container image, you must complete customizations.
1.7.3.1. Prerequisites
- You must install OpenShift GitOps on your Red Hat Advanced Cluster Management hub cluster.
- You must log in to the hub cluster.
1.7.3.2. Accessing the Policy Generator from OpenShift GitOps
To access the Policy Generator from OpenShift GitOps, you must configure the Init Container to copy the Policy Generator binary from the Red Hat Advanced Cluster Management Application Subscription container image. You must also configure OpenShift GitOps by providing the --enable-alpha-plugins flag when it runs Kustomize.
To create, read, update, and delete policies and placements with the Policy Generator, grant access to the Policy Generator from OpenShift GitOps. Complete the following steps:
Edit the OpenShift GitOps argocd object with the following command:

oc -n openshift-gitops edit argocd openshift-gitops

To update the Policy Generator to a newer version, update the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9 image used by the Init Container to a newer tag. Replace the <version> parameter with the latest Red Hat Advanced Cluster Management version in your ArgoCD resource.

Your ArgoCD resource might resemble the following YAML file:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  kustomizeBuildOptions: --enable-alpha-plugins
  repo:
    env:
    - name: KUSTOMIZE_PLUGIN_HOME
      value: /etc/kustomize/plugin
    initContainers:
    - args:
      - -c
      - cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator
      command:
      - /bin/bash
      image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<version>
      name: policy-generator-install
      volumeMounts:
      - mountPath: /policy-generator-tmp
        name: policy-generator
    volumeMounts:
    - mountPath: /etc/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator
      name: policy-generator
    volumes:
    - emptyDir: {}
      name: policy-generator

Note: Alternatively, you can create a ConfigurationPolicy resource that contains the ArgoCD manifest and template the version to match the version set in the MulticlusterHub:

image: '{{ (index (lookup "apps/v1" "Deployment" "open-cluster-management" "multicluster-operators-hub-subscription").spec.template.spec.containers 0).image }}'

If you want to enable the processing of Helm charts within the Kustomize directory before generating policies, set the POLICY_GEN_ENABLE_HELM environment variable to "true" in the spec.repo.env field:

env:
- name: POLICY_GEN_ENABLE_HELM
  value: "true"

To create, read, update, and delete policies and placements, create a ClusterRoleBinding object to grant the OpenShift GitOps service account access to the Red Hat Advanced Cluster Management hub cluster. Your ClusterRoleBinding might resemble the following resource:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-policy-admin
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller
  namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-policy-admin
1.7.4. Configuring policy health checks in OpenShift GitOps
Use OpenShift GitOps with the ArgoCD resource to define custom logic that sets the health status of a resource based on the resource state by adding to the resourceHealthChecks field. For example, you can define custom health checks that only report a policy as healthy if your policy is compliant.
Important: To verify that you did not download something malicious from the Internet, review every policy before you apply it.
To define health checks for your resource kinds, complete the following steps:
Add a health check to your CertificatePolicy, ConfigurationPolicy, OperatorPolicy, and Policy resources by downloading the argocd-policy-healthchecks.yaml file. Run the following command:

wget https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/stable/CM-Configuration-Management/argocd-policy-healthchecks.yaml

Apply the argocd-policy-healthchecks.yaml policy by going to Governance > Create policy in the console and pasting the content into the YAML editor.

Note: You can add Placement information in the YAML editor to determine which clusters have the policy active.
- Verify that the health checks work as expected by viewing the Summary tab of the ArgoCD resource. View the health details from the Argo CD console.
1.7.5. Additional resources
- See the Understanding OpenShift GitOps documentation.
1.8. Generating a policy to install GitOps Operator
A common use of Red Hat Advanced Cluster Management policies is to install an Operator on one or more managed Red Hat OpenShift Container Platform clusters. Continue reading to learn how to generate policies by using the Policy Generator, and to install the OpenShift Container Platform GitOps Operator with a generated policy:
1.8.1. Generating a policy that installs OpenShift Container Platform GitOps
You can generate a policy that installs OpenShift Container Platform GitOps by using the Policy Generator. The OpenShift Container Platform GitOps operator offers the all namespaces installation mode, which you can view in the following example. Create a Subscription manifest file called openshift-gitops-subscription.yaml, similar to the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
To pin to a specific version of the operator, add the following parameter and value: spec.startingCSV: openshift-gitops-operator.v<version>. Replace <version> with your preferred version.
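For example, the pinned Subscription manifest might resemble the following sketch; the <version> value remains a placeholder for your preferred version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: openshift-gitops-operator.v<version>  # pins the operator to a specific version
```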
A PolicyGenerator configuration file is required. Use the configuration file named policy-generator-config.yaml to generate a policy to install OpenShift Container Platform GitOps on all OpenShift Container Platform managed clusters. See the following example:
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-openshift-gitops
policyDefaults:
  namespace: policies
  placement:
    clusterSelectors:
      vendor: "OpenShift"
  remediationAction: enforce
policies:
- name: install-openshift-gitops
  manifests:
  - path: openshift-gitops-subscription.yaml
The last required file is kustomization.yaml, which requires the following configuration:
generators:
- policy-generator-config.yaml
The generated policy might resemble the following file with PlacementRule (Deprecated):
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-openshift-gitops
  namespace: policies
spec:
  clusterSelector:
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-openshift-gitops
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-openshift-gitops
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-openshift-gitops
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: install-openshift-gitops
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-openshift-gitops
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: openshift-gitops-operator
              namespace: openshift-operators
            spec:
              channel: stable
              name: openshift-gitops-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        remediationAction: enforce
        severity: low
Generating policies from manifests in the OpenShift Container Platform documentation is supported. Any configuration guidance from the OpenShift Container Platform documentation can be applied by using the Policy Generator.
1.8.2. Using policy dependencies with OperatorGroups
When you install an operator with an OperatorGroup manifest, the OperatorGroup must exist on the cluster before the Subscription is created. Use the policy dependency feature along with the Policy Generator to ensure that the OperatorGroup policy is compliant before you enforce the Subscription policy.
Set up policy dependencies by listing the manifests in the order that you want. For example, you might want to create the namespace policy first, create the OperatorGroup next, and create the Subscription last.
Enable the policyDefaults.orderManifests parameter and disable policyDefaults.consolidateManifests in the Policy Generator configuration manifest to automatically set up dependencies between the manifests.
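The steps above can be sketched as the following Policy Generator configuration; the policy name and manifest file names are illustrative:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-operator
policyDefaults:
  namespace: policies
  orderManifests: true          # each manifest must be compliant before the next is enforced
  consolidateManifests: false   # required so each manifest gets its own policy
policies:
- name: install-operator
  manifests:
  - path: namespace.yaml        # illustrative: created first
  - path: operator-group.yaml   # illustrative: waits on the namespace policy
  - path: subscription.yaml     # illustrative: waits on the OperatorGroup policy
```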
1.9. Creating a customized service account for Argo CD push model
Create a service account on a managed cluster by creating a ManagedServiceAccount resource on the hub cluster. Use the ClusterPermission resource to grant specific permissions to the service account.
Creating a customized service account for use with the Argo CD push model includes the following benefits:
- An Application manager add-on runs on each managed cluster. By default, the Argo CD controller uses the service account Application manager to push these resources to the managed cluster.
- The Application manager service account has a large set of permissions because the application subscription add-on uses the Application manager service to deploy applications on the managed cluster. Do not use the Application manager service account if you want a limited set of permissions.
- You can specify a different service account that you want the Argo CD push model to use. When the Argo CD controller pushes resources from the centralized hub cluster to the managed cluster, you can use a different service account than the default Application manager. By using a different service account, you can control the permissions that are granted to this service account.
- The service account must exist on the managed cluster. To facilitate the creation of the service account with the associated permissions, use the ManagedServiceAccount resource and the new ClusterPermission resource on the centralized hub cluster.
After completing all the following procedures, you can grant cluster permissions to your managed service account. With the cluster permissions, your managed service account has the necessary permissions to deploy your application resources on the managed clusters. Complete the following procedures:
1.9.1. Creating a managed service account
The ManagedServiceAccount custom resource on the hub provides a convenient way to create a ServiceAccount on the managed clusters. When a ManagedServiceAccount custom resource is created in the <managed-cluster> namespace on the hub cluster, a ServiceAccount is created on the managed cluster.
To create a managed service account, see Enabling ManagedServiceAccount add-ons.
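As a sketch, a ManagedServiceAccount created on the hub might resemble the following; the resource name and the cluster1 namespace are illustrative, and the shape follows the ManagedServiceAccount template used in the policy example later in this section:

```yaml
apiVersion: authentication.open-cluster-management.io/v1alpha1
kind: ManagedServiceAccount
metadata:
  name: managed-sa-sample
  namespace: cluster1   # illustrative: the managed cluster namespace on the hub cluster
spec:
  rotation: {}          # token rotation settings; empty uses the defaults
```

Creating this resource in the managed cluster namespace on the hub results in a ServiceAccount of the same name on that managed cluster.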
1.9.2. Using a managed service account in the GitOpsCluster resource
The GitOpsCluster resource uses placement to import selected managed clusters into Argo CD, including the creation of the Argo CD cluster secrets, which contain information used to access the clusters. By default, the Argo CD cluster secret uses the application manager service account to access the managed clusters.
- To update the GitOpsCluster resource to use the managed service account, add the managedServiceAccountRef property with the name of the managed service account.

Save the following sample as a gitops.yaml file to create a GitOpsCluster custom resource:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-importer
  namespace: openshift-gitops
spec:
  managedServiceAccountRef: <managed-sa-sample>
  argoServer:
    cluster: notused
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops

Run oc apply -f gitops.yaml to apply the file.

Go to the openshift-gitops namespace and verify that there is a new Argo CD cluster secret with the name <managed cluster-managed-sa-sample-cluster-secret>. Run the following command:

oc get secrets -n openshift-gitops <managed cluster-managed-sa-sample-cluster-secret>

See the following output to verify:

NAME                                                 TYPE     DATA   AGE
<managed cluster-managed-sa-sample-cluster-secret>   Opaque   3      4m2s
1.9.3. Creating an Argo CD application
Deploy an Argo CD application from the Argo CD console by using the push model. The Argo CD application is deployed with the managed service account, <managed-sa-sample>.
- Log in to the Argo CD console.
- Click Create a new application.
- Choose the cluster URL.
- Go to your Argo CD application and verify that it has the given permissions, like roles and cluster roles, that you propagated to <managed cluster>.
1.9.4. Using policy to create managed service accounts and cluster permissions
When the GitOpsCluster resource is updated with the ManagedServiceAccountRef, each managed cluster in the placement of this GitOpsCluster needs to have the service account. If you have several managed clusters, it becomes tedious for you to create the managed service account and cluster permission for each managed cluster. You can simplify this process by using a policy to create the managed service account and cluster permission for all your managed clusters.
When you apply the policy to the hub cluster, the placement of the policy is bound to the local cluster. The policy replicates the ManagedServiceAccount and ClusterPermission resources to the managed cluster namespace for all of the managed clusters in the placement of the GitOpsCluster resource.
Using a policy to create the ManagedServiceAccount and ClusterPermission resources includes the following attributes:
- Updating the ManagedServiceAccount and ClusterPermission object templates in the policy results in updates to all of the ManagedServiceAccount and ClusterPermission resources in all of the managed clusters.
- Updates made directly to the ManagedServiceAccount and ClusterPermission resources are reverted back to the original state because the resources are enforced by the policy.
- If the placement decision for the GitOpsCluster placement changes, the policy manages the creation and deletion of the resources in the managed cluster namespaces.
- To create a policy that creates a managed service account and cluster permission, create a YAML file with the following content:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gitops
  namespace: openshift-gitops
  annotations:
    policy.open-cluster-management.io/standards: NIST-CSF
    policy.open-cluster-management.io/categories: PR.PT Protective Technology
    policy.open-cluster-management.io/controls: PR.PT-3 Least Functionality
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-gitops-sub
      spec:
        pruneObjectBehavior: None
        remediationAction: enforce
        severity: low
        object-templates-raw: |
          {{ range $placedec := (lookup "cluster.open-cluster-management.io/v1beta1" "PlacementDecision" "openshift-gitops" "" "cluster.open-cluster-management.io/placement=aws-app-placement").items }}
          {{ range $clustdec := $placedec.status.decisions }}
          - complianceType: musthave
            objectDefinition:
              apiVersion: authentication.open-cluster-management.io/v1alpha1
              kind: ManagedServiceAccount
              metadata:
                name: <managed-sa-sample>
                namespace: {{ $clustdec.clusterName }}
              spec:
                rotation: {}
          - complianceType: musthave
            objectDefinition:
              apiVersion: rbac.open-cluster-management.io/v1alpha1
              kind: ClusterPermission
              metadata:
                name: <clusterpermission-msa-subject-sample>
                namespace: {{ $clustdec.clusterName }}
              spec:
                roles:
                - namespace: default
                  rules:
                  - apiGroups: ["apps"]
                    resources: ["deployments"]
                    verbs: ["get", "list", "create", "update", "delete"]
                  - apiGroups: [""]
                    resources: ["configmaps", "secrets", "pods", "podtemplates", "persistentvolumeclaims", "persistentvolumes"]
                    verbs: ["get", "update", "list", "create", "delete"]
                  - apiGroups: ["storage.k8s.io"]
                    resources: ["*"]
                    verbs: ["list"]
                - namespace: mortgage
                  rules:
                  - apiGroups: ["apps"]
                    resources: ["deployments"]
                    verbs: ["get", "list", "create", "update", "delete"]
                  - apiGroups: [""]
                    resources: ["configmaps", "secrets", "pods", "services", "namespace"]
                    verbs: ["get", "update", "list", "create", "delete"]
                clusterRole:
                  rules:
                  - apiGroups: ["*"]
                    resources: ["*"]
                    verbs: ["get", "list"]
                roleBindings:
                - namespace: default
                  roleRef:
                    kind: Role
                  subject:
                    apiGroup: authentication.open-cluster-management.io
                    kind: ManagedServiceAccount
                    name: <managed-sa-sample>
                - namespace: mortgage
                  roleRef:
                    kind: Role
                  subject:
                    apiGroup: authentication.open-cluster-management.io
                    kind: ManagedServiceAccount
                    name: <managed-sa-sample>
                clusterRoleBinding:
                  subject:
                    apiGroup: authentication.open-cluster-management.io
                    kind: ManagedServiceAccount
                    name: <managed-sa-sample>
          {{ end }}
          {{ end }}
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-gitops
  namespace: openshift-gitops
placementRef:
  name: lc-app-placement
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name: policy-gitops
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: lc-app-placement
  namespace: openshift-gitops
spec:
  numberOfClusters: 1
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          name: <your-local-cluster-name>
- Save the YAML in a file called policy.yaml.
- Run oc apply -f policy.yaml.
- The object template of the policy iterates through the placement decisions of the placement associated with the GitOpsCluster resource and applies the ManagedServiceAccount and ClusterPermission templates.
1.10. Managing the Red Hat OpenShift GitOps add-on
The OpenShift GitOps add-on automates the deployment and lifecycle management of OpenShift GitOps on your managed clusters. Based on your architecture and connectivity requirements, decide if you want to deploy the GitOps add-on with the ArgoCD agent component. Otherwise, you can deploy the OpenShift GitOps add-on without the ArgoCD agent.
Important: If you enable the OpenShift GitOps add-on by using the GitOpsCluster custom resource, then the GitOpsCluster disables the Push Model for all applications.
When you enable the OpenShift GitOps add-on, you have the following deployment modes:
- Basic mode: Deploys the OpenShift GitOps operator and the ArgoCD instance on managed clusters through the GitOpsCluster custom resource.
- Agent mode: Includes all basic mode components along with the ArgoCD agent for enhanced pull-based architecture.
To enable the OpenShift GitOps add-on for your selected managed clusters, reference your Placement and use the GitOpsCluster custom resource as the interface for enabling.
Prerequisites
- If you want to enable the OpenShift GitOps add-on with the ArgoCD agent, use the Agent mode. See Enabling Red Hat OpenShift GitOps add-on with ArgoCD agent.
- If you want to enable the OpenShift GitOps add-on without the ArgoCD agent, use the Basic mode. See Enabling Red Hat OpenShift GitOps add-on without the ArgoCD agent.
1.10.1. Configuring OpenShift GitOps add-on settings
The OpenShift GitOps add-on supports various configuration options to customize the deployment according to your requirements.
The OpenShift GitOps add-on supports the following configuration options in the gitopsAddon specification:
- enabled: Enable or disable the GitOps add-on. The default is false.
- gitOpsOperatorImage: Custom container image for the GitOps operator.
- gitOpsImage: Custom container image for ArgoCD components.
- redisImage: Custom container image for Redis.
- gitOpsOperatorNamespace: Namespace where the GitOps operator is deployed. The default is openshift-gitops-operator.
- gitOpsNamespace: Namespace where the ArgoCD instance is deployed. The default is openshift-gitops.
- reconcileScope: Control the ArgoCD reconciliation scope, which includes All-Namespaces or Single-Namespace. The default is Single-Namespace.
- overrideExistingConfigs: Override existing AddOnDeploymentConfig values with new values from the GitOpsCluster specification. Must be set to true when performing any uninstall operation. The default is false.
- argoCDAgent: ArgoCD agent configuration sub-section.
The Argo CD Agent supports the following configuration options in the argoCDAgent specification:
- enabled: Enable or disable the agent. The default is false.
- propagateHubCA: Propagate the hub certificate authority (CA) certificate to managed clusters. The default is true.
- image: Custom agent container image.
- serverAddress: Override the ArgoCD agent principal server address.
- serverPort: Override the ArgoCD agent principal server port.
- mode: Agent operation mode. The default is managed.
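Putting these options together, a gitopsAddon specification with agent settings might resemble the following sketch; all values shown are illustrative and use the documented defaults where noted:

```yaml
gitopsAddon:
  enabled: true
  reconcileScope: Single-Namespace    # or All-Namespaces
  overrideExistingConfigs: false      # set to true for uninstall operations
  argoCDAgent:
    enabled: true
    propagateHubCA: true              # propagate the hub CA certificate
    mode: managed                     # agent operation mode
```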
To configure the OpenShift GitOps add-on settings, complete the following steps on your hub cluster:
Customize your container images for the OpenShift GitOps components by adding the following YAML sample with the GitOpsCluster:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-custom-images
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true
    gitOpsOperatorImage: "registry.redhat.io/openshift-gitops-1/gitops-operator@sha256:..."
    gitOpsImage: "registry.redhat.io/openshift-gitops-1/argocd@sha256:..."
    redisImage: "registry.redhat.io/rhel8/redis-6@sha256:..."

Apply the YAML sample by running the following command:

oc apply -f gitopscluster-example.yaml

Customize the namespaces where you deploy the OpenShift GitOps components by adding the following YAML with the GitOpsCluster:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-custom-namespaces
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true
    gitOpsOperatorNamespace: openshift-gitops-operator
    gitOpsNamespace: openshift-gitops

Apply the YAML sample by running the following command:

oc apply -f gitopscluster-example.yaml

Specify if the ArgoCD agent can reconcile applications in all namespaces by adding the following YAML with the GitOpsCluster:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-reconcile-scope
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true
    reconcileScope:

- For the reconcileScope field, give it the All-Namespaces value if you want the ArgoCD instance to reconcile applications in all namespaces.
- For the reconcileScope field, give it the Single-Namespace value if you want the ArgoCD instance to only reconcile applications in its own namespace.

Apply the YAML sample by running the following command:

oc apply -f gitopscluster-example.yaml
Additional resources
You can skip specific OpenShift GitOps add-on functionalities that you do not want. See Skipping the OpenShift GitOps add-on enforcement.
To verify that your OpenShift GitOps add-on works, see Verifying the OpenShift GitOps add-on functions.
To verify that your ArgoCD agent works, see Verifying the ArgoCD agent function.
To learn more about OpenShift GitOps, see the Red Hat OpenShift GitOps documentation.
1.11. Enabling Red Hat OpenShift GitOps add-on with ArgoCD agent
The Agent mode enables the OpenShift GitOps add-on for the advanced pull model with the ArgoCD agent, which gets detailed statuses on the health of your applications. Use the Agent mode in environments with network restrictions or enhanced security requirements, or when you implement a pull model for application delivery. The advanced pull model is powered by the ArgoCD agent and gives you a fully automated OpenShift GitOps experience.
To enable the OpenShift GitOps add-on with the ArgoCD agent, complete the following sections:
Prerequisites
- Red Hat Advanced Cluster Management hub cluster installed
- Managed clusters registered with Red Hat Advanced Cluster Management
- OpenShift GitOps operator installed on the hub cluster
- A Placement resource to select target managed clusters
- ManagedClusterSet bound to the target namespace
- OpenShift GitOps operator subscription configured with the ArgoCD agent environment
- The ArgoCD custom resource that is configured for the Agent mode
1.11.1. Configuring subscriptions and resources
To enable the ArgoCD agent, you must configure the OpenShift GitOps operator subscription and the ArgoCD custom resource. To configure the necessary subscriptions and resources, complete the following steps:
On the hub cluster only, modify the OpenShift GitOps operator subscription to include the required environment variables by running the following command:
oc edit subscription gitops-operator -n openshift-gitops-operator

Add the following environment variables to the spec.config.env field, as shown in the following YAML sample:

spec:
  config:
    env:
      - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
        value: openshift-gitops
      - name: ARGOCD_PRINCIPAL_TLS_SERVER_ALLOW_GENERATE
        value: "false"
      - name: ARGOCD_PRINCIPAL_REDIS_SERVER_ADDRESS
        value: openshift-gitops-redis:6379

Replace the existing ArgoCD custom resource with the compatible Agent mode configuration by adding the following YAML sample:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    enabled: false
  argoCDAgent:
    principal:
      auth: mtls:CN=system:open-cluster-management:cluster:([^:]+):addon:gitops-addon:agent:gitops-addon-agent
      enabled: true
      namespaces:
        allowedNamespaces:
          - '*'

Apply the YAML sample by running the following command:

oc apply -f argocd-example.yaml

Note: On the hub cluster only, this configuration disables the traditional ArgoCD controller and enables the agent principal with mutual TLS authentication.
1.11.2. Enabling ArgoCD agent
Create a GitOpsCluster resource to manage the ArgoCD agent add-on deployment. For each managed cluster selected by the Placement resource, the GitOpsCluster controller performs the following operations:
- Creates automated PKI management
- Creates ArgoCD cluster secrets configured for agent mode
- Deploys the ArgoCD agent on each selected managed cluster
To enable the advanced pull model Argo CD Agent architecture, complete the following steps:
On your hub cluster, create a GitOpsCluster resource with the ArgoCD agent enabled by adding the following YAML sample:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-agent-clusters
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: production-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true
    argoCDAgent:
      enabled: true

Apply the YAML sample by running the following command:
oc apply -f gitopscluster-example.yaml
1.11.3. Verifying the ArgoCD installation
After the ArgoCD agent is successfully deployed, verify that the advanced pull model workflow is complete by creating an application on the hub cluster and confirming that it works on the managed cluster.
To verify the necessary installations and resources for successful deployment, complete the following steps:
Check the GitOpsCluster status for specific Agent conditions by running the following command:

oc get gitopscluster gitops-agent-clusters -n openshift-gitops -o jsonpath='{.status.conditions}' | jq

Confirm that you see the following condition types in the status:
- Ready: The GitOpsCluster is ready and all the components are functioning.
- PlacementResolved: The Placement reference is resolved and the managed clusters are retrieved.
- ClustersRegistered: The managed clusters are successfully registered with the ArgoCD server.
- AddOnDeploymentConfigsReady: The AddOnDeploymentConfigs are created for all the managed clusters.
- ManagedClusterAddOnsReady: The ManagedClusterAddOns are created and updated for the managed clusters.
- AddOnTemplateReady: The dynamic AddOnTemplate is created for the ArgoCD agent mode.
- ArgoCDAgentPrereqsReady: The agent prerequisites are set up.
- CertificatesReady: The TLS certificates are signed.
- ManifestWorksApplied: The CA certificates are propagated to managed clusters.
Create an ArgoCD application resource in the managed cluster namespace by adding the following YAML file:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: <managed cluster name>
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://<principal-external-ip:port>?agentName=<managed cluster name>
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Apply the YAML sample by running the following command:
oc apply -f application-example.yaml

On the managed cluster, verify that the application resources are deployed by running the following command:

oc get all -n guestbook

On the hub cluster, verify that the application status is reflected back by running the following command:

oc get application guestbook -n <managed cluster name>

Confirm that the status shows Synced when the application is successfully deployed.
Additional resources
Continue setting up the OpenShift GitOps add-on by completing Managing the Red Hat OpenShift GitOps add-on.
1.12. Enabling Red Hat OpenShift GitOps add-on without the ArgoCD agent
The Basic mode for the pull model does not include the ArgoCD agent, so it gives you a simpler setup for hub cluster management and reports only the essential health statuses. This mode enables the OpenShift GitOps add-on on the managed clusters that you selected with the Placement resource.
After enabling the add-on, the Basic mode deploys the OpenShift GitOps ArgoCD components that are suitable for the cluster workflows.
Prerequisites
- Red Hat Advanced Cluster Management hub cluster installed
- Managed clusters that are registered with Red Hat Advanced Cluster Management
- OpenShift GitOps operator that is installed on the hub cluster
- A Placement resource that is defined to select target managed clusters
- A ManagedClusterSet that is bound to the target namespace
To enable the OpenShift GitOps add-on without the ArgoCD agent, complete the following sections:
1.12.1. Creating a GitOpsCluster resource
To enable the basic pull model, create a GitOpsCluster resource. The controller automatically creates the following resources for each managed cluster selected by the Placement policy:
- AddOnDeploymentConfig resource in the managed cluster namespace
- ManagedClusterAddOn resource in the managed cluster namespace
The Red Hat OpenShift GitOps add-on deploys to each selected managed cluster and installs the following resources:
- OpenShift GitOps operator in the openshift-gitops-operator namespace
- ArgoCD instance in the openshift-gitops namespace
To create a GitOpsCluster resource, complete the following steps:
On your hub cluster, create a GitOpsCluster resource to enable the Red Hat OpenShift GitOps add-on by adding the following YAML sample:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-clusters
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true

Apply the YAML sample by running the following command:
oc apply -f gitopscluster-example.yaml
1.12.2. Verifying the installation
To verify the necessary installations and resources for successful deployment, complete the following steps:
Verify that the GitOpsCluster resource has a status for successful deployment by running the following command:

oc get gitopscluster gitops-clusters -n openshift-gitops -o yaml

Verify that the OpenShift GitOps add-on controller is working by running the following command:

oc get pods -n open-cluster-management-agent-addon

Verify that the OpenShift GitOps operator is working by running the following command:

oc get pods -n openshift-gitops-operator

Verify that the ArgoCD instance is working by running the following command:

oc get pods -n openshift-gitops
Additional resources
Continue setting up the OpenShift GitOps add-on by completing Managing the Red Hat OpenShift GitOps add-on.
1.13. Skipping the OpenShift GitOps add-on enforcement
The OpenShift GitOps add-on enforces certain resources on the managed clusters to maintain consistency. You can skip enforcement for specific resources by adding the gitops-addon.open-cluster-management.io/skip annotation to those resources on the managed cluster. Skipping enforcement helps when you need to customize the ArgoCD custom resource or other OpenShift GitOps components that the add-on manages.
When a resource on the managed cluster has the skip annotation, the OpenShift GitOps add-on does not update or manage that resource. The add-on checks for this annotation before applying any changes, allowing you to maintain custom configurations that are different from the default settings for the add-on.
Note: When using the skip annotation, ensure that your custom configurations are compatible with the OpenShift GitOps add-on requirements. Skipping enforcement means the OpenShift GitOps add-on does not manage or reconcile these resources, so you are responsible for maintaining their consistency and correctness.
To skip enforcement for a resource, add the following annotation to the ArgoCD custom resource on the managed cluster:
metadata:
  annotations:
    gitops-addon.open-cluster-management.io/skip: "true"
For managing skip annotations across many managed clusters at scale, use the Red Hat Advanced Cluster Management Policy to apply the annotation across the fleet.
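That fleet-scale approach can be sketched as a Policy that enforces the skip annotation on the ArgoCD custom resource of every targeted cluster. This is a minimal sketch using the standard Red Hat Advanced Cluster Management Policy API; the policy name and namespace are hypothetical, and you still need to bind the policy with a PlacementBinding to the Placement that selects your clusters:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: argocd-skip-annotation    # hypothetical name
  namespace: policies             # hypothetical namespace; use your own policy namespace
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: argocd-skip-annotation-config
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            # musthave merges the annotation into the existing ArgoCD resource
            # on each managed cluster where the policy is placed.
            - complianceType: musthave
              objectDefinition:
                apiVersion: argoproj.io/v1beta1
                kind: ArgoCD
                metadata:
                  name: openshift-gitops
                  namespace: openshift-gitops
                  annotations:
                    gitops-addon.open-cluster-management.io/skip: "true"
```

Because remediationAction is set to enforce, the annotation is applied automatically rather than only reported as non-compliant.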
1.13.1. Skipping enforcements by customizing the ArgoCD custom resource
Customizing the ArgoCD custom resource is a common use case to adjust resource limits, configure specific settings, or enable additional features.
To skip enforcement by customizing the ArgoCD custom resource, complete the following steps:
On the managed cluster, edit the ArgoCD custom resource:

oc edit argocd openshift-gitops -n openshift-gitops

Add the gitops-addon.open-cluster-management.io/skip annotation and set it to true, as displayed in the following YAML:

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
  annotations:
    gitops-addon.open-cluster-management.io/skip: "true"

Apply the YAML sample by running the following command:
oc apply -f argocd-example.yaml

Optional: Override the existing configuration values that the GitOpsCluster maintains in the AddOnDeploymentConfig specification by adding the following YAML sample:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-override-config
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-openshift-clusters
    namespace: openshift-gitops
  gitopsAddon:
    enabled: true
    overrideExistingConfigs: true
    gitOpsImage: "registry.redhat.io/openshift-gitops-1/argocd@sha256:..."

Apply the YAML sample by running the following command:
oc apply -f gitopscluster-example.yaml
Additional resources
If you want to completely uninstall the OpenShift GitOps add-on, see Uninstalling the OpenShift GitOps add-on.
1.14. Uninstalling the OpenShift GitOps add-on
To completely uninstall the OpenShift GitOps add-on, including the operator and the GitOpsCluster resource, complete the following steps:
Delete the GitOpsCluster resource by running the following command:

oc delete gitopscluster gitops-clusters -n openshift-gitops

Delete the ManagedClusterAddOn resource by running the following command:

oc -n <managed_cluster_namespace> delete managedclusteraddon gitops-addon

Verify that the ManagedClusterAddOn resources are removed from managed cluster namespaces by running the following command:

oc get managedclusteraddon gitops-addon -n <managed-cluster-name>
1.15. Verifying the OpenShift GitOps add-on functions
You can verify that your OpenShift GitOps add-on functions work by completing the following sections:
1.15.1. Verifying the GitOpsCluster status
To check the GitOpsCluster status, complete the following steps:
View the GitOpsCluster resource status by running the following command:

oc get gitopscluster <gitopscluster-name> -n openshift-gitops -o yaml

Check the status conditions for specific failure information by running the following command:
oc get gitopscluster <gitopscluster-name> -n openshift-gitops -o jsonpath='{.status.conditions}' | jq
See the following common condition types to check:
- Ready: Overall GitOpsCluster health
- PlacementResolved: Placement resource status
- ClustersRegistered: Cluster registration status
- ArgoCDAgentPrereqsReady: Agent prerequisites, if you have enabled the agent
- CertificatesReady: TLS certificate status, if you have enabled the agent
- ManifestWorksApplied: ManifestWork propagation status, if you have enabled the agent
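As a quick triage step, you can filter the conditions array for anything that is not True. The following sketch assumes jq is installed (the verification commands above already use it); the sample JSON stands in for live cluster output so the filter's behavior is visible without a cluster:

```shell
# Sample conditions, standing in for the output of:
#   oc get gitopscluster <gitopscluster-name> -n openshift-gitops -o jsonpath='{.status.conditions}'
conditions='[{"type":"Ready","status":"True"},{"type":"CertificatesReady","status":"False"}]'

# List the condition types whose status is not "True"; any output here
# points at the stage that needs attention.
echo "$conditions" | jq -c '[.[] | select(.status != "True") | .type]'
# → ["CertificatesReady"]
```

Pipe the live jsonpath output through the same filter to quickly narrow down a failing GitOpsCluster.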
1.15.2. Verifying the OpenShift GitOps add-on controller logs
To check the OpenShift GitOps add-on controller logs, complete the following steps:
- Go to the managed cluster.
Check the OpenShift GitOps add-on controller logs by running the following command:
oc logs -n open-cluster-management-agent-addon -l app=gitops-addon
1.15.3. Verifying the OpenShift GitOps operator
To confirm that the OpenShift GitOps operator works, complete the following steps:
Verify the operator is working by running the following command:
oc get pods -n openshift-gitops-operator

Check the operator logs by running the following command:
oc logs -n openshift-gitops-operator -l control-plane=controller-manager
1.16. Verifying the ArgoCD agent function
When you use the ArgoCD agent, you need to check that it works. To check that the ArgoCD instance is working, complete the following steps:
Verify Argo CD components are working by running the following command:
oc get pods -n openshift-gitops

Check the ArgoCD application controller logs by running the following command:

oc logs -n openshift-gitops -l app.kubernetes.io/name=argocd-application-controller
If you have enabled the ArgoCD agent, check that it is working by completing the following steps:
Verify the agent is working by running the following command:
oc get pods -n openshift-gitops

Check the agent logs by running the following command:

oc logs -n openshift-gitops

Verify that the ManifestWork status is set for the certificate authority (CA) propagation by running the following command:

oc get manifestwork -n <managed-cluster-name> argocd-agent-ca-propagation -o yaml
1.17. Implementing progressive rollout strategy by using ApplicationSet resource (Technology Preview)
By implementing the progressive rollout strategy for your application across your cluster fleet, for both the push and pull model, you get a Red Hat OpenShift GitOps-based process that makes changes safely across your entire fleet. With the rollout strategy, you can orchestrate staged updates across different clusters. Additionally, you can promote changes through your label-defined groups, such as development and production clusters.
When you use the ApplicationSet resource for the progressive rollout strategy, the ApplicationSet controller organizes the clusters for your application into groups. The controller processes one group at a time through the Argo CD health cycle and proceeds to the next group only when the current group reaches a Healthy status.
To learn more about implementing the progressive rollout strategy by using the ApplicationSet resource, complete the following sections:
- Enabling the progressive rollout strategy by using the ApplicationSet resource
- Understanding a sample ApplicationSet resource for the progressive rollout strategy
- Understanding labels for applications that use the progressive rollout strategy
- Understanding changes to the Git repository that uses the progressive rollout strategy
1.17.1. Enabling the progressive rollout strategy by using the ApplicationSet resource
To enable the progressive rollout strategy with your ApplicationSet resource, complete the following steps:
- On your hub cluster, update the OpenShift GitOps operator instance configuration in the openshift-gitops namespace. Enable progressive syncs for the ApplicationSet resource by running the following command:

oc -n openshift-gitops patch argocd openshift-gitops --type='merge' -p '{"spec":{"applicationSet":{"extraCommandArgs":["--enable-progressive-syncs"]}}}'

- Restart the openshift-gitops-applicationset-controller deployment in the openshift-gitops namespace. Apply the updated OpenShift GitOps operator instance configuration to create a new rollout by running the following command:
oc -n openshift-gitops rollout restart deployment openshift-gitops-applicationset-controller
1.17.2. Understanding a sample ApplicationSet resource for the progressive rollout strategy
Understand a sample ApplicationSet with the Argo CD pull model that implements the progressive rollout strategy by viewing a strategy specification. This YAML activates the progressive rollout strategy. By adding the strategy parameter section on your hub cluster, the clusters within each step can update simultaneously, while the steps themselves run in order instead of all clusters updating at once.
For example, see how a strategy is defined in the following ApplicationSet resource:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: guestbook-allclusters-app-set
namespace: openshift-gitops
spec:
generators:
- clusterDecisionResource:
configMapRef: ocm-placement-generator
labelSelector:
matchLabels:
cluster.open-cluster-management.io/placement: aws-app-placement
requeueAfterSeconds: 30
strategy:
type: RollingSync
rollingSync:
steps:
- name: dev-stage
matchExpressions:
- key: envLabel
operator: In
values: [env-dev]
maxUpdate: 100%
- name: qa-stage
matchExpressions:
- key: envLabel
operator: In
values: [env-qa]
maxUpdate: 100%
- name: prod-stage
matchExpressions:
- key: envLabel
operator: In
values: [env-prod]
maxUpdate: 100%
template:
preserveFields:
- metadata.labels.envLabel
metadata:
annotations:
apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
apps.open-cluster-management.io/ocm-managed-cluster-app-namespace: openshift-gitops
argocd.argoproj.io/skip-reconcile: "true"
labels:
apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true"
name: '{{name}}-guestbook-app'
spec:
destination:
namespace: guestbook
server: https://kubernetes.default.svc
project: default
sources:
- repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: main
path: guestbook
syncPolicy:
syncOptions:
- CreateNamespace=true
Use the preserveFields parameter to list the labels that need to be ignored by the ApplicationSet controller. You can manually apply a label that you want, for example, envLabel=env-qa. Without this field, the ApplicationSet controller overwrites or deletes labels that are not defined in the template.
1.17.3. Understanding labels for applications that use the progressive rollout strategy
The rollout strategy is used to process one group of applications at a time through the Argo CD health cycle. The controller organizes these groups of applications by matching labels that you set on the application objects.
Ensure that every application that uses the rollout strategy has a unique label that specifies the promotion order that you need. See the following examples:

You can give your applications a unique label by running the following command on your hub cluster:
oc -n openshift-gitops label applications.argoproj.io/cluster1-guestbook-app envLabel=env-dev

If you want to specify an application for quality assurance, run the following command:

oc -n openshift-gitops label applications.argoproj.io/cluster2-guestbook-app envLabel=env-qa

If you want to specify an application group for production, run the following command:
oc -n openshift-gitops label applications.argoproj.io/cluster3-guestbook-app envLabel=env-prod
1.17.4. Understanding changes to the Git repository that uses the progressive rollout strategy
To start the progressive rollout strategy for your ApplicationSet resource on your cluster fleet, the Argo CD application checks your Git repository for committed changes. When there are committed changes, the progressive rollout strategy begins.
When Argo CD starts the rollout strategy, the ApplicationSet controller detects the applications that have the OutOfSync status and schedules applications for a staged rollout. Based on the labels you use for your application groups, the ApplicationSet controller starts the first RollingSync step with the env-dev application group.
From the Argo CD health cycle, you can watch each application group move from OutOfSync to a Healthy status. After all the applications in the env-dev group have a Healthy status, the ApplicationSet controller syncs the application group with the env-qa label, then the application group with the env-prod label. After your last env-prod application group has a Healthy status, the progressive rollout strategy completes automatically across your cluster fleet.