GitOps
Chapter 1. GitOps overview
Red Hat OpenShift Container Platform GitOps and Argo CD are integrated with Red Hat Advanced Cluster Management for Kubernetes, and offer advanced features compared to the original Application Lifecycle Channel and Subscription model.
Argo CD development is active, with a large community that contributes feature enhancements and updates. By using the OpenShift Container Platform GitOps Operator, you can adopt the latest advancements in Argo CD development and receive support through the GitOps Operator subscription.
See the following topics to learn more about Red Hat Advanced Cluster Management for Kubernetes integration with OpenShift Container Platform GitOps and Argo CD:
- GitOps console
- Registering managed clusters to Red Hat OpenShift GitOps operator
- Configuring application placement tolerations for GitOps
- Deploying Argo CD with the Push and Pull model
- Generating a policy to install GitOps Operator
- Managing policy definitions with OpenShift Container Platform GitOps (Argo CD)
1.1. GitOps console
Learn more about integrated OpenShift Container Platform GitOps console features. Create and view applications, such as `ApplicationSet` and Argo CD types. An `ApplicationSet` represents Argo CD applications that are generated from the controller.
- Click Launch resource in Search to search for related resources.
- Use Search to find application resources by the component `kind` for each resource.
Important: Available actions are based on your assigned role. Learn about access requirements from the Role-based access control documentation.
1.1.1. Prerequisites
See the following prerequisites and requirements:
- For an Argo CD `ApplicationSet` to be created, you need to enable Automatically sync when cluster state changes from the Sync policy.
- For Flux with the `kustomization` controller, find Kubernetes resources with the label `kustomize.toolkit.fluxcd.io/name=<app_name>`.
- For Flux with the `helm` controller, find Kubernetes resources with the label `helm.toolkit.fluxcd.io/name=<app_name>`. See the example queries after this list.
- You need GitOps cluster resources and the GitOps operator installed to create an `ApplicationSet`. Without these prerequisites, you see no Argo server options in the console to create an `ApplicationSet`.
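For example, you can use these labels to list the Kubernetes resources that belong to a Flux application with `oc`. This is a minimal sketch; `podinfo` is a hypothetical application name:

```
# Resources reconciled by the Flux kustomization controller for the app "podinfo"
oc get deployments,services,configmaps -A -l kustomize.toolkit.fluxcd.io/name=podinfo

# Resources reconciled by the Flux helm controller for the app "podinfo"
oc get deployments,services,configmaps -A -l helm.toolkit.fluxcd.io/name=podinfo
```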
1.1.2. Querying Argo CD applications
When you search for an Argo CD application, you are directed to the Applications page. Complete the following steps to access the Argo CD application from the Search page:
- Log in to your Red Hat Advanced Cluster Management hub cluster.
- From the console header, select the Search icon.
- Filter your query with the following values: `kind:application` and `apigroup:argoproj.io`.
- Select an application to view. The Application page displays an overview of information for the application.
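If you prefer the command line, you can also list Argo CD applications directly on the hub cluster. A minimal sketch, assuming you are logged in to the hub cluster with `oc`:

```
# List Argo CD applications in all namespaces
oc get applications.argoproj.io -A

# List ApplicationSet resources in the Argo CD namespace
oc get applicationsets.argoproj.io -n openshift-gitops
```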
For more information about search, see Search in the console.
1.2. Registering managed clusters to Red Hat OpenShift GitOps operator
To configure OpenShift GitOps with the Push model, you can register a set of one or more Red Hat Advanced Cluster Management for Kubernetes managed clusters to an instance of the OpenShift GitOps operator. After registering, you can deploy applications to those clusters. Set up a continuous OpenShift GitOps environment to automate application consistency across clusters in development, staging, and production environments.
1.2.1. Prerequisites
- You need to install the Red Hat OpenShift GitOps operator on your Red Hat Advanced Cluster Management for Kubernetes hub cluster.
- Import one or more managed clusters.
- To register managed clusters to OpenShift GitOps, create a managed cluster set. See the Creating a ManagedClusterSet documentation.
1.2.2. Registering managed clusters to Red Hat OpenShift GitOps
Complete the following steps to register managed clusters to OpenShift GitOps:
- Create a managed cluster set binding to the namespace where OpenShift GitOps is deployed. For an example of binding the managed cluster set to the `openshift-gitops` namespace, see the `multicloud-integrations` `managedclusterset` binding example. In the Additional resources section, see Creating a ManagedClusterSetBinding resource for more general information about creating a `ManagedClusterSetBinding`. See Filtering ManagedClusters from ManagedClusterSets for placement information. A minimal sketch of this binding, paired with a placement, follows this procedure.
- In the namespace that is used in the managed cluster set binding, create a `Placement` custom resource to select a set of managed clusters to register to an OpenShift GitOps operator instance. Use the `multicloud-integration` placement example as a template. See Using ManagedClusterSets with Placement for placement information. Notes:
  - Only OpenShift Container Platform clusters are registered to an OpenShift GitOps operator instance, not other Kubernetes clusters.
  - In some unstable network scenarios, the managed clusters might be temporarily placed in an `unavailable` or `unreachable` state. See Configuring placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps for more details.
- Create a `GitOpsCluster` custom resource to register the set of managed clusters from the placement decision to the specified instance of OpenShift GitOps. This enables the OpenShift GitOps instance to deploy applications to any of those Red Hat Advanced Cluster Management managed clusters. Use the `multicloud-integrations` OpenShift GitOps cluster example.

  Note: The referenced `Placement` resource must be in the same namespace as the `GitOpsCluster` resource. See the following example:

  ```yaml
  apiVersion: apps.open-cluster-management.io/v1beta1
  kind: GitOpsCluster
  metadata:
    name: gitops-cluster-sample
    namespace: dev
  spec:
    argoServer:
      cluster: local-cluster
      argoNamespace: openshift-gitops
    placementRef:
      kind: Placement
      apiVersion: cluster.open-cluster-management.io/v1beta1
      name: all-openshift-clusters
  ```

  The `placementRef.name` value is `all-openshift-clusters`, and is specified as target clusters for the OpenShift GitOps instance that is installed in `argoNamespace: openshift-gitops`. The `argoServer.cluster` specification requires the `local-cluster` value.
- Save your changes. You can now follow the OpenShift GitOps workflow to manage your applications.
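For reference, a minimal sketch of the managed cluster set binding and placement pair might resemble the following YAML. This is an illustration only: the `all-clusters` cluster set name is hypothetical, and the linked `multicloud-integrations` examples remain the authoritative templates:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: all-clusters          # must match the name of an existing ManagedClusterSet
  namespace: openshift-gitops # the namespace where OpenShift GitOps is deployed
spec:
  clusterSet: all-clusters
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: all-openshift-clusters
  namespace: openshift-gitops # must match the binding namespace
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          vendor: OpenShift # select only OpenShift Container Platform clusters
```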
1.2.3. Registering non-OpenShift Container Platform clusters to Red Hat OpenShift GitOps
You can use the Red Hat Advanced Cluster Management `GitOpsCluster` resource to register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster. With this capability, you can deploy application resources to the non-OpenShift Container Platform cluster by using the OpenShift GitOps console. To register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster, complete the following steps:
- Go to the API server URL in the non-OpenShift Container Platform `ManagedCluster` resource `spec` and validate it by running the following command:

  ```
  oc get managedclusters eks-1
  ```

  Verify that your output resembles the following information:

  ```
  NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                        JOINED   AVAILABLE   AGE
  eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com    True     True        37m
  ```

- If the API server URL in the non-OpenShift Container Platform `ManagedCluster` resource `spec` is empty, update it manually by completing the following steps:
  - To complete the API server URL, edit the `ManagedCluster` resource `spec` by running the following command:

    ```
    oc edit managedclusters eks-1
    ```

    Verify that your YAML resembles the following file:

    ```yaml
    spec:
      managedClusterClientConfigs:
      - caBundle: ZW1wdHlDQWJ1bmRsZQo=
        url: https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com
    ```

  - Save the changes, then validate that the API server URL is completed by running the following command:

    ```
    oc get managedclusters eks-1
    ```

  - Verify that your output resembles the following information:

    ```
    NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                        JOINED   AVAILABLE   AGE
    eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com    True     True        37m
    ```

- To verify that the cluster secret is generated, go to the `openshift-gitops` namespace and confirm that the `GitOpsCluster` resource status reports `successful`. A command sketch for this check follows the notes in this section.

Notes:

With Red Hat Advanced Cluster Management 2.12 or later, the API server URL for all types of non-OpenShift Container Platform `ManagedCluster` resources renders automatically if you use the following import modes:
- Entering your server URL and API token for the existing cluster.
- Entering the `kubeconfig` file for the existing cluster.

The following cases can make the API server URL empty for one of the `ManagedCluster` resources:
- The non-OpenShift Container Platform cluster was imported to the Red Hat Advanced Cluster Management hub cluster before version 2.12.
- The non-OpenShift Container Platform cluster was manually imported to the Red Hat Advanced Cluster Management hub cluster in version 2.12 through the import mode, Run import commands.
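To check the `GitOpsCluster` status from the command line, you can inspect the resource on the hub cluster. This is a minimal sketch; the exact status fields can vary by version, so review the full `status` section of the output:

```
# List GitOpsCluster resources and inspect the status stanza for a successful result
oc get gitopscluster -n openshift-gitops -o yaml
```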
1.2.4. Red Hat OpenShift GitOps token
When you integrate with the OpenShift GitOps operator, a secret with a token to access the `ManagedCluster` is created in the OpenShift GitOps instance server namespace for every managed cluster that is bound to the OpenShift GitOps namespace through the placement and `ManagedClusterSetBinding` custom resources.
The OpenShift GitOps controller needs this secret to sync resources to the managed cluster. By default, the Application manager service account works with cluster administrator permissions on the managed cluster to generate the OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is `openshift-gitops`.
If you do not want this default, create a service account with customized permissions on the managed cluster for generating the OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is still `openshift-gitops`. For more information, see Creating a customized service account for Argo CD push model.
1.2.5. Additional resources
For more information, see the following resources and examples:
- Configuring application placement tolerations for GitOps
- `multicloud-integrations` managed cluster set binding
- Creating a ManagedClusterSet
- `multicloud-integration` placement
- Placement overview
- `multicloud-integrations` GitOps cluster
- Creating a ManagedClusterSetBinding resource
- About Red Hat OpenShift GitOps
1.3. Configuring application placement tolerations for GitOps
Red Hat Advanced Cluster Management provides a way for you to register managed clusters that deploy applications to Red Hat OpenShift GitOps.
In some unstable network scenarios, the managed clusters might be temporarily placed in an `Unavailable` state. If a `Placement` resource is being used to facilitate the deployment of applications, add the following tolerations for the `Placement` resource to continue to include unavailable clusters. The following example shows a `Placement` resource with tolerations:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement
  namespace: ns1
spec:
  tolerations:
  - key: cluster.open-cluster-management.io/unreachable
    operator: Exists
  - key: cluster.open-cluster-management.io/unavailable
    operator: Exists
```
1.4. Deploying Argo CD with Push and Pull model
With the Push model, the Argo CD server on the hub cluster deploys the application resources on the managed clusters. With the Pull model, the application resources are propagated by the Propagation controller to the managed clusters by using `manifestWork`.
For both models, the same `ApplicationSet` CRD is used to deploy the application to the managed cluster.
Required access: Cluster administrator
1.4.1. Prerequisites
View the following prerequisites for the Argo CD Pull model:
Important:
- If your `openshift-gitops-argocd-application-controller` service account is not assigned as a cluster administrator, the GitOps application controller might not deploy resources. The application status might send an error similar to the following error:

  ```
  cannot create resource "services" in API group "" in the namespace "mortgage",deployments.apps is forbidden: User "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller"
  ```

- After you install the OpenShift GitOps operator on the managed clusters, you must create the `ClusterRoleBinding` with cluster administrator privileges on the same managed clusters.

  To add the `ClusterRoleBinding` cluster administrator privileges to your managed clusters, see the following example YAML:

  ```yaml
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: argo-admin
  subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  ```
If you are not a cluster administrator and need to resolve this issue, complete the following steps:
- Create all namespaces on each managed cluster where the Argo CD application will be deployed.
- Add the `managed-by` label to each namespace. If an Argo CD application is deployed to multiple namespaces, each namespace should be managed by Argo CD.

  See the following example with the `managed-by` label:

  ```yaml
  apiVersion: v1
  kind: Namespace
  metadata:
    name: mortgage2
    labels:
      argocd.argoproj.io/managed-by: openshift-gitops
  ```

- You must declare all application destination namespaces in the repository for the application and include the `managed-by` label in the namespaces. Refer to Additional resources to learn how to declare a namespace.
See the following requirements to use the Argo CD Pull model:
- The GitOps operator must be installed on the hub cluster and the target managed clusters in the `openshift-gitops` namespace.
- The required hub cluster OpenShift Container Platform GitOps operator must be version 1.9.0 or later.
- The required managed cluster OpenShift Container Platform GitOps operator must be the same version as the hub cluster.
- You need the `ApplicationSet` controller to propagate the Argo CD application template for a managed cluster.
- Every managed cluster must have a cluster secret in the Argo CD server namespace on the hub cluster, which is required by the Argo CD `ApplicationSet` controller to propagate the Argo CD application template for a managed cluster.

  To create the cluster secret, create a `GitOpsCluster` resource that contains a reference to a `Placement` resource. The `Placement` resource selects all the managed clusters that need to support the Pull model. When the GitOps cluster controller reconciles, it creates the cluster secrets for the managed clusters in the Argo CD server namespace. A verification sketch follows this list.
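To confirm that the cluster secrets were generated, you can list them on the hub cluster. This is a minimal sketch: the `apps.open-cluster-management.io/acm-cluster` label is assumed to be the label that the GitOps cluster controller applies to generated cluster secrets in recent versions; if it does not match your version, list all secrets in the namespace and look for names that end with `-cluster-secret`:

```
oc get secrets -n openshift-gitops -l apps.open-cluster-management.io/acm-cluster="true"
```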
1.4.2. Architecture
For both the Push and Pull models, the Argo CD `ApplicationSet` controller on the hub cluster reconciles to create application resources for each target managed cluster. See the following information about the architecture of both models:
1.4.2.1. Architecture Push model
- With Push model, OpenShift Container Platform GitOps applies resources directly from a centralized hub cluster to the managed clusters.
- An Argo CD application that is running on the hub cluster communicates with the GitHub repository and deploys the manifests directly to the managed clusters.
- Push model implementation only contains the Argo CD application on the hub cluster, which has credentials for managed clusters. The Argo CD application on the hub cluster can deploy the applications to the managed clusters.
- Important: With a large number of managed clusters that require resource application, consider potential strain on the OpenShift GitOps controller memory and CPU usage. To optimize resource management, see Configuring resource quota or requests in the Red Hat OpenShift GitOps documentation.
- By default, the Push model is used to deploy the application unless you add the `apps.open-cluster-management.io/ocm-managed-cluster` and `apps.open-cluster-management.io/pull-to-ocm-managed-cluster` annotations to the template section of the `ApplicationSet`.
1.4.2.2. Architecture Pull model
- The Pull model can provide scalability relief compared to the Push model by reducing stress on the controller in the hub cluster, but it requires more requests and status reporting.
- With the Pull model, OpenShift Container Platform GitOps does not apply resources directly from a centralized hub cluster to the managed clusters. The Argo CD application is propagated from the hub cluster to the managed clusters.
- Pull model implementation applies OpenShift Cluster Manager registration, placement, and `manifestWork` APIs so that the hub cluster can use the secure communication channel between the hub cluster and the managed cluster to deploy resources.
- Each managed cluster individually communicates with the GitHub repository to deploy the resource manifests locally, so you must install and configure GitOps operators on each managed cluster.
- An Argo CD server must be running on each target managed cluster. The Argo CD application resources are replicated on the managed clusters, which are then deployed by the local Argo CD server. The distributed Argo CD applications on the managed clusters are created with a single Argo CD `ApplicationSet` resource on the hub cluster.
- The managed cluster is determined by the value of the `ocm-managed-cluster` annotation.
- For successful implementation of the Pull model, the Argo CD application controller must ignore Push model application resources with the `argocd.argoproj.io/skip-reconcile` annotation in the template section of the `ApplicationSet`.
- For the Pull model, the Argo CD application controller on the managed cluster reconciles to deploy the application.
- The Pull model Resource sync controller on the hub cluster queries the OpenShift Cluster Manager search V2 component on each managed cluster periodically to retrieve the resource list and error messages for each Argo CD application.
- The Aggregation controller on the hub cluster creates and updates the `MulticlusterApplicationSetReport` from across clusters by using the data from the Resource sync controller, and the status information from `manifestWork`.
. - The status of the deployments is gathered back to the hub cluster, but not all the detailed information is transmitted. Additional status updates are periodically scraped to provide an overview. The status feedback is not real-time, and each managed cluster GitOps operator needs to communicate with the Git repository, which results in multiple requests.
1.4.3. Creating the ApplicationSet custom resource
The Argo CD `ApplicationSet` resource is used to deploy applications on the managed clusters by using the Push or Pull model, with a `Placement` resource in the generator field that is used to get a list of managed clusters.
- For the Pull model, set the destination for the application to the default local Kubernetes server, as displayed in the following example. The application is deployed locally by the application controller on the managed cluster.
- Add the annotations that are required to override the default Push model, as displayed in the following example `ApplicationSet` YAML, which uses the Pull model with template annotations:

  ```yaml
  apiVersion: argoproj.io/v1alpha1
  kind: ApplicationSet
  metadata:
    name: guestbook-allclusters-app-set
    namespace: openshift-gitops
  spec:
    generators:
    - clusterDecisionResource:
        configMapRef: ocm-placement-generator
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: aws-app-placement
        requeueAfterSeconds: 30
    template:
      metadata:
        annotations:
          apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}' 1
          apps.open-cluster-management.io/ocm-managed-cluster-app-namespace: openshift-gitops
          argocd.argoproj.io/skip-reconcile: "true" 2
        labels:
          apps.open-cluster-management.io/pull-to-ocm-managed-cluster: "true" 3
        name: '{{name}}-guestbook-app'
      spec:
        destination:
          namespace: guestbook
          server: https://kubernetes.default.svc
        project: default
        sources:
        - repoURL: https://github.com/argoproj/argocd-example-apps.git
          targetRevision: main
          path: guestbook
        syncPolicy:
          automated: {}
          syncOptions:
          - CreateNamespace=true
  ```

  - 1: The `ocm-managed-cluster` annotation determines the managed cluster where the application is deployed.
  - 2: The `argocd.argoproj.io/skip-reconcile` annotation is required so that the Argo CD application controller on the hub cluster does not reconcile and push the application resources.
  - 3: The `pull-to-ocm-managed-cluster` label enables the Pull model for the generated application.
1.4.4. MulticlusterApplicationSetReport
- For the Pull model, the `MulticlusterApplicationSetReport` aggregates application status from across your managed clusters.
- The report includes the list of resources and the overall status of the application from each managed cluster.
- A separate report resource is created for each Argo CD `ApplicationSet` resource. The report is created in the same namespace as the `ApplicationSet`. The report includes the following items:
  - A list of resources for the Argo CD application
  - The overall sync and health status for each Argo CD application
  - An error message for each cluster where the overall status is `out of sync` or `unhealthy`
  - A summary status of all the states of your managed clusters
- The Resource sync controller and the Aggregation controller both run every 10 seconds to create the report.
- The two controllers, along with the Propagation controller, run in separate containers in the same `multicluster-integrations` pod, as shown in the following example output:

  ```
  NAMESPACE                 NAME                                        READY   STATUS
  open-cluster-management   multicluster-integrations-7c46498d9-fqbq4   3/3     Running
  ```

The following is an example `MulticlusterApplicationSetReport` YAML file for the `guestbook` application:

```yaml
apiVersion: apps.open-cluster-management.io/v1alpha1
kind: MulticlusterApplicationSetReport
metadata:
  labels:
    apps.open-cluster-management.io/hosting-applicationset: openshift-gitops.guestbook-allclusters-app-set
  name: guestbook-allclusters-app-set
  namespace: openshift-gitops
statuses:
  clusterConditions:
  - cluster: cluster1
    conditions:
    - message: 'Failed sync attempt: one or more objects failed to apply, reason: services is
        forbidden: User "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller"
        cannot create resource "services" in API group "" in the namespace "guestbook",deployments.apps is
        forbidden: User <name> cannot create resource "deployments" in API group "apps" in the namespace "guestboo...'
      type: SyncError
    healthStatus: Missing
    syncStatus: OutOfSync
  - cluster: pcluster1
    healthStatus: Progressing
    syncStatus: Synced
  - cluster: pcluster2
    healthStatus: Progressing
    syncStatus: Synced
  summary:
    clusters: "3"
    healthy: "0"
    inProgress: "2"
    notHealthy: "3"
    notSynced: "1"
    synced: "2"
```
Note: If a resource fails to deploy, the resource is not included in the resource list. See error messages for information.
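To inspect the report from the command line, you can query the report resource on the hub cluster. A minimal sketch, assuming the `guestbook-allclusters-app-set` example from the previous section:

```
# List all reports in the Argo CD server namespace
oc get multiclusterapplicationsetreport -n openshift-gitops

# View the full status for one report
oc get multiclusterapplicationsetreport guestbook-allclusters-app-set -n openshift-gitops -o yaml
```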
1.4.5. Additional resources
- See Configuring an OpenShift cluster by deploying an application with cluster configurations in the Red Hat OpenShift GitOps documentation.
- See Setting up an Argo CD instance in the Red Hat OpenShift GitOps documentation.
1.5. Managing policy definitions with Red Hat OpenShift GitOps
With the `ArgoCD` resource, you can use Red Hat OpenShift GitOps to manage policy definitions by granting OpenShift GitOps access to create policies on the Red Hat Advanced Cluster Management hub cluster.
1.5.1. Prerequisite
You must log in to your hub cluster.
Required access: Cluster administrator
Deprecated: PlacementRule
1.5.2. Creating a ClusterRole resource for OpenShift GitOps
To create a `ClusterRole` resource for OpenShift GitOps with access to create, read, update, and delete policies and placements, complete the following steps:
- Create a `ClusterRole` from the console. Your `ClusterRole` might resemble the following example:

  ```yaml
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: openshift-gitops-policy-admin
  rules:
  - verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
    apiGroups:
    - policy.open-cluster-management.io
    resources:
    - policies
    - configurationpolicies
    - certificatepolicies
    - operatorpolicies
    - policysets
    - placementbindings
  - verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
    apiGroups:
    - apps.open-cluster-management.io
    resources:
    - placementrules
  - verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
    apiGroups:
    - cluster.open-cluster-management.io
    resources:
    - placements
    - placements/status
    - placementdecisions
    - placementdecisions/status
  ```
- Create a `ClusterRoleBinding` object to grant the OpenShift GitOps service account access to the `openshift-gitops-policy-admin` `ClusterRole` object. Your `ClusterRoleBinding` might resemble the following example:

  ```yaml
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: openshift-gitops-policy-admin
  subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: openshift-gitops-policy-admin
  ```
Notes:
- When a Red Hat Advanced Cluster Management policy definition is deployed with OpenShift GitOps, a copy of the policy is created in each managed cluster namespace for resolving hub template differences. These copies are called replicated policies.
- To prevent OpenShift GitOps from repeatedly deleting this replicated policy or showing that the Argo CD `Application` is out of sync, the `argocd.argoproj.io/compare-options: IgnoreExtraneous` annotation is automatically set on each replicated policy by the Red Hat Advanced Cluster Management policy framework.
- There are labels and annotations used by Argo CD to track objects. For replicated policies to not show up at all in Argo CD, disable the Argo CD tracking labels and annotations by setting `spec.copyPolicyMetadata` to `false` on the Red Hat Advanced Cluster Management policy definition.
1.5.3. Integrating the Policy Generator with OpenShift GitOps
You can use OpenShift GitOps to generate policies by using the Policy Generator through GitOps. Since the Policy Generator does not come preinstalled in the OpenShift GitOps container image, you must complete customizations.
1.5.3.1. Prerequisites
- You must install OpenShift GitOps on your Red Hat Advanced Cluster Management hub cluster.
- You must log in to the hub cluster.
1.5.3.2. Accessing the Policy Generator from OpenShift GitOps
To access the Policy Generator from OpenShift GitOps, you must configure the Init Container to copy the Policy Generator binary from the Red Hat Advanced Cluster Management Application Subscription container image. You must also configure OpenShift GitOps by providing the `--enable-alpha-plugins` flag when it runs Kustomize.
To create, read, update, and delete policies and placements with the Policy Generator, grant access to the Policy Generator from OpenShift GitOps. Complete the following steps:
- Edit the OpenShift GitOps `argocd` object with the following command:

  ```
  oc -n openshift-gitops edit argocd openshift-gitops
  ```

- To update the Policy Generator to a newer version, add the `registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9` image used by the Init Container with a newer tag. Replace the `<version>` parameter with the latest Red Hat Advanced Cluster Management version in your `ArgoCD` resource.

  Your `ArgoCD` resource might resemble the following YAML file:

  ```yaml
  apiVersion: argoproj.io/v1beta1
  kind: ArgoCD
  metadata:
    name: openshift-gitops
    namespace: openshift-gitops
  spec:
    kustomizeBuildOptions: --enable-alpha-plugins
    repo:
      env:
      - name: KUSTOMIZE_PLUGIN_HOME
        value: /etc/kustomize/plugin
      initContainers:
      - args:
        - -c
        - cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator
        command:
        - /bin/bash
        image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<version>
        name: policy-generator-install
        volumeMounts:
        - mountPath: /policy-generator-tmp
          name: policy-generator
      volumeMounts:
      - mountPath: /etc/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator
        name: policy-generator
      volumes:
      - emptyDir: {}
        name: policy-generator
  ```
  Note: Alternatively, you can create a `ConfigurationPolicy` resource that contains the `ArgoCD` manifest and template the version to match the version set in the `MulticlusterHub`:

  ```yaml
  image: '{{ (index (lookup "apps/v1" "Deployment" "open-cluster-management" "multicluster-operators-hub-subscription").spec.template.spec.containers 0).image }}'
  ```
- If you want to enable the processing of Helm charts within the Kustomize directory before generating policies, set the `POLICY_GEN_ENABLE_HELM` environment variable to `"true"` in the `spec.repo.env` field:

  ```yaml
  env:
  - name: POLICY_GEN_ENABLE_HELM
    value: "true"
  ```
- To create, read, update, and delete policies and placements, create a `ClusterRoleBinding` object to grant the OpenShift GitOps service account access to the Red Hat Advanced Cluster Management hub cluster. Your `ClusterRoleBinding` might resemble the following resource:

  ```yaml
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: openshift-gitops-policy-admin
  subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: openshift-gitops-policy-admin
  ```
1.5.4. Configuring policy health checks in OpenShift GitOps
With the `ArgoCD` resource, you can use OpenShift GitOps to define custom logic that determines the current health of a specific resource based on the resource state. Define custom health checks to report the policy as healthy only when your policy is compliant. When you add a health check for a resource, you must add it as a `group` in the `resourceHealthChecks` field.
Important: To verify that you did not download something malicious from the Internet, review every policy before you apply it.
To define health checks for your resource kinds, complete the following steps:
- To configure the health check for your `CertificatePolicy` resources, edit the `ArgoCD` resource with the following command:

  ```
  oc -n openshift-gitops edit argocd openshift-gitops
  ```

  Your `ArgoCD` resource might resemble the following YAML file:

  ```yaml
  apiVersion: argoproj.io/v1beta1
  kind: ArgoCD
  metadata:
    name: openshift-gitops
    namespace: openshift-gitops
  spec:
    resourceHealthChecks:
    - group: policy.open-cluster-management.io
      kind: CertificatePolicy
      check: |
        hs = {}
        if obj.status == nil or obj.status.compliant == nil then
          hs.status = "Progressing"
          hs.message = "Waiting for the status to be reported"
          return hs
        end
        if obj.status.compliant == "Compliant" then
          hs.status = "Healthy"
          hs.message = "All certificates found comply with the policy"
          return hs
        else
          hs.status = "Degraded"
          hs.message = "At least one certificate does not comply with the policy"
          return hs
        end
  ```
- To add a health check to your `CertificatePolicy`, `ConfigurationPolicy`, `OperatorPolicy`, and `Policy` resources, download the `argocd-policy-healthchecks.yaml` file by running the following command:

  ```
  wget https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/stable/CM-Configuration-Management/argocd-policy-healthchecks.yaml
  ```
- To apply the `argocd-policy-healthchecks.yaml` policy, run the following command:

  ```
  oc apply -f ./argocd-policy-healthchecks.yaml
  ```
- Verify that the health checks work as expected by viewing the Summary tab of the `ArgoCD` resource. View the health details from the Argo CD console.
1.5.5. Additional resources
- See the Understanding OpenShift GitOps documentation.
1.6. Generating a policy to install GitOps Operator
A common use of Red Hat Advanced Cluster Management policies is to install an Operator on one or more managed Red Hat OpenShift Container Platform clusters. Continue reading to learn how to generate policies by using the Policy Generator, and to install the OpenShift Container Platform GitOps Operator with a generated policy:
1.6.1. Generating a policy that installs OpenShift Container Platform GitOps
You can generate a policy that installs OpenShift Container Platform GitOps by using the Policy Generator. The OpenShift Container Platform GitOps operator offers the all namespaces installation mode, which you can view in the following example. Create a `Subscription` manifest file called `openshift-gitops-subscription.yaml`, similar to the following example:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```
To pin to a specific version of the operator, add the following parameter and value: `spec.startingCSV: openshift-gitops-operator.v<version>`. Replace `<version>` with your preferred version.
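For illustration, a pinned `Subscription` spec might resemble the following sketch; the `<version>` placeholder stays as-is until you choose a version:

```yaml
spec:
  channel: stable
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: openshift-gitops-operator.v<version>
```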
A `PolicyGenerator` configuration file is required. Use the configuration file named `policy-generator-config.yaml` to generate a policy to install OpenShift Container Platform GitOps on all OpenShift Container Platform managed clusters. See the following example:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-openshift-gitops
policyDefaults:
  namespace: policies
  placement:
    clusterSelectors:
      vendor: "OpenShift"
  remediationAction: enforce
policies:
- name: install-openshift-gitops
  manifests:
  - path: openshift-gitops-subscription.yaml
```
The last required file is `kustomization.yaml`, which requires the following configuration:

```yaml
generators:
- policy-generator-config.yaml
```
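To preview the generated policy locally before committing the files, you can run Kustomize with plugins enabled. This sketch assumes the Policy Generator plugin is installed on your workstation:

```
kustomize build --enable-alpha-plugins .
```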
The generated policy might resemble the following file with `PlacementRule` (deprecated):

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-openshift-gitops
  namespace: policies
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-openshift-gitops
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-openshift-gitops
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-openshift-gitops
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: install-openshift-gitops
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-openshift-gitops
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: openshift-gitops-operator
              namespace: openshift-operators
            spec:
              channel: stable
              name: openshift-gitops-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        remediationAction: enforce
        severity: low
```
Generating policies from manifests in the OpenShift Container Platform documentation is supported. Any configuration guidance from the OpenShift Container Platform documentation can be applied by using the Policy Generator.
1.6.2. Using policy dependencies with OperatorGroups
When you install an operator with an `OperatorGroup` manifest, the `OperatorGroup` must exist on the cluster before the `Subscription` is created. Use the policy dependency feature along with the Policy Generator to ensure that the `OperatorGroup` policy is compliant before you enforce the `Subscription` policy.
Set up policy dependencies by listing the manifests in the order that you want. For example, you might want to create the namespace policy first, create the `OperatorGroup` next, and create the `Subscription` last.
Enable the `policyDefaults.orderManifests` parameter and disable `policyDefaults.consolidateManifests` in the Policy Generator configuration manifest to automatically set up dependencies between the manifests.
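The following is a minimal sketch of such a configuration. The manifest file names are hypothetical; the paths are listed in the order that the resources must be created:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-operator-ordered
policyDefaults:
  namespace: policies
  remediationAction: enforce
  consolidateManifests: false # keep each manifest in its own policy template
  orderManifests: true        # each manifest depends on the previous one being compliant
policies:
- name: install-operator
  manifests:
  - path: operator-namespace.yaml    # Namespace first
  - path: operator-group.yaml        # OperatorGroup next
  - path: operator-subscription.yaml # Subscription last
```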
1.7. Creating a customized service account for Argo CD push model
Create a service account on a managed cluster by creating a `managedserviceaccount` resource on the hub cluster. Use the `clusterpermission` resource to grant specific permissions to the service account.
Creating a customized service account for use with the Argo CD push model includes the following benefits:
- An Application manager add-on runs on each managed cluster. By default, the Argo CD controller uses the Application manager service account to push application resources to the managed cluster.
- The Application manager service account has a large set of permissions because the application subscription add-on uses the Application manager service account to deploy applications on the managed cluster. Do not use the Application manager service account if you want a limited set of permissions.
- You can specify a different service account that you want the Argo CD push model to use. When the Argo CD controller pushes resources from the centralized hub cluster to the managed cluster, you can use a different service account than the default Application manager. By using a different service account, you can control the permissions that are granted to this service account.
- The service account must exist on the managed cluster. To facilitate the creation of the service account with the associated permissions, use the `managedserviceaccount` resource and the new `clusterpermission` resource on the centralized hub cluster.
After completing all the following procedures, you can grant cluster permissions to your managed service account. With the cluster permissions, your managed service account has the necessary permissions to deploy your application resources on the managed clusters. Complete the following procedures:
- Section 1.7.1, “Creating a managed service account”
- Section 1.7.2, “Creating a cluster permission”
- Section 1.7.3, “Using a managed service account in the GitOpsCluster resource”
- Section 1.7.4, “Creating an Argo CD application”
- Section 1.7.5, “Using policy to create managed service accounts and cluster permissions”
1.7.1. Creating a managed service account
The `managedserviceaccount` custom resource on the hub cluster provides a convenient way to create service accounts on the managed clusters. When a `managedserviceaccount` custom resource is created in the `<managed_cluster>` namespace on the hub cluster, a `serviceaccount` is created on the managed cluster.
To create a managed service account, see Enabling managedserviceaccount add-ons.
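As an illustration only, a minimal `ManagedServiceAccount` created in a managed cluster namespace on the hub cluster might resemble the following sketch; the name matches the `<managed-sa-sample>` placeholder that is used in the rest of this section:

```yaml
apiVersion: authentication.open-cluster-management.io/v1alpha1
kind: ManagedServiceAccount
metadata:
  name: <managed-sa-sample>
  namespace: <managed cluster> # the managed cluster namespace on the hub cluster
spec:
  rotation: {} # use the default token rotation settings
```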
1.7.2. Creating a cluster permission
When the service account is created, it does not have any permissions associated with it. To grant permissions to the new service account, use the `clusterpermission` resource. The `clusterpermission` resource is created in the managed cluster namespace on the hub cluster. It provides a convenient way to create role and cluster role resources on the managed clusters, and bind them to a service account through a `rolebinding` or `clusterrolebinding` resource.
- To grant the `<managed-sa-sample>` service account permission to the sample mortgage application that is deployed to the `mortgage` namespace on `<managed cluster>`, create a YAML with the following content:

  ```yaml
  apiVersion: rbac.open-cluster-management.io/v1alpha1
  kind: ClusterPermission
  metadata:
    name: <clusterpermission-msa-subject-sample>
    namespace: <managed cluster>
  spec:
    roles:
    - namespace: default
      rules:
      - apiGroups: ["apps"]
        resources: ["deployments"]
        verbs: ["get", "list", "create", "update", "delete", "patch"]
      - apiGroups: [""]
        resources: ["configmaps", "secrets", "pods", "podtemplates", "persistentvolumeclaims", "persistentvolumes"]
        verbs: ["get", "update", "list", "create", "delete", "patch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["*"]
        verbs: ["list"]
    - namespace: mortgage
      rules:
      - apiGroups: ["apps"]
        resources: ["deployments"]
        verbs: ["get", "list", "create", "update", "delete", "patch"]
      - apiGroups: [""]
        resources: ["configmaps", "secrets", "pods", "services", "namespace"]
        verbs: ["get", "update", "list", "create", "delete", "patch"]
    clusterRole:
      rules:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["get", "list"]
    roleBindings:
    - namespace: default
      roleRef:
        kind: Role
      subject:
        apiGroup: authentication.open-cluster-management.io
        kind: ManagedServiceAccount
        name: <managed-sa-sample>
    - namespace: mortgage
      roleRef:
        kind: Role
      subject:
        apiGroup: authentication.open-cluster-management.io
        kind: ManagedServiceAccount
        name: <managed-sa-sample>
    clusterRoleBinding:
      subject:
        apiGroup: authentication.open-cluster-management.io
        kind: ManagedServiceAccount
        name: <managed-sa-sample>
  ```
- Save the YAML file in a file called `cluster-permission.yaml`.
- Run `oc apply -f cluster-permission.yaml`.
- The sample `<clusterpermission>` creates a role called `<clusterpermission-msa-subject-sample>` in the `mortgage` namespace. If one does not already exist, create a `mortgage` namespace.
- Review the resources that are created on the `<managed cluster>`. A verification sketch follows the resource list.
After you create the sample `<clusterpermission>`, the following resources are created in the sample managed cluster:
- One role called `<clusterpermission-msa-subject-sample>` in the default namespace.
- One roleBinding called `<clusterpermission-msa-subject-sample>` in the default namespace for binding the role to the managed service account.
- One role called `<clusterpermission-msa-subject-sample>` in the mortgage namespace.
- One roleBinding called `<clusterpermission-msa-subject-sample>` in the mortgage namespace for binding the role to the managed service account.
- One clusterRole called `<clusterpermission-msa-subject-sample>`.
- One clusterRoleBinding called `<clusterpermission-msa-subject-sample>` for binding the clusterRole to the managed service account.
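To review these resources, you can run the following checks on the managed cluster. This is a minimal sketch that uses the placeholder name from the examples in this section:

```
oc get role,rolebinding -n default
oc get role,rolebinding -n mortgage
oc get clusterrole <clusterpermission-msa-subject-sample>
oc get clusterrolebinding <clusterpermission-msa-subject-sample>
```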
1.7.3. Using a managed service account in the GitOpsCluster resource
The `GitOpsCluster` resource uses placement to import selected managed clusters into Argo CD, including the creation of the Argo CD cluster secrets, which contain information that is used to access the clusters. By default, the Argo CD cluster secret uses the application manager service account to access the managed clusters.
- To update the `GitOpsCluster` resource to use the managed service account, add the `managedServiceAccountRef` property with the name of the managed service account.
- Save the following sample as a `gitops.yaml` file to create a `GitOpsCluster` custom resource:

  ```yaml
  apiVersion: apps.open-cluster-management.io/v1beta1
  kind: GitOpsCluster
  metadata:
    name: argo-acm-importer
    namespace: openshift-gitops
  spec:
    managedServiceAccountRef: <managed-sa-sample>
    argoServer:
      cluster: notused
      argoNamespace: openshift-gitops
    placementRef:
      kind: Placement
      apiVersion: cluster.open-cluster-management.io/v1beta1
      name: all-openshift-clusters
      namespace: openshift-gitops
  ```

- Run `oc apply -f gitops.yaml` to apply the file.
- Go to the `openshift-gitops` namespace and verify that there is a new Argo CD cluster secret with the name `<managed cluster-managed-sa-sample-cluster-secret>`. Run the following command:

  ```
  oc get secrets -n openshift-gitops <managed cluster-managed-sa-sample-cluster-secret>
  ```

  See the following output to verify:

  ```
  NAME                                                 TYPE     DATA   AGE
  <managed cluster-managed-sa-sample-cluster-secret>   Opaque   3      4m2s
  ```
1.7.4. Creating an Argo CD application
Deploy an Argo CD application from the Argo CD console by using the Push model. The Argo CD application is deployed with the managed service account, `<managed-sa-sample>`. Complete the following steps:
- Log in to the Argo CD console.
- Click Create a new application.
- Choose the cluster URL.
- Go to your Argo CD application and verify that it has the given permissions, like roles and cluster roles, that you propagated to `<managed cluster>`.
1.7.5. Using policy to create managed service accounts and cluster permissions
When the `GitOpsCluster` resource is updated with the `managedServiceAccountRef`, each managed cluster in the placement of this `GitOpsCluster` needs to have the service account. If you have several managed clusters, it becomes tedious to create the managed service account and cluster permission for each managed cluster. You can simplify this process by using a policy to create the managed service account and cluster permission for all of your managed clusters.
When you apply the `managedServiceAccount` and `clusterPermission` resources to the hub cluster, the placement of this policy is bound to the local cluster. The policy replicates those resources to the managed cluster namespace for all of the managed clusters in the placement of the `GitOpsCluster` resource.
Using a policy to create the `managedServiceAccount` and `clusterPermission` resources includes the following attributes:
- Updating the `managedServiceAccount` and `clusterPermission` object templates in the policy results in updates to all of the `managedServiceAccount` and `clusterPermission` resources in all of the managed clusters.
- Updates made directly to the `managedServiceAccount` and `clusterPermission` resources are reverted back to the original state because they are enforced by the policy.
- If the placement decision for the GitOpsCluster placement changes, the policy manages the creation and deletion of the resources in the managed cluster namespaces.
- To create a policy that creates the managed service account and cluster permission, create a YAML with the following content:

  ```yaml
  apiVersion: policy.open-cluster-management.io/v1
  kind: Policy
  metadata:
    name: policy-gitops
    namespace: openshift-gitops
    annotations:
      policy.open-cluster-management.io/standards: NIST-CSF
      policy.open-cluster-management.io/categories: PR.PT Protective Technology
      policy.open-cluster-management.io/controls: PR.PT-3 Least Functionality
  spec:
    remediationAction: enforce
    disabled: false
    policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gitops-sub
        spec:
          pruneObjectBehavior: None
          remediationAction: enforce
          severity: low
          object-templates-raw: |
            {{ range $placedec := (lookup "cluster.open-cluster-management.io/v1beta1" "PlacementDecision" "openshift-gitops" "" "cluster.open-cluster-management.io/placement=aws-app-placement").items }}
            {{ range $clustdec := $placedec.status.decisions }}
            - complianceType: musthave
              objectDefinition:
                apiVersion: authentication.open-cluster-management.io/v1alpha1
                kind: ManagedServiceAccount
                metadata:
                  name: <managed-sa-sample>
                  namespace: {{ $clustdec.clusterName }}
                spec:
                  rotation: {}
            - complianceType: musthave
              objectDefinition:
                apiVersion: rbac.open-cluster-management.io/v1alpha1
                kind: ClusterPermission
                metadata:
                  name: <clusterpermission-msa-subject-sample>
                  namespace: {{ $clustdec.clusterName }}
                spec:
                  roles:
                  - namespace: default
                    rules:
                    - apiGroups: ["apps"]
                      resources: ["deployments"]
                      verbs: ["get", "list", "create", "update", "delete"]
                    - apiGroups: [""]
                      resources: ["configmaps", "secrets", "pods", "podtemplates", "persistentvolumeclaims", "persistentvolumes"]
                      verbs: ["get", "update", "list", "create", "delete"]
                    - apiGroups: ["storage.k8s.io"]
                      resources: ["*"]
                      verbs: ["list"]
                  - namespace: mortgage
                    rules:
                    - apiGroups: ["apps"]
                      resources: ["deployments"]
                      verbs: ["get", "list", "create", "update", "delete"]
                    - apiGroups: [""]
                      resources: ["configmaps", "secrets", "pods", "services", "namespace"]
                      verbs: ["get", "update", "list", "create", "delete"]
                  clusterRole:
                    rules:
                    - apiGroups: ["*"]
                      resources: ["*"]
                      verbs: ["get", "list"]
                  roleBindings:
                  - namespace: default
                    roleRef:
                      kind: Role
                    subject:
                      apiGroup: authentication.open-cluster-management.io
                      kind: ManagedServiceAccount
                      name: <managed-sa-sample>
                  - namespace: mortgage
                    roleRef:
                      kind: Role
                    subject:
                      apiGroup: authentication.open-cluster-management.io
                      kind: ManagedServiceAccount
                      name: <managed-sa-sample>
                  clusterRoleBinding:
                    subject:
                      apiGroup: authentication.open-cluster-management.io
                      kind: ManagedServiceAccount
                      name: <managed-sa-sample>
            {{ end }}
            {{ end }}
  ---
  apiVersion: policy.open-cluster-management.io/v1
  kind: PlacementBinding
  metadata:
    name: binding-policy-gitops
    namespace: openshift-gitops
  placementRef:
    name: lc-app-placement
    kind: Placement
    apiGroup: cluster.open-cluster-management.io
  subjects:
  - name: policy-gitops
    kind: Policy
    apiGroup: policy.open-cluster-management.io
  ---
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: Placement
  metadata:
    name: lc-app-placement
    namespace: openshift-gitops
  spec:
    numberOfClusters: 1
    predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            name: local-cluster
  ```
- Save the YAML file in a file called `policy.yaml`.
- Run `oc apply -f policy.yaml`.
- The object template of the policy iterates through the placement decisions of the placement that is associated with the `GitOpsCluster` resource and applies the `managedServiceAccount` and `clusterPermission` templates to each managed cluster namespace.