GitOps
Use GitOps to deploy and manage your applications.
Abstract
Chapter 1. GitOps overview
Red Hat OpenShift Container Platform GitOps and Argo CD are integrated with Red Hat Advanced Cluster Management for Kubernetes, and offer advanced features compared to the original Application Lifecycle Channel and Subscription model.
Argo CD development is active, with a large community that contributes feature enhancements and updates. By using the OpenShift Container Platform GitOps Operator, you can adopt the latest advancements in Argo CD development and receive support through the GitOps Operator subscription.
See the following topics to learn more about Red Hat Advanced Cluster Management for Kubernetes integration with OpenShift Container Platform GitOps and Argo CD:
- GitOps console
- Registering managed clusters to Red Hat OpenShift GitOps operator
- Configuring application placement tolerations for GitOps
- Deploying Argo CD with the Push and Pull model
- Generating a policy to install GitOps Operator
- Managing policy definitions with OpenShift Container Platform GitOps (Argo CD)
1.1. GitOps console
Learn more about integrated OpenShift Container Platform GitOps console features. Create and view applications, such as ApplicationSet and Argo CD types. An ApplicationSet represents Argo applications that are generated from the controller.
- Click Launch resource in Search to search for related resources.
- Use Search to find application resources by the component kind for each resource.
Important: Available actions are based on your assigned role. Learn about access requirements from the Role-based access control documentation.
1.1.1. Prerequisites
See the following prerequisites and requirements:
- For an Argo CD ApplicationSet to be created, you need to enable Automatically sync when cluster state changes from the Sync policy.
- For Flux with the kustomization controller, find Kubernetes resources with the label kustomize.toolkit.fluxcd.io/name=<app_name>.
- For Flux with the helm controller, find Kubernetes resources with the label helm.toolkit.fluxcd.io/name=<app_name>.
- You need GitOps cluster resources and the GitOps operator installed to create an ApplicationSet. Without these prerequisites, you see no Argo server options in the console to create an ApplicationSet.
1.1.2. Querying Argo CD applications
When you search for an Argo CD application, you are directed to the Applications page. Complete the following steps to access the Argo CD application from the Search page:
- Log in to your Red Hat Advanced Cluster Management hub cluster.
- From the console header, select the Search icon.
- Filter your query with the following values: kind:application and apigroup:argoproj.io.
- Select an application to view. The Application page displays an overview of information for the application.
For more information about search, see Search in the console.
1.2. Registering managed clusters to Red Hat OpenShift GitOps operator
To configure OpenShift GitOps with the Push model, you can register a set of one or more Red Hat Advanced Cluster Management for Kubernetes managed clusters to an instance of OpenShift GitOps operator. After registering, you can deploy applications to those clusters. Set up a continuous OpenShift GitOps environment to automate application consistency across clusters in development, staging, and production environments.
1.2.1. Prerequisites
- You need to install the Red Hat OpenShift GitOps operator on your Red Hat Advanced Cluster Management for Kubernetes hub cluster.
- Import one or more managed clusters.
- To register managed clusters to OpenShift GitOps, complete the steps in the Creating a ManagedClusterSet documentation.
1.2.2. Registering managed clusters to Red Hat OpenShift GitOps
Complete the following steps to register managed clusters to OpenShift GitOps:
- Create a managed cluster set and bind it to the namespace where OpenShift GitOps is deployed.
  - For an example of binding the managed cluster set to the openshift-gitops namespace, see the multicloud-integrations managedclusterset binding example.
  - For more general information about creating a ManagedClusterSetBinding, go to the Additional resources section and see Creating a ManagedClusterSetBinding resource.
  - For placement information, see Selecting ManagedClusters from ManagedClusterSets.
- In the namespace that is used in the managed cluster set binding, create a Placement custom resource to select a set of managed clusters to register to an OpenShift GitOps operator instance. Use the multicloud-integration placement example as a template. See Using ManagedClusterSets with Placement for placement information. Notes:
  - Only OpenShift Container Platform clusters are registered to an OpenShift GitOps operator instance, not other Kubernetes clusters.
  - In some unstable network scenarios, the managed clusters might be temporarily placed in an unavailable or unreachable state. See Configuring placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps for more details.
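A minimal Placement for this step might resemble the following sketch. The name matches the all-openshift-clusters placement that is referenced later in this procedure, and the vendor label selector is an assumption drawn from the typical multicloud-integration placement example:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: all-openshift-clusters
  namespace: openshift-gitops
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            # Select only OpenShift Container Platform clusters
            - key: vendor
              operator: In
              values:
                - OpenShift
```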
- Create a GitOpsCluster custom resource to register the set of managed clusters from the placement decision to the specified instance of OpenShift GitOps. This enables the OpenShift GitOps instance to deploy applications to any of those Red Hat Advanced Cluster Management managed clusters. Use the multicloud-integrations OpenShift GitOps cluster example. Note: The referenced Placement resource must be in the same namespace as the GitOpsCluster resource. In the example, the placementRef.name value is all-openshift-clusters, which is specified as the target clusters for the OpenShift GitOps instance that is installed in argoNamespace: openshift-gitops. The argoServer.cluster specification requires the local-cluster value.
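A GitOpsCluster resource that matches that description might resemble the following sketch; the metadata name is illustrative, while the placementRef, argoNamespace, and argoServer.cluster values follow the surrounding text:

```yaml
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-cluster-sample
  namespace: openshift-gitops
spec:
  argoServer:
    # local-cluster is required here
    cluster: local-cluster
    # Namespace where the OpenShift GitOps instance is installed
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    # Must be in the same namespace as this GitOpsCluster resource
    name: all-openshift-clusters
```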
- Save your changes. You can now follow the OpenShift GitOps workflow to manage your applications.
1.2.3. Registering non-OpenShift Container Platform clusters to Red Hat OpenShift GitOps
You can now use the Red Hat Advanced Cluster Management GitOpsCluster resource to register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster. With this capability, you can deploy application resources to the non-OpenShift Container Platform cluster by using the OpenShift GitOps console. To register a non-OpenShift Container Platform cluster to an OpenShift GitOps cluster, complete the following steps:
- Go to the API server URL in the non-OpenShift Container Platform ManagedCluster resource spec and validate it by running the following command:

  oc get managedclusters eks-1

  Verify that your output resembles the following information:

  NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                       JOINED   AVAILABLE   AGE
  eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com   True     True        37m

- If the API server URL in the non-OpenShift Container Platform ManagedCluster resource spec is empty, update it manually by completing the following steps:
  - To complete the API server URL, edit the ManagedCluster resource spec by running the following command:

    oc edit managedclusters eks-1

  - Verify that your YAML resembles the following file:

    spec:
      managedClusterClientConfigs:
        - caBundle: ZW1wdHlDQWJ1bmRsZQo=
          url: https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com

  - Save the changes, then validate that the API server URL is completed by running the following command:

    oc get managedclusters eks-1

  - Verify that your output resembles the following information:

    NAME    HUB ACCEPTED   MANAGED CLUSTER URLS                                                       JOINED   AVAILABLE   AGE
    eks-1   true           https://5E336C922AB16684A332C10535B8D407.gr7.us-east-2.eks.amazonaws.com   True     True        37m

- To verify that the cluster secret is generated, go to the openshift-gitops namespace and confirm that the GitOpsCluster resource status reports successful.

Notes:
- With Red Hat Advanced Cluster Management 2.12 or later, the API server URL for all types of non-OpenShift Container Platform ManagedCluster resources renders automatically if you use the following import modes:
  - Entering your server URL and API token for the existing cluster.
  - Entering the kubeconfig file for the existing cluster.
- The following cases can make the API server URL empty for one of the ManagedCluster resources:
  - The non-OpenShift Container Platform cluster is imported to the Red Hat Advanced Cluster Management hub cluster before version 2.12.
  - The non-OpenShift Container Platform cluster is manually imported to the Red Hat Advanced Cluster Management hub cluster in version 2.12 through the Run import commands import mode.
1.2.4. Red Hat OpenShift GitOps token
When you integrate with the OpenShift GitOps operator, for every managed cluster that is bound to the OpenShift GitOps namespace through the Placement and ManagedClusterSetBinding custom resources, a secret with a token to access the ManagedCluster is created in the OpenShift GitOps instance server namespace.

The OpenShift GitOps controller needs this secret to sync resources to the managed cluster. By default, the application manager service account works with cluster administrator permissions on the managed cluster to generate the OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is openshift-gitops.

If you do not want this default, create a service account with customized permissions on the managed cluster for generating the OpenShift GitOps cluster secret in the OpenShift GitOps instance server namespace. The default namespace is still openshift-gitops. For more information, see Creating a customized service account for Argo CD push model.
1.2.5. Additional resources
For more information, see the following resources and examples:
- Configuring application placement tolerations for GitOps
- multicloud-integrations managed cluster set binding
- Creating a ManagedClusterSet
- multicloud-integration placement
- Placement overview
- multicloud-integrations GitOps cluster
- Creating a ManagedClusterSetBinding resource
- About Red Hat OpenShift GitOps
1.3. Configuring application placement tolerations for GitOps
Red Hat Advanced Cluster Management provides a way for you to register managed clusters that deploy applications to Red Hat OpenShift GitOps.
In some unstable network scenarios, the managed clusters might be temporarily placed in the Unavailable state. If a Placement resource is being used to facilitate the deployment of applications, add the following tolerations for the Placement resource to continue to include unavailable clusters. The following example shows a Placement resource with tolerations:
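The following sketch assumes the standard open-cluster-management cluster taint keys for unreachable and unavailable clusters; the name and namespace are illustrative:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-placement
  namespace: openshift-gitops
spec:
  tolerations:
    # Keep clusters selected even while they are unreachable
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
    # Keep clusters selected even while they are unavailable
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
```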
1.4. Deploying Argo CD with Push and Pull model
With the Push model, the Argo CD server on the hub cluster deploys the application resources on the managed clusters. With the Pull model, the application resources are propagated by the Propagation controller to the managed clusters by using a ManifestWork.

For both models, the same ApplicationSet CRD is used to deploy the application to the managed cluster.
Required access: Cluster administrator
1.4.1. Prerequisites
View the following prerequisites for the Argo CD Pull model:
Important:
- If your openshift-gitops-argocd-application-controller service account is not assigned as a cluster administrator, the GitOps application controller might not deploy resources. The application status might send an error similar to the following error:

  cannot create resource "services" in API group "" in the namespace
  "mortgage",deployments.apps is forbidden: User
  "system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller"
- After you install the OpenShift GitOps operator on the managed clusters, you must create a ClusterRoleBinding with cluster administrator privileges on the same managed clusters.
- If you are not a cluster administrator and need to resolve this issue, complete the following steps:
  - Create all namespaces on each managed cluster where the Argo CD application will be deployed.
  - Add the managed-by label to each namespace. If an Argo CD application is deployed to multiple namespaces, each namespace must be managed by Argo CD.
  - You must declare all application destination namespaces in the repository for the application and include the managed-by label in the namespaces. Refer to Additional resources to learn how to declare a namespace.
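As a sketch, the ClusterRoleBinding with cluster administrator privileges that is described in the first prerequisite might resemble the following; the binding name is illustrative and the service account name assumes a default OpenShift GitOps installation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-gitops-argocd-application-controller-cluster-admin
subjects:
  # Default application controller service account of OpenShift GitOps
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```

A namespace with the managed-by label might resemble the following sketch, assuming the Argo CD instance runs in the openshift-gitops namespace and using a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mortgage
  labels:
    # Marks the namespace as managed by the Argo CD instance
    # in the openshift-gitops namespace
    argocd.argoproj.io/managed-by: openshift-gitops
```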
See the following requirements to use the Argo CD Pull model:
- The GitOps operator must be installed on the hub cluster and the target managed clusters in the openshift-gitops namespace.
- The required hub cluster OpenShift Container Platform GitOps operator must be version 1.9.0 or later.
- The required managed clusters OpenShift Container Platform GitOps operator must be the same version as the hub cluster.
- You need the ApplicationSet controller to propagate the Argo CD application template for a managed cluster.
- Every managed cluster must have a cluster secret in the Argo CD server namespace on the hub cluster, which is required by the Argo CD ApplicationSet controller to propagate the Argo CD application template for a managed cluster.
- To create the cluster secret, create a GitOpsCluster resource that contains a reference to a Placement resource. The Placement resource selects all the managed clusters that need to support the Pull model. When the GitOps cluster controller reconciles, it creates the cluster secrets for the managed clusters in the Argo CD server namespace.
1.4.2. Architecture
For both the Push and Pull models, the Argo CD ApplicationSet controller on the hub cluster reconciles to create application resources for each target managed cluster. See the following information about architecture for both models:
1.4.2.1. Architecture Push model
- With Push model, OpenShift Container Platform GitOps applies resources directly from a centralized hub cluster to the managed clusters.
- An Argo CD application that is running on the hub cluster communicates with the GitHub repository and deploys the manifests directly to the managed clusters.
- Push model implementation only contains the Argo CD application on the hub cluster, which has credentials for managed clusters. The Argo CD application on the hub cluster can deploy the applications to the managed clusters.
- Important: With a large number of managed clusters that require resource application, consider the potential strain on the OpenShift GitOps controller memory and CPU usage. To optimize resource management, see Configuring resource quota or requests in the Red Hat OpenShift GitOps documentation.
- By default, the Push model is used to deploy the application unless you add the apps.open-cluster-management.io/ocm-managed-cluster and apps.open-cluster-management.io/pull-to-ocm-managed-cluster annotations to the template section of the ApplicationSet.
1.4.2.2. Architecture Pull model
- Pull model can provide scalability relief compared to the push model by reducing stress on the controller in the hub cluster, but with more requests and status reporting required.
- With Pull model, OpenShift Container Platform GitOps does not apply resources directly from a centralized hub cluster to the managed clusters. The Argo CD Application is propagated from the hub cluster to the managed clusters.
- Pull model implementation applies OpenShift Cluster Manager registration, placement, and ManifestWork APIs so that the hub cluster can use the secure communication channel between the hub cluster and the managed cluster to deploy resources.
- Each managed cluster individually communicates with the GitHub repository to deploy the resource manifests locally, so you must install and configure GitOps operators on each managed cluster.
- An Argo CD server must be running on each target managed cluster. The Argo CD application resources are replicated on the managed clusters, which are then deployed by the local Argo CD server. The distributed Argo CD applications on the managed clusters are created with a single Argo CD ApplicationSet resource on the hub cluster.
- The managed cluster is determined by the value of the ocm-managed-cluster annotation.
- For successful implementation of the Pull model, the Argo CD application controller must ignore Push model application resources with the argocd.argoproj.io/skip-reconcile annotation in the template section of the ApplicationSet.
- For the Pull model, the Argo CD application controller on the managed cluster reconciles to deploy the application.
- The Pull model Resource sync controller on the hub cluster queries the OpenShift Cluster Manager search V2 component on each managed cluster periodically to retrieve the resource list and error messages for each Argo CD application.
- The Aggregation controller on the hub cluster creates and updates the MulticlusterApplicationSetReport from across clusters by using the data from the Resource sync controller and the status information from the ManifestWork.
- The status of the deployments is gathered back to the hub cluster, but not all the detailed information is transmitted. Additional status updates are periodically scraped to provide an overview. The status feedback is not real-time, and each managed cluster GitOps operator needs to communicate with the Git repository, which results in multiple requests.
1.4.3. Creating the ApplicationSet custom resource
The Argo CD ApplicationSet resource is used to deploy applications on the managed clusters by using the Push or Pull model, with a Placement resource in the generator field that is used to get a list of managed clusters.

- For the Pull model, set the destination for the application to the default local Kubernetes server. The application is deployed locally by the application controller on the managed cluster.
- Add the annotations that are required to override the default Push model, as displayed in the following example ApplicationSet YAML, which uses the Pull model with template annotations:
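The following sketch reconstructs such an ApplicationSet; the application name, repository URL, target namespace, and Placement name are illustrative, and the clusterDecisionResource generator with the acm-placement ConfigMap reference follows the documented cluster decision resource generator pattern:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-allclusters-app-set
  namespace: openshift-gitops
spec:
  generators:
    # Gets the list of managed clusters from a Placement decision
    - clusterDecisionResource:
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: guestbook-app-placement
        requeueAfterSeconds: 30
  template:
    metadata:
      name: guestbook-{{name}}
      annotations:
        # These annotations switch the deployment from Push to Pull model
        apps.open-cluster-management.io/ocm-managed-cluster: '{{name}}'
        apps.open-cluster-management.io/pull-to-ocm-managed-cluster: 'true'
        # The hub Argo CD application controller must skip this application
        argocd.argoproj.io/skip-reconcile: 'true'
      labels:
        apps.open-cluster-management.io/pull-to-ocm-managed-cluster: 'true'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: main
        path: guestbook
      destination:
        # Pull model: deploy to the local cluster where the
        # managed cluster application controller runs
        server: https://kubernetes.default.svc
        namespace: guestbook
```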
1.4.4. MulticlusterApplicationSetReport
- For the Pull model, the MulticlusterApplicationSetReport aggregates application status from across your managed clusters.
- The report includes the list of resources and the overall status of the application from each managed cluster.
- A separate report resource is created for each Argo CD ApplicationSet resource. The report is created in the same namespace as the ApplicationSet. The report includes the following items:
  - A list of resources for the Argo CD application
  - The overall sync and health status for each Argo CD application
  - An error message for each cluster where the overall status is out of sync or unhealthy
  - A summary status of all the states of your managed clusters
- The Resource sync controller and the Aggregation controller both run every 10 seconds to create the report.
- The two controllers, along with the Propagation controller, run in separate containers in the same multicluster-integrations pod, as shown in the following example output:

  NAMESPACE                 NAME                                        READY   STATUS
  open-cluster-management   multicluster-integrations-7c46498d9-fqbq4   3/3     Running
The following is an example MulticlusterApplicationSetReport YAML file for the guestbook application:
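A report for a guestbook application might resemble the following sketch; the resource names, cluster names, and exact field layout are assumptions for illustration:

```yaml
apiVersion: apps.open-cluster-management.io/v1alpha1
kind: MulticlusterApplicationSetReport
metadata:
  # Matches the name of the ApplicationSet, in the same namespace
  name: guestbook-allclusters-app-set
  namespace: openshift-gitops
statuses:
  # Per-cluster sync and health status
  clusterConditions:
    - cluster: cluster1
      conditions:
        - message: ''
          type: SyncError
      healthStatus: Healthy
      syncStatus: Synced
  # Resources deployed by the Argo CD application
  resources:
    - apiVersion: apps/v1
      kind: Deployment
      name: guestbook-ui
      namespace: guestbook
    - apiVersion: v1
      kind: Service
      name: guestbook-ui
      namespace: guestbook
  summary:
    clusters: '1'
    healthy: '1'
    inProgress: '0'
    notHealthy: '0'
    notSynced: '0'
    synced: '1'
```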
Note: If a resource fails to deploy, the resource is not included in the resource list. See error messages for information.
1.4.5. Additional resources
- See Configuring an OpenShift cluster by deploying an application with cluster configurations in the Red Hat OpenShift GitOps documentation.
- See Setting up an Argo CD instance in the Red Hat OpenShift GitOps documentation.
1.5. Managing policy definitions with Red Hat OpenShift GitOps
With the ArgoCD resource, you can use Red Hat OpenShift GitOps to manage policy definitions by granting OpenShift GitOps access to create policies on the Red Hat Advanced Cluster Management hub cluster.
1.5.1. Prerequisites
You must log in to your hub cluster.
Required access: Cluster administrator
Deprecated: PlacementRule
1.5.2. Creating a ClusterRole resource for OpenShift GitOps
To create a ClusterRole resource for OpenShift GitOps with access to create, read, update, and delete policies and placements, complete the following steps:

- Create a ClusterRole from the console.
- Create a ClusterRoleBinding object to grant the OpenShift GitOps service account access to the openshift-gitops-policy-admin ClusterRole object.

Notes:
- When a Red Hat Advanced Cluster Management policy definition is deployed with OpenShift GitOps, a copy of the policy is created in each managed cluster namespace for resolving hub template differences. These copies are called replicated policies.
- To prevent OpenShift GitOps from repeatedly deleting this replicated policy or showing that the Argo CD Application is out of sync, the argocd.argoproj.io/compare-options: IgnoreExtraneous annotation is automatically set on each replicated policy by the Red Hat Advanced Cluster Management policy framework.
- There are labels and annotations used by Argo CD to track objects. For replicated policies to not show up at all in Argo CD, disable the Argo CD tracking labels and annotations by setting spec.copyPolicyMetadata to false on the Red Hat Advanced Cluster Management policy definition.
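The ClusterRole and ClusterRoleBinding described in the steps above might resemble the following sketch; the rules cover the policy and placement APIs named in this section, and the service account name assumes a default OpenShift GitOps installation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: openshift-gitops-policy-admin
rules:
  # Policy framework resources
  - apiGroups:
      - policy.open-cluster-management.io
    resources:
      - policies
      - policysets
      - placementbindings
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
  # Deprecated PlacementRule resources
  - apiGroups:
      - apps.open-cluster-management.io
    resources:
      - placementrules
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
  # Placement resources
  - apiGroups:
      - cluster.open-cluster-management.io
    resources:
      - placements
      - placements/status
      - placementdecisions
      - placementdecisions/status
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-gitops-policy-admin
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-policy-admin
```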
1.5.3. Integrating the Policy Generator with OpenShift GitOps
You can use OpenShift GitOps to generate policies by using the Policy Generator through GitOps. Since the Policy Generator does not come preinstalled in the OpenShift GitOps container image, you must complete customizations.
1.5.3.1. Prerequisites
- You must install OpenShift GitOps on your Red Hat Advanced Cluster Management hub cluster.
- You must log in to the hub cluster.
1.5.3.2. Accessing the Policy Generator from OpenShift GitOps
To access the Policy Generator from OpenShift GitOps, you must configure the Init Container to copy the Policy Generator binary from the Red Hat Advanced Cluster Management Application Subscription container image. You must also configure OpenShift GitOps by providing the --enable-alpha-plugins
flag when it runs Kustomize.
To create, read, update, and delete policies and placements with the Policy Generator, grant access to the Policy Generator from OpenShift GitOps. Complete the following steps:
- Edit the OpenShift GitOps argocd object with the following command:

  oc -n openshift-gitops edit argocd openshift-gitops

- To update the Policy Generator to a newer version, update the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9 image that is used by the Init Container to a newer tag. Replace the <version> parameter with the latest Red Hat Advanced Cluster Management version in your ArgoCD resource.

  Note: Alternatively, you can create a ConfigurationPolicy resource that contains the ArgoCD manifest and template the version to match the version set in the MulticlusterHub:

  image: '{{ (index (lookup "apps/v1" "Deployment" "open-cluster-management" "multicluster-operators-hub-subscription").spec.template.spec.containers 0).image }}'
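An ArgoCD resource with the Init Container configuration might resemble the following sketch; the plugin paths and Policy Generator binary name follow the published integration pattern but are assumptions that can vary by version:

```yaml
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  # Required so that Kustomize runs exec plugins such as the Policy Generator
  kustomizeBuildOptions: --enable-alpha-plugins
  repo:
    env:
      - name: KUSTOMIZE_PLUGIN_HOME
        value: /etc/kustomize/plugin
    initContainers:
      # Copies the Policy Generator binary out of the RHACM
      # Application Subscription container image
      - args:
          - -c
          - cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator
        command:
          - /bin/bash
        image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<version>
        name: policy-generator-install
        volumeMounts:
          - mountPath: /policy-generator-tmp
            name: policy-generator
    volumeMounts:
      - mountPath: /etc/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator
        name: policy-generator
    volumes:
      - emptyDir: {}
        name: policy-generator
```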
- If you want to enable the processing of Helm charts within the Kustomize directory before generating policies, set the POLICY_GEN_ENABLE_HELM environment variable to "true" in the spec.repo.env field:

  env:
    - name: POLICY_GEN_ENABLE_HELM
      value: "true"

- To create, read, update, and delete policies and placements, create a ClusterRoleBinding object to grant the OpenShift GitOps service account access to the Red Hat Advanced Cluster Management hub cluster.
1.5.4. Configuring policy health checks in OpenShift GitOps
Use OpenShift GitOps with the ArgoCD resource to define custom logic that sets the health status of a resource based on the resource state by adding to the resourceHealthChecks field. For example, you can define custom health checks that only report a policy as healthy if your policy is compliant.
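As an illustration, a custom health check for Policy resources might resemble the following sketch; the Lua logic keys on the status.compliant field and is an assumption, not the published health check:

```yaml
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  resourceHealthChecks:
    - group: policy.open-cluster-management.io
      kind: Policy
      # Argo CD custom health checks are written in Lua
      check: |
        hs = {}
        if obj.status ~= nil and obj.status.compliant ~= nil then
          if obj.status.compliant == "Compliant" then
            hs.status = "Healthy"
            hs.message = "The policy is compliant"
            return hs
          end
          hs.status = "Degraded"
          hs.message = "The policy is not compliant"
          return hs
        end
        hs.status = "Progressing"
        hs.message = "Waiting for the status to be reported"
        return hs
```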
Important: To verify that you did not download something malicious from the Internet, review every policy before you apply it.
To define health checks for your resource kinds, complete the following steps:
- Add a health check to your CertificatePolicy, ConfigurationPolicy, OperatorPolicy, and Policy resources by downloading the argocd-policy-healthchecks.yaml file. Run the following command:

  wget https://raw.githubusercontent.com/open-cluster-management-io/policy-collection/main/stable/CM-Configuration-Management/argocd-policy-healthchecks.yaml

- Apply the argocd-policy-healthchecks.yaml policy by going to Governance > Create policy in the console and pasting the content into the YAML editor.

  Note: You can add Placement information in the YAML editor to determine which clusters have the policy active.
- Verify that the health checks work as expected by viewing the Summary tab of the ArgoCD resource. View the health details from the Argo CD console.
1.5.5. Additional resources
- See the Understanding OpenShift GitOps documentation.
1.6. Generating a policy to install GitOps Operator
A common use of Red Hat Advanced Cluster Management policies is to install an Operator on one or more managed Red Hat OpenShift Container Platform clusters. Continue reading to learn how to generate policies by using the Policy Generator, and to install the OpenShift Container Platform GitOps Operator with a generated policy:
1.6.1. Generating a policy that installs OpenShift Container Platform GitOps
You can generate a policy that installs OpenShift Container Platform GitOps by using the Policy Generator. The OpenShift Container Platform GitOps operator offers the all namespaces installation mode. Create a Subscription manifest file called openshift-gitops-subscription.yaml.
To pin to a specific version of the operator, add the following parameter and value: spec.startingCSV: openshift-gitops-operator.v<version>. Replace <version> with your preferred version.
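The openshift-gitops-subscription.yaml manifest might resemble the following sketch; the channel and catalog source values are assumptions that depend on your catalog:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```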
A PolicyGenerator configuration file is required. Use the configuration file named policy-generator-config.yaml to generate a policy that installs OpenShift Container Platform GitOps on all OpenShift Container Platform managed clusters.
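The policy-generator-config.yaml file might resemble the following sketch; the policy namespace and the vendor label selector that limits placement to OpenShift Container Platform clusters are assumptions:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-openshift-gitops
policyDefaults:
  namespace: policies
  placement:
    # Select all OpenShift Container Platform managed clusters
    clusterSelectors:
      vendor: "OpenShift"
  remediationAction: enforce
policies:
  - name: install-openshift-gitops
    manifests:
      - path: openshift-gitops-subscription.yaml
```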
The last required file is kustomization.yaml, which requires the following configuration:

generators:
  - policy-generator-config.yaml
The generated policy might resemble the following file with PlacementRule (Deprecated):
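A sketch of the generated output follows; the names, compliance annotations, and selector values are assumptions about what the Policy Generator emits for this configuration:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-openshift-gitops
  namespace: policies
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: install-openshift-gitops
        spec:
          object-templates:
            # Wraps the Subscription manifest from openshift-gitops-subscription.yaml
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: openshift-gitops-operator
                  namespace: openshift-operators
                spec:
                  channel: latest
                  name: openshift-gitops-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
          remediationAction: enforce
          severity: low
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-openshift-gitops
  namespace: policies
spec:
  clusterSelector:
    matchExpressions:
      - key: vendor
        operator: In
        values:
          - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-openshift-gitops
  namespace: policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-openshift-gitops
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: install-openshift-gitops
```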
Generating policies from manifests in the OpenShift Container Platform documentation is supported. Any configuration guidance from the OpenShift Container Platform documentation can be applied by using the Policy Generator.
1.6.2. Using policy dependencies with OperatorGroups
When you install an operator with an OperatorGroup manifest, the OperatorGroup must exist on the cluster before the Subscription is created. Use the policy dependency feature along with the Policy Generator to ensure that the OperatorGroup policy is compliant before you enforce the Subscription policy.
Set up policy dependencies by listing the manifests in the order that you want. For example, you might want to create the namespace policy first, create the OperatorGroup next, and create the Subscription last.
Enable the policyDefaults.orderManifests parameter and disable policyDefaults.consolidateManifests in the Policy Generator configuration manifest to automatically set up dependencies between the manifests.
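In the Policy Generator configuration, those two settings might resemble the following fragment; the namespace value is illustrative:

```yaml
policyDefaults:
  namespace: policies
  # Generate one policy per manifest instead of one consolidated policy
  consolidateManifests: false
  # Add a dependency on each previous manifest, in listed order
  orderManifests: true
```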
1.7. Creating a customized service account for Argo CD push model
Create a service account on a managed cluster by creating a ManagedServiceAccount resource on the hub cluster. Use the ClusterPermission resource to grant specific permissions to the service account.
Creating a customized service account for use with the Argo CD push model includes the following benefits:
- An Application manager add-on runs on each managed cluster. By default, the Argo CD controller uses the Application manager service account to push resources to the managed cluster.
- The Application manager service account has a large set of permissions because the application subscription add-on uses it to deploy applications on the managed cluster. Do not use the Application manager service account if you want a limited set of permissions.
- You can specify a different service account that you want the Argo CD push model to use. When the Argo CD controller pushes resources from the centralized hub cluster to the managed cluster, you can use a different service account than the default Application manager. By using a different service account, you can control the permissions that are granted to this service account.
- The service account must exist on the managed cluster. To facilitate the creation of the service account with the associated permissions, use the managedserviceaccount resource and the new clusterpermission resource on the centralized hub cluster.
After completing all of the following procedures, you can grant cluster permissions to your managed service account. With those cluster permissions, your managed service account has the necessary permissions to deploy your application resources on the managed clusters. Complete the following procedures:
- Section 1.7.1, “Creating a managed service account”
- Section 1.7.2, “Creating a cluster permission”
- Section 1.7.3, “Using a managed service account in the GitOpsCluster resource”
- Section 1.7.4, “Creating an Argo CD application”
- Section 1.7.5, “Using policy to create managed service accounts and cluster permissions”
1.7.1. Creating a managed service account
The managedserviceaccount custom resource on the hub cluster provides a convenient way to create serviceaccounts on the managed clusters. When a managedserviceaccount custom resource is created in the <managed_cluster> namespace on the hub cluster, a serviceaccount is created on the managed cluster.
To create a managed service account, see Enabling managedserviceaccount add-ons.
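For reference, a minimal managedserviceaccount resource might resemble the following sketch. The API version and the empty rotation settings are assumptions based on the ManagedServiceAccount add-on; <managed_cluster> is the namespace of your managed cluster on the hub:

```yaml
apiVersion: authentication.open-cluster-management.io/v1beta1  # assumed API version
kind: ManagedServiceAccount
metadata:
  name: managed-sa-sample        # name of the serviceaccount created on the managed cluster
  namespace: <managed_cluster>   # managed cluster namespace on the hub cluster
spec:
  rotation: {}                   # use the default token rotation settings
```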
1.7.2. Creating a cluster permission
When the service account is created, it does not have any permissions associated with it. To grant permissions to the new service account, use the clusterpermission resource. The clusterpermission resource is created in the managed cluster namespace on the hub cluster. It provides a convenient way to create role and cluster role resources on the managed clusters and bind them to a service account through a rolebinding or clusterrolebinding resource.
- To grant the <managed-sa-sample> service account permission to the sample mortgage application that is deployed to the mortgage namespace on <managed cluster>, create a YAML with the following content:
- Save the YAML in a file named cluster-permission.yaml.
- Run oc apply -f cluster-permission.yaml.
- The sample <clusterpermission> creates a role called <clusterpermission-msa-subject-sample> in the mortgage namespace. If the mortgage namespace does not already exist, create it.
. -
Review the resources that are created on the
<managed cluster>
.
After you create the sample <clusterpermission>, the following resources are created in the sample managed cluster:
- One role called <clusterpermission-msa-subject-sample> in the default namespace.
- One roleBinding called <clusterpermission-msa-subject-sample> in the default namespace for binding the role to the managed service account.
- One role called <clusterpermission-msa-subject-sample> in the mortgage namespace.
- One roleBinding called <clusterpermission-msa-subject-sample> in the mortgage namespace for binding the role to the managed service account.
- One clusterRole called <clusterpermission-msa-subject-sample>.
- One clusterRoleBinding called <clusterpermission-msa-subject-sample> for binding the clusterRole to the managed service account.
1.7.3. Using a managed service account in the GitOpsCluster resource
The GitOpsCluster resource uses placement to import selected managed clusters into Argo CD, including the creation of the Argo CD cluster secrets, which contain information used to access the clusters. By default, the Argo CD cluster secret uses the application manager service account to access the managed clusters.
- To update the GitOpsCluster resource to use the managed service account, add the managedServiceAccountRef property with the name of the managed service account. Save the following sample as a gitops.yaml file to create a GitOpsCluster custom resource:
- Run oc apply -f gitops.yaml to apply the file.
- Go to the openshift-gitops namespace and verify that there is a new Argo CD cluster secret with the name <managed cluster-managed-sa-sample-cluster-secret>. Run the following command:

      oc get secrets -n openshift-gitops <managed cluster-managed-sa-sample-cluster-secret>

  See the following output to verify:

      NAME                                                 TYPE    DATA  AGE
      <managed cluster-managed-sa-sample-cluster-secret>   Opaque  3     4m2s
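The gitops.yaml file referenced in the procedure above might resemble the following sketch. The resource name, placement reference, and API versions are assumptions; only openshift-gitops and managedServiceAccountRef come from this document:

```yaml
apiVersion: apps.open-cluster-management.io/v1beta1   # assumed API version
kind: GitOpsCluster
metadata:
  name: argo-acm-importer             # hypothetical name
  namespace: openshift-gitops
spec:
  managedServiceAccountRef: managed-sa-sample   # use this service account instead of the application manager
  argoServer:
    argoNamespace: openshift-gitops   # namespace of the Argo CD instance
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: all-openshift-clusters      # hypothetical placement that selects the managed clusters
    namespace: openshift-gitops
```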
1.7.4. Creating an Argo CD application
Deploy an Argo CD application from the Argo CD console by using the push model. The Argo CD application is deployed with the managed service account, <managed-sa-sample>.
- Log into the Argo CD console.
- Click Create a new application.
- Choose the cluster URL.
- Go to your Argo CD application and verify that it has the given permissions, like roles and cluster roles, that you propagated to <managed cluster>.
1.7.5. Using policy to create managed service accounts and cluster permissions
When the GitOpsCluster resource is updated with the `managedServiceAccountRef`, each managed cluster in the placement of this GitOpsCluster needs to have the service account. If you have several managed clusters, it becomes tedious to create the managed service account and cluster permission for each managed cluster. You can simplify this process by using a policy to create the managed service account and cluster permission for all of your managed clusters.
When you apply the managedServiceAccount and clusterPermission resources to the hub cluster, the placement of this policy is bound to the local cluster. The policy replicates those resources to the managed cluster namespace for all of the managed clusters in the placement of the GitOpsCluster resource.
Using a policy to create the managedServiceAccount and clusterPermission resources includes the following attributes:
- Updating the managedServiceAccount and clusterPermission object templates in the policy results in updates to all of the managedServiceAccount and clusterPermission resources in all of the managed clusters.
Updating directly to the
managedServiceAccount
andclusterPermission
resources becomes reverted back to the original state because it is enforced by the policy. If the placement decision for the GitOpsCluster placement changes, the policy manages the creation and deletion of the resources in the managed cluster namespaces.
- To create a policy that creates a managed service account and cluster permission, create a YAML with the following content:
- Save the YAML in a file named policy.yaml.
- Run oc apply -f policy.yaml.
- The object template of the policy iterates through the placement decisions of the placement associated with the GitOpsCluster and applies the following managedServiceAccount and clusterPermission templates:
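As a sketch of what such a policy might look like, the following example uses hub template functions to iterate through the placement decisions. The policy name, placement label, and API versions are assumptions; the clusterPermission spec is abbreviated:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-gitops-msa             # hypothetical policy name
  namespace: openshift-gitops
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-gitops-msa
        spec:
          remediationAction: enforce
          severity: low
          object-templates-raw: |
            {{ range $placedec := (lookup "cluster.open-cluster-management.io/v1beta1" "PlacementDecision" "openshift-gitops" "" "cluster.open-cluster-management.io/placement=all-openshift-clusters").items }}
            {{ range $clustdec := $placedec.status.decisions }}
            - complianceType: musthave
              objectDefinition:
                apiVersion: authentication.open-cluster-management.io/v1beta1
                kind: ManagedServiceAccount
                metadata:
                  name: managed-sa-sample
                  namespace: {{ $clustdec.clusterName }}
                spec:
                  rotation: {}
            - complianceType: musthave
              objectDefinition:
                apiVersion: rbac.open-cluster-management.io/v1alpha1
                kind: ClusterPermission
                metadata:
                  name: clusterpermission-msa-subject-sample
                  namespace: {{ $clustdec.clusterName }}
                spec: {}   # add roles and bindings as in the earlier clusterpermission example
            {{ end }}
            {{ end }}
```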