
Chapter 3. Updating GitOps ZTP


You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters.

Note

You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements.

Important

Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs.

For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation.

3.1. Overview of the GitOps ZTP update process

You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters.

Note

Any changes to policy settings, including adding recommended content, result in updated policies that must be rolled out to the managed clusters and reconciled.

At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows:

  1. Label all existing clusters with the ztp-done label.
  2. Stop the ArgoCD applications.
  3. Install the new GitOps ZTP tools.
  4. Update required content and optional changes in the Git repository.
  5. Update and restart the application configuration.

3.2. Preparing for the upgrade

Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade.

Procedure

  1. Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP.
  2. Extract the argocd/deployment directory by using the following commands:

    $ mkdir -p ./update
    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 extract /home/ztp --tar | tar x -C ./update

    The ./update directory contains the following subdirectories:

    • update/extra-manifest: contains the source CR files that the SiteConfig CR uses to generate the extra manifest ConfigMap.
    • update/source-crs: contains the source CR files that the PolicyGenerator or PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.
    • update/argocd/deployment: contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.
    • update/argocd/example: contains example SiteConfig and PolicyGenerator or PolicyGenTemplate files that represent the recommended configuration.
  3. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. A sketch of the relevant Application fields is shown after this procedure.

    If the upgrade includes changes that result in obsolete policies, remove the obsolete policies before you perform the upgrade.

  4. Diff the changes between the configuration and deployment source CRs in the /update folder and Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository.

    Important

    When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files.
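
As referenced in step 3, the following is a minimal sketch of the Argo CD Application fields that typically need updating in clusters-app.yaml, and analogously in policies-app.yaml. The application name, repository URL, branch, and path shown here are placeholders for your environment:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: clusters                  # use your custom application name if you changed it
      namespace: openshift-gitops
    spec:
      source:
        repoURL: https://git.example.com/ztp-site-configs.git  # your site Git repository
        targetRevision: main                                   # your branch
        path: siteconfig                                        # path to your SiteConfig CRs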

3.3. Labeling the existing clusters

To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label.

Note

This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done.

Procedure

  1. Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true:

    $ oc get managedcluster -l 'local-cluster!=true'
  2. Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label:

    $ oc label managedcluster -l 'local-cluster!=true' ztp-done=
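
To verify that the label was applied, you can list the clusters that now carry the ztp-done label. This check is a sketch and reuses the local-cluster!=true selector from step 1:

    $ oc get managedcluster -l 'ztp-done,local-cluster!=true' -o name

The output should list the same set of managed clusters that was returned in step 1.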

3.4. Stopping the existing GitOps ZTP applications

Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available.

Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first.

Procedure

  1. Perform a non-cascaded delete on the clusters application to leave all generated resources in place:

    $ oc delete -f update/argocd/deployment/clusters-app.yaml
  2. Perform a cascaded delete on the policies application to remove all previous policies:

    $ oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge
    $ oc delete -f update/argocd/deployment/policies-app.yaml
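
Before continuing, you can confirm that both applications were removed and that the managed clusters themselves are untouched. These checks are a sketch and assume the default application names and the openshift-gitops namespace:

    $ oc get applications.argoproj.io -n openshift-gitops
    $ oc get managedcluster

Neither the clusters nor the policies application should be listed, while all managed clusters should still be present and labeled with ztp-done.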

3.5. Required changes to the Git repository

When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes.

Note

The following procedure assumes you are using PolicyGenerator resources instead of PolicyGenTemplate resources to manage cluster policies.

  • Make required changes to PolicyGenerator files:

    All PolicyGenerator files must be created in a Namespace prefixed with ztp. This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. A minimal Namespace example is shown after this list.

  • Add the kustomization.yaml file to the repository:

    All SiteConfig and PolicyGenerator CRs must be included in a kustomization.yaml file under their respective directory trees. For example:

    ├── acmpolicygenerator
    │   ├── site1-ns.yaml
    │   ├── site1.yaml
    │   ├── site2-ns.yaml
    │   ├── site2.yaml
    │   ├── common-ns.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen-ns.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   └── kustomization.yaml
    └── siteconfig
        ├── site1.yaml
        ├── site2.yaml
        └── kustomization.yaml
    Note

    The files listed in the generators sections must contain only SiteConfig or PolicyGenerator CRs. If your existing YAML files contain other CRs, for example, Namespace, these other CRs must be pulled out into separate files and listed in the resources section.

    The PolicyGenerator kustomization file must contain all PolicyGenerator YAML files in the generators section and Namespace CRs in the resources section. For example:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    generators:
    - common-ranGen.yaml
    - group-du-sno-ranGen.yaml
    - site1.yaml
    - site2.yaml
    
    resources:
    - common-ns.yaml
    - group-du-sno-ranGen-ns.yaml
    - site1-ns.yaml
    - site2-ns.yaml

    The SiteConfig kustomization file must contain all SiteConfig YAML files in the generators section and any other CRs in the resources section:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    generators:
    - site1.yaml
    - site2.yaml
  • Remove the pre-sync.yaml and post-sync.yaml files.

    In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/argocd/deployment/kustomization.yaml file manages the policies deployment on the hub cluster.

    Note

    There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenerator trees.

  • Review and incorporate recommended changes

    Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform.

    Review the reference SiteConfig and PolicyGenerator CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container.
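
As noted in the first item in this list, every PolicyGenerator must target a namespace whose name is prefixed with ztp. The following is a minimal sketch of such a Namespace CR, for example a common-ns.yaml file listed in the resources section of the kustomization.yaml shown above. The name ztp-common is only a conventional example:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ztp-common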

3.6. Installing the new GitOps ZTP applications

Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured.

Procedure

  1. To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the update/argocd/deployment/ directory for your environment.

    1. Select the multicluster-operators-subscription image that matches your RHACM version.

      Table 3.1. multicluster-operators-subscription image versions

      OpenShift Container Platform version | RHACM version | MCE version | MCE RHEL version | MCE image
      ------------------------------------ | ------------- | ----------- | ---------------- | ---------
      4.14, 4.15, 4.16                     | 2.8, 2.9      | 2.8, 2.9    | RHEL 8           | registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v2.8
                                           |               |             |                  | registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v2.9
      4.14, 4.15, 4.16                     | 2.10          | 2.10        | RHEL 9           | registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10

      Important

      The version of the multicluster-operators-subscription image should match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images.

    2. Add the following configuration to the update/argocd/deployment/argocd-openshift-gitops-patch.json file:

      {
        "args": [
          "-c",
          "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1
        ],
        "command": [
          "/bin/bash"
        ],
        "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3
        "name": "policy-generator-install",
        "imagePullPolicy": "Always",
        "volumeMounts": [
          {
            "mountPath": "/.config",
            "name": "kustomize"
          }
        ]
      }
      1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version.
      2 Match the multicluster-operators-subscription image to the RHACM version.
      3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment.
    3. Patch the ArgoCD instance. Run the following command:

      $ oc patch argocd openshift-gitops \
      -n openshift-gitops --type=merge \
      --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json
  2. In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed cluster pods that are responsible for this add-on. Run the following command:

    $ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file update/argocd/deployment/disable-cluster-proxy-addon.json
  3. Apply the pipeline configuration to your hub cluster by running the following command:

    $ oc apply -k update/argocd/deployment
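
After the patch and the pipeline configuration are applied, a quick sanity check is to confirm that the openshift-gitops repo server pods restarted with the plugin init container and that both applications exist. These commands are a sketch and assume the default openshift-gitops namespace and application names:

    $ oc get pods -n openshift-gitops
    $ oc get applications.argoproj.io -n openshift-gitops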

3.7. Rolling out the GitOps ZTP configuration changes

If the upgrade includes recommended configuration changes, applying them results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that you control when potentially disruptive changes are rolled out to the clusters, for example, during a maintenance window, and how many clusters are updated concurrently.

To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update.
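
The following is a minimal ClusterGroupUpgrade sketch. The namespace, policy names, cluster names, and remediation settings are placeholders and must match the Non-Compliant policies and managed clusters in your environment:

    apiVersion: ran.openshift.io/v1alpha1
    kind: ClusterGroupUpgrade
    metadata:
      name: ztp-upgrade-rollout
      namespace: default
    spec:
      clusters:                       # clusters to update in this rollout
      - site1
      - site2
      managedPolicies:                # Non-Compliant policies to push out
      - common-config-policy
      - common-subscriptions-policy
      enable: true
      remediationStrategy:
        maxConcurrency: 2             # number of clusters updated concurrently
        timeout: 240                  # minutes allowed for the rollout

With enable set to true, TALM begins remediating the listed policies on the listed clusters as soon as the CR is created, so create the CR within your maintenance window.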
