Argo Rollouts


Red Hat OpenShift GitOps 1.13

Using Argo Rollouts with Red Hat OpenShift GitOps for progressive delivery of applications on OpenShift Container Platform.

Red Hat OpenShift Documentation Team

Abstract

This document describes how to use Argo Rollouts with Red Hat OpenShift GitOps to automate progressive delivery of applications on OpenShift Container Platform. It covers creating and deleting the RolloutManager custom resource, installing the Argo Rollouts CLI, deploying, promoting, and aborting canary rollouts, routing traffic with OpenShift Routes, and enabling a namespace-scoped Argo Rollouts installation.

Chapter 1. Argo Rollouts overview

In the GitOps context, progressive delivery is a process of releasing application updates in a controlled and gradual manner. Progressive delivery reduces the risk of a release by initially exposing the new version of an application update only to a subset of users. The process involves continuously observing and analyzing this new application version to verify whether its behavior matches the set requirements and expectations. The verifications continue as the process gradually exposes the application update to a broader audience.

OpenShift Container Platform provides some progressive delivery capability by using routes to split traffic between different services, but this typically requires manual intervention and management.

With Argo Rollouts, as a cluster administrator, you can automate progressive delivery and manage the progressive deployment of applications hosted on Kubernetes and OpenShift Container Platform clusters. Argo Rollouts is a controller with custom resource definitions (CRDs) that provides advanced deployment capabilities such as blue-green, canary, canary analysis, and experimentation.

1.1. Why use Argo Rollouts?

For cluster administrators, managing and coordinating advanced deployment strategies in traditional infrastructure often involves long maintenance windows. Automation with tools like OpenShift Container Platform and Red Hat OpenShift GitOps can reduce these windows, but setting up these strategies can still be challenging.

Use Argo Rollouts to simplify progressive delivery by allowing application teams to define their rollout strategy declaratively. Teams no longer need to define multiple deployments and services or create automation for traffic shaping and integration of tests.
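
For example, a team can declare a blue-green strategy directly in the Rollout resource instead of scripting it. The following is a minimal sketch based on the upstream Argo Rollouts blueGreen strategy fields; the active and preview service names are placeholders that you create separately:

Example: Rollout CR with a declarative blue-green strategy (sketch)

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
      - name: rollouts-demo
        image: argoproj/rollouts-demo:blue
        ports:
        - containerPort: 8080
  strategy:
    blueGreen:
      activeService: rollouts-demo-active    # placeholder service that receives production traffic
      previewService: rollouts-demo-preview  # placeholder service that receives preview traffic only
      autoPromotionEnabled: false            # wait for a manual promotion before switching traffic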

You can use Argo Rollouts for the following reasons:

  • Your users can more easily adopt progressive delivery in end-user environments.
  • With the available structure and guidelines of Argo Rollouts, your teams do not have to learn about traffic managers and complex infrastructure.
  • During an update, depending on your deployment strategy, you can use the traffic-shaping abilities of the deployed application versions to gradually shift traffic to the new version.
  • You can combine Argo Rollouts with a metric provider, such as Prometheus, to perform metric-based and policy-driven rollouts and rollbacks based on the parameters that you set.
  • Your end-user environments benefit from the security of the Red Hat OpenShift GitOps Operator and from more effective management of resources, cost, and time.
  • Users who already use Argo CD for secure, automated deployments get feedback early in the process, which they can use to avoid problems before those problems impact end users.

1.1.1. Benefits of Argo Rollouts

Using Argo Rollouts as a default workload in Red Hat OpenShift GitOps provides the following benefits:

  • Automated progressive delivery as part of the GitOps workflow
  • Advanced deployment capabilities
  • Optimization of existing advanced deployment strategies, such as blue-green or canary
  • Zero downtime updates for deployments
  • Fine-grained, weighted traffic shifting
  • Ability to test without any new traffic hitting the production environment
  • Automated rollbacks and promotions
  • Manual judgment
  • Customizable metric queries and analysis of business key performance indicators (KPIs)
  • Integration with ingress controller and Red Hat OpenShift Service Mesh for advanced traffic routing
  • Integration with metric providers for deployment strategy analysis
  • Usage of multiple providers

1.2. About RolloutManager custom resources and specification

To use Argo Rollouts, you must install the Red Hat OpenShift GitOps Operator on the cluster, and then create and submit a RolloutManager custom resource (CR) to the Operator in the namespace of your choice. You can scope the RolloutManager CR for single or multiple namespaces. The Operator creates an argo-rollouts instance with the following namespace-scoped supporting resources:

  • Argo Rollouts controller
  • Argo Rollouts metrics service
  • Argo Rollouts service account
  • Argo Rollouts roles
  • Argo Rollouts role bindings
  • Argo Rollouts secret

You can specify the command arguments, environment variables, a custom image name, and so on for the Argo Rollouts controller resource in the spec of the RolloutManager CR. The RolloutManager CR spec defines the desired state of Argo Rollouts.

Example: RolloutManager CR

apiVersion: argoproj.io/v1alpha1
kind: RolloutManager
metadata:
  name: argo-rollout
  labels:
    example: basic
spec: {}
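
In addition to the empty spec shown above, you can customize the controller through the RolloutManager spec. The following is a hedged sketch that assumes the env, extraCommandArgs, image, and version fields of the RolloutManager specification; the variable name, command argument, and image values are illustrative only:

Example: RolloutManager CR with a customized Argo Rollouts controller (sketch)

apiVersion: argoproj.io/v1alpha1
kind: RolloutManager
metadata:
  name: argo-rollout
spec:
  env:                                    # environment variables passed to the Argo Rollouts controller
  - name: EXAMPLE_VARIABLE                # hypothetical variable name, for illustration only
    value: example-value
  extraCommandArgs:                       # additional command arguments for the controller
  - --loglevel=info
  image: quay.io/argoproj/argo-rollouts   # custom controller image name (upstream image shown as an example)
  version: v1.6.6                         # custom image tag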

1.2.1. Argo Rollouts controller

With the Argo Rollouts controller resource, you can manage the progressive application delivery in your namespace. The Argo Rollouts controller resource monitors the cluster for events, and reacts whenever there is a change in any resource related to Argo Rollouts. The controller reads all the rollout details and brings the cluster to the same state as described in the rollout definition.

1.3. Argo Rollouts architecture overview

Argo Rollouts support is enabled on a cluster by installing the Red Hat OpenShift GitOps Operator and configuring a RolloutManager custom resource (CR) instance.

After a RolloutManager CR is created, the Red Hat OpenShift GitOps Operator installs Argo Rollouts into that same namespace. This step includes the installation of the Argo Rollouts controller and the resources required for handling Argo Rollouts, such as CRs, roles, role bindings, and configuration data.

The Argo Rollouts controller can be installed in two different modes:

  • Cluster-scoped mode (default): The controller oversees resources throughout all namespaces within the cluster.
  • Namespace-scoped mode: The controller monitors resources within the namespace where Argo Rollouts is deployed. See the sketch after this list.
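
The installation mode is selected through the RolloutManager CR together with an Operator environment variable, as described in the namespace-scoped installation chapter later in this guide. The following is a minimal sketch of a namespace-scoped instance, based on the example shown in that chapter:

Example: RolloutManager CR for namespace-scoped mode (sketch)

apiVersion: argoproj.io/v1alpha1
kind: RolloutManager
metadata:
  name: rollout-manager
  namespace: my-application   # namespace that this Argo Rollouts instance manages
spec:
  namespaceScoped: true       # omit this field or set it to false for the default cluster-scoped mode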

The architecture of Argo Rollouts is structured into components and resources. Components are used to manage resources. For example, the AnalysisRun controller manages the AnalysisRun CR.

Argo Rollouts includes several mechanisms for gathering analysis metrics to verify that a new application version is deployed successfully, as illustrated in the example template after the following list:

  • Prometheus metrics: The AnalysisTemplate CR is configured to connect to Prometheus instances to evaluate the success or failure of one or more metrics.
  • Kubernetes job metrics: Argo Rollouts supports the Kubernetes Job resource to run analysis on resource metrics. You can verify a successful deployment of an application based on the successful run of Kubernetes jobs.
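
The following AnalysisTemplate sketch combines both mechanisms by using the upstream Argo Rollouts metric providers. The metric names, the Prometheus address, the query, and the test image are placeholders; adjust them to your environment:

Example: AnalysisTemplate CR with Prometheus and Kubernetes job metrics (sketch)

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
  - name: service-name                  # passed in by the rollout or experiment that runs this analysis
  metrics:
  - name: success-rate                  # Prometheus-based metric
    interval: 5m
    successCondition: result[0] >= 0.95 # the analysis fails if the success rate drops below 95%
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus.example.com:9090   # placeholder Prometheus address
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",code!~"5.."}[5m]))
          /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
  - name: smoke-test                    # Kubernetes Job-based metric
    provider:
      job:
        spec:
          backoffLimit: 0
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: smoke-test
                image: registry.example.com/smoke-test:latest   # placeholder test image
                command: ["sh", "-c", "exit 0"]                 # replace with your test command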

1.3.1. Argo Rollouts components

Argo Rollouts consists of several components that enable users to practice progressive delivery in OpenShift Container Platform.

Table 1.1. Argo Rollouts components

Argo Rollouts controller

The Argo Rollouts controller manages the Rollout CR, which is an alternative to the standard Deployment resource and coexists alongside it. The controller responds only to changes in Argo Rollouts resources and does not modify standard Deployment resources.

AnalysisRun controller

The AnalysisRun controller manages and performs analysis for AnalysisRun and AnalysisTemplate CRs. It connects a rollout to the metrics provider and defines thresholds for metrics that determine if a deployment update is successful for your application.

Experiment controller

The Experiment controller runs analysis on short-lived replica sets, and manages the Experiment custom resource. The controller can also be integrated with the Rollout resource by specifying the experiment step in the canary deployment strategy field.

Service and Ingress controller

The Service controller manages the Service resources and the Ingress controller manages the Ingress resources that are modified by Argo Rollouts. These controllers inject additional metadata annotations into the application instances for traffic management, as shown in the sketch after this table.

Argo Rollouts CLI and UI

Argo Rollouts supports an oc/kubectl plugin called Argo Rollouts CLI. You can use it to interact with resources, such as rollouts, analyses, and experiments, from the command line. It can perform operations, such as pause, promote, or retry. The Argo Rollouts CLI plugin can start a local web UI dashboard in the browser to enhance the experience of visualizing the Argo Rollouts resources.
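
The following sketch shows what a canary Service managed by the Service controller can look like after the controller updates it. The pod template hash in the selector comes from the example used later in this guide; the annotation key reflects upstream Argo Rollouts behavior and is shown for illustration:

Example: canary Service updated by the Service controller (sketch)

apiVersion: v1
kind: Service
metadata:
  name: argo-rollouts-canary-service
  annotations:
    argo-rollouts.argoproj.io/managed-by-rollouts: rollouts-demo   # marks the Service as managed by a rollout
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: rollouts-demo
    rollouts-pod-template-hash: 687d76d795   # injected so that the Service selects only the canary pods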

1.3.2. Argo Rollouts resources

Argo Rollouts components manage several resources to enable progressive delivery:

  • Rollouts-specific resources: For example, Rollout, AnalysisRun, or Experiment.
  • Kubernetes networking resources: For example, Service, Ingress, or Route for network traffic shaping. Argo Rollouts integrates with these resources; this integration is referred to as traffic management.

These resources are essential for customizing the deployment of applications through the Rollout CR.

Argo Rollouts supports the following actions:

  • Route percentage-based traffic for canary deployments.
  • Forward incoming user traffic by using Service and Ingress resources to the correct application version.
  • Use multiple mechanisms to collect analysis metrics to validate the deployment of a new version of an application.
Table 1.2. Argo Rollouts resources

Rollout

This CR enables the deployment of applications by using canary or blue-green deployment strategies. It replaces the built-in Kubernetes Deployment resource.

AnalysisRun

This CR is used to perform an analysis and aggregate the results of analysis to guide the user toward the successful deployment delivery of an application. The AnalysisRun CR is an instance of the AnalysisTemplate CR.

AnalysisTemplate

The AnalysisTemplate CR is a template file that provides instructions on how to query metrics. The result of these instructions is attached to a rollout in the form of the AnalysisRun CR. The AnalysisTemplate CR can be defined globally on the cluster or on a specific rollout. You can link a list of AnalysisTemplate resources to run against replica sets by creating an Experiment custom resource. See the example after this table for how a rollout references an AnalysisTemplate.

Experiment

The Experiment CR is used to run short-lived analysis on an application during its deployment to ensure the application is deployed correctly. The Experiment CR can be used independently or run as part of the Rollout CR.

Service and Ingress

Argo Rollouts natively supports routing traffic through services and ingresses by using the Service and Ingress controllers.

Route and VirtualService

The OpenShift Route and Red Hat OpenShift Service Mesh VirtualService resources are used to perform traffic splitting across different application versions.
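
The following sketch, based on the upstream Argo Rollouts canary strategy, shows how a Rollout CR references an AnalysisTemplate from a canary step so that an AnalysisRun is created during the rollout. The template name and argument are placeholders:

Example: Rollout CR that runs an analysis during a canary step (sketch)

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
      - name: rollouts-demo
        image: argoproj/rollouts-demo:blue
  strategy:
    canary:
      steps:
      - setWeight: 20
      - analysis:                        # creates an AnalysisRun from the referenced template
          templates:
          - templateName: success-rate   # placeholder AnalysisTemplate name
          args:
          - name: service-name
            value: rollouts-demo
      - setWeight: 60
      - pause: {}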

1.4. Argo Rollouts CLI overview

You can use the Argo Rollouts CLI, which is an optional plugin, to manage and monitor Argo Rollouts resources directly, bypassing the need to use the OpenShift Container Platform web console or the CLI (oc).

With the Argo Rollouts CLI plugin, you can perform the following actions:

  • Make changes to an Argo Rollouts image.
  • Monitor the progress of an Argo Rollouts promotion.
  • Proceed with the promotion steps in a canary deployment.
  • Terminate a failed Argo Rollouts deployment.

The Argo Rollouts CLI plugin directly integrates with oc and kubectl commands.

1.5. Additional resources

Chapter 2. Using Argo Rollouts for progressive deployment delivery

To use Argo Rollouts to manage progressive delivery, install the Red Hat OpenShift GitOps Operator on the cluster, and then create and configure a RolloutManager custom resource (CR) instance in the namespace of your choice. You can scope the RolloutManager CR for single or multiple namespaces.

2.1. Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • Red Hat OpenShift GitOps 1.9.0 or a newer version is installed in your cluster.

2.2. Creating a RolloutManager custom resource

To manage progressive delivery of deployments by using Argo Rollouts in Red Hat OpenShift GitOps, you must create and configure a RolloutManager custom resource (CR) in the namespace of your choice. By default, any new argo-rollouts instance has permission to manage resources only in the namespace where it is deployed, but you can use Argo Rollouts in multiple namespaces as required.

Prerequisites

  • Red Hat OpenShift GitOps 1.9.0 or a newer version is installed in your cluster.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the Administrator perspective, click Operators → Installed Operators.
  3. Create or select the project where you want to create and configure a RolloutManager custom resource (CR) from the Project drop-down menu.
  4. Select Red Hat OpenShift GitOps from the installed operators.
  5. In the Details tab, under the Provided APIs section, click Create instance in the RolloutManager pane.
  6. On the Create RolloutManager page, select the YAML view and use the default YAML or edit it according to your requirements:

    Example: RolloutManager CR

    apiVersion: argoproj.io/v1alpha1
    kind: RolloutManager
    metadata:
      name: argo-rollout
      labels:
        example: basic
    spec: {}

  7. Click Create.
  8. In the RolloutManager tab, under the RolloutManagers section, verify that the Status field of the RolloutManager instance shows as Phase: Available.
  9. In the left navigation pane, verify the creation of the namespace-scoped supporting resources:

    • Click Workloads → Deployments to verify that the argo-rollouts deployment is available with the Status showing as 1 of 1 pods running.
    • Click Workloads → Secrets to verify that the argo-rollouts-notification-secret secret is available.
    • Click Networking → Services to verify that the argo-rollouts-metrics service is available.
    • Click User Management → Roles to verify that the argo-rollouts role and argo-rollouts-aggregate-to-admin, argo-rollouts-aggregate-to-edit, and argo-rollouts-aggregate-to-view cluster roles are available.
    • Click User Management → RoleBindings to verify that the argo-rollouts role binding is available.

2.3. Deleting a RolloutManager custom resource

Uninstalling the Red Hat OpenShift GitOps Operator does not remove the resources that were created during installation. You must manually delete the RolloutManager custom resource (CR) before you uninstall the Red Hat OpenShift GitOps Operator.

Prerequisites

  • Red Hat OpenShift GitOps 1.9.0 or a newer version is installed in your cluster.
  • A RolloutManager CR exists in your namespace.

Procedure

  1. Log in to the OpenShift Container Platform web console as a cluster administrator.
  2. In the Administrator perspective, click Operators → Installed Operators.
  3. Click the Project drop-down menu and select the project that contains the RolloutManager CR.
  4. Select Red Hat OpenShift GitOps from the installed operators.
  5. Click the RolloutManager tab to find RolloutManager instances under the RolloutManagers section.
  6. Click the instance.
  7. Click Actions → Delete RolloutManager from the drop-down menu, and click Delete to confirm in the dialog box.
  8. In the RolloutManager tab, under the RolloutManagers section, verify that the RolloutManager instance is not available anymore.
  9. In the left navigation pane, verify the deletion of the namespace-scoped supporting resources:

    • Click Workloads → Deployments to verify that the argo-rollouts deployment is deleted.
    • Click Workloads → Secrets to verify that the argo-rollouts-notification-secret secret is deleted.
    • Click Networking → Services to verify that the argo-rollouts-metrics service is deleted.
    • Click User Management → Roles to verify that the argo-rollouts role and argo-rollouts-aggregate-to-admin, argo-rollouts-aggregate-to-edit, and argo-rollouts-aggregate-to-view cluster roles are deleted.
    • Click User Management → RoleBindings to verify that the argo-rollouts role binding is deleted.

2.4. Installing Argo Rollouts CLI on Linux

You can install the Argo Rollouts CLI on Linux.

Prerequisites

  • You have installed the OpenShift Container Platform CLI (oc).

Procedure

  1. Download the latest version of the Argo Rollouts CLI binary, kubectl-argo-rollouts, by running the following command:

    $ curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
  2. Ensure that the kubectl-argo-rollouts binary is executable by running the following command:

    $ chmod +x ./kubectl-argo-rollouts-linux-amd64
  3. Move the kubectl-argo-rollouts binary to the system path by running the following command:

    # mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
    Important

    Ensure that you have superuser privileges to run this command.

  4. Verify that the plugin is installed correctly by running the following command and checking that you receive output similar to the following example:

    $ oc argo rollouts version

    Example output

    kubectl-argo-rollouts: v1.6.6+737ca89
      BuildDate: 2024-02-13T15:39:31Z 1
      GitCommit: 737ca89b42e4791e96e05b438c2b8540737a2a1a
      GitTreeState: clean
      GoVersion: go1.20.14 2
      Compiler: gc
      Platform: linux/amd64 3

    1
    The build date information of the Argo Rollouts binary.
    2
    The version of the Go language used for building the Argo Rollouts binary.
    3
    The platform used for building the Argo Rollouts binary.

2.5. Installing Argo Rollouts CLI on macOS

If you are a macOS user, you can install the Argo Rollouts CLI by using the Homebrew package manager.

Prerequisites

  • You have installed the Homebrew (brew) package manager.

Procedure

  • Run the following command to install the Argo Rollouts CLI:

    $ brew install argoproj/tap/kubectl-argo-rollouts

2.6. Additional resources

Chapter 3. Getting started with Argo Rollouts

Argo Rollouts supports canary and blue-green deployment strategies. This guide provides instructions and examples that use a canary deployment strategy to help you deploy, update, promote, and manually abort rollouts.

With a canary-based deployment strategy, you split traffic between two application versions:

  • Canary version: A new version of an application to which you gradually route traffic.
  • Stable version: The current version of an application. After the canary version is stable and has all the user traffic directed to it, it becomes the new stable version. The previous stable version is discarded.

3.1. Prerequisites

  • You have logged in to the OpenShift Container Platform cluster as an administrator.
  • You have access to the OpenShift Container Platform web console.
  • You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
  • You have installed Argo Rollouts on your OpenShift Container Platform cluster.
  • You have installed the Argo Rollouts CLI on your system.

3.2. Deploying a rollout

As a cluster administrator, you can configure Argo Rollouts to progressively route a subset of user traffic to a new application version. Then you can test whether the application is deployed and working.

The following example procedure creates a rollouts-demo rollout and service. The rollout then routes 20% of traffic to a canary version of the application, waits for a manual promotion, and then performs multiple automated promotions until it routes all traffic to the new application version.

Procedure

  1. In the Administrator perspective of the web console, click Operators → Installed Operators → Red Hat OpenShift GitOps → Rollout.
  2. Create or select the project in which you want to create and configure a Rollout custom resource (CR) from the Project drop-down menu.
  3. Click Create Rollout and enter the following configuration in the YAML view:

    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: rollouts-demo
    spec:
      replicas: 5
      strategy:
        canary: 1
          steps: 2
          - setWeight: 20 3
          - pause: {}  4
          - setWeight: 40
          - pause: {duration: 45}  5
          - setWeight: 60
          - pause: {duration: 20}
          - setWeight: 80
          - pause: {duration: 10}
      revisionHistoryLimit: 2
      selector:
        matchLabels:
          app: rollouts-demo
      template: 6
        metadata:
          labels:
            app: rollouts-demo
        spec:
          containers:
          - name: rollouts-demo
            image: argoproj/rollouts-demo:blue
            ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            resources:
              requests:
                memory: 32Mi
                cpu: 5m
    1
    The deployment strategy that the rollout must use.
    2
    Specify the steps for the rollout. This example gradually routes 20%, 40%, 60%, and 80% of traffic to the canary version.
    3
    The percentage of traffic that must be directed to the canary version. A value of 20 means that 20% of traffic is directed to the canary version.
    4
    Instructs the Argo Rollouts controller to pause the rollout indefinitely until it receives a request for promotion.
    5
    Instructs the Argo Rollouts controller to pause the rollout for 45 seconds. You can set the duration value in seconds (s), minutes (m), or hours (h). For example, you can specify 1h for one hour. If no unit is specified, the duration value defaults to seconds.
    6
    Specifies the pods that are to be created.
  4. Click Create.

    Note

    To ensure that the rollout becomes available quickly on creation, the Argo Rollouts controller automatically treats the argoproj/rollouts-demo:blue initial container image specified in the .spec.template.spec.containers.image field as a stable version. In the initial instance, the creation of the Rollout resource routes all of the traffic towards the stable version of the application and skips the part where the traffic is sent to the canary version. However, for all subsequent application upgrades with the modifications to the .spec.template.spec.containers.image field, the Argo Rollouts controller performs the canary steps, as usual.

  5. Verify that your rollout was created correctly by running the following command:

    $ oc argo rollouts list rollouts -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    Example output

    NAME           STRATEGY   STATUS        STEP  SET-WEIGHT  READY  DESIRED  UP-TO-DATE  AVAILABLE
    rollouts-demo  Canary     Healthy       8/8   100         5/5    5        5           5

  6. Create the Kubernetes service that targets the rollouts-demo rollout.

    1. In the Administrator perspective of the web console, click Networking → Services.
    2. Click Create Service and enter the following configuration in the YAML view:

      apiVersion: v1
      kind: Service
      metadata:
        name: rollouts-demo
      spec:
        ports: 1
        - port: 80
          targetPort: http
          protocol: TCP
          name: http
      
        selector: 2
          app: rollouts-demo
      1
      Specifies the name of the port used by the application for running inside the container.
      2
      Ensure that the contents of the selector field are the same as in the Rollout custom resource (CR).
    3. Click Create.

      Rollouts automatically update the created service with the pod template hash of the canary ReplicaSet. For example, rollouts-pod-template-hash: 687d76d795.

  7. Watch the progression of your rollout by running the following command:

    $ oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ✔ Healthy
    Strategy:        Canary
      Step:          8/8
      SetWeight:     100
      ActualWeight:  100
    Images:          argoproj/rollouts-demo:blue (stable)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       5
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS     AGE    INFO
    ⟳ rollouts-demo                            Rollout     ✔ Healthy  4m50s
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  ✔ Healthy  4m50s  stable
          ├──□ rollouts-demo-687d76d795-75k57  Pod         ✔ Running  4m49s  ready:1/1
          ├──□ rollouts-demo-687d76d795-bv5zf  Pod         ✔ Running  4m49s  ready:1/1
          ├──□ rollouts-demo-687d76d795-jsxg8  Pod         ✔ Running  4m49s  ready:1/1
          ├──□ rollouts-demo-687d76d795-rsgtv  Pod         ✔ Running  4m49s  ready:1/1
          └──□ rollouts-demo-687d76d795-xrmrj  Pod         ✔ Running  4m49s  ready:1/1

    After the rollout has been created, you can verify that the Status field of the rollout shows Phase: Healthy.

  8. In the Rollout tab, under the Rollouts section, verify that the Status field of the rollouts-demo rollout shows as Phase: Healthy.

    Tip

    Alternatively, you can verify that the rollout is healthy by running the following command:

    $ oc argo rollouts status rollouts-demo -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    Example output

    Healthy

You are now ready to perform a canary deployment, with the next update of the Rollout CR.

3.3. Updating the rollout

When you update the Rollout custom resource (CR) with modifications to the .spec.template.spec fields, for example, the container image version, new pods are created through a new ReplicaSet that uses the updated container image version.
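
For example, only the image value in the pod template changes from the Rollout CR created in the previous section; the rest of the CR stays the same:

Example: updated container image in the Rollout CR

spec:
  replicas: 5
  # (...)
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
      - name: rollouts-demo
        image: argoproj/rollouts-demo:yellow # changed from argoproj/rollouts-demo:blue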

Procedure

  1. Simulate the new canary version of the application by modifying the container image deployed in the rollout.

    1. In the Administrator perspective of the web console, go to Operators → Installed Operators → Red Hat OpenShift GitOps → Rollout.
    2. Select the existing rollouts-demo rollout and modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:blue to argoproj/rollouts-demo:yellow in the YAML view.
    3. Click Save and then click Reload.

      The container image deployed in the rollout is modified and the rollout initiates a new canary deployment.

      Note

      As per the setWeight property defined in the .spec.strategy.canary.steps field of the Rollout CR, initially 20% of traffic reaches the canary version, and the rollout is paused indefinitely until a request for promotion is received.

      Example rollout configuration that directs 20% of traffic to the canary version and then pauses indefinitely until a promotion request is received in a subsequent step

      spec:
        replicas: 5
        strategy:
          canary: 1
            steps: 2
            - setWeight: 20 3
            - pause: {}  4
        # (...)

      1
      The deployment strategy that the rollout must use.
      2
      The steps for the rollout. This example gradually routes 20%, 40%, 60%, and 80% of traffic to the canary version.
      3
      The percentage of traffic that must be directed to the canary version. A value of 20 means that 20% of traffic is directed to the canary version.
      4
      Instructs the Argo Rollouts controller to pause the rollout indefinitely until it receives a request for promotion.
  2. Watch the progression of your rollout by running the following command:

    $ oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ॥ Paused
    Message:         CanaryPauseStep
    Strategy:        Canary
      Step:          1/8
      SetWeight:     20
      ActualWeight:  20
    Images:          argoproj/rollouts-demo:blue (stable)
                     argoproj/rollouts-demo:yellow (canary)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       1
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS     AGE    INFO
    ⟳ rollouts-demo                            Rollout     ॥ Paused   9m51s
    ├──# revision:2
    │  └──⧉ rollouts-demo-6cf78c66c5           ReplicaSet  ✔ Healthy  99s    canary
    │     └──□ rollouts-demo-6cf78c66c5-zrgd4  Pod         ✔ Running  98s    ready:1/1
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  ✔ Healthy  9m51s  stable
          ├──□ rollouts-demo-687d76d795-75k57  Pod         ✔ Running  9m50s  ready:1/1
          ├──□ rollouts-demo-687d76d795-jsxg8  Pod         ✔ Running  9m50s  ready:1/1
          ├──□ rollouts-demo-687d76d795-rsgtv  Pod         ✔ Running  9m50s  ready:1/1
          └──□ rollouts-demo-687d76d795-xrmrj  Pod         ✔ Running  9m50s  ready:1/1

    The rollout is now in a paused status because the pause step in the rollout's update strategy does not specify a duration.

  3. Repeat the previous step as needed to monitor the rollout, and test the newly deployed version of the application to ensure that it is working as expected. For example, verify the application by interacting with it through the browser, running tests, or observing container logs.

    The rollout will remain paused until you advance it to the next step.

After you verify that the new version of the application is working as expected, you can decide whether to continue with promotion or to abort the rollout. Accordingly, follow the instructions in "Promoting the rollout" or "Manually aborting the rollout".

3.4. Promoting the rollout

Because your rollout is in a paused status, as a cluster administrator you must manually promote the rollout to allow it to progress to the next step.

Procedure

  1. Manually promote the rollout by running the following command in the Argo Rollouts CLI:

    $ oc argo rollouts promote rollouts-demo -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    Example output

    rollout 'rollouts-demo' promoted

    This increases the traffic weight to 40% in the canary version.

  2. Verify that the rollout progresses through the rest of the steps, by running the following command:

    $ oc argo rollouts get rollout rollouts-demo -n <namespace> --watch 1
    1
    Specify the namespace where the Rollout resource is defined.

    Because the rest of the steps defined in the Rollout CR have set durations, for example, pause: {duration: 45}, the Argo Rollouts controller waits for that duration and then automatically moves to the next step.

    After all steps are completed successfully, the new ReplicaSet object is marked as the stable replica set.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ✔ Healthy
    Strategy:        Canary
      Step:          8/8
      SetWeight:     100
      ActualWeight:  100
    Images:          argoproj/rollouts-demo:yellow (stable)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       5
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS        AGE   INFO
    ⟳ rollouts-demo                            Rollout     ✔ Healthy     14m
    ├──# revision:2
    │  └──⧉ rollouts-demo-6cf78c66c5           ReplicaSet  ✔ Healthy     6m5s  stable
    │     ├──□ rollouts-demo-6cf78c66c5-zrgd4  Pod         ✔ Running     6m4s  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-g9kd5  Pod         ✔ Running     2m4s  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-2ptpp  Pod         ✔ Running     78s   ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-tmk6c  Pod         ✔ Running     58s   ready:1/1
    │     └──□ rollouts-demo-6cf78c66c5-zv6lx  Pod         ✔ Running     47s   ready:1/1
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  • ScaledDown  14m

3.5. Manually aborting the rollout

When using a canary deployment, the rollout deploys an initial canary version of the application. You can verify it either manually or programmatically. After you verify the canary version and promote it to stable, the new stable version is made available to all users.

However, sometimes bugs, errors, or deployment issues are discovered in the canary version, and you might want to abort the canary rollout and roll back to the stable version of your application.

Aborting a canary rollout deletes the resources of the new canary version and restores the previous stable version of your application. All network traffic, such as traffic through an ingress, route, or virtual service, that was directed to the canary version returns to the original stable version.

The following example procedure deploys a new red canary version of your application, and then aborts it before it is fully promoted to stable.

Procedure

  1. Update the container image version and modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:yellow to argoproj/rollouts-demo:red by running the following command in the Argo Rollouts CLI:

    $ oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:red -n <namespace> 1
    1
    Specify the namespace where the Rollout custom resource (CR) is defined.

    Example output

    rollout "rollouts-demo" image updated

    The container image deployed in the rollout is modified and the rollout initiates a new canary deployment.

  2. Wait for the rollout to reach the paused status.
  3. Verify that the rollout deploys the rollouts-demo:red canary version and reaches the paused status by running the following command:

    $ oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ॥ Paused
    Message:         CanaryPauseStep
    Strategy:        Canary
      Step:          1/8
      SetWeight:     20
      ActualWeight:  20
    Images:          argoproj/rollouts-demo:red (canary)
                     argoproj/rollouts-demo:yellow (stable)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       1
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS        AGE    INFO
    ⟳ rollouts-demo                            Rollout     ॥ Paused      17m
    ├──# revision:3
    │  └──⧉ rollouts-demo-5747959bdb           ReplicaSet  ✔ Healthy     75s    canary
    │     └──□ rollouts-demo-5747959bdb-fdrsg  Pod         ✔ Running     75s    ready:1/1
    ├──# revision:2
    │  └──⧉ rollouts-demo-6cf78c66c5           ReplicaSet  ✔ Healthy     9m45s  stable
    │     ├──□ rollouts-demo-6cf78c66c5-zrgd4  Pod         ✔ Running     9m44s  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-2ptpp  Pod         ✔ Running     4m58s  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-tmk6c  Pod         ✔ Running     4m38s  ready:1/1
    │     └──□ rollouts-demo-6cf78c66c5-zv6lx  Pod         ✔ Running     4m27s  ready:1/1
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  • ScaledDown  17m

  4. Abort the update of the rollout by running the following command:

    $ oc argo rollouts abort rollouts-demo -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    rollout 'rollouts-demo' aborted

    The Argo Rollouts controller deletes the canary resources of the application, and rolls back to the stable version.

  5. Verify that, after aborting the rollout, the canary ReplicaSet is scaled down to 0 replicas by running the following command:

    $ oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ✖ Degraded
    Message:         RolloutAborted: Rollout aborted update to revision 3
    Strategy:        Canary
      Step:          0/8
      SetWeight:     0
      ActualWeight:  0
    Images:          argoproj/rollouts-demo:yellow (stable)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       0
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS        AGE    INFO
    ⟳ rollouts-demo                            Rollout     ✖ Degraded    24m
    ├──# revision:3
    │  └──⧉ rollouts-demo-5747959bdb           ReplicaSet  • ScaledDown  7m38s  canary
    ├──# revision:2
    │  └──⧉ rollouts-demo-6cf78c66c5           ReplicaSet  ✔ Healthy     16m    stable
    │     ├──□ rollouts-demo-6cf78c66c5-zrgd4  Pod         ✔ Running     16m    ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-2ptpp  Pod         ✔ Running     11m    ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-tmk6c  Pod         ✔ Running     11m    ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-zv6lx  Pod         ✔ Running     10m    ready:1/1
    │     └──□ rollouts-demo-6cf78c66c5-mlbsh  Pod         ✔ Running     4m47s  ready:1/1
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  • ScaledDown  24m

    The rollout status is marked as Degraded, indicating that even though the application has rolled back to the previous stable version, yellow, the rollout is not currently running the wanted version, red, that is set in the .spec.template.spec.containers.image field.

    Note

    The Degraded status does not reflect the health of the application. It only indicates that there is a mismatch between the wanted and running container image versions.

  6. Update the container image version to the previous stable version, yellow, and modify the .spec.template.spec.containers.image value by running the following command:

    $ oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    rollout "rollouts-demo" image updated

    The rollout skips the analysis and promotion steps, rolls back to the previous stable version, yellow, and fast-tracks the deployment of the stable ReplicaSet.

  7. Verify that the rollout status is immediately marked as Healthy by running the following command:

    $ oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1
    1
    Specify the namespace where the Rollout CR is defined.

    Example output

    Name:            rollouts-demo
    Namespace:       spring-petclinic
    Status:          ✔ Healthy
    Strategy:        Canary
      Step:          8/8
      SetWeight:     100
      ActualWeight:  100
    Images:          argoproj/rollouts-demo:yellow (stable)
    Replicas:
      Desired:       5
      Current:       5
      Updated:       5
      Ready:         5
      Available:     5
    
    NAME                                       KIND        STATUS        AGE  INFO
    ⟳ rollouts-demo                            Rollout     ✔ Healthy     63m
    ├──# revision:4
    │  └──⧉ rollouts-demo-6cf78c66c5           ReplicaSet  ✔ Healthy     55m  stable
    │     ├──□ rollouts-demo-6cf78c66c5-zrgd4  Pod         ✔ Running     55m  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-2ptpp  Pod         ✔ Running     50m  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-tmk6c  Pod         ✔ Running     50m  ready:1/1
    │     ├──□ rollouts-demo-6cf78c66c5-zv6lx  Pod         ✔ Running     50m  ready:1/1
    │     └──□ rollouts-demo-6cf78c66c5-mlbsh  Pod         ✔ Running     44m  ready:1/1
    ├──# revision:3
    │  └──⧉ rollouts-demo-5747959bdb           ReplicaSet  • ScaledDown  46m
    └──# revision:1
       └──⧉ rollouts-demo-687d76d795           ReplicaSet  • ScaledDown  63m

3.6. Additional resources

Chapter 4. Routing traffic by using Argo Rollouts

You can progressively route a subset of user traffic to a new application version by using Argo Rollouts and its traffic-splitting mechanisms. Then you can test whether the application is deployed and working.

With OpenShift Routes, you can configure Argo Rollouts to reduce or increase the amount of traffic by directing it to various applications in a cluster environment based on your requirements.

You can use OpenShift Routes to split traffic between two application versions:

  • Canary version: A new version of an application to which you gradually route traffic.
  • Stable version: The current version of an application. After the canary version is stable and has all the user traffic directed to it, it becomes the new stable version. The previous stable version is discarded.

4.1. Prerequisites

4.2. Configuring Argo Rollouts to route traffic by using OpenShift Routes

You can configure Argo Rollouts to split traffic by using OpenShift Routes. To do so, you create a route, a rollout, and the services that the route targets.

The following example procedure creates a route, a rollout, and two services. It then gradually routes an increasing percentage of traffic to a canary version of the application before that canary state is marked as successful and becomes the new stable version.

Prerequisites

  • You have logged in to the OpenShift Container Platform cluster as an administrator.
  • You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
  • You have installed Argo Rollouts on your OpenShift Container Platform cluster. For more information, see "Creating a RolloutManager custom resource".
  • You have installed the Red Hat OpenShift GitOps CLI on your system. For more information, see "Installing the GitOps CLI".
  • You have installed the Argo Rollouts CLI on your system. For more information, see "Argo Rollouts CLI overview".

Procedure

  1. Create a Route object.

    1. In the Administrator perspective of the web console, click Networking → Routes.
    2. Click Create Route.
    3. On the Create Route page, click YAML view and add the following snippet: The following example creates a route called rollouts-demo-route:

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: rollouts-demo-route
      spec:
        port:
          targetPort: http 1
        tls: 2
          insecureEdgeTerminationPolicy: Redirect
          termination: edge
        to:
          kind: Service
          name: argo-rollouts-stable-service 3
          weight: 100 4
      
        alternateBackends:
          - kind: Service
            name: argo-rollouts-canary-service 5
            weight: 0 6
      1
      Specifies the name of the port used by the application for running inside the container.
      2
      Specifies the TLS configuration used to secure the route.
      3
      The name of the targeted stable service.
      4
      This field is automatically updated to the stable weight by the Route Rollout plugin.
      5
      The name of the targeted canary service.
      6
      This field is automatically updated to the canary weight by the Route Rollout plugin.
    4. Click Create to create the route. It is then displayed on the Routes page.
  2. Create the services, canary and stable, to be referenced in the route.

    1. In the Administrator perspective of the web console, click Networking → Services.
    2. Click Create Service.
    3. On the Create Service page, click YAML view and add the following snippet: The following example creates a canary service called argo-rollouts-canary-service. Canary traffic is directed to this service.

      apiVersion: v1
      kind: Service
      metadata:
        name: argo-rollouts-canary-service
      spec:
        ports: 1
        - port: 80
          targetPort: http
          protocol: TCP
          name: http
      
        selector: 2
          app: rollouts-demo
      1
      Specifies the name of the port used by the application for running inside the container.
      2
      Ensure that the contents of the selector field are the same as in the stable service and the Rollout custom resource (CR).
      Important

      Ensure that the name of the canary service specified in the Route object matches the name of the canary service specified in the Service object.

    4. Click Create to create the canary service.

      Rollouts automatically update the created service with the pod template hash of the canary ReplicaSet. For example, rollouts-pod-template-hash: 7bf84f9696.

    5. Repeat these steps to create the stable service: The following example creates a stable service called argo-rollouts-stable-service. Stable traffic is directed to this service.

      apiVersion: v1
      kind: Service
      metadata:
        name: argo-rollouts-stable-service
      spec:
        ports: 1
        - port: 80
          targetPort: http
          protocol: TCP
          name: http
      
        selector: 2
          app: rollouts-demo
      1
      Specifies the name of the port used by the application for running inside the container.
      2
      Ensure that the contents of the selector field are the same as in the canary service and the Rollout CR.
      Important

      Ensure that the name of the stable service specified in the Route object matches the name of the stable service specified in the Service object.

    6. Click Create to create the stable service.

      Rollouts automatically update the created service with the pod template hash of the stable ReplicaSet. For example, rollouts-pod-template-hash: 1b6a7733.

  3. Create the Rollout CR to reference the Route and Service objects.

    1. In the Administrator perspective of the web console, go to Operators → Installed Operators → Red Hat OpenShift GitOps → Rollout.
    2. On the Create Rollout page, click YAML view and add the following snippet: The following example creates a Rollout CR called rollouts-demo:

      apiVersion: argoproj.io/v1alpha1
      kind: Rollout
      metadata:
        name: rollouts-demo
      spec:
        template: 1
          metadata:
            labels:
              app: rollouts-demo
          spec:
            containers:
            - name: rollouts-demo
              image: argoproj/rollouts-demo:blue
              ports:
              - name: http
                containerPort: 8080
                protocol: TCP
              resources:
                requests:
                  memory: 32Mi
                  cpu: 5m
      
        revisionHistoryLimit: 2
        replicas: 5
        strategy:
          canary:
            canaryService: argo-rollouts-canary-service 2
            stableService: argo-rollouts-stable-service 3
            trafficRouting:
              plugins:
                argoproj-labs/openshift:
                  routes:
                    - rollouts-demo-route  4
            steps: 5
            - setWeight: 30
            - pause: {}
            - setWeight: 60
            - pause: {}
        selector: 6
          matchLabels:
            app: rollouts-demo
      1
      Specifies the pods that are to be created.
      2
      This value must match the name of the created canary Service.
      3
      This value must match the name of the created stable Service.
      4
      This value must match the name of the created Route CR.
      5
      Specify the steps for the rollout. This example gradually routes 30%, 60%, and 100% of traffic to the canary version.
      6
      Ensure that the contents of the selector field are the same as in the canary and stable services.
    3. Click Create.
    4. In the Rollout tab, under the Rollout section, verify that the Status field of the rollout shows Phase: Healthy.
  4. Verify that the route is directing 100% of the traffic towards the stable version of the application.

    Note

    When the first instance of the Rollout resource is created, the rollout regulates the amount of traffic to be directed towards the stable and canary application versions. In the initial instance, the creation of the Rollout resource routes all of the traffic towards the stable version of the application and skips the part where the traffic is sent to the canary version.

    1. Go to Networking → Routes and look for the Route resource you want to verify.
    2. Select the YAML tab and view the following snippet:

      Example: Route

      kind: Route
      metadata:
        name: rollouts-demo-route
      spec:
        alternateBackends:
        - kind: Service
          name: argo-rollouts-canary-service
          weight: 0 1
        # (...)
        to:
          kind: Service
          name: argo-rollouts-stable-service
          weight: 100 2

      1
      A value of 0 means that 0% of traffic is directed to the canary version.
      2
      A value of 100 means that 100% of traffic is directed to the stable version.
  5. Simulate the new canary version of the application by modifying the container image deployed in the rollout.

    1. In the Administrator perspective of the web console, go to Operators → Installed Operators → Red Hat OpenShift GitOps → Rollout.
    2. Select the existing Rollout and modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:blue to argoproj/rollouts-demo:yellow.

      As a result, the container image deployed in the rollout is modified and the rollout initiates a new canary deployment.

      Note

      As per the setWeight property defined in the .spec.strategy.canary.steps field of the Rollout resource, initially 30% of traffic to the route reaches the canary version and 70% of traffic is directed towards the stable version. The rollout is paused after 30% of traffic is directed to the canary version.

      Example route with 30% of traffic directed to the canary version and 70% directed to the stable version.

      spec:
        alternateBackends:
        - kind: Service
          name: argo-rollouts-canary-service
          weight: 30
        # (...)
        to:
          kind: Service
          name: argo-rollouts-stable-service
          weight: 70

  6. Promote the rollout by running the following command in the Argo Rollouts CLI:

    $ oc argo rollouts promote rollouts-demo -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    This increases the traffic weight to 60% in the canary version and 40% in the stable version.

    Example route with 60% of traffic directed to the canary version and 40% directed to the stable version.

    spec:
      alternateBackends:
      - kind: Service
        name: argo-rollouts-canary-service
        weight: 60
      # (...)
      to:
        kind: Service
        name: argo-rollouts-stable-service
        weight: 40

  7. Increase the traffic weight in the canary version to 100% and stop directing traffic to the old stable version of the application by running the following command:

    $ oc argo rollouts promote rollouts-demo -n <namespace> 1
    1
    Specify the namespace where the Rollout resource is defined.

    Example route with 0% of traffic directed to the canary version and 100% directed to the stable version.

    spec:
      # (...)
      to:
        kind: Service
        name: argo-rollouts-stable-service
        weight: 100

4.3. Additional resources

Chapter 5. Enabling support for a namespace-scoped Argo Rollouts installation

Red Hat OpenShift GitOps enables support for two modes of Argo Rollouts installations:

  • Cluster-scoped installation (default): The Argo Rollouts custom resources (CRs) defined in any namespace are reconciled by the Argo Rollouts instance. As a result, you can use Argo Rollouts CRs in any namespace on the cluster.
  • Namespace-scoped installation: The Argo Rollouts instance is installed in a specific namespace and only handles Argo Rollouts CRs within the same namespace. This installation mode includes the following benefits:

    • This mode does not require cluster-wide ClusterRole or ClusterRoleBinding permissions. You can install and use Argo Rollouts within a single namespace without requiring cluster permissions.
    • This mode provides security benefits by limiting the cluster scope of a single Argo Rollouts instance to a specific namespace.
Note

To prevent unintended privilege escalation, Red Hat OpenShift GitOps allows only one mode of Argo Rollouts installation at a time.

To switch between cluster-scoped and namespace-scoped Argo Rollouts installations, complete the following steps.

5.1. Configuring a namespace-scoped Argo Rollouts installation

To configure a namespace-scoped instance of Argo Rollouts installation, complete the following steps.

Prerequisites

  • You are logged in to the OpenShift Container Platform cluster as an administrator.
  • You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.

Procedure

  1. In the Administrator perspective of the web console, go to Administration → CustomResourceDefinitions.
  2. Search for Subscription and click the Subscription CRD.
  3. Click the Instances tab and then click the openshift-gitops-operator subscription.
  4. Click the YAML tab and edit the YAML file.

    1. Specify the NAMESPACE_SCOPED_ARGO_ROLLOUTS environment variable, with the value set to true in the .spec.config.env property.

      Example of configuring the namespace-scoped Argo Rollouts installation

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-gitops-operator
      spec:
        # (...)
        config:
          env:
            - name: NAMESPACE_SCOPED_ARGO_ROLLOUTS
              value: 'true' 1

      1
      The value set to 'true' enables the namespace-scoped installation. If the value is set to 'false' or is not specified, the installation defaults to cluster-scoped mode.
    2. Click Save.

      The Red Hat OpenShift GitOps Operator facilitates the reconciliation of the Argo Rollouts custom resource within a namespace-scoped installation.

  5. Verify that the Red Hat OpenShift GitOps Operator has enabled the namespace-scoped Argo Rollouts installation by viewing the logs of the Operator pod:

    1. In the Administrator perspective of the web console, go to Workloads → Pods.
    2. Click the openshift-gitops-operator-controller-manager pod, and then click the Logs tab.
    3. Look for the following log statement: Running in namespaced-scoped mode. This statement indicates that the Red Hat OpenShift GitOps Operator has enabled the namespace-scoped Argo Rollouts installation.
  6. Create a RolloutManager resource to complete the namespace-scoped Argo Rollouts installation:

    1. Go to Operators → Installed Operators → Red Hat OpenShift GitOps, and click the RolloutManager tab.
    2. Click Create RolloutManager.
    3. Select YAML view and enter the following snippet:

      Example RolloutManager CR for a namespace-scoped Argo Rollouts installation

      apiVersion: argoproj.io/v1alpha1
      kind: RolloutManager
      metadata:
        name: rollout-manager
        namespace: my-application 1
      spec:
        namespaceScoped: true

      1
      Specify the name of the project where you want to install the namespace-scoped Argo Rollouts instance.
    4. Click Create.

      After the RolloutManager CR is created, Red Hat OpenShift GitOps begins to install the namespace-scoped Argo Rollouts instance into the selected namespace.

  7. Verify that the namespace-scoped installation is successful.

    1. In the RolloutManager tab, under the RolloutManagers section, ensure that the Status field of the RolloutManager instance is Phase: Available.
    2. Examine the following output in the YAML tab under the RolloutManagers section to ensure that the installation is successful:

      Example of namespace-scoped Argo Rollouts installation YAML file

      spec:
        namespaceScoped: true
      status:
        conditions:
          lastTransitionTime: '2024-07-10T14:20:5Z'
          message: ''
          reason: Success
          status: 'True' 1
          type: 'Reconciled'
        phase: Available
        rolloutController: Available

      1
      This status indicates that the namespace-scoped Argo Rollouts installation is enabled successfully.

      If you try to create a namespace-scoped Argo Rollouts instance while the Operator is configured for a cluster-scoped installation, an error message is displayed:

      Example of an incorrect installation with an error message

      spec:
        namespaceScoped: true
      status:
        conditions:
         lastTransitionTime: '2024-07-10T14:10:7Z'
         message: 'when Subscription has environment variable NAMESPACE_SCOPED_ARGO_ROLLOUTS set to False, there may not exist any namespace-scoped RolloutManagers: only a single cluster-scoped RolloutManager is supported'
         reason: InvalidRolloutManagerScope
         status: 'False' 1
         type: 'Reconciled'
        phase: Failure
        rolloutController: Failure

      1
      This status indicates that the namespace-scoped Argo Rollouts installation is not enabled successfully. The installation defaults to cluster-scoped mode.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.