Chapter 9. Deployments


9.1. How Deployments Work

9.1.1. What Is a Deployment?

OpenShift Container Platform deployments provide fine-grained management over common user applications. They are described using three separate API objects:

  • A deployment configuration, which describes the desired state of a particular component of the application as a pod template.
  • One or more replication controllers, which contain a point-in-time record of the state of a deployment configuration as a pod template.
  • One or more pods, which represent an instance of a particular version of an application.
Important

Users do not need to manipulate replication controllers or pods owned by deployment configurations. The deployment system ensures changes to deployment configurations are propagated appropriately. If the existing deployment strategies are not suited for your use case and you need to run manual steps during the lifecycle of your deployment, then you should consider creating a custom strategy.

When you create a deployment configuration, a replication controller is created representing the deployment configuration’s pod template. If the deployment configuration changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new replication controller.

Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.

Features provided by the deployment system:

  • A deployment configuration, which is a template for running applications.
  • Triggers that drive automated deployments in response to events.
  • User-customizable strategies to transition from the previous version to the new version. A strategy runs inside a pod, commonly referred to as the deployment process.
  • A set of hooks for executing custom behavior in different points during the lifecycle of a deployment.
  • Versioning of your application in order to support rollbacks either manually or automatically in case of deployment failure.
  • Manual replication scaling and autoscaling.

9.1.2. Creating a Deployment Configuration

Deployment configurations are deploymentConfig OpenShift Container Platform API resources which can be managed with the oc command like any other resource. The following is an example of a deploymentConfig resource:

kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
  name: "frontend"
spec:
  template: 1
    metadata:
      labels:
        name: "frontend"
    spec:
      containers:
        - name: "helloworld"
          image: "openshift/origin-ruby-sample"
          ports:
            - containerPort: 8080
              protocol: "TCP"
  replicas: 5 2
  triggers:
    - type: "ConfigChange" 3
    - type: "ImageChange" 4
      imageChangeParams:
        automatic: true
        containerNames:
          - "helloworld"
        from:
          kind: "ImageStreamTag"
          name: "origin-ruby-sample:latest"
  strategy: 5
    type: "Rolling"
  paused: false 6
  revisionHistoryLimit: 2 7
  minReadySeconds: 0 8
1
The pod template of the frontend deployment configuration describes a simple Ruby application.
2
There will be 5 replicas of frontend.
3
A configuration change trigger causes a new replication controller to be created any time the pod template changes.
4
An image change trigger causes a new replication controller to be created each time a new version of the origin-ruby-sample:latest image stream tag is available.
5
The Rolling strategy is the default way of deploying your pods. May be omitted.
6
Pause a deployment configuration. This disables the functionality of all triggers and allows for multiple changes on the pod template before actually rolling it out.
7
Revision history limit is the limit of old replication controllers you want to keep around for rolling back. May be omitted. If omitted, old replication controllers will not be cleaned up.
8
Minimum seconds to wait (after the readiness checks succeed) for a pod to be considered available. The default value is 0.
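
For example, assuming the definition above is saved to a local file such as frontend-dc.yaml (an illustrative file name), you can create the deployment configuration with the CLI:

$ oc create -f frontend-dc.yaml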

9.2. Basic Deployment Operations

9.2.1. Starting a Deployment

You can start a new deployment process manually using the web console, or from the CLI:

$ oc rollout latest dc/<name>
Note

If a deployment process is already in progress, the command will display a message and a new replication controller will not be deployed.

9.2.2. Viewing a Deployment

To get basic information about all the available revisions of your application:

$ oc rollout history dc/<name>

This will show details about all recently created replication controllers for the provided deployment configuration, including any currently running deployment process.

You can view details specific to a revision by using the --revision flag:

$ oc rollout history dc/<name> --revision=1

For more detailed information about a deployment configuration and its latest revision:

$ oc describe dc <name>
Note

The web console shows deployments in the Browse tab.

9.2.3. Retrying a Deployment

If the current revision of your deployment configuration failed to deploy, you can restart the deployment process with:

$ oc rollout retry dc/<name>

If the latest revision of it was deployed successfully, the command will display a message and the deployment process will not be retried.

Note

Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller will have the same configuration it had when it failed.

9.2.4. Rolling Back a Deployment

Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.

To rollback to the last successful deployed revision of your configuration:

$ oc rollout undo dc/<name>

The deployment configuration’s template will be reverted to match the deployment revision specified in the undo command, and a new replication controller will be started. If no revision is specified with --to-revision, then the last successfully deployed revision will be used.
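
For example, to roll back the frontend deployment configuration to a specific revision (the revision number here is illustrative):

$ oc rollout undo dc/frontend --to-revision=1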

Image change triggers on the deployment configuration are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers:

$ oc set triggers dc/<name> --auto
Note

Deployment configurations also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.

9.2.5. Executing Commands Inside a Container

You can add a command to a container, which modifies the container’s startup behavior by overruling the image’s ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.

Add the command parameters to the spec field of the deployment configuration. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).

...
spec:
  containers:
    - name: <container_name>
      image: 'image'
      command:
        - '<command>'
      args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
...

For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:

...
spec:
  containers:
    - name: example-spring-boot
      image: 'image'
      command:
        - java
      args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
...

9.2.6. Viewing Deployment Logs

To stream the logs of the latest revision for a given deployment configuration:

$ oc logs -f dc/<name>

If the latest revision is running or failed, oc logs will return the logs of the process that is responsible for deploying your pods. If it is successful, oc logs will return the logs from a pod of your application.

You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually:

$ oc logs --version=1 dc/<name>

For more options on retrieving logs see:

$ oc logs --help

9.2.7. Setting Deployment Triggers

A deployment configuration can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.

Warning

If no triggers are defined on a deployment configuration, a ConfigChange trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
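
For illustration, a minimal sketch of a deployment configuration spec with triggers explicitly set to an empty list, which disables automatic deployments:

spec:
  triggers: []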

9.2.7.1. Configuration Change Trigger

The ConfigChange trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration.

Note

If a ConfigChange trigger is defined on a deployment configuration, the first replication controller is automatically created soon after the deployment configuration itself is created, provided it is not paused.

A ConfigChange Trigger

triggers:
  - type: "ConfigChange"

9.2.7.2. ImageChange Trigger

The ImageChange trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).

An ImageChange Trigger

triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true 1
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
        namespace: "myproject"
      containerNames:
        - "helloworld"

1
If the imageChangeParams.automatic field is set to false, the trigger is disabled.

With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the deployment configuration’s helloworld container, a new replication controller is created using the new image for the helloworld container.

Note

If an ImageChange trigger is defined on a deployment configuration (with a ConfigChange trigger and automatic=false, or with automatic=true) and the ImageStreamTag pointed to by the ImageChange trigger does not exist yet, then the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the ImageStreamTag.

9.2.7.2.1. Using the Command Line

The oc set triggers command can be used to set a deployment trigger for a deployment configuration. For the example above, you can set the ImageChangeTrigger by using the following command:

$ oc set triggers dc/frontend --from-image=myproject/origin-ruby-sample:latest -c helloworld

For more information, see:

$ oc set triggers --help

9.2.8. Setting Deployment Resources

A deployment is completed by a deployment pod. By default, a deployment pod consumes unbounded node resources on the compute node where it is scheduled. In most cases, this unbounded resource consumption does not cause a problem, because deployment pods consume few resources and run for a short period of time. If a project specifies default container limits, the resources used by a deployment pod, along with any other pods, count against those limits.

You can limit the resources used by a deployment pod through the deployment strategy in the deployment configuration. Resource limits for a deployment pod can be used with the Recreate, Rolling, or Custom deployment strategy.

Note

The ability to limit ephemeral storage is available only if an administrator enables the ephemeral storage technology preview. This feature is disabled by default.

In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:

type: "Recreate"
resources:
  limits:
    cpu: "100m" 1
    memory: "256Mi" 2
    ephemeral-storage: "1Gi" 3
1
cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
2
memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
3
ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). The ephemeral-storage parameter is available only if an administrator enables the ephemeral storage technology preview.

However, if a quota has been defined for your project, one of the following two items is required:

  • A resources section set with an explicit requests:

      type: "Recreate"
      resources:
        requests: 1
          cpu: "100m"
          memory: "256Mi"
          ephemeral-storage: "1Gi"
    1
    The requests object contains the list of resources that correspond to the list of resources in the quota.

    See Quotas and Limit Ranges to learn more about compute resources and the differences between requests and limits.

  • A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process.

Otherwise, deployment pod creation will fail, citing a failure to satisfy quota.

9.2.9. Manual Scaling

In addition to rollbacks, you can exercise fine-grained control over the number of replicas from the web console, or by using the oc scale command. For example, the following command sets the replicas in the deployment configuration frontend to 3.

$ oc scale dc frontend --replicas=3

The number of replicas eventually propagates to the desired and current state of the deployment configured by the deployment configuration frontend.

Note

Pods can also be autoscaled using the oc autoscale command. See Pod Autoscaling for more details.
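
For example, a hedged sketch that autoscales the frontend deployment configuration between 1 and 5 replicas based on CPU utilization (the 80% threshold is illustrative):

$ oc autoscale dc/frontend --min=1 --max=5 --cpu-percent=80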

9.2.10. Assigning Pods to Specific Nodes

You can use node selectors in conjunction with labeled nodes to control pod placement.

Note

OpenShift Container Platform administrators can assign labels during cluster installation, or add them to a node after installation.

Cluster administrators can set the default node selector for your project in order to restrict pod placement to specific nodes. As an OpenShift Container Platform developer, you can set a node selector on a pod configuration to restrict nodes even further.

To add a node selector when creating a pod, edit the pod configuration, and add the nodeSelector value. This can be added to a single pod configuration, or in a pod template:

apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    disktype: ssd
...
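
For example, the same node selector can be set in a deployment configuration's pod template (a minimal sketch; unrelated fields are omitted):

apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
...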

Pods created when the node selector is in place are assigned to nodes with the specified labels.

The labels specified here are used in conjunction with the labels added by a cluster administrator.

For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod will only ever be scheduled on nodes that have all three labels.

Note

A label key can be set to only one value on a node, so setting a node selector of region=west in a pod configuration that has region=east as the administrator-set default results in a pod that will never be scheduled.

9.2.11. Running a Pod with a Different Service Account

You can run a pod with a service account other than the default:

  1. Edit the deployment configuration:

    $ oc edit dc/<deployment_config>
  2. Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:

    spec:
      securityContext: {}
      serviceAccount: <service_account>
      serviceAccountName: <service_account>
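
The service account must exist in the project. If it does not, you can create it first (a hedged example; <service_account> is the name you choose):

$ oc create serviceaccount <service_account>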

9.2.12. Adding Secrets to Deployment Configurations from the Web Console

Add a secret to your deployment configuration so that it can access a private repository.

  1. Create a new OpenShift Container Platform project.
  2. Create a secret that contains credentials for accessing a private image repository.
  3. Create a deployment configuration.
  4. On the deployment configuration editor page or in the fromimage page of the web console, set the Pull Secret.
  5. Click the Save button.
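
The same setup can also be done from the CLI. The following is a hedged sketch (the secret name, registry server, and credentials are placeholders) that creates a pull secret and links it to the default service account for pulling images:

$ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
$ oc secrets link default <pull_secret_name> --for=pull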

9.3. Deployment Strategies

9.3.1. What Are Deployment Strategies?

A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements.

The most common strategy is to use a blue-green deployment. The new version (the blue version) is brought up for testing and evaluation, while the users still use the stable version (the green version). When ready, the users are switched to the blue version. If a problem arises, you can switch back to the green version.

A common alternative strategy is to use A/B versions, where both versions are active at the same time: some users use one version and some use the other. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact only a limited number of users.

A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies.

The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics, the deployment configurations may need to be scaled.

There are things to consider when choosing a deployment strategy.

  • Long running connections need to be handled gracefully.
  • Database conversions can get tricky and will need to be done and rolled back along with the application.
  • If the application is a hybrid of microservices and traditional components downtime may be needed to complete the transition.
  • You need the infrastructure to do this.
  • If you have a non-isolated test environment, you can break both new and old versions.

Since the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on deployment configuration features or routing features.

Strategies that focus on the deployment configuration impact all routes that use the application. Strategies that use router features target individual routes.

Many deployment strategies are supported through the deployment configuration and some additional strategies are supported through router features. The deployment configuration-based strategies are discussed in this section.

The Rolling strategy is the default strategy used if no strategy is specified on a deployment configuration.

A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment configuration retries running the pod until it times out. The default timeout is 10m, a value set in timeoutSeconds in dc.spec.strategy.*params.

9.3.2. Rolling Strategy

A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.

9.3.2.1. Canary Deployments

All rolling deployments in OpenShift Container Platform are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the deployment configuration will be automatically rolled back. The readiness check is part of the application code, and may be as sophisticated as necessary to ensure the new instance is ready to be used. If you need to implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy.

9.3.2.2. When to Use a Rolling Deployment

  • When you want to take no downtime during an application update.
  • When your application supports having old code and new code running at the same time.

A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility.

The following is an example of the Rolling strategy:

strategy:
  type: Rolling
  rollingParams:
    updatePeriodSeconds: 1 1
    intervalSeconds: 1 2
    timeoutSeconds: 120 3
    maxSurge: "20%" 4
    maxUnavailable: "10%" 5
    pre: {} 6
    post: {}
1
The time to wait between individual pod updates. If unspecified, this value defaults to 1.
2
The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1.
3
The time to wait for a scaling event before giving up. Optional; the default is 600. Here, giving up means automatically rolling back to the previous complete deployment.
4
maxSurge is optional and defaults to 25% if not specified. See the information following the procedure below.
5
maxUnavailable is optional and defaults to 25% if not specified. See the information following the procedure below.
6
pre and post are both lifecycle hooks.

The Rolling strategy will:

  1. Execute any pre lifecycle hook.
  2. Scale up the new replication controller based on the surge count.
  3. Scale down the old replication controller based on the max unavailable count.
  4. Repeat this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero.
  5. Execute any post lifecycle hook.
Important

When scaling down, the Rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure.

The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.

These parameters allow the deployment to be tuned for availability and speed. For example:

  • maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
  • maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
  • maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.

Generally, if you want fast rollouts, use maxSurge. If you need to take into account resource quota and can accept partial unavailability, use maxUnavailable.
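
For example, a minimal rollingParams sketch for the first combination above (full capacity maintained with rapid scale-up):

strategy:
  type: Rolling
  rollingParams:
    maxSurge: "20%"
    maxUnavailable: 0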

9.3.2.3. Rolling Example

Rolling deployments are the default in OpenShift Container Platform. To see a rolling update, follow these steps:

  1. Create an application based on the example deployment images found in DockerHub:

    $ oc new-app openshift/deployment-example

    If you have the router installed, make the application available via a route (or use the service IP directly):

    $ oc expose svc/deployment-example

    Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.

  2. Scale the deployment configuration up to three replicas:

    $ oc scale dc/deployment-example --replicas=3
  3. Trigger a new deployment automatically by tagging a new version of the example as the latest tag:

    $ oc tag deployment-example:v2 deployment-example:latest
  4. In your browser, refresh the page until you see the v2 image.
  5. If you are using the CLI, the following command will show you how many pods are on version 1 and how many are on version 2. In the web console, you should see the pods slowly being added to v2 and removed from v1.

    $ oc describe dc deployment-example

During the deployment process, the new replication controller is incrementally scaled up. Once the new pods are marked as ready (by passing their readiness check), the deployment process will continue. If the pods do not become ready, the process will abort, and the deployment configuration will be rolled back to its previous version.

9.3.3. Recreate Strategy

The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.

The following is an example of the Recreate strategy:

strategy:
  type: Recreate
  recreateParams: 1
    pre: {} 2
    mid: {}
    post: {}
1
recreateParams are optional.
2
pre, mid, and post are lifecycle hooks.

The Recreate strategy will:

  1. Execute any pre lifecycle hook.
  2. Scale down the previous deployment to zero.
  3. Execute any mid lifecycle hook.
  4. Scale up the new deployment.
  5. Execute any post lifecycle hook.
Important

During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.

9.3.3.1. When to Use a Recreate Deployment

  • When you must run migrations or other data transformations before your new code starts.
  • When you do not support having new and old versions of your application code running at the same time.
  • When you want to use an RWO volume, which cannot be shared between multiple replicas.

A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.

9.3.4. Custom Strategy

The Custom strategy allows you to provide your own deployment behavior.

The following is an example of the Custom strategy:

strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
      - name: ENV_1
        value: VALUE_1

In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.

Additionally, OpenShift Container Platform provides the following environment variables to the deployment process:

Environment Variable | Description

OPENSHIFT_DEPLOYMENT_NAME

The name of the new deployment (a replication controller).

OPENSHIFT_DEPLOYMENT_NAMESPACE

The namespace of the new deployment.

The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.

Learn more about advanced deployment strategies.

Alternatively, use customParams to inject custom deployment logic into the existing deployment strategies. Provide custom shell script logic and call the openshift-deploy binary. Users do not have to supply their own custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead:

strategy:
  type: Rolling
  customParams:
    command:
    - /bin/sh
    - -c
    - |
      set -e
      openshift-deploy --until=50%
      echo Halfway there
      openshift-deploy
      echo Complete

This will result in the following deployment:

Started deployment #2
--> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-2 up to 1
--> Reached 50% (currently 50%)
Halfway there
--> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-1 down to 1
    Scaling custom-deployment-2 up to 2
    Scaling custom-deployment-1 down to 0
--> Success
Complete

If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.

9.3.5. Lifecycle Hooks

The Recreate and Rolling strategies support lifecycle hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy.

The following is an example of a pre lifecycle hook:

pre:
  failurePolicy: Abort
  execNewPod: {} 1

1
execNewPod is a pod-based lifecycle hook.

Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:

Abort

The deployment process will be considered a failure if the hook fails.

Retry

The hook execution should be retried until it succeeds.

Ignore

Any hook failure should be ignored and the deployment should proceed.

Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field.

9.3.5.1. Pod-based Lifecycle Hook

Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a deployment configuration.

The following simplified example deployment configuration uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: helloworld
          image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld 1
          command: [ "/usr/bin/command", "arg1", "arg2" ] 2
          env: 3
            - name: CUSTOM_VAR1
              value: custom_value1
          volumes:
            - data 4
1
The helloworld name refers to spec.template.spec.containers[0].name.
2
This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
3
env is an optional set of environment variables for the hook container.
4
volumes is an optional set of volume references for the hook container.

In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod will have the following properties:

  • The hook command will be /usr/bin/command arg1 arg2.
  • The hook container will have the CUSTOM_VAR1=custom_value1 environment variable.
  • The hook failure policy is Abort, meaning the deployment process will fail if the hook fails.
  • The hook pod will inherit the data volume from the deployment configuration pod.

9.3.5.2. Using the Command Line

The oc set deployment-hook command can be used to set the deployment hook for a deployment configuration. For the example above, you can set the pre-deployment hook with the following command:

$ oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
  -v data --failure-policy=abort -- /usr/bin/command arg1 arg2

9.4. Advanced Deployment Strategies

9.4.1. Advanced Deployment Strategies

Deployment strategies provide a way for the application to evolve. Some strategies use the deployment configuration to make changes that are seen by users of all routes that resolve to the application. Other strategies, such as the ones described here, use router features to impact specific routes.

9.4.2. Blue-Green Deployment

Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version). You can use a rolling strategy or switch services in a route.

Note

Since many applications depend on persistent data, you will need to have an application that supports N-1 compatibility, which means you share data and implement live migration between your database, store, or disk by creating two copies of your data layer.

Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version.

9.4.2.1. Using a Blue-Green Deployment

Blue-green deployments use two deployment configurations. Both are running, and the one in production depends on which service the route specifies, with each deployment configuration exposed by a different service. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service, and the new (blue) version is live.

If necessary, you can roll back to the older (green) version by switching the service back to the previous version.

Using a Route and Two Services

This example sets up two deployment configurations; one for the stable version (the green version) and the other for the newer version (the blue version).

A route points to a service, and can be changed to point to a different service at any time. As a developer, you can test the new version of your code by connecting to the new service before your production traffic is routed to it.

Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.

  1. Create two copies of the example application:

    $ oc new-app openshift/deployment-example:v1 --name=example-green
    $ oc new-app openshift/deployment-example:v2 --name=example-blue

    This creates two independent application components: one running the v1 image under the example-green service, and one using the v2 image under the example-blue service.

  2. Create a route that points to the old service:

    $ oc expose svc/example-green --name=bluegreen-example
  3. Browse to the application at bluegreen-example.<project>.<router_domain> to verify you see the v1 image.
  4. Edit the route and change the service name to example-blue:

    $ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'
  5. To verify that the route has changed, refresh the browser until you see the v2 image.

9.4.3. A/B Deployment

The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Since you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service may need to be scaled as well to provide the expected performance.

In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.

For this to be effective, both the old and new versions need to be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions need N-1 compatibility to properly work together.

OpenShift Container Platform supports A/B deployments through the web console as well as the command line interface.

9.4.3.1. Load Balancing for A/B Testing

The user sets up a route with multiple services. Each service handles a version of the application.

Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights. The weight for each service is distributed to the service’s endpoints so that the sum of the endpoint weights is the service weight.

The route can have up to four services. The weight for the service can be between 0 and 256. When the weight is 0, the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0, each endpoint has a minimum weight of 1. Because of this, a service with a lot of endpoints can end up with higher weight than desired. In this case, reduce the number of pods to get the desired load balance weight. See the Alternate Backends and Weights section for more information.

The web console allows users to set the weighting and shows the balance between them:

Visualization of Alternate Back Ends in the Web Console

To set up the A/B environment:

  1. Create the two applications and give them different names. Each creates a deployment configuration. The applications are versions of the same program; one is usually the current production version and the other the proposed new version:

    $ oc new-app openshift/deployment-example1 --name=ab-example-a
    $ oc new-app openshift/deployment-example2 --name=ab-example-b
  2. Expose the deployment configuration to create a service:

    $ oc expose dc/ab-example-a --name=ab-example-A
    $ oc expose dc/ab-example-b --name=ab-example-B

    At this point both applications are deployed and are running and have services.

  3. Make the application available externally via a route. You can expose either service at this point; it may be convenient to expose the current production version first and later modify the route to add the new version.

    $ oc expose svc/ab-example-A

    Browse to the application at ab-example.<project>.<router_domain> to verify that you see the desired version.

  4. When you deploy the route, the router will balance the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1, so all requests go to it. Adding the other service as an alternateBackends entry and adjusting the weights will bring the A/B setup to life. This can be done with the oc set route-backends command or by editing the route.

    Setting a service's weight to 0 with oc set route-backends means the service does not participate in load balancing, but continues to serve existing persistent connections.

    Note

    Changes to the route just change the portion of traffic to the various services. You may need to scale the deployment configurations to adjust the number of pods to handle the anticipated loads.

    To edit the route, run:

    $ oc edit route <route-name>
    ...
    metadata:
      name: route-alternate-service
      annotations:
        haproxy.router.openshift.io/balance: roundrobin
    spec:
      host: ab-example.my-project.my-domain
      to:
        kind: Service
        name: ab-example-A
        weight: 10
      alternateBackends:
      - kind: Service
        name: ab-example-B
        weight: 15
    ...
9.4.3.1.1. Managing Weights Using the Web Console
  1. Navigate to the Route details page (Applications/Routes).
  2. Select Edit from the Actions menu.
  3. Check Split traffic across multiple services.
  4. The Service Weights slider sets the percentage of traffic sent to each service.

    Service Weights

    For traffic split between more than two services, the relative weights are specified by integers between 0 and 256 for each service.

    Weights specified by integers

    Traffic weightings are shown on the Overview in the expanded rows of the applications between which traffic is split.

9.4.3.1.2. Managing Weights Using the CLI

This command manages the services and corresponding weights load balanced by the route.

$ oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]

For example, the following sets ab-example-A as the primary service with weight=198 and ab-example-B as the first alternate service with a weight=2:

$ oc set route-backends web ab-example-A=198 ab-example-B=2

This means 99% of traffic will be sent to service ab-example-A and 1% to service ab-example-B.

This command does not scale the deployment configurations. You may need to do that to have enough pods to handle the request load.

The command with no flags displays the current configuration.

$ oc set route-backends web
NAME                    KIND     TO           WEIGHT
routes/web              Service  ab-example-A 198 (99%)
routes/web              Service  ab-example-B 2   (1%)

The --adjust flag allows you to alter the weight of an individual service relative to itself or to the primary service. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights will be kept proportional to the changed one.

$ oc set route-backends web --adjust ab-example-A=200 ab-example-B=10
$ oc set route-backends web --adjust ab-example-B=5%
$ oc set route-backends web --adjust ab-example-B=+15%

The --equal flag sets the weight of all services to 100.

$ oc set route-backends web --equal

The --zero flag sets the weight of all services to 0. All requests will return with a 503 error.

Note

Not all routers may support multiple or weighted backends.

9.4.3.1.3. One Service, Multiple Deployment Configurations

If you have the router installed, make the application available via a route (or use the service IP directly):

$ oc expose svc/ab-example

Browse to the application at ab-example.<project>.<router_domain> to verify you see the v1 image.

  1. Create a second shard based on the same source image as the first shard, but using a different tagged version and unique environment variable values:

    $ oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
  2. Edit the newly created shard to set a label ab-example=true that will be common to all shards:

    $ oc edit dc/ab-example-b

    In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.

  3. Trigger a re-deployment of the second shard to pick up the new labels:

    $ oc rollout latest dc/ab-example-b
  4. At this point, both sets of pods are being served under the route. However, since both browsers (by leaving a connection open) and the router (by default, through a cookie) will attempt to preserve your connection to a back-end server, you may not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:

    $ oc scale dc/ab-example-a --replicas=0

    Refreshing your browser should show v2 and shard B (in red).

    $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0

    Refreshing your browser should show v1 and shard A (in blue).

    If you trigger a deployment on either shard, only the pods in that shard will be affected. You can easily trigger a deployment by changing the SUBTITLE environment variable in either deployment configuration (oc edit dc/ab-example-a or oc edit dc/ab-example-b). You can add additional shards by repeating the first three steps above.

    Note

    These steps will be simplified in future versions of OpenShift Container Platform.

9.4.4. Proxy Shard / Traffic Splitter

In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage-based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.

In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send them both to a separate cluster and to a local instance of the application, and compare the results. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.

While an implementation is beyond the scope of this example, any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities.

9.4.5. N-1 Compatibility

Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.

This can take many forms — data stored on disk, in a database, in a temporary cache, or that is part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.

For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.

One way to validate N-1 compatibility is to use an A/B deployment. Run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.

9.4.6. Graceful Termination

OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.

On shutdown, OpenShift Container Platform will send a TERM signal to the processes in the container. Application code, on receiving SIGTERM, should stop accepting new connections. This will ensure that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed (or gracefully terminate individual connections at the next opportunity) before exiting.

After the graceful termination period expires, a process that has not exited will be sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
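
For example, a hedged pod template snippet that extends the graceful termination period to 60 seconds (the value is illustrative):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60
...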

9.5. Kubernetes Deployments Support

9.5.1. Deployments Object Type

Kubernetes provides a first-class object type in OpenShift Container Platform called deployments. This object type (referred to here as Kubernetes deployments for distinction) serves as a descendant of the deployment configuration object type.

Like deployment configurations, Kubernetes deployments describe the desired state of a particular component of an application as a pod template. Kubernetes deployments create replica sets (an iteration of replication controllers), which orchestrate pod lifecycles.

For example, this definition of a Kubernetes deployment creates a replica set to bring up one hello-openshift pod:

Example Kubernetes Deployment Definition hello-openshift-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80

After saving the definition to a local file, you could then use it to create a Kubernetes deployment:

$ oc create -f hello-openshift-deployment.yaml

You can use the CLI to inspect and operate on Kubernetes deployments and replica sets as you would other object types, as described in Common Operations, using commands such as get and describe. For the object type, use deployments or deploy for Kubernetes deployments and replicasets or rs for replica sets.
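
For example, using the Kubernetes deployment created above (standard oc operations, shown here for illustration):

$ oc get deployments
$ oc describe deployment hello-openshift
$ oc get replicasets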

See the Kubernetes documentation for more details about Deployments and Replica Sets, substituting oc for kubectl in CLI usage examples.

9.5.2. Kubernetes Deployments Versus Deployment Configurations

Because deployment configurations existed in OpenShift Container Platform prior to deployments being added in Kubernetes 1.2, the latter object type naturally diverges slightly from the former. The long-term goal in OpenShift Container Platform is to reach full feature parity in Kubernetes deployments and switch to using them as a single object type that provides fine-grained management over applications.

Kubernetes deployments are supported to ensure upstream projects and examples that use the new object type can run smoothly on OpenShift Container Platform. Given the current feature set of Kubernetes deployments, you may want to use them instead of deployment configurations in OpenShift Container Platform if you do not plan to use any of the deployment configuration-specific features described below.

The following sections go into more details on the differences between the two object types to further help you decide when you might want to use Kubernetes deployments over deployment configurations.

9.5.2.1. Deployment Configuration-Specific Features

9.5.2.1.1. Automatic Rollbacks

Kubernetes deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. This feature should be added soon.

9.5.2.1.2. Triggers

Kubernetes deployments have an implicit ConfigChange trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:

$ oc rollout pause deployments/<name>
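
To enable rollouts again, resume the paused deployment (the companion command to the pause shown above):

$ oc rollout resume deployments/<name>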

At the moment, Kubernetes deployments do not support ImageChange triggers. A generic triggering mechanism has been proposed upstream, but it is unknown if and when it may be accepted. Eventually, an OpenShift Container Platform-specific mechanism could be implemented to layer on top of Kubernetes deployments, but it would be more desirable for it to exist as part of the Kubernetes core.

9.5.2.1.3. Lifecycle Hooks

Kubernetes deployments do not support any lifecycle hooks.

9.5.2.1.4. Custom Strategies

Kubernetes deployments do not yet support user-specified Custom deployment strategies.

9.5.2.1.5. Canary Deployments

Kubernetes deployments do not yet run canaries as part of a new rollout.

9.5.2.1.6. Test Deployments

Kubernetes deployments do not support running test tracks.

9.5.2.2. Kubernetes Deployment-Specific Features

9.5.2.2.1. Rollover

The deployment process for Kubernetes deployments is driven by a controller loop, in contrast to deployment configurations, which use deployer pods for every new rollout. This means that a Kubernetes deployment can have as many active replica sets as needed, and eventually the deployment controller will scale down all old replica sets and scale up the newest one.

Deployment configurations can have at most one deployer pod running; otherwise, multiple deployers would end up fighting with each other, each trying to scale up what it thinks should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this translates to faster rollouts for Kubernetes deployments.

9.5.2.2.2. Proportional Scaling

Because the Kubernetes deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a deployment, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.

Deployment configurations cannot be scaled when a rollout is ongoing because the deployment configuration controller will end up fighting with the deployer process about the size of the new replication controller.

9.5.2.2.3. Pausing Mid-rollout

Kubernetes deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer pods currently, so if you try to pause a deployment configuration in the middle of a rollout, the deployer process will not be affected and will continue until it finishes.
