Chapter 5. Deployments
5.1. Understanding Deployments and DeploymentConfigs
Deployments and DeploymentConfigs in OpenShift Container Platform are API objects that provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:
- A DeploymentConfig or a Deployment, either of which describes the desired state of a particular component of the application as a Pod template.
- DeploymentConfigs involve one or more ReplicationControllers, which contain a point-in-time record of the state of a DeploymentConfig as a Pod template. Similarly, Deployments involve one or more ReplicaSets, a successor of ReplicationControllers.
- One or more Pods, which represent an instance of a particular version of an application.
5.1.1. Building blocks of a deployment
Deployments and DeploymentConfigs are enabled by the use of the native Kubernetes API objects ReplicaSets and ReplicationControllers, respectively, as their building blocks.
Users do not have to manipulate ReplicationControllers, ReplicaSets, or Pods owned by DeploymentConfigs or Deployments. The deployment systems ensure that changes are propagated appropriately.
If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a Custom deployment strategy.
The following sections provide further details on these objects.
5.1.1.1. ReplicationControllers
A ReplicationController ensures that a specified number of replicas of a Pod are running at all times. If Pods exit or are deleted, the ReplicationController acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount.
A ReplicationController configuration consists of:
- The number of replicas desired (which can be adjusted at runtime).
- A Pod definition to use when creating a replicated Pod.
- A selector for identifying managed Pods.
A selector is a set of labels assigned to the Pods that are managed by the ReplicationController. These labels are included in the Pod definition that the ReplicationController instantiates. The ReplicationController uses the selector to determine how many instances of the Pod are already running in order to adjust as needed.
The ReplicationController does not perform auto-scaling based on load or traffic, as it does not track either. Instead, its replica count must be adjusted by an external auto-scaler.
The following is an example definition of a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1 1
  selector: 2
    name: frontend
  template: 3
    metadata:
      labels: 4
        name: frontend 5
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
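Because a ReplicationController does not auto-scale, its replica count is changed from outside. As a minimal sketch using the frontend-1 example above, the count could be adjusted manually:
$ oc scale rc frontend-1 --replicas=3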
5.1.1.2. ReplicaSets
Similar to a ReplicationController, a ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. The difference between a ReplicaSet and a ReplicationController is that a ReplicaSet supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.
Only use ReplicaSets if you require custom update orchestration or do not require updates at all; otherwise, use Deployments. ReplicaSets can be used independently, but they are used by Deployments to orchestrate Pod creation, deletion, and updates. Deployments manage their ReplicaSets automatically and provide declarative updates to Pods, so you do not have to manually manage the ReplicaSets that they create.
The following is an example ReplicaSet definition:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector: 1
    matchLabels: 2
      tier: frontend
    matchExpressions: 3
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
- 1: A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined.
- 2: Equality-based selector to specify resources with labels that match the selector.
- 3: Set-based selector to filter keys. This selects all resources with a key equal to tier and a value equal to frontend.
5.1.2. DeploymentConfigs
Building on ReplicationControllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfigs. In the simplest case, a DeploymentConfig creates a new ReplicationController and lets it start up Pods.
However, OpenShift Container Platform deployments from DeploymentConfigs also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the ReplicationController.
The DeploymentConfig deployment system provides the following capabilities:
- A DeploymentConfig, which is a template for running applications.
- Triggers that drive automated deployments in response to events.
- User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a Pod commonly referred to as the deployment process.
- A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment.
- Versioning of your application in order to support rollbacks either manually or automatically in case of deployment failure.
- Manual replication scaling and autoscaling.
When you create a DeploymentConfig, a ReplicationController is created representing the DeploymentConfig’s Pod template. If the DeploymentConfig changes, a new ReplicationController is created with the latest Pod template, and a deployment process runs to scale down the old ReplicationController and scale up the new one.
Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.
The OpenShift Container Platform DeploymentConfig object defines the following details:
- The elements of a ReplicationController definition.
- Triggers for creating a new deployment automatically.
- The strategy for transitioning between deployments.
- Lifecycle hooks.
Each time a deployment is triggered, whether manually or automatically, a deployer Pod manages the deployment (including scaling down the old ReplicationController, scaling up the new one, and running hooks). The deployer Pod remains for an indefinite amount of time after it completes the deployment in order to retain its deployment logs. When a deployment is superseded by another, the previous ReplicationController is retained to enable easy rollback if needed.
Example DeploymentConfig definition
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    name: frontend
  template: { ... }
  triggers:
  - type: ConfigChange 1
  - imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: hello-openshift:latest
    type: ImageChange 2
  strategy:
    type: Rolling 3
- 1: A ConfigChange trigger causes a new Deployment to be created any time the ReplicationController template changes.
- 2: An ImageChange trigger causes a new Deployment to be created each time a new version of the backing image is available in the named imagestream.
- 3: The default Rolling strategy makes a downtime-free transition between Deployments.
5.1.3. Deployments
Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployments. Deployments serve as the successor to the OpenShift Container Platform-specific DeploymentConfig.
Like DeploymentConfigs, Deployments describe the desired state of a particular component of an application as a Pod template. Deployments create ReplicaSets, which orchestrate Pod lifecycles.
For example, the following Deployment definition creates a ReplicaSet to bring up one hello-openshift Pod:
Deployment definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
5.1.4. Comparing Deployments and DeploymentConfigs
Both Kubernetes Deployments and OpenShift Container Platform-provided DeploymentConfigs are supported in OpenShift Container Platform; however, it is recommended to use Deployments unless you need a specific feature or behavior provided by DeploymentConfigs.
The following sections go into more detail on the differences between the two object types to further help you decide which type to use.
5.1.4.1. Design
One important difference between Deployments and DeploymentConfigs is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfigs prefer consistency, whereas Deployments take availability over consistency.
For DeploymentConfigs, if a node running a deployer Pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding Pod. This means that you can not delete the Pod to unstick the rollout, as the kubelet is responsible for deleting the associated Pod.
However, Deployment rollouts are driven by a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same Deployment at the same time, but this issue will be reconciled shortly after the failure occurs.
5.1.4.2. DeploymentConfigs-specific features
Automatic rollbacks
Currently, Deployments do not support automatically rolling back to the last successfully deployed ReplicaSet in case of a failure.
Triggers
Deployments have an implicit ConfigChange trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:
$ oc rollout pause deployments/<name>
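When pod template changes should trigger rollouts again, you can resume the paused deployment with the corresponding resume subcommand:
$ oc rollout resume deployments/<name>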
Lifecycle hooks
Deployments do not yet support any lifecycle hooks.
Custom strategies
Deployments do not support user-specified Custom deployment strategies yet.
5.1.4.3. Deployments-specific features
Rollover
The deployment process for Deployments is driven by a controller loop, in contrast to DeploymentConfigs which use deployer pods for every new rollout. This means that a Deployment can have as many active ReplicaSets as possible, and eventually the deployment controller will scale down all old ReplicaSets and scale up the newest one.
DeploymentConfigs can have at most one deployer pod running, otherwise multiple deployers end up conflicting while trying to scale up what they think should be the newest ReplicationController. Because of this, only two ReplicationControllers can be active at any point in time. Ultimately, this translates to faster rollouts for Deployments.
Proportional scaling
Because the Deployment controller is the sole source of truth for the sizes of new and old ReplicaSets owned by a Deployment, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each ReplicaSet.
DeploymentConfigs cannot be scaled when a rollout is ongoing because the DeploymentConfig controller would end up disagreeing with the deployer process about the size of the new ReplicationController.
Pausing mid-rollout
Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer pods currently, so if you try to pause a DeploymentConfig in the middle of a rollout, the deployer process will not be affected and will continue until it finishes.
5.2. Managing deployment processes
5.2.1. Managing DeploymentConfigs
DeploymentConfigs can be managed from the OpenShift Container Platform web console’s Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated.
5.2.1.1. Starting a deployment
You can start a rollout to begin the deployment process of your application.
Procedure
To start a new deployment process from an existing DeploymentConfig, run the following command:
$ oc rollout latest dc/<name>
Note: If a deployment process is already in progress, the command displays a message and a new ReplicationController will not be deployed.
5.2.1.2. Viewing a deployment
You can view a deployment to get basic information about all the available revisions of your application.
Procedure
To show details about all recently created ReplicationControllers for the provided DeploymentConfig, including any currently running deployment process, run the following command:
$ oc rollout history dc/<name>
To view details specific to a revision, add the --revision flag:
$ oc rollout history dc/<name> --revision=1
For more detailed information about a deployment configuration and its latest revision, use the oc describe command:
$ oc describe dc <name>
5.2.1.3. Retrying a deployment
If the current revision of your DeploymentConfig failed to deploy, you can restart the deployment process.
Procedure
To restart a failed deployment process:
$ oc rollout retry dc/<name>
If the latest revision was deployed successfully, the command displays a message and the deployment process is not retried.
Note: Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted ReplicationController has the same configuration it had when it failed.
5.2.1.4. Rolling back a deployment
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
Procedure
To roll back to the last successfully deployed revision of your configuration:
$ oc rollout undo dc/<name>
The DeploymentConfig’s template is reverted to match the deployment revision specified in the undo command, and a new ReplicationController is started. If no revision is specified with --to-revision, then the last successfully deployed revision is used.
Image change triggers on the DeploymentConfig are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete.
To re-enable the image change triggers:
$ oc set triggers dc/<name> --auto
DeploymentConfigs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.
5.2.1.5. Executing commands inside a container
You can add a command to a container, which modifies the container’s startup behavior by overruling the image’s ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Procedure
Add the command parameters to the spec field of the DeploymentConfig. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).
spec:
  containers:
  - name: <container_name>
    image: 'image'
    command:
    - '<command>'
    args:
    - '<argument_1>'
    - '<argument_2>'
    - '<argument_3>'
For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:
spec:
  containers:
  - name: example-spring-boot
    image: 'image'
    command:
    - java
    args:
    - '-jar'
    - /opt/app-root/springboots2idemo.jar
5.2.1.6. Viewing deployment logs
Procedure
To stream the logs of the latest revision for a given DeploymentConfig:
$ oc logs -f dc/<name>
If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a Pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old ReplicationControllers and their deployer Pods) exist and have not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
5.2.1.7. Deployment triggers
A DeploymentConfig can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.
If no triggers are defined on a DeploymentConfig, a ConfigChange trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
ConfigChange deployment triggers
The ConfigChange trigger results in a new ReplicationController whenever configuration changes are detected in the Pod template of the DeploymentConfig.
If a ConfigChange trigger is defined on a DeploymentConfig, the first ReplicationController is automatically created soon after the DeploymentConfig itself is created and it is not paused.
ConfigChange deployment trigger
triggers:
  - type: "ConfigChange"
ImageChange deployment triggers
The ImageChange trigger results in a new ReplicationController whenever the content of an imagestreamtag changes (when a new version of the image is pushed).
ImageChange deployment trigger
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true 1
from:
kind: "ImageStreamTag"
name: "origin-ruby-sample:latest"
namespace: "myproject"
containerNames:
- "helloworld"
- 1: If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample imagestream changes and the new image value differs from the current image specified in the DeploymentConfig’s helloworld container, a new ReplicationController is created using the new image for the helloworld container.
If an ImageChange trigger is defined on a DeploymentConfig (with a ConfigChange trigger and automatic=false, or with automatic=true) and the ImageStreamTag pointed to by the ImageChange trigger does not exist yet, then the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the ImageStreamTag.
5.2.1.7.1. Setting deployment triggers
Procedure
You can set deployment triggers for a DeploymentConfig using the oc set triggers command. For example, to set an ImageChange trigger, use the following command:
$ oc set triggers dc/<dc_name> \
    --from-image=<project>/<image>:<tag> -c <container_name>
5.2.1.8. Setting deployment resources
This resource is available only if a cluster administrator has enabled the ephemeral storage technology preview. This feature is disabled by default.
A deployment is completed by a Pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, Pods consume unbounded node resources. However, if a project specifies default container limits, then Pods consume resources up to those limits.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the Recreate, Rolling, or Custom deployment strategies.
Procedure
In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:
type: "Recreate"
resources:
  limits:
    cpu: "100m" 1
    memory: "256Mi" 2
    ephemeral-storage: "1Gi" 3
- 1: cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
- 2: memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
- 3: ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). This applies only if your cluster administrator enabled the ephemeral storage technology preview.
However, if a quota has been defined for your project, one of the following two items is required:
- A resources section set with an explicit requests:
type: "Recreate"
resources:
  requests: 1
    cpu: "100m"
    memory: "256Mi"
    ephemeral-storage: "1Gi"
- 1: The requests object contains the list of resources that correspond to the list of resources in the quota.
- A limit range defined in your project, where the defaults from the LimitRange object apply to Pods created during the deployment process.
To set deployment resources, choose one of the above options. Otherwise, deployer Pod creation fails, citing a failure to satisfy quota.
5.2.1.9. Scaling manually
In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.
Pods can also be autoscaled using the oc autoscale command.
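For example, a minimal autoscaler for the frontend DeploymentConfig used below could look like the following; the minimum, maximum, and CPU threshold are illustrative values:
$ oc autoscale dc/frontend --min=1 --max=10 --cpu-percent=80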
Procedure
To manually scale a DeploymentConfig, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig to 3:
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig frontend.
5.2.1.10. Accessing private repositories from DeploymentConfigs
You can add a Secret to your DeploymentConfig so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method.
Procedure
- Create a new project.
- From the Workloads page, create a Secret that contains credentials for accessing a private image repository.
- Create a DeploymentConfig.
- On the DeploymentConfig editor page, set the Pull Secret and save your changes.
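The same result can be sketched from the CLI; the secret name my-pull-secret and the registry details are placeholders, and the secret is linked to the default service account for image pulls:
$ oc create secret docker-registry my-pull-secret \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
$ oc secrets link default my-pull-secret --for=pull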
5.2.1.11. Assigning pods to specific nodes
You can use node selectors in conjunction with labeled nodes to control Pod placement.
Cluster administrators can set the default node selector for a project in order to restrict Pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further.
Procedure
To add a node selector when creating a pod, edit the Pod configuration and add the nodeSelector value. This can be added to a single Pod configuration or to a Pod template:
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    disktype: ssd
...
Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator.
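Node labels themselves are applied by a cluster administrator. As a sketch, the disktype label used in the example above could be added to a node like this (the node name is a placeholder):
$ oc label nodes <node_name> disktype=ssd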
For example, if the cluster administrator has added the type=user-node and region=east labels to a project, and you add the above disktype: ssd label to a Pod, the Pod is only ever scheduled on nodes that have all three labels.
Note: Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default results in a Pod that will never be scheduled.
5.2.1.12. Running a Pod with a different service account
You can run a Pod with a service account other than the default.
Procedure
Edit the DeploymentConfig:
$ oc edit dc/<deployment_config>
Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
5.3. Using DeploymentConfig strategies
A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements.
Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig features or routing features. Strategies that focus on the DeploymentConfig impact all routes that use the application. Strategies that use router features target individual routes.
Many deployment strategies are supported through the DeploymentConfig, and some additional strategies are supported through router features. DeploymentConfig strategies are discussed in this section.
Choosing a deployment strategy
Consider the following when choosing a deployment strategy:
- Long-running connections must be handled gracefully.
- Database conversions can be complex and must be done and rolled back along with the application.
- If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition.
- You must have the infrastructure to do this.
- If you have a non-isolated test environment, you can break both new and old versions.
A deployment strategy uses readiness checks to determine if a new Pod is ready for use. If a readiness check fails, the DeploymentConfig retries to run the Pod until it times out. The default timeout is 10m, a value set in TimeoutSeconds in dc.spec.strategy.*params.
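As a sketch of such a readiness check, the Pod template in the DeploymentConfig can define a probe; the /healthz path and port 8080 below are assumptions about the application, not required values:
spec:
  containers:
  - name: <container_name>
    image: 'image'
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10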
5.3.1. Rolling strategy
A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The Rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig.
A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.
When to use a Rolling deployment:
- When you want to take no downtime during an application update.
- When your application supports having old code and new code running at the same time.
A Rolling deployment means that you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility.
Example Rolling strategy definition
strategy:
  type: Rolling
  rollingParams:
    updatePeriodSeconds: 1 1
    intervalSeconds: 1 2
    timeoutSeconds: 120 3
    maxSurge: "20%" 4
    maxUnavailable: "10%" 5
    pre: {} 6
    post: {}
- 1: The time to wait between individual Pod updates. If unspecified, this value defaults to 1.
- 2: The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1.
- 3: The time to wait for a scaling event before giving up. Optional; the default is 600. Here, giving up means automatically rolling back to the previous complete deployment.
- 4: maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure.
- 5: maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure.
- 6: pre and post are both lifecycle hooks.
The Rolling strategy:
- Executes any pre lifecycle hook.
- Scales up the new ReplicationController based on the surge count.
- Scales down the old ReplicationController based on the max unavailable count.
- Repeats this scaling until the new ReplicationController has reached the desired replica count and the old ReplicationController has been scaled to zero.
- Executes any post lifecycle hook.
When scaling down, the Rolling strategy waits for Pods to become ready so it can decide whether further scaling would affect availability. If scaled up Pods never become ready, the deployment process will eventually time out and result in a deployment failure.
The maxUnavailable parameter is the maximum number of Pods that can be unavailable during the update. The maxSurge parameter is the maximum number of Pods that can be scheduled above the original number of Pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.
These parameters allow the deployment to be tuned for availability and speed. For example:
- maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
- maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
- maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.
Generally, if you want fast rollouts, use maxSurge. If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable.
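As a sketch, either value can be changed on an existing DeploymentConfig with a patch; the percentages below are illustrative:
$ oc patch dc/<name> -p '{"spec":{"strategy":{"rollingParams":{"maxSurge":"20%","maxUnavailable":"0%"}}}}'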
5.3.1.1. Canary deployments
All Rolling deployments in OpenShift Container Platform are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig will be automatically rolled back.
The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a Custom deployment or using a blue-green deployment strategy.
5.3.1.2. Creating a Rolling deployment
Rolling deployments are the default type in OpenShift Container Platform. You can create a Rolling deployment using the CLI.
Procedure
Create an application based on the example deployment images found in Docker Hub:
$ oc new-app openshift/deployment-example
If you have the router installed, make the application available via a route (or use the service IP directly):
$ oc expose svc/deployment-example
- Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.
Scale the DeploymentConfig up to three replicas:
$ oc scale dc/deployment-example --replicas=3
Trigger a new deployment automatically by tagging a new version of the example as the latest tag:
$ oc tag deployment-example:v2 deployment-example:latest
- In your browser, refresh the page until you see the v2 image.
When using the CLI, the following command shows how many Pods are on version 1 and how many are on version 2. In the web console, the Pods are progressively added to v2 and removed from v1:
$ oc describe dc deployment-example
During the deployment process, the new ReplicationController is incrementally scaled up. After the new Pods are marked as ready (by passing their readiness check), the deployment process continues.
If the Pods do not become ready, the process aborts, and the DeploymentConfig rolls back to its previous version.
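You can also watch the rollout from the CLI until it finishes or fails; for example:
$ oc rollout status dc/deployment-example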
5.3.2. Recreate strategy
The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.
Example Recreate strategy definition
strategy:
  type: Recreate
  recreateParams: 1
    pre: {} 2
    mid: {}
    post: {}
The Recreate strategy:
- Executes any pre lifecycle hook.
- Scales down the previous deployment to zero.
- Executes any mid lifecycle hook.
- Scales up the new deployment.
- Executes any post lifecycle hook.
During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.
When to use a Recreate deployment:
- When you must run migrations or other data transformations before your new code starts.
- When you do not support having new and old versions of your application code running at the same time.
- When you want to use an RWO volume, which cannot be shared between multiple replicas.
A Recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.
5.3.3. Custom strategy
The Custom strategy allows you to provide your own deployment behavior.
Example Custom strategy definition
strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
    - name: ENV_1
      value: VALUE_1
In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.
Additionally, OpenShift Container Platform provides the following environment variables to the deployment process:
Environment variable | Description
---|---
OPENSHIFT_DEPLOYMENT_NAME | The name of the new deployment (a ReplicationController).
OPENSHIFT_DEPLOYMENT_NAMESPACE | The namespace of the new deployment.
The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
Alternatively, use customParams to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead:
strategy:
  type: Rolling
  customParams:
    command:
    - /bin/sh
    - -c
    - |
      set -e
      openshift-deploy --until=50%
      echo Halfway there
      openshift-deploy
      echo Complete
This results in the following deployment:
Started deployment #2
--> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-2 up to 1
--> Reached 50% (currently 50%)
Halfway there
--> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-1 down to 1
    Scaling custom-deployment-2 up to 2
    Scaling custom-deployment-1 down to 0
--> Success
Complete
If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.
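For instance, a custom strategy script can authenticate by reading the token and CA certificate that are mounted into every Pod at the standard service account paths; the sketch below is illustrative and the queried resource (listing Pods in the current namespace) is only an example:
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE="$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)"
# List Pods in the current namespace using the strategy Pod's service account
curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" \
    "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods"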
5.3.4. Lifecycle hooks
The Rolling and Recreate strategies support lifecycle hooks, or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy:
Example pre lifecycle hook
pre:
failurePolicy: Abort
execNewPod: {} 1
- 1: execNewPod is a Pod-based lifecycle hook.
Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:
Value | Description
---|---
Abort | The deployment process will be considered a failure if the hook fails.
Retry | The hook execution should be retried until it succeeds.
Ignore | Any hook failure should be ignored and the deployment should proceed.
Hooks have a type-specific field that describes how to execute the hook. Currently, Pod-based hooks are the only supported hook type, specified by the execNewPod field.
Pod-based lifecycle hook
Pod-based lifecycle hooks execute hook code in a new Pod derived from the template in a DeploymentConfig.
The following simplified example DeploymentConfig uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: helloworld
        image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld 1
          command: [ "/usr/bin/command", "arg1", "arg2" ] 2
          env: 3
          - name: CUSTOM_VAR1
            value: custom_value1
          volumes:
          - data 4
- 1: The helloworld name refers to spec.template.spec.containers[0].name.
- 2: This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
- 3: env is an optional set of environment variables for the hook container.
- 4: volumes is an optional set of volume references for the hook container.
In this example, the pre hook will be executed in a new Pod using the openshift/origin-ruby-sample image from the helloworld container. The hook Pod has the following properties:
- The hook command is /usr/bin/command arg1 arg2.
- The hook container has the CUSTOM_VAR1=custom_value1 environment variable.
- The hook failure policy is Abort, meaning the deployment process fails if the hook fails.
- The hook Pod inherits the data volume from the DeploymentConfig Pod.
5.3.4.1. Setting lifecycle hooks
You can set lifecycle hooks, or deployment hooks, for a DeploymentConfig using the CLI.
Procedure
Use the oc set deployment-hook command to set the type of hook you want: --pre, --mid, or --post. For example, to set a pre-deployment hook:
$ oc set deployment-hook dc/frontend \
    --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
    -v data --failure-policy=abort -- /usr/bin/command arg1 arg2
5.4. Using route-based deployment strategies
Deployment strategies provide a way for the application to evolve. Some strategies use DeploymentConfigs to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with DeploymentConfigs to impact specific routes.
The most common route-based strategy is to use a blue-green deployment. The new version (the blue version) is brought up for testing and evaluation, while the users still use the stable version (the green version). When ready, the users are switched to the blue version. If a problem arises, you can switch back to the green version.
A common alternative strategy is to use A/B versions, where both versions are active at the same time: some users use one version and some use the other. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users.
A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies.
The route-based deployment strategies do not scale the number of Pods in the services. To maintain desired performance characteristics, the deployment configurations might have to be scaled.
5.4.1. Proxy shards and traffic splitting
In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.
In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send them both to a separate cluster and to a local instance of the application, and compare the results. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.
Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale
command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities.
5.4.2. N-1 compatibility
Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.
This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.
For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.
One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.
5.4.3. Graceful termination
OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.
On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM, should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed (or gracefully terminates individual connections at the next opportunity) before exiting.
After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a Pod or Pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
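For example, an application that needs extra time to drain long-lived connections could raise the period in its Pod template; the 60-second value below is illustrative:
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: <container_name>
    image: 'image'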
5.4.4. Blue-green deployments
Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the green version) to the newer version (the blue version). You can use a Rolling strategy or switch services in a route.
Because many applications depend on persistent data, you must have an application that supports N-1 compatibility, which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer.
Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version.
5.4.4.1. Setting up a blue-green deployment
Blue-green deployments use two DeploymentConfigs. Both are running, and the one in production depends on the service the route specifies, with each DeploymentConfig exposed to a different service.
Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.
You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (blue) version is live.
If necessary, you can roll back to the older (green) version by switching the service back to the previous version.
Procedure
Create two copies of the example application:
$ oc new-app openshift/deployment-example:v1 --name=example-green
$ oc new-app openshift/deployment-example:v2 --name=example-blue
This creates two independent application components: one running the v1 image under the example-green service, and one using the v2 image under the example-blue service.
Create a route that points to the old service:
$ oc expose svc/example-green --name=bluegreen-example
- Browse to the application at example-green.<project>.<router_domain> to verify you see the v1 image.
Edit the route and change the service name to example-blue:
$ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'
- To verify that the route has changed, refresh the browser until you see the v2 image.
5.4.5. A/B deployments
The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version.
Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of Pods in each service might have to be scaled as well to provide the expected performance.
In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.
For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together.
OpenShift Container Platform supports A/B deployments through the web console as well as the CLI.
5.4.5.1. Load balancing for A/B testing
The user sets up a route with multiple services. Each service handles a version of the application.
Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights. The weight for each service is distributed to the service’s endpoints so that the sum of the endpoint weights is the service weight.
The route can have up to four services. The weight for the service can be between 0 and 256. When the weight is 0, the service does not participate in load balancing but continues to serve existing persistent connections. When the service weight is not 0, each endpoint has a minimum weight of 1. Because of this, a service with a lot of endpoints can end up with a higher weight than desired. In this case, reduce the number of Pods to get the desired load balance weight.
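As a worked example of this arithmetic, if a route has two services where ab-example-a carries weight 3 and ab-example-b carries weight 1 (illustrative values), the sum of weights is 4, so ab-example-a receives 3/4 (75%) of requests and ab-example-b receives 1/4 (25%).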
Procedure
To set up the A/B environment:
Create the two applications and give them different names. Each creates a DeploymentConfig. The applications are versions of the same program; one is usually the current production version and the other the proposed new version:
$ oc new-app openshift/deployment-example --name=ab-example-a
$ oc new-app openshift/deployment-example --name=ab-example-b
Both applications are deployed and services are created.
Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version.
$ oc expose svc/ab-example-a
Browse to the application at ab-example-<project>.<router_domain> to verify that you see the desired version.
When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends entry and adjusting the weights brings the A/B setup to life. This can be done with the oc set route-backends command or by editing the route.
Setting a service to 0 with oc set route-backends means the service does not participate in load balancing, but continues to serve existing persistent connections.
Note: Changes to the route just change the portion of traffic to the various services. You might have to scale the DeploymentConfigs to adjust the number of Pods to handle the anticipated loads.
To edit the route, run:
$ oc edit route <route_name>
...
metadata:
  name: route-alternate-service
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
spec:
  host: ab-example.my-project.my-domain
  to:
    kind: Service
    name: ab-example-a
    weight: 10
  alternateBackends:
  - kind: Service
    name: ab-example-b
    weight: 15
...
5.4.5.1.1. Managing weights using the web console
Procedure
- Navigate to the Route details page (Applications/Routes).
- Select Edit from the Actions menu.
- Check Split traffic across multiple services.
The Service Weights slider sets the percentage of traffic sent to each service.
For traffic split between more than two services, the relative weights are specified by integers between 0 and 256 for each service.
Traffic weightings are shown on the Overview in the expanded rows of the applications between which traffic is split.
5.4.5.1.2. Managing weights using the CLI
Procedure
To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command:
$ oc set route-backends ROUTENAME \
    [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]
For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2:
$ oc set route-backends ab-example ab-example-a=198 ab-example-b=2
This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b.
This command does not scale the DeploymentConfigs. You might be required to do so to have enough Pods to handle the request load.
Run the command with no flags to verify the current configuration:
$ oc set route-backends ab-example
NAME               KIND     TO            WEIGHT
routes/ab-example  Service  ab-example-a  198 (99%)
routes/ab-example  Service  ab-example-b  2 (1%)
To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed one.
For example:
$ oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10
$ oc set route-backends ab-example --adjust ab-example-b=5%
$ oc set route-backends ab-example --adjust ab-example-b=+15%
The --equal flag sets the weight of all services to 100:
$ oc set route-backends ab-example --equal
The --zero flag sets the weight of all services to 0. All requests then return with a 503 error.
Note: Not all routers may support multiple or weighted backends.
5.4.5.1.3. One service, multiple DeploymentConfigs
Procedure
Create a new application, adding a label
ab-example=true
that will be common to all shards:$ oc new-app openshift/deployment-example --name=ab-example-a
The application is deployed and a service is created. This is the first shard.
Make the application available via a route (or use the service IP directly):
$ oc expose svc/ab-example-a --name=ab-example
-
Browse to the application at
ab-example-<project>.<router_domain>
to verify you see thev1
image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables:
$ oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red"
At this point, both sets of Pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you.
To force your browser to one or the other shard:
Use the
oc scale
command to reduce replicas ofab-example-a
to0
.$ oc scale dc/ab-example-a --replicas=0
Refresh your browser to show
v2
andshard B
(in red).Scale
ab-example-a
to1
replica andab-example-b
to0
:$ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
Refresh your browser to show
v1
andshard A
(in blue).
If you trigger a deployment on either shard, only the Pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either DeploymentConfig:
$ oc edit dc/ab-example-a
or
$ oc edit dc/ab-example-b