Chapter 14. Deployments
14.1. Overview
A deployment in OpenShift Enterprise is a replication controller based on a user-defined template called a deployment configuration. Deployments are created manually or in response to triggered events.
The deployment system provides:
- A deployment configuration, which is a template for deployments.
- Triggers that drive automated deployments in response to events.
- User-customizable strategies to transition from the previous deployment to the new deployment.
- Rollbacks to a previous deployment.
- Manual replication scaling.
The deployment configuration contains a version number that is incremented each time a new deployment is created from that configuration. In addition, the cause of the last deployment is added to the configuration.
14.2. Creating a Deployment Configuration
A deployment configuration consists of the following key parts:
- A replication controller template which describes the application to be deployed.
- The default replica count for the deployment.
- The triggers which cause deployments to be created automatically.
- The strategy used to transition between deployments.
Deployment configurations are deploymentConfig OpenShift Enterprise API resources which can be managed with the oc command like any other resource. The following is an example of a deploymentConfig resource:
kind: "DeploymentConfig" apiVersion: "v1" metadata: name: "frontend" spec: template: 1 metadata: labels: name: "frontend" spec: containers: - name: "helloworld" image: "openshift/origin-ruby-sample" ports: - containerPort: 8080 protocol: "TCP" replicas: 5 2 selector: name: "frontend" triggers: - type: "ConfigChange" 3 - type: "ImageChange" 4 imageChangeParams: automatic: true containerNames: - "helloworld" from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" strategy: 5 type: "Rolling"
1. The replication controller template named frontend describes a simple Ruby application.
2. There will be 5 replicas of frontend by default.
3. A configuration change trigger causes a new deployment to be created any time the replication controller template changes.
4. An image change trigger causes a new deployment to be created each time a new version of the origin-ruby-sample:latest image stream tag is available.
5. The Rolling strategy is the default and may be omitted.
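Because a deployment configuration is an ordinary API resource, a definition like the one above can be saved to a file and created with oc (the file name dc.yaml is only illustrative):
$ oc create -f dc.yaml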
14.3. Starting a Deployment
You can start a new deployment manually using the web console, or from the CLI:
$ oc deploy <deployment_config> --latest
If there’s already a deployment in progress, the command will display a message and a new deployment will not be started.
14.4. Viewing a Deployment
To get basic information about recent deployments:
$ oc deploy <deployment_config>
This will show details about the latest and recent deployments, including any currently running deployment.
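Because each deployment is itself a replication controller, recent deployments can also be listed with the generic resource commands:
$ oc get rc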
For more detailed information about a deployment configuration and the latest deployment:
$ oc describe dc <deployment_config>
The web console shows deployments in the Browse tab.
14.5. Canceling a Deployment
To cancel a running or stuck deployment:
$ oc deploy <deployment_config> --cancel
The cancellation is a best-effort operation, and may take some time to complete. It’s possible the deployment will partially or totally complete before the cancellation is effective.
14.6. Retrying a Deployment
To retry the last failed deployment:
$ oc deploy <deployment_config> --retry
If the last deployment didn’t fail, the command will display a message and the deployment will not be retried.
Retrying a deployment restarts the deployment and does not create a new deployment version. The restarted deployment will have the same configuration it had when it failed.
14.7. Rolling Back a Deployment
Rollbacks revert an application back to a previous deployment and can be performed using the REST API, the CLI, or the web console.
To rollback to the last successful deployment:
$ oc rollback <deployment_config>
The deployment configuration’s template will be reverted to match the deployment specified in the rollback command, and a new deployment will be started.
Image change triggers on the deployment configuration are disabled as part of the rollback to prevent unwanted deployments soon after the rollback is complete. To re-enable the image change triggers:
$ oc deploy <deployment_config> --enable-triggers
To roll back to a specific version:
$ oc rollback <deployment_config> --to-version=1
To see what the rollback would look like without performing the rollback:
$ oc rollback <deployment_config> --dry-run
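These flags can be combined. For example, using the frontend configuration from earlier, the following previews a rollback to version 1 without performing it:
$ oc rollback frontend --to-version=1 --dry-run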
14.8. Executing Commands Inside a Container
You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Add the command parameters to the spec field of the deployment configuration. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).
...
spec:
  containers:
    - name: <container_name>
      image: 'image'
      command:
        - '<command>'
      args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
...
For example, to execute the java command with the '-jar' and '/opt/app-root/springboots2idemo.jar' arguments:
...
spec:
  containers:
    - name: example-spring-boot
      image: 'image'
      command:
        - java
      args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
...
14.9. Viewing Deployment Logs
To view the logs of the latest deployment for a given deployment configuration:
$ oc logs dc/<deployment_config> [--follow]
Logs can be retrieved either while the deployment is running or if it has failed. If the deployment was successful, there will be no logs to view.
You can also view logs from older deployments:
$ oc logs --version=1 dc/<deployment_config>
This command returns the logs from the first deployment of the provided deployment configuration, if and only if that deployment exists (i.e., it has failed and has not been manually deleted or pruned).
14.10. Triggers
A deployment configuration can contain triggers, which drive the creation of new deployments in response to events; at the moment, only events inside OpenShift Enterprise are supported.
If no triggers are defined on a deployment configuration, deployments must be started manually.
14.10.1. Configuration Change Trigger
The ConfigChange trigger results in a new deployment whenever changes are detected in the pod template of the deployment configuration.
If only a ConfigChange trigger is defined on a deployment configuration, the first deployment is automatically created soon after the deployment configuration itself is created.
Example 14.1. A ConfigChange Trigger
triggers:
  - type: "ConfigChange"
14.10.2. Image Change Trigger
The ImageChange trigger results in a new deployment whenever the value of an image stream tag changes, either by a build or because it was imported.
Example 14.2. An ImageChange Trigger
triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true 1
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
      containerNames:
        - "helloworld"
1. If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the deployment configuration's helloworld container, a new deployment is created using the new image for the helloworld container.
If an ImageChange trigger is defined on a deployment configuration (with a ConfigChange trigger or with automatic=true) and the ImageStreamTag pointed at by the ImageChange trigger does not exist yet, then the first deployment automatically starts as soon as an image is imported or pushed by a build to the ImageStreamTag.
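For illustration, one way to make a new image available under the tracked tag is to tag it in with oc; the source image shown here is hypothetical:
$ oc tag openshift/ruby-hello-world:latest origin-ruby-sample:latest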
14.11. Strategies
A deployment strategy determines the deployment process, and is defined by the deployment configuration. Each application has different requirements for availability (and other considerations) during deployments. OpenShift Enterprise provides strategies to support a variety of deployment scenarios.
A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment is stopped.
The Rolling strategy is the default strategy used if no strategy is specified on a deployment configuration.
14.11.1. Rolling Strategy
The rolling strategy performs a rolling update and supports lifecycle hooks for injecting code into the deployment process.
The rolling deployment strategy waits for pods to pass their readiness check before scaling down old components; if new pods do not pass their readiness check within a configurable timeout, the deployment fails.
The following is an example of the Rolling strategy:
strategy:
  type: Rolling
  rollingParams:
    timeoutSeconds: 120 1
    maxSurge: "20%" 2
    maxUnavailable: "10%" 3
    pre: {} 4
    post: {}
1. How long to wait, in seconds, for a scaling event before giving up and failing the deployment.
2. maxSurge is optional and defaults to 25% if not specified; see the parameter description below.
3. maxUnavailable is optional and defaults to 25% if not specified; see the parameter description below.
4. pre and post are both lifecycle hooks.
The Rolling strategy will:
1. Execute any pre lifecycle hook.
2. Scale up the new deployment based on the surge configuration.
3. Scale down the old deployment based on the max unavailable configuration.
4. Repeat this scaling until the new deployment has reached the desired replica count and the old deployment has been scaled to zero.
5. Execute any post lifecycle hook.
When scaling down, the Rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment will eventually time out and result in a deployment failure.
When executing the post lifecycle hook, all failures will be ignored regardless of the failure policy specified on the hook.
The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.
These parameters allow the deployment to be tuned for availability and speed. For example:
- maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
- maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
- maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.
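As a sketch, the in-place variant from the list above could be written into the deployment configuration's strategy like this (the values are illustrative):
strategy:
  type: Rolling
  rollingParams:
    maxSurge: "0%"
    maxUnavailable: "10%"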
14.11.2. Recreate Strategy
The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.
The following is an example of the Recreate strategy:
strategy:
  type: Recreate
  recreateParams: 1
    pre: {} 2
    mid: {}
    post: {}
1. recreateParams are optional.
2. pre, mid, and post are lifecycle hooks.
The Recreate strategy will:
- Execute any "pre" lifecycle hook.
- Scale down the previous deployment to zero.
- Execute any "mid" lifecycle hook.
- Scale up the new deployment.
- Execute any "post" lifecycle hook.
During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.
14.11.3. Custom Strategy
The Custom strategy allows you to provide your own deployment behavior.
The following is an example of the Custom strategy:
strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
      - name: ENV_1
        value: VALUE_1
In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.
Additionally, OpenShift Enterprise provides the following environment variables to the strategy process:
Environment Variable | Description
---|---
OPENSHIFT_DEPLOYMENT_NAME | The name of the new deployment (a replication controller).
OPENSHIFT_DEPLOYMENT_NAMESPACE | The namespace of the new deployment.
The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
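As an illustration only, a custom strategy image's entry point might use these variables to activate the new deployment; this sketch assumes the image bundles the oc client, runs with sufficient permissions, and that <desired_replicas> is replaced with a real value:
#!/bin/sh
# Hypothetical strategy entry point: scale the new deployment
# (a replication controller) up using the environment variables
# provided by OpenShift Enterprise.
oc scale rc "$OPENSHIFT_DEPLOYMENT_NAME" \
  --namespace="$OPENSHIFT_DEPLOYMENT_NAMESPACE" \
  --replicas=<desired_replicas>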
14.12. Lifecycle Hooks
The Recreate and Rolling strategies support lifecycle hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy.
The following is an example of a "pre" lifecycle hook:
pre:
  failurePolicy: Abort
  execNewPod: {} 1
1. execNewPod is a pod-based lifecycle hook.
Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:
Value | Description
---|---
Abort | The deployment should be considered a failure if the hook fails.
Retry | The hook execution should be retried until it succeeds.
Ignore | Any hook failure should be ignored and the deployment should proceed.
Some hook points for a strategy might support only a subset of failure policy values. For example, the Recreate and Rolling strategies do not currently support the Abort policy for a "post" deployment lifecycle hook. Consult the documentation for a given strategy for details on any restrictions regarding lifecycle hooks.
Hooks have a type-specific field that describes how to execute the hook. Currently pod-based hooks are the only supported hook type, specified by the execNewPod field.
14.12.1. Pod-based Lifecycle Hook
Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a deployment configuration.
The following simplified example deployment configuration uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: helloworld
          image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld 1
          command: [ "/usr/bin/command", "arg1", "arg2" ] 2
          env: 3
            - name: CUSTOM_VAR1
              value: custom_value1
          volumes:
            - data 4
1. The helloworld name refers to spec.template.spec.containers[0].name.
2. This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
3. env is an optional set of environment variables for the hook container.
4. volumes is an optional set of volume references for the hook container.
In this example, the "pre" hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod will have the following properties:
- The hook command will be /usr/bin/command arg1 arg2.
- The hook container will have the CUSTOM_VAR1=custom_value1 environment variable.
- The hook failure policy is Abort, meaning the deployment will fail if the hook fails.
- The hook pod will inherit the data volume from the deployment configuration pod.
14.13. Deployment Resources
A deployment is completed by a pod that consumes resources (memory and CPU) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the Recreate, Rolling, or Custom deployment strategies.
In the following example, each of resources, cpu, and memory is optional:
type: "Recreate"
resources:
  limits:
    cpu: "100m" 1
    memory: "256Mi" 2
1. cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
2. memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
However, if a quota has been defined for your project, one of the following two items is required:
- A resources section set with an explicit requests:
type: "Recreate"
resources:
  requests: 1
    cpu: "100m"
    memory: "256Mi"
1. The requests object contains the list of resources that correspond to the list of resources in the quota.
- A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process.
Otherwise, deploy pod creation will fail, citing a failure to satisfy quota.
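For reference, a LimitRange that supplies such defaults might look like the following sketch (the name and values are illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    - type: Container
      default:
        cpu: "100m"
        memory: "256Mi"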
14.14. Manual Scaling
In addition to rollbacks, you can exercise fine-grained control over the number of replicas from the web console, or by using the oc scale command. For example, the following command sets the replicas in the deployment configuration frontend to 3.
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the deployment configuration frontend.
14.15. Assigning Pods to Specific Nodes
You can use node selectors in conjunction with labeled nodes to control pod placement.
OpenShift Enterprise administrators can assign labels during an advanced installation, or add them to a node after installation.
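For example, a cluster administrator could label an existing node as follows (the node name is illustrative):
$ oc label node node1.example.com disktype=ssd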
Cluster administrators can set the default node selector for your project in order to restrict pod placement to specific nodes. As an OpenShift developer, you can set a node selector on a pod configuration to restrict nodes even further.
To add a node selector when creating a pod, edit the pod configuration, and add the nodeSelector value. This can be added to a single pod configuration, or in a pod template:
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    disktype: ssd
...
Pods created when the node selector is in place are assigned to nodes with the specified labels.
The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod will only ever be scheduled on nodes that have all three labels.
Labels can only be set to one value, so setting a node selector of region=west in a pod configuration that has region=east as the administrator-set default results in a pod that will never be scheduled.
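Before choosing a node selector, you can check which nodes carry a given label by using a label selector; for example, with the disktype label from above:
$ oc get nodes -l disktype=ssd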
14.16. Running a Pod with a Different Service Account
You can run a pod with a service account other than the default:
Edit the deployment configuration:
$ oc edit dc/<deployment_config>
Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>