Chapter 8. Deployments
8.1. Understanding Deployment and DeploymentConfig objects

The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:

- A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template.
- Deployment objects involve one or more replica sets, which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers, which preceded replica sets.
- One or more pods, which represent an instance of a particular version of an application.

Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.
8.1.1. Building blocks of a deployment

Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController, respectively, as their building blocks.

Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately.

If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy.

The following sections provide further details on these objects.
8.1.1.1. Replica sets

A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time.

Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create.

The following is an example ReplicaSet definition:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always

- 1 - A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined.
- 2 - Equality-based selector to specify resources with labels that match the selector.
- 3 - Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend.
8.1.1.2. Replication controllers
Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.
A replication controller configuration consists of:
- The number of replicas desired, which can be adjusted at run time.
- A Pod definition to use when creating a replicated pod.
- A selector for identifying managed pods.

A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates.
The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler.
Use a DeploymentConfig object to orchestrate pod creation. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers.
The following is an example definition of a replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
8.1.2. Deployments

Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment. Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles.

For example, the following deployment definition creates a replica set to bring up one hello-openshift pod:
Deployment definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
8.1.3. DeploymentConfig objects

Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods.

However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller.

The DeploymentConfig deployment system provides the following capabilities:

- A DeploymentConfig object, which is a template for running applications.
- Triggers that drive automated deployments in response to events.
- User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process.
- A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment.
- Versioning of your application to support rollbacks either manually or automatically in case of deployment failure.
- Manual replication scaling and autoscaling.
When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one.

Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.
The OpenShift Container Platform DeploymentConfig object defines the following details:

- The elements of a ReplicationController definition.
- Triggers for creating a new deployment automatically.
- The strategy for transitioning between deployments.
- Lifecycle hooks.
Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployer pod remains for an indefinite amount of time after it completes the deployment to retain its deployment logs. When a deployment is superseded by another, the previous replication controller is retained to enable easy rollback if needed.
Example DeploymentConfig definition

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    name: frontend
  template: { ... }
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: hello-openshift:latest
    type: ImageChange
  strategy:
    type: Rolling

- 1 - A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration.
- 2 - An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream.
- 3 - The default Rolling strategy makes a downtime-free transition between deployments.
8.1.4. Comparing Deployment and DeploymentConfig objects

Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.

The following sections go into more detail on the differences between the two object types to further help you decide which type to use.
8.1.4.1. Design

One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployment objects take availability over consistency.

For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod.

However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs.
8.1.4.2. Deployment-specific features

8.1.4.2.1. Rollover

The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that a Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one.

DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for Deployment objects.
8.1.4.2.2. Proportional scaling

Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.

DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller.
8.1.4.2.3. Pausing mid-rollout
Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes.
8.1.4.3. DeploymentConfig object-specific features

8.1.4.3.1. Automatic rollbacks
Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure.
8.1.4.3.2. Triggers
Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:
$ oc rollout pause deployments/<name>
8.1.4.3.3. Lifecycle hooks
Deployments do not yet support any lifecycle hooks.
8.1.4.3.4. Custom strategies
Deployments do not support user-specified custom deployment strategies.
8.2. Managing deployment processes

8.2.1. Managing DeploymentConfig objects

DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated.
8.2.1.1. Starting a deployment
You can start a rollout to begin the deployment process of your application.
Procedure
To start a new deployment process from an existing DeploymentConfig object, run the following command:

$ oc rollout latest dc/<name>

Note: If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed.
8.2.1.2. Viewing a deployment
You can view a deployment to get basic information about all the available revisions of your application.
Procedure
To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command:

$ oc rollout history dc/<name>

To view details specific to a revision, add the --revision flag:

$ oc rollout history dc/<name> --revision=1

For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command:

$ oc describe dc <name>
8.2.1.3. Retrying a deployment

If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process.

Procedure

To restart a failed deployment process:

$ oc rollout retry dc/<name>

If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried.

Note: Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed.
8.2.1.4. Rolling back a deployment
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
Procedure
To roll back to the last successful deployed revision of your configuration:

$ oc rollout undo dc/<name>

The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision, then the last successfully deployed revision is used.

Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete.

To re-enable the image change triggers:

$ oc set triggers dc/<name> --auto

Note: Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.
8.2.1.5. Executing commands inside a container

You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT.

Procedure

Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      containers:
      - name: <container_name>
        image: 'image'
        command:
        - '<command>'
        args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'

For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      containers:
      - name: example-spring-boot
        image: 'image'
        command:
        - java
        args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
# ...
8.2.1.6. Viewing deployment logs
Procedure
To stream the logs of the latest revision for a given DeploymentConfig object:

$ oc logs -f dc/<name>

If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
8.2.1.7. Deployment triggers

A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.

Warning: If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
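For illustration, the empty-field case mentioned above can be sketched as a DeploymentConfig fragment with triggers explicitly set to an empty list; the object name is a placeholder:

```yaml
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc  # placeholder name
spec:
  # An explicitly empty triggers list: neither a config change nor an image
  # change trigger fires, so rollouts only start via `oc rollout latest`.
  triggers: []
```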
8.2.1.7.1. Config change deployment triggers

The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object.

Note: If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused.
Config change deployment trigger
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
# ...
  triggers:
    - type: "ConfigChange"
8.2.1.7.2. Image change deployment triggers
The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).
Image change deployment trigger
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
# ...
  triggers:
    - type: "ImageChange"
      imageChangeParams:
        automatic: true
        from:
          kind: "ImageStreamTag"
          name: "origin-ruby-sample:latest"
          namespace: "myproject"
        containerNames:
          - "helloworld"

- 1 - If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container.

Note: If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false, or with automatic=true) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process automatically starts as soon as an image is imported or pushed by a build to the image stream tag.
8.2.1.7.3. Setting deployment triggers

Procedure

You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set an image change trigger, use the following command:

$ oc set triggers dc/<dc_name> \
    --from-image=<project>/<image>:<tag> -c <container_name>
8.2.1.8. Setting deployment resources
A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits.
The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies.
Procedure
In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-openshift
# ...
spec:
# ...
  type: "Recreate"
  resources:
    limits:
      cpu: "100m"
      memory: "256Mi"
      ephemeral-storage: "1Gi"

- 1 - cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
- 2 - memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
- 3 - ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30).

However, if a quota has been defined for your project, one of the following two items is required:

- A resources section set with an explicit requests:

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: hello-openshift
  # ...
  spec:
  # ...
    type: "Recreate"
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
        ephemeral-storage: "1Gi"

  - 1 - The requests object contains the list of resources that correspond to the list of resources in the quota.

- A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process.

To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota.
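As a sketch of the second option above, a minimal LimitRange object can supply default container limits and requests for a project. The object name and values here are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits  # hypothetical name
spec:
  limits:
  - type: Container
    default:             # default limits for containers that specify none
      cpu: "100m"
      memory: "256Mi"
    defaultRequest:      # default requests, used to satisfy project quotas
      cpu: "50m"
      memory: "128Mi"
```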
8.2.1.9. Scaling manually
In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.
Pods can also be auto-scaled using the oc autoscale command.
Procedure
To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3.

$ oc scale dc frontend --replicas=3

The number of replicas eventually propagates to the desired and current state of the deployment configured by the frontend DeploymentConfig object.
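As a sketch of the autoscaling alternative mentioned above, a HorizontalPodAutoscaler can target the same DeploymentConfig object; the name and thresholds here are illustrative:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80  # scale out when average CPU exceeds 80%
```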
8.2.1.10. Accessing private repositories from DeploymentConfig objects

You can add a secret to your DeploymentConfig object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method.

Procedure

- Create a new project.
- Navigate to Workloads → Secrets.
- Create a secret that contains credentials for accessing a private image repository.
- Navigate to Workloads → DeploymentConfigs.
- Create a DeploymentConfig object.
- On the DeploymentConfig object editor page, set the Pull Secret and save your changes.
8.2.1.11. Assigning pods to specific nodes

You can use node selectors in conjunction with labeled nodes to control pod placement.

Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further.

Procedure

To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
# ...
spec:
  nodeSelector:
    disktype: ssd
# ...

Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator.

For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels.

Note: Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default results in a pod that will never be scheduled.
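The same nodeSelector can equally be placed in the pod template of a DeploymentConfig object, so every pod the deployment creates is constrained the same way. A minimal sketch, with a placeholder name:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc  # placeholder name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: example-dc
    spec:
      nodeSelector:       # applied to every pod created by this deployment
        disktype: ssd
      containers:
      - name: helloworld
        image: openshift/hello-openshift
```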
8.2.1.12. Running a pod with a different service account

You can run a pod with a service account other than the default.

Procedure

Edit the DeploymentConfig object:

$ oc edit dc/<deployment_config>

Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc
# ...
spec:
# ...
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
8.3. Using deployment strategies

Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change.

Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes.

Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features.
8.3.1. Choosing a deployment strategy
Consider the following when choosing a deployment strategy:
- Long-running connections must be handled gracefully.
- Database conversions can be complex and must be done and rolled back along with the application.
- If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition.
- You must have the infrastructure to do this.
- If you have a non-isolated test environment, you can break both new and old versions.
A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. The default timeout is 10m, a value set in TimeoutSeconds in dc.spec.strategy.*params.
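The readiness check a strategy consults comes from the readiness probe defined on the container. A minimal sketch of an HTTP readiness probe in a pod template; the path and timings are illustrative:

```yaml
spec:
  containers:
  - name: helloworld
    image: openshift/hello-openshift
    readinessProbe:
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # then check every 10 seconds
```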
8.3.2. Rolling strategy

A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object.

A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components.
When to use a rolling deployment:
- When you want to take no downtime during an application update.
- When your application supports having old code and new code running at the same time.
A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility.
Example rolling strategy definition

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
# ...
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 120
      maxSurge: "20%"
      maxUnavailable: "10%"
      pre: {}
      post: {}

- 1 - The time to wait between individual pod updates. If unspecified, this value defaults to 1.
- 2 - The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1.
- 3 - The time to wait for a scaling event before giving up. Optional; the default is 600. Here, giving up means automatically rolling back to the previous complete deployment.
- 4 - maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure.
- 5 - maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure.
- 6 - pre and post are both lifecycle hooks.
The rolling strategy:

- Executes any pre lifecycle hook.
- Scales up the new replication controller based on the surge count.
- Scales down the old replication controller based on the max unavailable count.
- Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero.
- Executes any post lifecycle hook.
When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure.
The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (for example, 10%) or an absolute value (for example, 2). The default value for both is 25%.
These parameters allow the deployment to be tuned for availability and speed. For example:
- maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
- maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
- maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.

Generally, if you want fast rollouts, use maxSurge. If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable.
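For example, the first combination above, which keeps full capacity during the update, can be expressed in rollingParams like this (a sketch showing only the relevant fields):

```yaml
strategy:
  type: Rolling
  rollingParams:
    maxUnavailable: "0%"   # never drop below the original replica count
    maxSurge: "20%"        # allow up to 20% extra pods during the rollout
```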
8.3.2.1. Canary deployments

All rolling deployments in OpenShift Container Platform are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object is automatically rolled back.
The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy.
8.3.2.2. Creating a rolling deployment

Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI.

Procedure

- Create an application based on the example deployment images found in Quay.io:

  $ oc new-app quay.io/openshifttest/deployment-example:latest

  Note: This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the oc expose dc/deployment-example --port=<port> command after completing this procedure.

- If you have the router installed, make the application available via a route or use the service IP directly:

  $ oc expose svc/deployment-example

- Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.

- Scale the DeploymentConfig object up to three replicas:

  $ oc scale dc/deployment-example --replicas=3

- Trigger a new deployment automatically by tagging a new version of the example as the latest tag:

  $ oc tag deployment-example:v2 deployment-example:latest

- In your browser, refresh the page until you see the v2 image.

- When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1:

  $ oc describe dc deployment-example
During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues.
If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version.
8.3.2.3. Editing a deployment by using the Developer perspective
You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective.
Prerequisites
- You are in the Developer perspective of the web console.
- You have created an application.
Procedure
- Navigate to the Topology view.
- Click your application to see the Details panel.
- In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page.
You can edit the following Advanced options for your deployment:

- Optional: You can pause rollouts by clicking Pause rollouts, and then selecting the Pause rollouts for this deployment checkbox.

  By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.

- Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas.
- Click Save.
8.3.2.4. Starting a rolling deployment using the Developer perspective
You can upgrade an application by starting a rolling deployment.
Prerequisites
- You are in the Developer perspective of the web console.
- You have created an application.
Procedure
- In the Topology view, click the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy.
In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one.
Figure 8.1. Rolling update
8.3.3. Recreate strategy
The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.
Example recreate strategy definition
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-openshift
# ...
spec:
# ...
  strategy:
    type: Recreate
    recreateParams:
      pre: {}
      mid: {}
      post: {}
The recreate strategy:

- Executes any pre lifecycle hook.
- Scales down the previous deployment to zero.
- Executes any mid lifecycle hook.
- Scales up the new deployment.
- Executes any post lifecycle hook.
During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.
When to use a recreate deployment:
- When you must run migrations or other data transformations before your new code starts.
- When you do not support having new and old versions of your application code running at the same time.
- When you want to use an RWO volume, which does not support being shared between multiple replicas.
A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.
8.3.3.1. Editing a deployment by using the Developer perspective
You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective.
Prerequisites
- You are in the Developer perspective of the web console.
- You have created an application.
Procedure
- Navigate to the Topology view.
- Click your application to see the Details panel.
- In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page.
You can edit the following Advanced options for your deployment:

- Optional: You can pause rollouts by clicking Pause rollouts, and then selecting the Pause rollouts for this deployment checkbox.

  By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.

- Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas.
- Click Save.
8.3.3.2. Starting a recreate deployment using the Developer perspective
You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console.
Prerequisites
- Ensure that you are in the Developer perspective of the web console.
- Ensure that you have created an application using the Add view and see it deployed in the Topology view.
Procedure
To switch to a recreate update strategy and to upgrade an application:
- Click your application to see the Details panel.
- In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application.
- In the YAML editor, change `spec.strategy.type` to `Recreate` and click Save.
- In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate.
Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version.
Figure 8.2. Recreate update
8.3.4. Custom strategy
The custom strategy allows you to provide your own deployment behavior.
Example custom strategy definition
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
name: example-dc
# ...
spec:
# ...
strategy:
type: Custom
customParams:
image: organization/strategy
command: [ "command", "arg1" ]
environment:
- name: ENV_1
value: VALUE_1
In the above example, the `organization/strategy` container image provides the deployment behavior. The optional `command` array overrides any `CMD` directive specified in the image's `Dockerfile`.
Additionally, OpenShift Container Platform provides the following environment variables to the deployment process:
| Environment variable | Description |
|---|---|
| `OPENSHIFT_DEPLOYMENT_NAME` | The name of the new deployment, a replication controller. |
| `OPENSHIFT_DEPLOYMENT_NAMESPACE` | The namespace of the new deployment. |
The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
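As a hedged illustration (not part of the product), a minimal custom deployer entrypoint could read these variables and scale up the new replication controller. The script below only prints the `oc` command it would run, and its default values are placeholders:

```shell
#!/bin/sh
# Minimal sketch of a custom deployer entrypoint (illustrative only).
# OpenShift Container Platform injects these variables into the deployer
# process; the defaults below are placeholders for running outside a cluster.
OPENSHIFT_DEPLOYMENT_NAME="${OPENSHIFT_DEPLOYMENT_NAME:-example-dc-2}"
OPENSHIFT_DEPLOYMENT_NAMESPACE="${OPENSHIFT_DEPLOYMENT_NAMESPACE:-my-project}"

# The new deployment starts at zero replicas; the strategy is responsible
# for making it active. A real deployer would execute `oc scale`; this
# sketch only prints the command it would run.
echo "oc scale rc/${OPENSHIFT_DEPLOYMENT_NAME} --replicas=1 -n ${OPENSHIFT_DEPLOYMENT_NAMESPACE}"
```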
Alternatively, use the `customParams` object to inject the custom deployment logic into the existing deployment strategies. Implement a custom pre-sync hook, and call the `openshift-deploy` binary:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
name: example-dc
# ...
spec:
# ...
strategy:
type: Rolling
customParams:
command:
- /bin/sh
- -c
- |
set -e
openshift-deploy --until=50%
echo Halfway there
openshift-deploy
echo Complete
This results in the following deployment:
Started deployment #2
--> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling custom-deployment-2 up to 1
--> Reached 50% (currently 50%)
Halfway there
--> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling custom-deployment-1 down to 1
Scaling custom-deployment-2 up to 2
Scaling custom-deployment-1 down to 0
--> Success
Complete
If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.
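As a hedged sketch, the strategy container can locate its credentials at the standard in-cluster service account mount; the paths below are the usual Kubernetes locations, and the API call is only printed here, not executed:

```shell
# Sketch: locating in-cluster API credentials from inside a deployer pod.
# These paths are the standard service account mount locations in Kubernetes.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
APISERVER="https://kubernetes.default.svc"

# Inside a pod the token file exists; outside, fall back to a placeholder.
TOKEN="$(cat "$SA_DIR/token" 2>/dev/null || echo placeholder-token)"

# A real strategy process would call the API like this (printed, not run):
echo "curl --cacert $SA_DIR/ca.crt -H 'Authorization: Bearer \$TOKEN' $APISERVER/apis/apps/v1"
```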
8.3.4.1. Editing a deployment by using the Developer perspective
You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective.
Prerequisites
- You are in the Developer perspective of the web console.
- You have created an application.
Procedure
- Navigate to the Topology view.
- Click your application to see the Details panel.
- In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page.
You can edit the following Advanced options for your deployment:
- Optional: You can pause rollouts by clicking Pause rollouts, and then selecting the Pause rollouts for this deployment checkbox.
  By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.
- Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas.
- Click Save.
8.3.5. Lifecycle hooks
The rolling and recreate strategies support lifecycle hooks, or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy:
Example pre lifecycle hook
pre:
  failurePolicy: Abort
  execNewPod: {}

`execNewPod` is a pod-based lifecycle hook.
Every hook has a failure policy, which defines the action the strategy should take when a hook failure is encountered:
| Failure policy | Description |
|---|---|
| `Abort` | The deployment process is considered a failure if the hook fails. |
| `Retry` | The hook execution is retried until it succeeds. |
| `Ignore` | Any hook failure is ignored and the deployment proceeds. |
Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the `execNewPod` field.
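As a hedged sketch, hooks for the recreate strategy are configured in the `recreateParams` stanza; the following combines the hook points (`pre`, `mid`, and `post`) with different failure policies for illustration:

```yaml
strategy:
  type: Recreate
  recreateParams:
    pre:
      failurePolicy: Abort
      execNewPod: {}
    mid:
      failurePolicy: Retry
      execNewPod: {}
    post:
      failurePolicy: Ignore
      execNewPod: {}
```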
8.3.5.1. Pod-based lifecycle hook
Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a `DeploymentConfig` object.
The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
name: frontend
spec:
template:
metadata:
labels:
name: frontend
spec:
containers:
- name: helloworld
image: openshift/origin-ruby-sample
replicas: 5
selector:
name: frontend
strategy:
type: Rolling
rollingParams:
pre:
failurePolicy: Abort
execNewPod:
containerName: helloworld
command: [ "/usr/bin/command", "arg1", "arg2" ]
env:
- name: CUSTOM_VAR1
value: custom_value1
volumes:
- data
1. The `helloworld` name refers to `spec.template.spec.containers[0].name`.
2. This `command` overrides any `ENTRYPOINT` defined by the `openshift/origin-ruby-sample` image.
3. `env` is an optional set of environment variables for the hook container.
4. `volumes` is an optional set of volume references for the hook container.
In this example, the `pre` hook is executed in a new pod using the `openshift/origin-ruby-sample` image from the `helloworld` container. The hook pod has the following properties:
- The hook command is `/usr/bin/command arg1 arg2`.
- The hook container has the `CUSTOM_VAR1=custom_value1` environment variable.
- The hook failure policy is `Abort`, meaning the deployment process fails if the hook fails.
- The hook pod inherits the `data` volume from the `DeploymentConfig` object pod.
8.3.5.2. Setting lifecycle hooks
You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI.
Procedure
Use the `oc set deployment-hook` command to set the type of hook you want: `--pre`, `--mid`, or `--post`. For example, to set a pre-deployment hook:

$ oc set deployment-hook dc/frontend \
    --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
    --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2
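This adds a `pre` hook to the strategy of the `frontend` deployment config. A sketch of the resulting strategy stanza, mirroring the rolling strategy example earlier in this section:

```yaml
strategy:
  type: Rolling
  rollingParams:
    pre:
      failurePolicy: Abort
      execNewPod:
        containerName: helloworld
        command: ["/usr/bin/command", "arg1", "arg2"]
        env:
        - name: CUSTOM_VAR1
          value: custom_value1
        volumes:
        - data
```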
8.4. Using route-based deployment strategies
Deployment strategies provide a way for the application to evolve. Some strategies use `Deployment` objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with `Deployment` objects to impact specific routes.
The most common route-based strategy is to use a blue-green deployment. The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version.
A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users.
A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies.
The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics, the deployment configurations might have to be scaled.
8.4.1. Proxy shards and traffic splitting
In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.
In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests, send them to both a separate cluster and a local instance of the application, and compare the results. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.
Any TCP (or UDP) proxy could be run under the desired shard. Use the `oc scale` command to alter the relative number of instances serving requests under the proxy shard.
8.4.2. N-1 compatibility
Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem.
This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.
For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.
One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.
8.4.3. Graceful termination
OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.
On shutdown, OpenShift Container Platform sends a `TERM` signal to the processes in the container. Application code, on receiving `SIGTERM`, stops accepting new connections. Doing so ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting.

After the graceful termination period expires, a process that has not exited is sent the `KILL` signal, which immediately ends the process. The `terminationGracePeriodSeconds` attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary.
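For example, a longer grace period can be requested in the pod template; the pod name and container details below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod  # hypothetical name
spec:
  terminationGracePeriodSeconds: 60  # default is 30 seconds
  containers:
  - name: app
    image: openshift/deployment-example
```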
8.4.4. Blue-green deployments
Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route.
Because many applications depend on persistent data, you must have an application that supports N-1 compatibility, which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer.
Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version.
8.4.4.1. Setting up a blue-green deployment
Blue-green deployments use two `Deployment` objects. Both are running, and the one in production depends on the service the route specifies, with each `Deployment` object exposed to a different service.
Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.
You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live.
If necessary, you can roll back to the older (blue) version by switching the service back to the previous version.
Procedure
- Create two independent application components.
  - Create a copy of the example application running the `v1` image under the `example-blue` service:
    $ oc new-app openshift/deployment-example:v1 --name=example-blue
  - Create a second copy that uses the `v2` image under the `example-green` service:
    $ oc new-app openshift/deployment-example:v2 --name=example-green
- Create a route that points to the old service:
  $ oc expose svc/example-blue --name=bluegreen-example
- Browse to the application at `bluegreen-example-<project>.<router_domain>` to verify you see the `v1` image.
- Edit the route and change the service name to `example-green`:
  $ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}'
- To verify that the route has changed, refresh the browser until you see the `v2` image.
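The switch amounts to changing `spec.to.name` on the route. A sketch of the route after the patch:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: bluegreen-example
spec:
  to:
    kind: Service
    name: example-green  # switch back to example-blue to roll back
```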
8.4.5. A/B deployments
The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version.
Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance.
In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user’s reaction to the different versions to inform design decisions.
For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together.
OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI.
8.4.5.1. Load balancing for A/B testing
The user sets up a route with multiple services. Each service handles a version of the application.
Each service is assigned a `weight` and the portion of requests to each service is `service_weight` divided by the `sum_of_weights`. The `weight` for each service is distributed to the service's endpoints so that the sum of the endpoint `weights` is the service `weight`.

The route can have up to four services. The `weight` for the service can be between `0` and `256`. When the `weight` is `0`, the service does not participate in load balancing but continues to serve existing persistent connections. When the service `weight` is not `0`, each endpoint has a minimum `weight` of `1`. Because of this, a service with a lot of endpoints can end up with a higher `weight` than intended. In this case, reduce the number of pods to get the expected load balance `weight`.
Procedure
To set up the A/B environment:
- Create the two applications and give them different names. Each creates a `Deployment` object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version.
  - Create the first application. The following example creates an application called `ab-example-a`:
    $ oc new-app openshift/deployment-example --name=ab-example-a
  - Create the second application:
    $ oc new-app openshift/deployment-example:v2 --name=ab-example-b
  Both applications are deployed and services are created.
- Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version.
    $ oc expose svc/ab-example-a
  Browse to the application at `ab-example-a.<project>.<router_domain>` to verify that you see the expected version.
- When you deploy the route, the router balances the traffic according to the `weights` specified for the services. At this point, there is a single service with default `weight=1`, so all requests go to it. Adding the other service as an `alternateBackends` and adjusting the `weights` brings the A/B setup to life. This can be done by the `oc set route-backends` command or by editing the route.
  Note: When using `alternateBackends`, also use the `roundrobin` load balancing strategy to ensure requests are distributed as expected to the services based on weight. `roundrobin` can be set for a route by using a route annotation. See the Additional resources section for more information about route annotations.
  Setting the `weight` of a service to `0` by using `oc set route-backends` means the service does not participate in load balancing, but continues to serve existing persistent connections.
  Note: Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads.
To edit the route, run:
$ oc edit route <route_name>

Example output

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-alternate-service
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
# ...
spec:
  host: ab-example.my-project.my-domain
  to:
    kind: Service
    name: ab-example-a
    weight: 10
  alternateBackends:
  - kind: Service
    name: ab-example-b
    weight: 15
# ...
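Given the weights in the example route above (`10` and `15`), the traffic split follows `service_weight / sum_of_weights`; a quick shell sketch of the arithmetic:

```shell
# Traffic share per service = service_weight / sum_of_weights.
# Using the weights from the example route above (10 and 15):
A=10; B=15
SUM=$((A + B))
echo "ab-example-a: $((100 * A / SUM))%"  # prints 40%
echo "ab-example-b: $((100 * B / SUM))%"  # prints 60%
```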
8.4.5.1.1. Managing weights of an existing route using the web console
Procedure
- Navigate to the Networking → Routes page.
- Click the Actions menu next to the route you want to edit and select Edit Route.
- Edit the YAML file. Update the `weight` to be an integer between `0` and `256` that specifies the relative weight of the target against other target reference objects. The value `0` suppresses requests to this back end. The default is `100`. Run `oc explain routes.spec.alternateBackends` for more information about the options.
- Click Save.
8.4.5.1.2. Managing weights of a new route using the web console
- Navigate to the Networking → Routes page.
- Click Create Route.
- Enter the route Name.
- Select the Service.
- Click Add Alternate Service.
- Enter a value for Weight and Alternate Service Weight. Enter a number between `0` and `255` that depicts relative weight compared with other targets. The default is `100`.
- Select the Target Port.
- Click Create.
8.4.5.1.3. Managing weights using the CLI
Procedure
- To manage the services and corresponding weights load balanced by the route, use the `oc set route-backends` command:
  $ oc set route-backends ROUTENAME \
      [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]
  For example, the following sets `ab-example-a` as the primary service with `weight=198` and `ab-example-b` as the first alternate service with a `weight=2`:
  $ oc set route-backends ab-example ab-example-a=198 ab-example-b=2
  This means 99% of traffic is sent to service `ab-example-a` and 1% to service `ab-example-b`.
  This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load.
- Run the command with no flags to verify the current configuration:
  $ oc set route-backends ab-example
  Example output
  NAME               KIND     TO            WEIGHT
  routes/ab-example  Service  ab-example-a  198 (99%)
  routes/ab-example  Service  ab-example-b  2   (1%)
- To alter the weight of an individual service relative to itself or to the primary service, use the `--adjust` flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed.
  The following example alters the weight of the `ab-example-a` and `ab-example-b` services:
  $ oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10
  Alternatively, alter the weight of a service by specifying a percentage:
  $ oc set route-backends ab-example --adjust ab-example-b=5%
  By specifying `+` before the percentage declaration, you can adjust a weighting relative to the current setting. For example:
  $ oc set route-backends ab-example --adjust ab-example-b=+15%
- The `--equal` flag sets the `weight` of all services to `100`:
  $ oc set route-backends ab-example --equal
- The `--zero` flag sets the `weight` of all services to `0`. All requests then return with a 503 error.
  Note: Not all routers may support multiple or weighted backends.
8.4.5.1.4. One service, multiple Deployment objects
Procedure
- Create a new application, adding a label `ab-example=true` that will be common to all shards:
  $ oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA
  $ oc delete svc/ab-example-a
  The application is deployed and a service is created. This is the first shard.
- Make the application available via a route, or use the service IP directly:
  $ oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true
  $ oc expose service ab-example
- Browse to the application at `ab-example-<project_name>.<router_domain>` to verify you see the `v1` image.
- Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables:
  $ oc new-app openshift/deployment-example:v2 \
      --name=ab-example-b --labels=ab-example=true \
      SUBTITLE="shard B" COLOR="red" --as-deployment-config=true
  $ oc delete svc/ab-example-b
- At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you.
- To force your browser to one or the other shard:
  - Use the `oc scale` command to reduce replicas of `ab-example-a` to `0`:
    $ oc scale dc/ab-example-a --replicas=0
    Refresh your browser to show `v2` and `shard B` (in red).
  - Scale `ab-example-a` to `1` replica and `ab-example-b` to `0`:
    $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
    Refresh your browser to show `v1` and `shard A` (in blue).
- If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the `SUBTITLE` environment variable in either `Deployment` object:
  $ oc edit dc/ab-example-a
  or
  $ oc edit dc/ab-example-b