This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.

3.2. Managing deployment processes
3.2.1. Managing DeploymentConfig objects
DeploymentConfig objects can be managed from the OpenShift Container Platform web console’s Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated.
3.2.1.1. Starting a deployment
You can start a rollout to begin the deployment process of your application.
Procedure
To start a new deployment process from an existing `DeploymentConfig` object, run the following command:

```
$ oc rollout latest dc/<name>
```

Note: If a deployment process is already in progress, the command displays a message and a new replication controller is not deployed.
3.2.1.2. Viewing a deployment
You can view a deployment to get basic information about all the available revisions of your application.
Procedure
To show details about all recently created replication controllers for the provided `DeploymentConfig` object, including any currently running deployment process, run the following command:

```
$ oc rollout history dc/<name>
```

To view details specific to a revision, add the `--revision` flag:

```
$ oc rollout history dc/<name> --revision=1
```

For more detailed information about a `DeploymentConfig` object and its latest revision, use the `oc describe` command:

```
$ oc describe dc <name>
```
3.2.1.3. Retrying a deployment
If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process.
Procedure
To restart a failed deployment process, run the following command:

```
$ oc rollout retry dc/<name>
```

If the latest revision was deployed successfully, the command displays a message and the deployment process is not retried.

Note: Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed.
3.2.1.4. Rolling back a deployment
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
Procedure
To roll back to the last successfully deployed revision of your configuration, run the following command:

```
$ oc rollout undo dc/<name>
```

The `DeploymentConfig` object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with `--to-revision`, the last successfully deployed revision is used.

Image change triggers on the `DeploymentConfig` object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers, run the following command:

```
$ oc set triggers dc/<name> --auto
```
`DeploymentConfig` objects also support automatically rolling back to the last successful revision of the configuration if the latest deployment process fails. In that case, the template that failed to deploy is preserved by the system, and it is up to users to fix their configurations.
3.2.1.5. Executing commands inside a container
You can add a command to a container, which modifies the container's start-up behavior by overriding the image's `ENTRYPOINT`. This is different from a lifecycle hook, which instead runs once per deployment at a specified time.
Procedure
Add the `command` parameter to the `spec` field of the `DeploymentConfig` object. You can also add an `args` field, which modifies the `command` (or the `ENTRYPOINT` if `command` does not exist). For example, you can execute the `java` command with `-jar` and `/opt/app-root/springboots2idemo.jar` as its arguments.
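The example spec fragment described above was stripped during extraction; a minimal sketch of what it could look like follows. The container name and image value here are placeholders, not from the original:

```yaml
spec:
  template:
    spec:
      containers:
      - name: example-spring-boot   # placeholder container name
        image: 'image'              # placeholder image reference
        command:
          - java                    # overrides the image's ENTRYPOINT
        args:
          - '-jar'
          - /opt/app-root/springboots2idemo.jar
```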
3.2.1.6. Viewing deployment logs
Procedure
To stream the logs of the latest revision for a given `DeploymentConfig` object, run the following command:

```
$ oc logs -f dc/<name>
```

If the latest revision is running or failed, the command returns the logs of the process responsible for deploying your pods. If it completed successfully, the command returns the logs from a pod of your application.

You can also view logs from older failed deployment processes, but only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually:

```
$ oc logs --version=1 dc/<name>
```
3.2.1.7. Deployment triggers
A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.
If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
Config change deployment triggers
The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object.
If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created, provided that it is not paused.
Config change deployment trigger

```yaml
triggers:
  - type: "ConfigChange"
```
Image change deployment triggers
The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).
Image change deployment trigger

If the `imageChangeParams.automatic` field is set to `false`, the trigger is disabled.
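The trigger example itself was stripped during extraction. Based on the description that follows (the `origin-ruby-sample` image stream's `latest` tag driving the `helloworld` container), it could be sketched as below; the `namespace` value is an assumption added for illustration:

```yaml
triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true              # set to false to disable the trigger
      containerNames:
        - "helloworld"
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
        namespace: "myproject"     # assumption: project that owns the image stream
```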
In this example, when the `latest` tag value of the `origin-ruby-sample` image stream changes and the new image value differs from the current image specified in the `DeploymentConfig` object's `helloworld` container, a new replication controller is created using the new image for the `helloworld` container.
If an image change trigger is defined on a `DeploymentConfig` object (with a config change trigger and `automatic=false`, or with `automatic=true`) and the image stream tag pointed to by the image change trigger does not yet exist, the initial deployment process automatically starts as soon as an image is imported or pushed by a build to the image stream tag.
3.2.1.7.1. Setting deployment triggers
Procedure
You can set deployment triggers for a `DeploymentConfig` object by using the `oc set triggers` command. For example, to set an image change trigger, use the following command:

```
$ oc set triggers dc/<dc_name> \
    --from-image=<project>/<image>:<tag> -c <container_name>
```
3.2.1.8. Setting deployment resources
A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits.
The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a `Cannot allocate memory` pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies.
Procedure
In the following example, each of `resources`, `cpu`, `memory`, and `ephemeral-storage` is optional. However, if a quota has been defined for your project, one of the following two items is required:
- A `resources` section set with an explicit `requests`, where the `requests` object contains the list of resources that correspond to the list of resources in the quota.
- A limit range defined in your project, where the defaults from the `LimitRange` object apply to pods created during the deployment process.
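The strategy resources examples this section refers to were stripped during extraction; the following sketch shows both the `limits` form and the explicit `requests` form in a single deployment strategy fragment. All values are illustrative assumptions:

```yaml
type: "Recreate"
resources:
  limits:                    # each of cpu, memory, ephemeral-storage is optional
    cpu: "100m"
    memory: "256Mi"
    ephemeral-storage: "1Gi"
  requests:                  # explicit requests matching the resources listed in the quota
    cpu: "50m"
    memory: "128Mi"
    ephemeral-storage: "500Mi"
```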
To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota.
3.2.1.9. Scaling manually
In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.
Pods can also be autoscaled by using the `oc autoscale` command.
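As an illustrative sketch (the flag values here are arbitrary, and `frontend` is assumed to be an existing `DeploymentConfig` object), autoscaling between 1 and 5 replicas at 80% CPU utilization could look like:

```shell
$ oc autoscale dc/frontend --min=1 --max=5 --cpu-percent=80
```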
Procedure
To manually scale a `DeploymentConfig` object, use the `oc scale` command. For example, the following command sets the replicas in the `frontend` `DeploymentConfig` object to `3`:

```
$ oc scale dc frontend --replicas=3
```

The number of replicas eventually propagates to the desired and current state of the deployment configured by the `frontend` `DeploymentConfig` object.
3.2.1.10. Accessing private repositories from DeploymentConfig objects

You can add a secret to your `DeploymentConfig` object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method.
Procedure
- Create a new project.
- From the Workloads page, create a secret that contains credentials for accessing a private image repository.
- Create a `DeploymentConfig` object.
- On the `DeploymentConfig` object editor page, set the Pull Secret and save your changes.
3.2.1.11. Assigning pods to specific nodes
You can use node selectors in conjunction with labeled nodes to control pod placement.
Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further.
Procedure
To add a node selector when creating a pod, edit the `Pod` configuration and add the `nodeSelector` value. This can be added to a single `Pod` configuration or in a `Pod` template.

Pods created while the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator.
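The `nodeSelector` example this section refers to was stripped during extraction; a minimal sketch, assuming a hypothetical pod name and image, could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd          # schedule only onto nodes labeled disktype=ssd
  containers:
  - name: app              # hypothetical container name
    image: 'image'         # placeholder image reference
```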
For example, if a project has the `type=user-node` and `region=east` labels added by the cluster administrator, and you add the `disktype: ssd` label to a pod, the pod is only ever scheduled on nodes that have all three labels.

Note: Labels can only be set to one value, so setting a node selector of `region=west` in a `Pod` configuration that has `region=east` as the administrator-set default results in a pod that will never be scheduled.
3.2.1.12. Running a pod with a different service account
You can run a pod with a service account other than the default.
Procedure
Edit the `DeploymentConfig` object:

```
$ oc edit dc/<deployment_config>
```

Add the `serviceAccount` and `serviceAccountName` parameters to the `spec` field, and specify the service account you want to use:

```yaml
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
```