
Chapter 3. Using Jobs and DaemonSets


3.1. Running background tasks on nodes automatically with daemonsets

As an administrator, you can create and use DaemonSets to run replicas of a pod on specific or all nodes in an OpenShift Container Platform cluster.

A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them. As nodes are removed from the cluster, those pods are removed through garbage collection. Deleting a DaemonSet cleans up the pods it created.

You can use daemonsets to create shared storage, run a logging pod on every node in your cluster, or deploy a monitoring agent on every node.

For security reasons, only cluster administrators can create daemonsets.

For more information on daemonsets, see the Kubernetes documentation.

Important

DaemonSet scheduling is incompatible with the project's default node selector. If you fail to disable it, the DaemonSet gets restricted by merging with the default node selector. This results in frequent pod recreations on the nodes that were unselected by the merged node selector, which in turn puts unwanted load on the cluster.

3.1.1. Scheduled by the default scheduler

A DaemonSet ensures that all eligible nodes run a copy of a pod. Normally, the node that a pod runs on is selected by the Kubernetes scheduler. Previously, however, DaemonSet pods were created and scheduled by the DaemonSet controller, which introduced the following issues:

  • Inconsistent pod behavior: Normal pods waiting to be scheduled are created in the Pending state, but DaemonSet pods were not created in the Pending state. This is confusing to the user.
  • Pod preemption is handled by the default scheduler. When preemption is enabled, the DaemonSet controller makes scheduling decisions without considering pod priority and preemption.

ScheduleDaemonSetPods is enabled by default in OpenShift Container Platform, which lets you schedule DaemonSets using the default scheduler instead of the DaemonSet controller. The DaemonSet controller adds the NodeAffinity term to the DaemonSet pods, instead of the .spec.nodeName term, and the default scheduler is then used to bind the pod to the target host. If node affinity of the DaemonSet pod already exists, it is replaced. The DaemonSet controller performs these operations only when creating or modifying DaemonSet pods, and no changes are made to the spec.template of the DaemonSet:

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name

In addition, the node.kubernetes.io/unschedulable:NoSchedule toleration is added automatically to DaemonSet pods. The default scheduler ignores unschedulable nodes when scheduling DaemonSet pods.
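
For reference, a sketch of the toleration that is added to the DaemonSet pod spec. It is shown for illustration only; you do not add it yourself:

tolerations:
- key: node.kubernetes.io/unschedulable   # added automatically by the DaemonSet controller
  operator: Exists
  effect: NoSchedule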

3.1.2. Creating daemonsets

When you create a daemonset, use the nodeSelector field to indicate the nodes on which the daemonset should deploy replicas.

Prerequisites

  • Before you start using daemonsets, disable the default project-wide node selector in your namespace by setting the namespace annotation openshift.io/node-selector to an empty string:

    $ oc patch namespace myproject -p \
        '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'
  • If you are creating a new project, overwrite the default node selector using oc adm new-project <name> --node-selector="".
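
    For example:

    $ oc adm new-project <name> --node-selector=""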

Procedure

To create a daemonset:

  1. Define the daemonset YAML file:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: hello-daemonset
    spec:
      selector:
        matchLabels:
          name: hello-daemonset 1
      template:
        metadata:
          labels:
            name: hello-daemonset 2
        spec:
          nodeSelector: 3
            role: worker
          containers:
          - image: openshift/hello-openshift
            imagePullPolicy: Always
            name: registry
            ports:
            - containerPort: 80
              protocol: TCP
            resources: {}
            terminationMessagePath: /dev/termination-log
          serviceAccount: default
          terminationGracePeriodSeconds: 10
    1
    The label selector that determines which pods belong to the daemonset.
    2
    The pod template’s labels. These must match the label selector above.
    3
    The node selector that determines on which nodes pod replicas should be deployed. A matching label must be present on the node.
  2. Create the daemonset object:

    $ oc create -f daemonset.yaml
  3. To verify that the pods were created and that each node has a pod replica:

    1. Find the daemonset pods:

      $ oc get pods
      NAME                    READY     STATUS    RESTARTS   AGE
      hello-daemonset-cx6md   1/1       Running   0          2m
      hello-daemonset-e3md9   1/1       Running   0          2m
    2. View the pods to verify the pod has been placed onto the node:

      $ oc describe pod/hello-daemonset-cx6md|grep Node
      Node:        openshift-node01.hostname.com/10.14.20.134
      $ oc describe pod/hello-daemonset-e3md9|grep Node
      Node:        openshift-node02.hostname.com/10.14.20.137
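
    3. Optionally, check the status of the daemonset itself; the output shown here is illustrative:

      $ oc get daemonsets
      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
      hello-daemonset   2         2         2       2            2           role=worker     5m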
Important
  • If you update a daemonset’s pod template, the existing pod replicas are not affected.
  • If you delete a DaemonSet and then create a new DaemonSet with a different template but the same label selector, it recognizes any existing pod replicas as having matching labels and thus does not update them or create new replicas, despite a mismatch in the pod template.
  • If you change node labels, the daemonset adds pods to nodes that match the new labels and deletes pods from nodes that do not match the new labels.

To update a daemonset, force new pod replicas to be created by deleting the old replicas or nodes.
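
For example, deleting the daemonset's pods by label causes the controller to recreate them from the current pod template. The label shown is the one from the example above:

$ oc delete pods -l name=hello-daemonset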

3.2. Running tasks in pods using jobs

A job executes a task in your OpenShift Container Platform cluster.

A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job will clean up any pod replicas it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types.

Sample Job specification

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 1    1
  completions: 1    2
  activeDeadlineSeconds: 1800 3
  backoffLimit: 6   4
  template:         5
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure    6

  1. The number of pod replicas a job should run in parallel.
  2. The number of successful pod completions needed to mark a job completed.
  3. The maximum duration the job can run.
  4. The number of retries for a job.
  5. The template for the pod the controller creates.
  6. The restart policy of the pod.

See the Kubernetes documentation for more information about jobs.

3.2.1. Understanding Jobs and CronJobs

A Job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a Job will clean up any pods it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types.

There are two possible resource types that allow creating run-once objects in OpenShift Container Platform:

Job
A regular Job is a run-once object that creates a task and ensures the Job finishes.

There are three main types of task suitable to run as a Job:

  • Non-parallel Jobs:

    • A Job that starts only one Pod, unless the Pod fails.
    • The Job is complete as soon as its Pod terminates successfully.
  • Parallel Jobs with a fixed completion count:

    • A Job that starts multiple pods.
    • The Job represents the overall task and is complete when there is one successful Pod for each value in the range 1 to the completions value.
  • Parallel Jobs with a work queue:

    • A Job with multiple parallel worker processes in a given pod.
    • Either OpenShift Container Platform coordinates the pods to determine what each should work on, or an external queue service is used.
    • Each Pod is independently capable of determining whether or not all peer pods are complete and that the entire Job is done.
    • When any Pod from the Job terminates with success, no new Pods are created.
    • When at least one Pod has terminated with success and all Pods are terminated, the Job is successfully completed.
    • When any Pod has exited with success, no other Pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting.

For more information about how to make use of the different types of Job, see Job Patterns in the Kubernetes documentation.

CronJob
A Job can be scheduled to run multiple times by using a CronJob.

A CronJob builds on a regular Job by allowing you to specify how the Job should be run. CronJobs are part of the Kubernetes API, which can be managed with oc commands like other object types.

CronJobs are useful for creating periodic and recurring tasks, like running backups or sending emails. CronJobs can also schedule individual tasks for a specific time, such as if you want to schedule a Job for a low activity period.

Warning

A CronJob creates a Job object approximately once per execution time of its schedule, but there are circumstances in which it fails to create a Job or two Jobs might be created. Therefore, Jobs must be idempotent and you must configure history limits.

3.2.2. Understanding how to create Jobs

Both resource types require a Job configuration that consists of the following key parts:

  • A pod template, which describes the pod that OpenShift Container Platform creates.
  • The parallelism parameter, which specifies how many pods should run in parallel at any point in time while executing a Job.

    • For non-parallel Jobs, leave unset. When unset, defaults to 1.
  • The completions parameter, specifying how many successful pod completions are needed to finish a Job.

    • For non-parallel Jobs, leave unset. When unset, defaults to 1.
    • For parallel Jobs with a fixed completion count, specify a value.
    • For parallel Jobs with a work queue, leave unset. When unset, defaults to the parallelism value.
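
The following fragments sketch how these two parameters map to the three Job types. The values are illustrative:

# Non-parallel Job: leave both unset; each defaults to 1.
spec: {}

# Parallel Job with a fixed completion count: the Job completes after
# five successful pod completions, running at most two pods at a time.
spec:
  parallelism: 2
  completions: 5

# Parallel Job with a work queue: completions is left unset,
# so it defaults to the parallelism value.
spec:
  parallelism: 4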

3.2.2.1. Understanding how to set a maximum duration for Jobs

When defining a Job, you can define its maximum duration by setting the activeDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced.

The maximum duration is counted from the time when the first pod gets scheduled in the system and defines how long a Job can be active. It tracks the overall time of an execution. After reaching the specified timeout, the Job is terminated by OpenShift Container Platform.
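
For example, the following fragment (with an illustrative value) terminates the Job 600 seconds after its first pod is scheduled:

spec:
  activeDeadlineSeconds: 600   # Job is terminated 600 seconds after its first pod is scheduled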

3.2.2.2. Understanding how to set a Job back off policy for pod failure

A Job can be considered failed after a set number of retries, due to a logical error in configuration or other similar reasons. Failed pods associated with the Job are recreated by the controller with an exponential back-off delay (10s, 20s, 40s, and so on) capped at six minutes. The limit is reset if no new failed pods appear between controller checks.

Use the spec.backoffLimit parameter to set the number of retries for a Job.
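
For example, the following fragment (with an illustrative value) marks the Job as failed after three retries:

spec:
  backoffLimit: 3   # give up after three retries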

3.2.2.3. Understanding how to configure a CronJob to remove artifacts

CronJobs can leave behind artifact resources such as Jobs or pods. As a user, it is important to configure history limits so that old Jobs and their pods are properly cleaned up. There are two fields within the CronJob’s spec responsible for that:

  • .spec.successfulJobsHistoryLimit. The number of successful finished Jobs to retain (defaults to 3).
  • .spec.failedJobsHistoryLimit. The number of failed finished Jobs to retain (defaults to 1).
Tip
  • Delete CronJobs that you no longer need:

    $ oc delete cronjob/<cron_job_name>

    Doing this prevents them from generating unnecessary artifacts.

  • You can suspend further executions by setting the spec.suspend field to true. All subsequent executions are suspended until you reset it to false.
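
    For example, a sketch of suspending and resuming a hypothetical CronJob named pi by patching the field:

    $ oc patch cronjob/pi -p '{"spec":{"suspend":true}}'
    $ oc patch cronjob/pi -p '{"spec":{"suspend":false}}'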

3.2.3. Known limitations

The Job specification restart policy only applies to the pods, and not the job controller. However, the job controller is hard-coded to keep retrying Jobs to completion.

As such, restartPolicy: Never or --restart=Never results in the same behavior as restartPolicy: OnFailure or --restart=OnFailure. That is, when a Job fails it is restarted automatically until it succeeds (or is manually discarded). The policy only sets which subsystem performs the restart.

With the Never policy, the job controller performs the restart. With each attempt, the job controller increments the number of failures in the Job status and creates new pods. This means that with each failed attempt, the number of pods increases.

With the OnFailure policy, the kubelet performs the restart. Each attempt does not increment the number of failures in the Job status. In addition, the kubelet retries failed Jobs by restarting pods on the same nodes.
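
The following sketch, a hypothetical Job whose container always fails, illustrates the Never behavior: each failed attempt leaves a terminated pod behind and increments status.failed until backoffLimit is reached:

apiVersion: batch/v1
kind: Job
metadata:
  name: fail-demo        # hypothetical name, for illustration only
spec:
  backoffLimit: 2        # give up after two retries
  template:
    spec:
      restartPolicy: Never   # the job controller, not the kubelet, handles retries
      containers:
      - name: fail
        image: busybox
        command: ["false"]   # always exits non-zero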

3.2.4. Creating jobs

You create a job in OpenShift Container Platform by creating a job object.

Procedure

To create a job:

  1. Create a YAML file similar to the following:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi
    spec:
      parallelism: 1    1
      completions: 1    2
      activeDeadlineSeconds: 1800 3
      backoffLimit: 6   4
      template:         5
        metadata:
          name: pi
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: OnFailure    6
    1. Optionally, specify how many pod replicas a job should run in parallel; defaults to 1.

      • For non-parallel jobs, leave unset. When unset, defaults to 1.
    2. Optionally, specify how many successful pod completions are needed to mark a job completed.

      • For non-parallel jobs, leave unset. When unset, defaults to 1.
      • For parallel jobs with a fixed completion count, specify the number of completions.
      • For parallel jobs with a work queue, leave unset. When unset, defaults to the parallelism value.
    3. Optionally, specify the maximum duration the job can run.
    4. Optionally, specify the number of retries for a job. This field defaults to six.
    5. Specify the template for the pod the controller creates.
    6. Specify the restart policy of the pod:

      • Never. Do not restart the job.
      • OnFailure. Restart the job only if it fails.
      • Always. Always restart the job.

For details on how OpenShift Container Platform uses restart policy with failed containers, see the Example States in the Kubernetes documentation.

  2. Create the job:

    $ oc create -f <file-name>.yaml
Note

You can also create and launch a job from a single command using oc run. The following command creates and launches the same job as specified in the previous example:

$ oc run pi --image=perl --replicas=1  --restart=OnFailure \
    --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'

3.2.5. Creating CronJobs

You create a CronJob in OpenShift Container Platform by creating a CronJob object.

Procedure

To create a CronJob:

  1. Create a YAML file similar to the following:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: pi
    spec:
      schedule: "*/1 * * * *"  1
      concurrencyPolicy: "Replace" 2
      startingDeadlineSeconds: 200 3
      suspend: true            4
      successfulJobsHistoryLimit: 3 5
      failedJobsHistoryLimit: 1     6
      jobTemplate:             7
        spec:
          template:
            metadata:
              labels:          8
                parent: "cronjobpi"
            spec:
              containers:
              - name: pi
                image: perl
                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
              restartPolicy: OnFailure 9
    1
    Schedule for the job specified in cron format. In this example, the job will run every minute.
    2
    An optional concurrency policy, specifying how to treat concurrent jobs within a CronJob. Only one of the following concurrency policies may be specified. If not specified, this defaults to allowing concurrent executions.
    • Allow allows CronJobs to run concurrently.
    • Forbid forbids concurrent runs, skipping the next run if the previous has not finished yet.
    • Replace cancels the currently running job and replaces it with a new one.
    3
    An optional deadline (in seconds) for starting the job if it misses its scheduled time for any reason. Missed job executions are counted as failed. If not specified, there is no deadline.
    4
    An optional flag allowing the suspension of a CronJob. If set to true, all subsequent executions are suspended.
    5
    The number of successful finished jobs to retain (defaults to 3).
    6
    The number of failed finished jobs to retain (defaults to 1).
    7
    Job template. This is similar to the job example.
    8
    Sets a label for jobs spawned by this CronJob.
    9
    The restart policy of the pod. This does not apply to the job controller.
    Note

    The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish.

  2. Create the CronJob:

    $ oc create -f <file-name>.yaml
Note

You can also create and launch a CronJob from a single command using oc run. The following command creates and launches the same CronJob as specified in the previous example:

$ oc run pi --image=perl --schedule='*/1 * * * *' \
    --restart=OnFailure --labels parent="cronjobpi" \
    --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'

With oc run, the --schedule option accepts schedules in cron format.

When creating a CronJob, oc run only supports the Never or OnFailure restart policies (--restart).
