Chapter 18. Scheduler


18.1. Overview

The Kubernetes pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and tries to find a node that is a good fit based on configured policies. The scheduler is completely independent and exists as a standalone, pluggable solution. It does not modify the pod; it simply creates a binding that ties the pod to the selected node.
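Conceptually, the binding that the scheduler creates is a small API object along the lines of the following sketch (the pod and node names here are purely illustrative):

apiVersion: v1
kind: Binding
metadata:
  name: hello-pod
target:
  apiVersion: v1
  kind: Node
  name: node1.example.com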

18.2. Generic Scheduler

The existing generic scheduler is the default platform-provided scheduler "engine" that selects a node to host the pod in a 3-step operation:

  1. Filter the nodes
  2. Prioritize the filtered list of nodes
  3. Select the best fit node

18.2.1. Filter the Nodes

The available nodes are filtered based on the constraints or requirements specified. This is done by running each of the nodes through the list of filter functions called 'predicates'.

18.2.2. Prioritize the Filtered List of Nodes

This is achieved by passing each node through a series of 'priority' functions that assign it a score between 0 and 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple "weight" (a positive numeric value) for each priority function. The node score provided by each priority function is multiplied by that weight (the default weight is 1), and the weighted scores from all the priority functions are then added together to produce the node's total score. Administrators can use the weight attribute to give higher importance to some priority functions.
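As a hedged illustration with made-up numbers: if a node scores 8 from LeastRequestedPriority (weight 2) and 5 from BalancedResourceAllocation (weight 1), its total score is 8 × 2 + 5 × 1 = 21, and it is ranked against the other nodes using that total.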

18.2.3. Select the Best Fit Node

The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random.

18.3. Available Predicates

There are several predicates provided out of the box in Kubernetes. Some of these predicates can be customized by providing certain parameters. Multiple predicates can be combined to provide additional filtering of nodes.

18.3.1. Static Predicates

These predicates do not take any configuration parameters or inputs from the user. These are specified in the scheduler configuration using their exact name.

PodFitsPorts deems a node to be fit for hosting a pod based on the absence of port conflicts.

{"name" : "PodFitsPorts"}

PodFitsResources determines a fit based on resource availability. The nodes can declare their resource capacities and then pods can specify what resources they require. Fit is based on requested, rather than used resources.

{"name" : "PodFitsResources"}

NoDiskConflict determines fit based on non-conflicting disk volumes. It evaluates whether a pod can fit given the volumes it requests and the volumes already mounted on the node. It is specific to GCE PD, Amazon EBS, and Ceph RBD; only Persistent Volume Claims of those types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy.

{"name" : "NoDiskConflict"}

MatchNodeSelector determines fit based on the node selector query that is defined in the pod.

{"name" : "MatchNodeSelector"}

HostName determines fit based on the presence of the Host parameter and a string match with the name of the host.

{"name" : "HostName"}

18.3.2. Configurable Predicates

These predicates can be configured by the user to tweak how they function. They can be given any user-defined name. The type of the predicate is identified by the argument it takes. Since these are configurable, multiple predicates of the same type (but with different configuration parameters) can be combined as long as their user-defined names are different.

ServiceAffinity filters out nodes that do not belong to the specified topological level defined by the provided labels. This predicate takes in a list of labels and ensures affinity within the nodes (that have the same label values) for pods belonging to the same service. If the pod specifies a value for the labels in its NodeSelector, then the nodes matching those labels are the ones where the pod is scheduled. If the pod does not specify the labels in its NodeSelector, then the first pod can be placed on any node based on availability and all subsequent pods of the service will be scheduled on nodes that have the same label values.

{"name" : "Zone", "argument" : {"serviceAffinity" : {"labels" : ["zone"]}}}

LabelsPresence checks whether a particular node has a certain label defined or not, regardless of its value. Matching by label can be useful, for example, where nodes have their physical location or status defined by labels.

{"name" : "RequireRegion", "argument" : {"labelsPresence" : {"labels" : ["region"], "presence" : true}}}
  • If "presence" is false, and any of the requested labels match any of the nodes’s labels, it returns false. Otherwise, it returns true.
  • If "presence" is true, and any of the requested labels do not match any of the node’s labels, it returns false. Otherwise, it returns true.

18.4. Available Priority Functions

A custom set of priority functions can be specified to configure the scheduler. There are several priority functions provided out-of-the-box in Kubernetes. Some of these priority functions can be customized by providing certain parameters. Multiple priority functions can be combined and different weights can be given to each in order to impact the prioritization. A weight is required to be specified and cannot be 0 or negative.

18.4.1. Static Priority Functions

These priority functions do not take any configuration parameters or inputs from the user. These are specified in the scheduler configuration using their exact name as well as the weight.

LeastRequestedPriority favors nodes with fewer requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes that have the highest available/remaining capacity.

{"name" : "LeastRequestedPriority", "weight" : 1}

BalancedResourceAllocation favors nodes with balanced resource usage rate. It calculates the difference between the consumed CPU and memory as a fraction of capacity, and prioritizes the nodes based on how close the two metrics are to each other. This should always be used together with LeastRequestedPriority.

{"name" : "BalancedResourceAllocation", "weight" : 1}

ServiceSpreadingPriority spreads pods by minimizing the number of pods belonging to the same service on the same node.

{"name" : "ServiceSpreadingPriority", "weight" : 1}

EqualPriority gives an equal weight of one to all nodes, if no priority configurations are provided. It is neither required nor recommended outside of testing.

{"name" : "EqualPriority", "weight" : 1}

18.4.2. Configurable Priority Functions

These priority functions can be configured by the user by providing certain parameters. They can be given any user-defined name. The type of the priority function is identified by the argument it takes. Since these are configurable, multiple priority functions of the same type (but with different configuration parameters) can be combined as long as their user-defined names are different.

ServiceAntiAffinity takes a label and ensures a good spread of the pods belonging to the same service across the group of nodes based on the label values. It gives the same score to all nodes that have the same value for the specified label. It gives a higher score to nodes within a group with the least concentration of pods.

{"name" : "RackSpread", "weight" : 1, "argument" : {"serviceAntiAffinity" : {"label" : "rack"}}}

LabelPreference prefers nodes that have a particular label defined or not, regardless of its value.

{"name" : "RackPreferred", "weight" : 1, "argument" : {"labelPreference" : {"label" : "rack"}}}

18.5. Scheduler Policy

The selection of the predicate and priority functions defines the policy for the scheduler. Administrators can provide a JSON file that specifies the predicates and priority functions to configure the scheduler. The path to the scheduler policy file can be specified in the master configuration file. In the absence of the scheduler policy file, the default configuration gets applied.

It is important to note that the predicates and priority functions defined in the scheduler configuration file will completely override the default scheduler policy. If any of the default predicates and priority functions are required, they have to be explicitly specified in the scheduler configuration file.

18.5.1. Default Scheduler Policy

The default scheduler policy includes the following predicates:

  1. NoVolumeZoneConflict
  2. MaxEBSVolumeCount
  3. MaxGCEPDVolumeCount
  4. MatchInterPodAffinity
  5. NoDiskConflict
  6. GeneralPredicates
  7. PodToleratesNodeTaints
  8. CheckNodeMemoryPressure
  9. CheckNodeDiskPressure

The default scheduler policy includes the following priority functions. Each of the priority functions has a weight of 1, except NodePreferAvoidPodsPriority, which has a weight of 10000:

  1. SelectorSpreadPriority
  2. InterPodAffinityPriority
  3. LeastRequestedPriority
  4. BalancedResourceAllocation
  5. NodePreferAvoidPodsPriority
  6. NodeAffinityPriority
  7. TaintTolerationPriority

18.5.2. Modifying Scheduler Policy

The scheduler policy is defined in a file on the master, named /etc/origin/master/scheduler.json by default, unless overridden by the kubernetesMasterConfig.schedulerConfigFile field in the master configuration file.
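For reference, a minimal sketch of the relevant fragment of the master configuration file, assuming the default path, looks like the following:

kubernetesMasterConfig:
  ...
  schedulerConfigFile: /etc/origin/master/scheduler.json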

To modify the scheduler policy:

  1. Edit the scheduler configuration file to set the desired predicates and priority functions. You can create a custom configuration, or modify one of the sample policy configurations.
  2. Restart the OpenShift Container Platform master services for the changes to take effect.

18.6. Use Cases

One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies.

18.6.1. Infrastructure Topological Levels

Administrators can define multiple topological levels for their infrastructure (nodes). This is done by specifying labels on nodes (e.g., region=r1, zone=z1, rack=s1). These label names have no particular meaning, and administrators are free to name their infrastructure levels anything (e.g., city/building/room). Administrators can also define any number of levels for their infrastructure topology, with three levels usually being adequate (e.g., regions → zones → racks). Lastly, administrators can specify affinity and anti-affinity rules at each of these levels in any combination.

18.6.2. Affinity

Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service will be scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod will not get scheduled.

18.6.3. Anti Affinity

Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-Affinity (or 'spread') at a particular level indicates that all pods that belong to the same service will be spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler will try to balance the service pods across all applicable nodes as evenly as possible.

18.7. Sample Policy Configurations

The configuration below specifies the default scheduler configuration, if it were to be specified via the scheduler policy file.

kind: "Policy"
version: "v1"
predicates:
  - name: "PodFitsPorts"
  - name: "PodFitsResources"
  - name: "NoDiskConflict"
  - name: "MatchNodeSelector"
  - name: "HostName"
priorities:
  - name: "LeastRequestedPriority"
    weight: 1
  - name: "BalancedResourceAllocation"
    weight: 1
  - name: "ServiceSpreadingPriority"
    weight: 1
Important

In all of the sample configurations below, the list of predicates and priority functions is truncated to include only the ones that pertain to the use case specified. In practice, a complete/meaningful scheduler policy should include most, if not all, of the default predicates and priority functions listed above.

Three topological levels defined as region (affinity) → zone (affinity) → rack (anti-affinity):

kind: "Policy"
version: "v1"
predicates:
...
  - name: "RegionZoneAffinity"
    argument:
      serviceAffinity:
        labels:
          - "region"
          - "zone"
priorities:
...
  - name: "RackSpread"
    weight: 1
    argument:
      serviceAntiAffinity:
        label: "rack"

Three topological levels defined as city (affinity) → building (anti-affinity) → room (anti-affinity):

kind: "Policy"
version: "v1"
predicates:
...
  - name: "CityAffinity"
    argument:
      serviceAffinity:
        labels:
          - "city"
priorities:
...
  - name: "BuildingSpread"
    weight: 1
    argument:
      serviceAntiAffinity:
        label: "building"
  - name: "RoomSpread"
    weight: 1
    argument:
      serviceAntiAffinity:
        label: "room"

Only use nodes with the 'region' label defined and prefer nodes with the 'zone' label defined:

kind: "Policy"
version: "v1"
predicates:
...
  - name: "RequireRegion"
    argument:
      labelsPresence:
        labels:
          - "region"
        presence: true
priorities:
...
  - name: "ZonePreferred"
    weight: 1
    argument:
      labelPreference:
        label: "zone"
        presence: true

Configuration example combining static and configurable predicates and priority functions:

kind: "Policy"
version: "v1"
predicates:
...
  - name: "RegionAffinity"
    argument:
      serviceAffinity:
        labels:
          - "region"
  - name: "RequireRegion"
    argument:
      labelsPresence:
        labels:
          - "region"
        presence: true
  - name: "BuildingNodesAvoid"
    argument:
      labelsPresence:
        labels:
          - "building"
        presence: false
  - name: "PodFitsPorts"
  - name: "MatchNodeSelector"
priorities:
...
  - name: "ZoneSpread"
    weight: 2
    argument:
      serviceAntiAffinity:
        label: "zone"
  - name: "ZonePreferred"
    weight: 1
    argument:
      labelPreference:
        label: "zone"
        presence: true
  - name: "ServiceSpreadingPriority"
    weight: 1

18.8. Scheduler Extensibility

As is the case with almost everything else in Kubernetes/OpenShift Container Platform, the scheduler is built using a plug-in model and the current implementation itself is a plug-in. There are two ways to extend the scheduler functionality:

  • Enhancements
  • Replacement

18.8.1. Enhancements

The scheduler functionality can be enhanced by adding new predicates and priority functions. They can either be contributed upstream or maintained separately. These predicates and priority functions would need to be registered with the scheduler factory and then specified in the scheduler policy file.

18.8.2. Replacement

Since the scheduler is a plug-in, it can be replaced in favor of an alternate implementation. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API.
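For illustration only, a replacement scheduler could create such a binding by posting a Binding object to the pod's binding subresource on the master API; the sketch below uses curl with placeholder hostnames, names, and credentials:

# curl -k -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    https://master.example.com:8443/api/v1/namespaces/default/pods/hello-pod/binding \
    -d '{"apiVersion": "v1", "kind": "Binding", "metadata": {"name": "hello-pod"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "node1.example.com"}}'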

18.9. Controlling Pod Placement

As a cluster administrator, you can set a policy to prevent application developers with certain roles from targeting specific nodes when scheduling pods.

Important

This process involves the pods/binding permission role, which is needed to target particular nodes. The constraint on the use of the nodeSelector field of a pod configuration is based on the pods/binding permission and the nodeSelectorLabelBlacklist configuration option.

The nodeSelectorLabelBlacklist field of a master configuration file gives you control over the labels that certain roles can specify in a pod configuration’s nodeSelector field. Users, service accounts, and groups that have the pods/binding permission can specify any node selector. Those without the pods/binding permission are prohibited from setting a nodeSelector for any label that appears in nodeSelectorLabelBlacklist.

As a hypothetical example, an OpenShift Container Platform cluster might consist of five data centers spread across two regions. In the U.S., us-east, us-central, and us-west; and in the Asia-Pacific region (APAC), apac-east and apac-west. Each node in each geographical region is labeled accordingly. For example, region: us-east.

Note

See Updating Labels on Nodes for details on assigning labels.

As a cluster administrator, you can create an infrastructure where application developers should be deploying pods only onto the nodes closest to their geographical location. You can create a node selector, grouping the U.S. data centers into superregion: us and the APAC data centers into superregion: apac.
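For example (the node names below are hypothetical), such labels could be applied with oc label:

# oc label node node1.us-east.example.com region=us-east superregion=us
# oc label node node1.apac-east.example.com region=apac-east superregion=apac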

To maintain an even loading of resources per data center, you can add the desired region to the nodeSelectorLabelBlacklist section of a master configuration. Then, whenever a developer located in the U.S. creates a pod, it is deployed onto a node in one of the regions with the superregion: us label. If the developer tries to target a specific region for their pod (for example, region: us-east), they will receive an error. If they try again, without the node selector on their pod, it can still be deployed onto the region they tried to target, because superregion: us is set as the project-level node selector, and nodes labeled region: us-east are also labeled superregion: us.
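A master configuration fragment for this scenario might look like the following sketch, which mirrors the format shown in Constraining Pod Placement Using a Node Selector and blacklists the region label so that developers cannot target individual data centers:

admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiversion: v1
        kind: PodNodeConstraintsConfig
        nodeSelectorLabelBlacklist:
          - region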

18.9.1. Constraining Pod Placement Using Node Name

Ensure a pod is deployed onto only a specified node host by specifying that node's name in the nodeName setting in a pod configuration.

  1. Ensure you have the desired labels and node selector set up in your environment.

    For example, make sure that your pod configuration features the nodeName value indicating the desired node:

    apiVersion: v1
    kind: Pod
    spec:
      nodeName: <node_name>
  2. Modify the master configuration file, /etc/origin/master/master-config.yaml, to add PodNodeConstraints to the admissionConfig section:

    ...
    admissionConfig:
      pluginConfig:
        PodNodeConstraints:
          configuration:
            apiversion: v1
            kind: PodNodeConstraintsConfig
    ...
  3. Restart OpenShift Container Platform for the changes to take effect.

    # systemctl restart atomic-openshift-master

18.9.2. Constraining Pod Placement Using a Node Selector

Using nodeSelector in a pod configuration, you can ensure that pods are only placed onto nodes with specific labels.

  1. Ensure you have the desired labels (see Updating Labels on Nodes for details) and node selector set up in your environment.

    For example, make sure that your pod configuration features the nodeSelector value indicating the desired label:

    apiVersion: v1
    kind: Pod
    spec:
      nodeSelector:
        <key>: <value>
    ...
  2. Modify the master configuration file, /etc/origin/master/master-config.yaml, to add nodeSelectorLabelBlacklist to the admissionConfig section with the labels that are assigned to the node hosts you want to deny pod placement:

    ...
    admissionConfig:
      pluginConfig:
        PodNodeConstraints:
          configuration:
            apiversion: v1
            kind: PodNodeConstraintsConfig
            nodeSelectorLabelBlacklist:
              - kubernetes.io/hostname
              - <label>
    ...
  3. Restart OpenShift Container Platform for the changes to take effect.
    # systemctl restart atomic-openshift-master

18.9.3. Control Pod Placement to Projects

The Pod Node Selector admission controller allows you to force pods onto nodes associated with a specific project and to prevent pods from being scheduled onto those nodes when their node selectors do not match the project's labels.

The Pod Node Selector admission controller determines where a pod can be placed using labels on projects and node selectors specified in pods. A new pod will be placed on a node associated with a project only if the node selectors in the pod match the labels in the project.

After the pod is created, the project's node selector is merged into the pod, so that the pod specification includes both the node selector originally included in the specification and any new key/value pairs from the project. The example below illustrates the merging effect.

The Pod Node Selector admission controller also allows you to create a list of labels that are permitted in a specific project. This list acts as a whitelist that lets developers know what labels are acceptable to use in a project and gives administrators greater control over labeling in a cluster.

To activate the Pod Node Selector admission controller:

  1. Configure the Pod Node Selector admission controller and whitelist, using one of the following methods:

    • Add the following to the master configuration file, /etc/origin/master/master-config.yaml:

      admissionConfig:
        pluginConfig:
          PodNodeSelector:
            configuration:
              podNodeSelectorPluginConfig: 1
                clusterDefaultNodeSelector: "k3=v3" 2
                ns1: region=west,env=test,infra=fedora,os=fedora 3
      1
      Adds the Pod Node Selector admission controller plug-in.
      2
      Creates default labels for all nodes.
      3
      Creates a whitelist of permitted labels in the specified project. Here, the project is ns1 and the labels are the key=value pairs that follow.
    • Create a file containing the admission controller information:

      podNodeSelectorPluginConfig:
          clusterDefaultNodeSelector: "k3=v3"
          ns1: region=west,env=test,infra=fedora,os=fedora

      Then, reference the file in the master configuration:

      admissionConfig:
        pluginConfig:
          PodNodeSelector:
            location: <path-to-file>
      Note

      If a project does not have a node selector specified, the pods in that project are merged with the default node selector (clusterDefaultNodeSelector).

  2. Restart OpenShift Container Platform for the changes to take effect.

    # systemctl restart atomic-openshift-master
  3. Create a project object that includes the scheduler.alpha.kubernetes.io/node-selector annotation and labels.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns1
      annotations:
        scheduler.alpha.kubernetes.io/node-selector: env=test,infra=fedora 1
    spec: {}
    status: {}
    1
    Annotation that defines the project's node selector. Here, the key/value pairs are env=test and infra=fedora.
    Note

    When using the Pod Node Selector admission controller, you cannot use oc adm new-project <project-name> to set the project node selector. When you set the project node selector using the oc adm new-project myproject --node-selector='type=user-node,region=<region>' command, OpenShift Container Platform sets the openshift.io/node-selector annotation, which is processed by the NodeEnv admission plug-in.

  4. Create a pod specification that includes the labels in the node selector, for example:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        name: hello-pod
      name: hello-pod
    spec:
      containers:
        - image: "docker.io/ocpqe/hello-pod:latest"
          imagePullPolicy: IfNotPresent
          name: hello-pod
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      nodeSelector: 1
        env: test
        os: fedora
      serviceAccount: ""
    status: {}
    1
    Node selectors to match project labels.
  5. Create the pod in the project:

    # oc create -f pod.yaml --namespace=ns1
  6. Check that the node selector labels were added to the pod configuration:

    oc get pod hello-pod --namespace=ns1 -o json

    "nodeSelector": {
     "env": "test",
     "infra": "fedora",
     "os": "fedora"
    }

    The node selectors are merged into the pod, and the pod should be scheduled onto an appropriate node in the project.

If you create a pod whose node selector includes a label that is not specified in the project, the pod is not scheduled onto a node.

For example, here the label env: production is not in any project specification:

"nodeSelector": {
 "env": "production",
 "infra": "fedora",
 "os": "fedora"
}

If there is a node that does not have a node selector annotation, the pod will be scheduled there.
