Chapter 6. Scheduling Windows container workloads
You can schedule Windows workloads to Windows compute nodes.
The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a Windows container as the OS image with the Docker-formatted container runtime add-on enabled.
- You have created a Windows machine set.
Currently, the Docker-formatted container runtime is used in Windows nodes. Kubernetes has deprecated Docker as a container runtime; see the Kubernetes documentation on the Docker deprecation for more information. Containerd will be the new supported container runtime for Windows nodes in a future release of Kubernetes.
6.1. Windows pod placement
Before deploying your Windows workloads to the cluster, you must configure your Windows node scheduling so pods are assigned correctly. Since you have a machine hosting your Windows node, it is managed the same as a Linux-based node. Likewise, scheduling a Windows pod to the appropriate Windows node is completed similarly, using mechanisms like taints, tolerations, and node selectors.
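For illustration, a minimal pod spec that targets Windows nodes could combine a node selector with a toleration for the os=Windows:NoSchedule taint used elsewhere in this chapter. This is a sketch only; the pod name, image, and command are placeholders, and the taint must match what is actually applied to your Windows nodes:

apiVersion: v1
kind: Pod
metadata:
  name: sample-windows-pod   # placeholder name
spec:
  nodeSelector:
    kubernetes.io/os: windows   # schedule only to Windows nodes
  tolerations:
  - key: "os"                   # must match the taint on your Windows nodes
    operator: "Equal"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: sample
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell.exe", "-command", "Start-Sleep -Seconds 3600"]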
With multiple operating systems, and the ability to run multiple Windows OS variants in the same cluster, you must map your Windows pods to a base Windows OS variant by using a RuntimeClass object. For example, if you have multiple Windows nodes running on different Windows Server container versions, the cluster could schedule your Windows pods to an incompatible Windows OS variant. You must have RuntimeClass objects configured for each Windows OS variant on your cluster. Using a RuntimeClass object is also recommended if you have only one Windows OS variant available in your cluster.
For more information, see Microsoft’s documentation on Host and container version compatibility.
The container base image must be the same Windows OS version and build number that is running on the node where the container is to be scheduled.
Also, if you upgrade the Windows nodes from one version to another, for example going from 20H2 to 2022, you must upgrade your container base image to match the new version. For more information, see Windows container version compatibility.
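To check which build your Windows nodes are running, and therefore which base image build your containers must use, you can list the node.kubernetes.io/windows-build label. This is a quick check, not part of the documented procedure:

$ oc get nodes -l kubernetes.io/os=windows -L node.kubernetes.io/windows-build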
6.2. Creating a RuntimeClass object to encapsulate scheduling mechanisms
Using a RuntimeClass object simplifies the use of scheduling mechanisms like taints and tolerations; you deploy a runtime class that encapsulates your taints and tolerations and then apply it to your pods to schedule them to the appropriate node. Creating a runtime class is also necessary in clusters that support multiple operating system variants.
Procedure
Create a RuntimeClass object YAML file. For example, runtime-class.yaml:

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: <runtime_class_name> 1
handler: 'docker'
scheduling:
  nodeSelector: 2
    kubernetes.io/os: 'windows'
    kubernetes.io/arch: 'amd64'
    node.kubernetes.io/windows-build: '10.0.17763'
  tolerations: 3
  - effect: NoSchedule
    key: os
    operator: Equal
    value: "Windows"
1 Specify the RuntimeClass object name, which is defined in the pods you want to be managed by this runtime class.
2 Specify labels that must be present on nodes that support this runtime class. Pods using this runtime class can only be scheduled to a node matched by this selector. The node selector of the runtime class is merged with the existing node selector of the pod. Any conflicts prevent the pod from being scheduled to the node.
3 Specify tolerations to append to pods, excluding duplicates, running with this runtime class during admission. This combines the set of nodes tolerated by the pod and the runtime class.
Create the RuntimeClass object:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f runtime-class.yaml
Apply the RuntimeClass object to your pod to ensure it is scheduled to the appropriate operating system variant:

apiVersion: v1
kind: Pod
metadata:
  name: my-windows-pod
spec:
  runtimeClassName: <runtime_class_name> 1
...

1 Specify the runtime class to manage the scheduling of your pod.
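Optionally, you can confirm that the runtime class exists before referencing it from pods. This verification is a convenience, not part of the documented procedure:

$ oc get runtimeclass <runtime_class_name>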
6.3. Sample Windows container workload deployment
You can deploy Windows container workloads to your cluster once you have a Windows compute node available.
This sample deployment is provided for reference only.
Example Service object

apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    app: win-webserver
  type: LoadBalancer
Example Deployment object

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver
  name: win-webserver
spec:
  selector:
    matchLabels:
      app: win-webserver
  replicas: 1
  template:
    metadata:
      labels:
        app: win-webserver
      name: win-webserver
    spec:
      tolerations:
      - key: "os"
        value: "Windows"
        effect: "NoSchedule"
      containers:
      - name: windowswebserver
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - $listener = New-Object System.Net.HttpListener; $listener.Prefixes.Add('http://*:80/'); $listener.Start();Write-Host('Listening at http://*:80/'); while ($listener.IsListening) { $context = $listener.GetContext(); $response = $context.Response; $content='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; $buffer = [System.Text.Encoding]::UTF8.GetBytes($content); $response.ContentLength64 = $buffer.Length; $response.OutputStream.Write($buffer, 0, $buffer.Length); $response.Close(); };
        securityContext:
          runAsNonRoot: false
          windowsOptions:
            runAsUserName: "ContainerAdministrator"
      nodeSelector:
        kubernetes.io/os: windows
When using the mcr.microsoft.com/powershell:<tag> container image, you must define the command as pwsh.exe. If you are using the mcr.microsoft.com/windows/servercore:<tag> container image, you must define the command as powershell.exe. For more information, see Microsoft’s documentation.
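To try the sample workload, you can create both objects and then check that the resulting pod is scheduled to a Windows node. The file names below are placeholders for wherever you saved the example manifests:

$ oc create -f win-webserver-service.yaml
$ oc create -f win-webserver-deployment.yaml
$ oc get pods -l app=win-webserver -o wide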
6.4. Scaling a machine set manually
To add or remove an instance of a machine in a machine set, you can manually scale the machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the machine sets that are in the cluster:

$ oc get machinesets -n openshift-machine-api
The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.

View the machines that are in the cluster:

$ oc get machine -n openshift-machine-api
Set the annotation on the machine that you want to delete:

$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true"
Cordon and drain the node that you want to delete:

$ oc adm cordon <node_name>
$ oc adm drain <node_name>
Scale the machine set:

$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

Or:

$ oc edit machineset <machineset> -n openshift-machine-api

Tip: You can alternatively apply the following YAML to scale the machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2
You can scale the machine set up or down. It takes several minutes for the new machines to be available.
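To follow the progress of a scaling operation, you can watch the machine list until the new machines report a Running phase. This is a convenience, not part of the documented procedure:

$ oc get machines -n openshift-machine-api -w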
Verification
Verify the deletion of the intended machine:

$ oc get machines
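If the removed machine hosted a Windows node, you can also confirm that the corresponding node object is gone. This extra check is an assumption beyond the documented verification step:

$ oc get nodes -l kubernetes.io/os=windows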