Windows Container Support for OpenShift
Red Hat OpenShift for Windows Containers Guide
Chapter 1. Red Hat OpenShift support for Windows Containers overview
Red Hat OpenShift support for Windows Containers enables you to run Windows compute nodes in an OpenShift Container Platform cluster. This is made possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the release notes.
You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Windows instances through a config map.
Compute machine sets are not supported for bare metal or provider agnostic clusters.
For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads.
You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads.
You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms.
You can schedule Windows workloads to Windows compute nodes.
You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates.
You can remove a Windows node by deleting a specific machine.
You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions.
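The exact contents of the config map are defined in the BYOH documentation; as a minimal sketch only, the config map is typically named windows-instances in the WMCO namespace, with each entry mapping an instance address to the user name that the WMCO uses over SSH. The addresses and user names below are placeholders:
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  # Each key is the DNS name or IPv4 address of a BYOH instance; placeholder values only.
  10.1.42.1: |-
    username=Administrator
  instance.example.com: |-
    username=core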
You can disable Windows container workloads by performing the following:
- Uninstalling the Windows Machine Config Operator
- Deleting the Windows Machine Config Operator namespace
Chapter 2. Release notes
2.1. Red Hat OpenShift support for Windows Containers release notes
The release notes for Red Hat OpenShift for Windows Containers track the development of the Windows Machine Config Operator (WMCO), which provides all Windows container workload capabilities in OpenShift Container Platform.
2.1.1. Windows Machine Config Operator numbering
Starting with this release, y-stream releases of the WMCO will be in step with OpenShift Container Platform, with only z-stream releases between OpenShift Container Platform releases. The WMCO numbering reflects the associated OpenShift Container Platform version in the y-stream position. For example, the current release of WMCO is associated with OpenShift Container Platform version 4.15. Thus, the numbering is WMCO 10.15.z.
2.1.2. Release notes for Red Hat Windows Machine Config Operator 10.15.3
This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.3 were released in RHSA-2024:5745.
2.1.2.1. Bug fixes
- Previously, after rotating the kube-apiserver-to-kubelet-client-ca certificate, the contents of the kubelet-ca.crt file on Windows nodes were not populated correctly. With this fix, after certificate rotation, the kubelet-ca.crt file contains the correct certificates. (OCPBUGS-33875)
- Previously, if reverse DNS lookup failed due to an error, such as the reverse DNS lookup services being unavailable, the WMCO would not fall back to using the VM hostname to determine whether a certificate signing request (CSR) should be approved. As a consequence, Bring-Your-Own-Host (BYOH) Windows nodes configured with an IP address would not become available. With this fix, BYOH nodes are properly added if reverse DNS is not available. (OCPBUGS-37533)
- Previously, if there were multiple service account token secrets in the WMCO namespace, the scaling of Windows nodes would fail. With this fix, the WMCO uses only the secret it creates, ignoring any other service account token secrets in the WMCO namespace. As a result, Windows nodes scale properly. (OCPBUGS-38485)
2.1.3. Known limitations
Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes):
The following OpenShift Container Platform features are not supported on Windows nodes:
- Image builds
- OpenShift Pipelines
- OpenShift Service Mesh
- OpenShift monitoring of user-defined projects
- OpenShift Serverless
- Horizontal Pod Autoscaling
- Vertical Pod Autoscaling
The following Red Hat features are not supported on Windows nodes:
- Dual NIC is not supported on WMCO-managed Windows instances.
- Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads.
- Windows nodes are not supported in clusters that are in a disconnected environment.
- Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN.
- Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States).
- Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0, are not compatible with Windows nodes.
Kubernetes has identified the following node feature limitations:
- Huge pages are not supported for Windows containers.
- Privileged containers are not supported for Windows containers.
- Kubernetes has identified several API compatibility issues.
2.2. Release notes for past releases of the Windows Machine Config Operator
The following release notes are for previous versions of the Windows Machine Config Operator (WMCO).
2.2.1. Release notes for Red Hat Windows Machine Config Operator 10.15.2
This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.2 were released in RHBA-2024:2704.
2.2.1.1. Bug fixes
- Previously, on Azure clusters the WMCO would check if an external Cloud Controller Manager (CCM) was being used on the cluster. CCM use is the default. If a CCM is being used, the Operator would adjust configuration logic accordingly. Because the status condition that the WMCO used to check for the CCM was removed, the WMCO proceeded as if a CCM was not in use. This fix removes the check. As a result, the WMCO always configures the required logic on Azure clusters. (OCPBUGS-31704)
- Previously, the kubelet was unable to authenticate with private Elastic Container Registries (ECR) registries. Because of this error, the kubelet was not able to pull images from these registries. With this fix, the kubelet is able to pull images from these registries as expected. (OCPBUGS-26602)
- Previously, the WMCO was logging error messages when any commands being run through an SSH connection to a Windows instance failed. This was incorrect behavior because some commands are expected to fail. For example, when the WMCO reboots a node, the Operator runs PowerShell commands on the instance until they fail, meaning the SSH connection was dropped because the instance rebooted as expected. With this fix, only actual errors are now logged. (OCPBUGS-20255)
2.2.2. Release notes for Red Hat Windows Machine Config Operator 10.15.1
This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.15.1 were released in RHBA-2024:1191.
Due to an internal issue, the planned WMCO 10.15.0 could not be released. Multiple bug and security fixes that were planned for WMCO 10.15.0 are included in WMCO 10.15.1 and are described below. For specific details about these bug and security fixes, see the RHSA-2024:0954 errata.
2.2.2.1. New features and improvements
2.2.2.1.1. CPU and memory usage metrics are now available
CPU and memory usage metrics for Windows pods are now available in Prometheus. The metrics are shown in the OpenShift Container Platform web console on the Metrics tab for each Windows pod and can be queried by users.
2.2.2.1.2. Operator SDK upgrade
The WMCO now uses the Operator SDK version 1.32.0.
2.2.2.1.3. Kubernetes upgrade
The WMCO now uses Kubernetes 1.28.
2.2.2.2. Bug fixes
- Previously, there was a flaw in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by the WMCO. A client could repeatedly make a request for a new multiplex stream and then immediately send an RST_STREAM frame to cancel those requests. This activity created additional work for the server by setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. This issue has been fixed. (BZ-2243296)
- Previously, there was a flaw in Kubernetes, where a user who can create pods and persistent volumes on Windows nodes was able to escalate to admin privileges on those nodes. Kubernetes clusters were only affected if they were using an in-tree storage plugin for Windows nodes. This issue has been fixed. (BZ-2247163)
- Previously, there was a flaw in the SSH channel integrity. By manipulating sequence numbers during the handshake, an attacker could remove the initial messages on the secure channel without causing a MAC failure. For example, an attacker could disable the ping extension and thus disable the new countermeasure in OpenSSH 9.5 against keystroke timing attacks. This issue has been fixed. (BZ-2254210)
- Previously, the routes from a Windows Bring-Your-Own-Host (BYOH) VM to the metadata endpoint were being added as non-persistent routes, so the routes were removed when a VM was removed (deconfigured) or re-configured. This would cause the node to fail if configured again, as the metadata endpoint was unreachable. With this fix, the WMCO runs the AWS EC2 launch v2 service after removal or re-configuration. As a result, the routes are restored so that the VM can be configured into a node, as expected. (OCPBUGS-15988)
- Previously, the WMCO did not properly wait for Windows virtual machines (VMs) to finish rebooting. This led to occasional timing issues where the WMCO would attempt to interact with a node that was in the middle of a reboot, causing WMCO to log an error and restart node configuration. Now, the WMCO waits for the instance to completely reboot. (OCPBUGS-17217)
- Previously, the WMCO configuration was missing the DeleteEmptyDirData: true field, which is required for draining nodes that have emptyDir volumes attached. As a consequence, customers that had nodes with emptyDir volumes would see the following error in the logs: cannot delete Pods with local storage. With this fix, the DeleteEmptyDirData: true field was added to the node drain helper struct in the WMCO. As a result, customers are able to drain nodes with emptyDir volumes attached. (OCPBUGS-27300)
- Previously, because of a lack of synchronization between Windows machine set nodes and BYOH instances, during an update the machine set nodes and the BYOH instances could update simultaneously. This could impact running workloads. This fix introduces a locking mechanism so that machine set nodes and BYOH instances update individually. (OCPBUGS-8996)
- Previously, because of a missing secret, the WMCO could not configure proper credentials for the WICD on Nutanix clusters. As a consequence, the WMCO could not create Windows nodes. With this fix, the WMCO creates long-lived credentials for the WICD service account. As a result, the WMCO is able to configure a Windows node on Nutanix clusters. (OCPBUGS-25350)
- Previously, because of bad logic in the networking configuration script, the WICD was incorrectly reading carriage returns in the CNI configuration file as changes, and identified the file as modified. This caused the CNI configuration to be unnecessarily reloaded, potentially resulting in container restarts and brief network outages. With this fix, the WICD now reloads the CNI configuration only when the CNI configuration is actually modified. (OCPBUGS-25756)
2.2.3. Windows Machine Config Operator prerequisites
The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform.
2.2.3.1. WMCO 10.y supported platforms and Windows Server versions
The following table lists the Windows Server versions that are supported by WMCO 10.y, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform.
Platform | Supported Windows Server version |
---|---|
Amazon Web Services (AWS) | Windows Server 2022, OS Build 20348.681 or later; Windows Server 2019, version 1809 |
Microsoft Azure | Windows Server 2022, OS Build 20348.681 or later; Windows Server 2019, version 1809 |
VMware vSphere | Windows Server 2022, OS Build 20348.681 or later |
Google Cloud Platform (GCP) | Windows Server 2022, OS Build 20348.681 or later |
Nutanix | Windows Server 2022, OS Build 20348.681 or later |
Bare metal or provider agnostic | Windows Server 2022, OS Build 20348.681 or later; Windows Server 2019, version 1809 |
2.2.3.2. Supported networking
Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster.
- The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN.
- Dual NIC is not supported on WMCO-managed Windows instances.
Platform | Supported networking |
---|---|
Amazon Web Services (AWS) | Hybrid networking with OVN-Kubernetes |
Microsoft Azure | Hybrid networking with OVN-Kubernetes |
VMware vSphere | Hybrid networking with OVN-Kubernetes with a custom VXLAN port |
Google Cloud Platform (GCP) | Hybrid networking with OVN-Kubernetes |
Nutanix | Hybrid networking with OVN-Kubernetes |
Bare metal or provider agnostic | Hybrid networking with OVN-Kubernetes |
Hybrid networking with OVN-Kubernetes | Supported Windows Server version |
---|---|
Default VXLAN port | Windows Server 2019, version 1809 |
Custom VXLAN port | Windows Server 2022, OS Build 20348.681 or later |
2.2.4. Known limitations
Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes):
The following OpenShift Container Platform features are not supported on Windows nodes:
- Image builds
- OpenShift Pipelines
- OpenShift Service Mesh
- OpenShift monitoring of user-defined projects
- OpenShift Serverless
- Horizontal Pod Autoscaling
- Vertical Pod Autoscaling
The following Red Hat features are not supported on Windows nodes:
- Dual NIC is not supported on WMCO-managed Windows instances.
- Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads.
- Windows nodes are not supported in clusters that are in a disconnected environment.
- Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN.
- Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States).
- Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0, are not compatible with Windows nodes.
Kubernetes has identified the following node feature limitations:
- Huge pages are not supported for Windows containers.
- Privileged containers are not supported for Windows containers.
- Kubernetes has identified several API compatibility issues.
Chapter 3. Getting support
Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements.
You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal.
For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers.
If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support.
Chapter 4. Understanding Windows container workloads
Red Hat OpenShift support for Windows Containers provides built-in support for running Microsoft Windows Server containers on OpenShift Container Platform. For those that administer heterogeneous environments with a mix of Linux and Windows workloads, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL).
Multi-tenancy for clusters that have Windows nodes is not supported. Clusters are considered multi-tenant when multiple workloads operate on shared infrastructure and resources. If one or more workloads running on an infrastructure cannot be trusted, the multi-tenant environment is considered hostile.
Hostile multi-tenant clusters introduce security concerns in all Kubernetes environments. Additional security features like pod security policies, or more fine-grained role-based access control (RBAC) for nodes, make exploiting your environment more difficult. However, if you choose to run hostile multi-tenant workloads, a hypervisor is the only security option you should use. The security domain for Kubernetes encompasses the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters.
Windows Server Containers provide resource isolation using a shared kernel but are not intended to be used in hostile multitenancy scenarios. Scenarios that involve hostile multitenancy should use Hyper-V Isolated Containers to strongly isolate tenants.
Additional resources
4.1. Windows workload management
To run Windows workloads in your cluster, you must first install the Windows Machine Config Operator (WMCO). The WMCO is a Linux-based Operator that runs on Linux-based control plane and compute nodes. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster.
Figure 4.1. WMCO design
Before deploying Windows workloads, you must create a Windows compute node and have it join the cluster. The Windows node hosts the Windows workloads in a cluster, and can run alongside other Linux-based compute nodes. You can create a Windows compute node by creating a Windows compute machine set to host Windows Server compute machines. You must apply a Windows-specific label to the compute machine set that specifies a Windows OS image.
The WMCO watches for machines with the Windows label. After a Windows compute machine set is detected and its respective machines are provisioned, the WMCO configures the underlying Windows virtual machine (VM) so that it can join the cluster as a compute node.
Figure 4.2. Mixed Windows and Linux workloads
The WMCO expects a predetermined secret in its namespace containing a private key that is used to interact with the Windows instance. The WMCO checks for this secret at boot time and creates a user data secret, which you must reference in the Windows MachineSet object that you created. Then the WMCO populates the user data secret with a public key that corresponds to the private key. With this data in place, the cluster can connect to the Windows VM using an SSH connection.
After the cluster establishes a connection with the Windows VM, you can manage the Windows node using similar practices as you would a Linux-based node.
The OpenShift Container Platform web console provides most of the same monitoring capabilities for Windows nodes that are available for Linux nodes. However, the ability to monitor workload graphs for pods running on Windows nodes is not available at this time.
Scheduling Windows workloads to a Windows node can be done with typical pod scheduling practices like taints, tolerations, and node selectors; alternatively, you can differentiate your Windows workloads from Linux workloads and other Windows-versioned workloads by using a RuntimeClass object.
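As an illustration only, and assuming your Windows nodes carry an os=Windows:NoSchedule taint (applied, for example, through the Windows machine set), a pod that targets Windows nodes typically combines a kubernetes.io/os node selector with a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
spec:
  # Schedule only onto Windows nodes.
  nodeSelector:
    kubernetes.io/os: windows
  # Tolerate the Windows taint; adjust or remove this if your Windows nodes are not tainted.
  tolerations:
  - key: "os"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: win-webserver
    # The Windows Server container image must be compatible with the node's Windows Server version.
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 3600"]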
4.2. Windows node services
The following Windows-specific services are installed on each Windows node:
Service | Description |
---|---|
kubelet | Registers the Windows node and manages its status. |
Container Network Interface (CNI) plugins | Exposes networking for Windows nodes. |
Windows Instance Config Daemon (WICD) | Maintains the state of all services running on the Windows instance to ensure the instance functions as a worker node. |
windows_exporter | Exports Prometheus metrics from Windows nodes. |
azure-cloud-node-manager | Interacts with the underlying Azure cloud platform. |
hybrid-overlay | Creates the OpenShift Container Platform Host Network Service (HNS). |
kube-proxy | Maintains network rules on nodes allowing outside communication. |
containerd container runtime | Manages the complete container lifecycle. |
CSI Proxy | Enables CSI drivers to perform storage operations on the node, which allows containerized CSI drivers to run on Windows nodes. |
Chapter 5. Enabling Windows container workloads
Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster.
Dual NIC is not supported on WMCO-managed Windows instances.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file.
- You have configured hybrid networking with OVN-Kubernetes for your cluster. For more information, see Configuring hybrid networking. A minimal sketch of this configuration is shown after this list.
- You are running an OpenShift Container Platform cluster version 4.6.8 or later.
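A minimal sketch of the hybrid networking configuration referenced above, assuming it is applied through the cluster Network operator configuration; the CIDR and host prefix values are placeholders, and the supported procedure is described in Configuring hybrid networking:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        # Placeholder CIDR for the Windows hybrid overlay network; it must not overlap with other cluster networks.
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        # hybridOverlayVXLANPort: 9898  # Only required when a custom VXLAN port is needed, for example on vSphere.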
Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you do not manually install containerd on nodes.
Additional resources
- For the comprehensive prerequisites for the Windows Machine Config Operator, see Windows Machine Config Operator prerequisites.
5.1. Installing the Windows Machine Config Operator
You can install the Windows Machine Config Operator using either the web console or the OpenShift CLI (oc).
Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0, are not compatible with Windows nodes.
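To review the clusterNetwork CIDR addresses in use before installing the WMCO, you can, for example, query the cluster network configuration:
$ oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}'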
5.1.1. Installing the Windows Machine Config Operator using the web console
You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO).
Dual NIC is not supported on WMCO-managed Windows instances.
Procedure
- From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
- Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile.
- Review the information about the Operator and click Install.
On the Install Operator page:
- Select the stable channel as the Update Channel. The stable channel enables the latest stable release of the WMCO to be installed.
- The Installation Mode is preconfigured because the WMCO must be available in a single namespace only.
- Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator.
- Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO.
Select an Approval Strategy.
- The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual strategy requires a user with appropriate credentials to approve the Operator update.
Click Install. The WMCO is now listed on the Installed Operators page.
Note: The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator.
- Verify that the Status shows Succeeded to confirm successful installation of the WMCO.
5.1.2. Installing the Windows Machine Config Operator using the CLI
You can use the OpenShift CLI (oc) to install the Windows Machine Config Operator (WMCO).
Dual NIC is not supported on WMCO-managed Windows instances.
Procedure
Create a namespace for the WMCO.
Create a Namespace object YAML file for the WMCO. For example, wmco-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-windows-machine-config-operator 1
  labels:
    openshift.io/cluster-monitoring: "true" 2
Create the namespace:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f wmco-namespace.yaml
Create the Operator group for the WMCO.
Create an OperatorGroup object YAML file. For example, wmco-og.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: windows-machine-config-operator
  namespace: openshift-windows-machine-config-operator
spec:
  targetNamespaces:
  - openshift-windows-machine-config-operator
Create the Operator group:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f wmco-og.yaml
Subscribe the namespace to the WMCO.
Create a Subscription object YAML file. For example, wmco-sub.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: windows-machine-config-operator
  namespace: openshift-windows-machine-config-operator
spec:
  channel: "stable" 1
  installPlanApproval: "Automatic" 2
  name: "windows-machine-config-operator"
  source: "redhat-operators" 3
  sourceNamespace: "openshift-marketplace" 4
- 1
- Specify stable as the channel.
- 2
- Set an approval strategy. You can set Automatic or Manual.
- 3
- Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
- 4
- Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
Create the subscription:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f wmco-sub.yaml
The WMCO is now installed to the openshift-windows-machine-config-operator namespace.
Verify the WMCO installation:
$ oc get csv -n openshift-windows-machine-config-operator
Example output
NAME                                    DISPLAY                           VERSION   REPLACES   PHASE
windows-machine-config-operator.2.0.0   Windows Machine Config Operator   2.0.0                Succeeded
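Optionally, you can also confirm that the Operator pod is running in its namespace, for example:
$ oc get pods -n openshift-windows-machine-config-operator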
5.2. Configuring a secret for the Windows Machine Config Operator
To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM).
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You created a PEM-encoded file containing an RSA key.
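If you still need to create the key, one way to generate a PEM-encoded RSA key is with ssh-keygen; the file name winkey is a placeholder:
$ ssh-keygen -t rsa -b 4096 -f ${HOME}/.ssh/winkey -m PEM -N ''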
Procedure
Define the secret required to access the Windows VMs:
$ oc create secret generic cloud-private-key --from-file=private-key.pem=${HOME}/.ssh/<key> \
    -n openshift-windows-machine-config-operator 1
- 1
- You must create the private key in the WMCO namespace, like openshift-windows-machine-config-operator.
It is recommended to use a different private key than the one used when installing the cluster.
5.3. Using Windows containers in a proxy-enabled cluster
The Windows Machine Config Operator (WMCO) can consume and use a cluster-wide egress proxy configuration when making external requests outside the cluster’s internal network.
This allows you to add Windows nodes and run workloads in a proxy-enabled cluster, allowing your Windows nodes to pull images from registries that are secured behind your proxy server or to make requests to off-cluster services and services that use a custom public key infrastructure.
The cluster-wide proxy affects system components only, not user workloads.
In proxy-enabled clusters, the WMCO is aware of the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY values that are set for the cluster. The WMCO periodically checks whether the proxy environment variables have changed. If there is a discrepancy, the WMCO reconciles and updates the proxy environment variables on the Windows instances.
Windows workloads created on Windows nodes in proxy-enabled clusters do not inherit proxy settings from the node by default, the same as with Linux nodes. Also, by default PowerShell sessions do not inherit proxy settings on Windows nodes in proxy-enabled clusters.
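If a specific Windows workload must reach external services through the proxy, one common approach, which is not specific to the WMCO, is to set the proxy environment variables explicitly on the container. The values below are placeholders; copy the real values from your cluster-wide proxy configuration, which you can review with oc get proxy cluster -o yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-proxy-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-proxy-example
  template:
    metadata:
      labels:
        app: win-proxy-example
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: win-proxy-example
        image: mcr.microsoft.com/windows/servercore:ltsc2022
        command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 3600"]
        env:
        # Placeholder proxy settings; Windows workloads do not inherit these from the node.
        - name: HTTP_PROXY
          value: http://<proxy_host>:<proxy_port>
        - name: HTTPS_PROXY
          value: http://<proxy_host>:<proxy_port>
        - name: NO_PROXY
          value: .cluster.local,.svc,<additional_exclusions>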
Additional resources
5.4. Additional resources
Chapter 6. Creating Windows machine sets
6.1. Creating a Windows machine set on AWS
You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a supported Windows Server as the operating system image.
Use one of the following aws commands, as appropriate for your Windows Server release, to query valid AMI images:
Example Windows Server 2022 command
$ aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2022*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table
Example Windows Server 2019 command
$ aws ec2 describe-images --region <aws_region_name> --filters "Name=name,Values=Windows_Server-2019*English*Core*Base*" "Name=is-public,Values=true" --query "reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}" --output table
where:
- <aws_region_name>
- Specifies the name of your AWS region.
6.1.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata.
- Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Warning: Control plane machines cannot be managed by compute machine sets.
Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.
For more information, see "Managing control plane machines".
The following custom resources add more capabilities to your cluster:
- Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
- Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
- Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
- Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
- Set the scaling policy so that you can scale up nodes but not scale them down
- Machine health check
The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
6.1.2. Sample YAML for a Windows MachineSet object on AWS
This sample YAML defines a Windows MachineSet object running on Amazon Web Services (AWS) that the Windows Machine Config Operator (WMCO) can react upon.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-windows-worker-<zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6
        machine.openshift.io/os-id: Windows 7
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" 8
      providerSpec:
        value:
          ami:
            id: <windows_container_ami> 9
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile 10
          instanceType: m5a.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> 11
            region: <region> 12
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg 13
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-<zone> 14
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id> 15
              value: owned
          userDataSecret:
            name: windows-user-data 16
            namespace: openshift-machine-api
- 1 3 5 10 13 14 15
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 6
- Specify the infrastructure ID, worker label, and zone.
- 7
- Configure the compute machine set as a Windows machine.
- 8
- Configure the Windows node as a compute machine.
- 9
- Specify the AMI ID of a supported Windows image with a container runtime installed.
- 11
- Specify the AWS zone, like us-east-1a.
- 12
- Specify the AWS region, like us-east-1.
- 16
- Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume.
6.1.3. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
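After the Windows compute machine set is available, you can, for example, scale it and confirm that the resulting Windows nodes join the cluster; the machine set name below follows the sample YAML earlier in this section:
$ oc scale machineset <infrastructure_id>-windows-worker-<zone> --replicas=2 -n openshift-machine-api
$ oc get nodes -l kubernetes.io/os=windows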
6.1.4. Additional resources
6.2. Creating a Windows machine set on Azure
You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a supported Windows Server as the operating system image.
6.2.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata.
- Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Warning: Control plane machines cannot be managed by compute machine sets.
Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.
For more information, see "Managing control plane machines".
The following custom resources add more capabilities to your cluster:
- Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
- Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
- Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
- Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
- Set the scaling policy so that you can scale up nodes but not scale them down
- Machine health check
The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
6.2.2. Sample YAML for a Windows MachineSet object on Azure
This sample YAML defines a Windows MachineSet object running on Microsoft Azure that the Windows Machine Config Operator (WMCO) can react upon.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <windows_machine_set_name> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6
        machine.openshift.io/os-id: Windows 7
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" 8
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image: 9
            offer: WindowsServer
            publisher: MicrosoftWindowsServer
            resourceID: ""
            sku: 2019-Datacenter-with-Containers
            version: latest
          kind: AzureMachineProviderSpec
          location: <location> 10
          managedIdentity: <infrastructure_id>-identity 11
          networkResourceGroup: <infrastructure_id>-rg 12
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Windows
          publicIP: false
          resourceGroup: <infrastructure_id>-rg 13
          subnet: <infrastructure_id>-worker-subnet
          userDataSecret:
            name: windows-user-data 14
            namespace: openshift-machine-api
          vmSize: Standard_D2s_v3
          vnet: <infrastructure_id>-vnet 15
          zone: "<zone>" 16
- 1 3 5 11 12 13 15
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 6
- Specify the Windows compute machine set name. Windows machine names on Azure cannot be more than 15 characters long. Therefore, the compute machine set name cannot be more than 9 characters long, due to the way machine names are generated from it.
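For example, assuming machine names are generated as the compute machine set name plus a hyphen and a five-character suffix, a compute machine set named winworker produces machine names such as winworker-abcde, which stays within the 15-character limit (9 + 1 + 5 = 15 characters).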
- 7
- Configure the compute machine set as a Windows machine.
- 8
- Configure the Windows node as a compute machine.
- 9
- Specify a WindowsServer image offering that defines the 2019-Datacenter-with-Containers SKU.
- 10
- Specify the Azure region, like centralus.
- 14
- Created by the WMCO when it is configuring the first Windows machine. After that, the windows-user-data is available for all subsequent compute machine sets to consume.
- 16
- Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
6.2.3. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
6.2.4. Additional resources
6.3. Creating a Windows machine set on GCP
You can create a Windows MachineSet object to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a supported Windows Server as the operating system image.
6.3.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata.
- Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Warning: Control plane machines cannot be managed by compute machine sets.
Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.
For more information, see "Managing control plane machines".
The following custom resources add more capabilities to your cluster:
- Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
- Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
- Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
- Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
- Set the scaling policy so that you can scale up nodes but not scale them down
- Machine health check
The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
6.3.2. Sample YAML for a Windows MachineSet object on GCP
This sample YAML file defines a Windows MachineSet object running on Google Cloud Platform (GCP) that the Windows Machine Config Operator (WMCO) can use.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-windows-worker-<zone_suffix> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone_suffix> 6
        machine.openshift.io/os-id: Windows 7
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" 8
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
            - autoDelete: true
              boot: true
              image: <windows_server_image> 9
              sizeGb: 128
              type: pd-ssd
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          networkInterfaces:
            - network: <infrastructure_id>-network 10
              subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_id> 11
          region: <region> 12
          serviceAccounts:
            - email: <infrastructure_id>-w@<project_id>.iam.gserviceaccount.com
              scopes:
                - https://www.googleapis.com/auth/cloud-platform
          tags:
            - <infrastructure_id>-worker
          userDataSecret:
            name: windows-user-data 13
          zone: <zone> 14
- 1 3 5 10
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 6
- Specify the infrastructure ID, worker label, and zone suffix (such as
a
). - 7
- Configure the machine set as a Windows machine.
- 8
- Configure the Windows node as a compute machine.
- 9
- Specify the full path to an image of a supported version of Windows Server.
- 11
- Specify the GCP project that this cluster was created in.
- 12
- Specify the GCP region, such as
us-central1
. - 13
- Created by the WMCO when it configures the first Windows machine. After that, the
windows-user-data
is available for all subsequent machine sets to consume. - 14
- Specify the zone within the chosen region, such as
us-central1-a
.
6.3.3. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user with cluster-admin
permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label. Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the
DESIRED
and CURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
6.3.4. Additional resources
6.4. Creating a Windows MachineSet object on Nutanix
You can create a Windows MachineSet
object to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a supported Windows Server as the operating system image.
-
You added a new DNS entry for the internal API server URL,
api-int.<cluster_name>.<base_domain>
, that points to the external API server URL, api.<cluster_name>.<base_domain>
. This can be a CNAME or an additional A record.
6.4.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
-
A fundamental unit that describes the host for a node. A machine has a
providerSpec
specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. - Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Warning: Control plane machines cannot be managed by compute machine sets.
Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.
For more information, see “Managing control plane machines".
The following custom resources add more capabilities to your cluster:
- Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
- Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
- Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
- Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
- Set the scaling policy so that you can scale up nodes but not scale them down
- Machine health check
-
The
MachineHealthCheck
resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In OpenShift Container Platform version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails you always have another zone to which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
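As a hedged sketch of the machine health check described above, the following MachineHealthCheck resource targets a Windows compute machine set. The resource name, selector label, timeouts, and maxUnhealthy values are assumptions for illustration only.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: windows-worker-healthcheck       # hypothetical name for this example
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone>
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s                        # how long the node can report NotReady before remediation
  - type: Ready
    status: Unknown
    timeout: 300s
  maxUnhealthy: 40%                      # stop remediating if too many machines are unhealthy at once
  nodeStartupTimeout: 20m                # allow extra time for Windows node configuration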
6.4.2. Sample YAML for a Windows MachineSet object on Nutanix
This sample YAML defines a Windows MachineSet
object running on Nutanix that the Windows Machine Config Operator (WMCO) can use.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-windows-worker-<zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6
        machine.openshift.io/os-id: Windows 7
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" 8
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          bootType: "" 9
          categories: null
          cluster: 10
            type: uuid
            uuid: <cluster_uuid>
          credentialsSecret:
            name: nutanix-credentials 11
          image: 12
            name: <image_id>
            type: name
          kind: NutanixMachineProviderConfig 13
          memorySize: 16Gi 14
          project:
            type: ""
          subnets: 15
          - type: uuid
            uuid: <subnet_uuid>
          systemDiskSize: 120Gi 16
          userDataSecret:
            name: windows-user-data 17
          vcpuSockets: 4 18
          vcpusPerSocket: 1 19
- 1 3 5
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 6
- Specify the infrastructure ID, worker label, and zone.
- 7
- Configure the compute machine set as a Windows machine.
- 8
- Configure the Windows node as a compute machine.
- 9
- Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy. Note: You must use the Legacy boot type in OpenShift Container Platform 4.15.
- Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is
uuid
, so there is auuid
stanza. - 11
- Specifies the secret name for the cluster. Do not change this value.
- 12
- Specifies the image to use. Use an image from an existing default compute machine set for the cluster.
- 13
- Specifies the cloud provider platform type. Do not change this value.
- 14
- Specifies the amount of memory for the cluster in Gi.
- 15
- Specifies a subnet configuration. In this example, the subnet type is
uuid
, so there is auuid
stanza. - 16
- Specifies the size of the system disk in Gi.
- 17
- Specifies the name of the secret in the user data YAML file that is in the
openshift-machine-api
namespace. Use the value that installation program populates in the default compute machine set. - 18
- Specifies the number of vCPU sockets.
- 19
- Specifies the number of vCPUs per socket.
6.4.3. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user with cluster-admin
permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label. Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the
DESIRED
and CURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
6.4.4. Additional resources
6.5. Creating a Windows machine set on vSphere
You can create a Windows MachineSet
object to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a supported Windows Server as the operating system image.
6.5.1. Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.15 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.15 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
- Machines
-
A fundamental unit that describes the host for a node. A machine has a
providerSpec
specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. - Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Warning: Control plane machines cannot be managed by compute machine sets.
Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.
For more information, see “Managing control plane machines".
The following custom resources add more capabilities to your cluster:
- Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
- Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
- Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
- Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
- Set the scaling policy so that you can scale up nodes but not scale them down
- Machine health check
-
The
MachineHealthCheck
resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In OpenShift Container Platform version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails you always have another zone to which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
6.5.2. Preparing your vSphere environment for Windows container workloads
You must prepare your vSphere environment for Windows container workloads by creating the vSphere Windows VM golden image and enabling communication with the internal API server for the WMCO.
6.5.2.1. Creating the vSphere Windows VM golden image
Create a vSphere Windows virtual machine (VM) golden image.
Prerequisites
- You have created a private/public key pair, which is used to configure key-based authentication in the OpenSSH server. The private key must also be configured in the Windows Machine Config Operator (WMCO) namespace. This is required to allow the WMCO to communicate with the Windows VM. See the "Configuring a secret for the Windows Machine Config Operator" section for more details.
You must use Microsoft PowerShell commands in several cases when creating your Windows VM. PowerShell commands in this guide are distinguished by the PS C:\>
prefix.
Procedure
- Select a compatible Windows Server version. Currently, the Windows Machine Config Operator (WMCO) stable version supports Windows Server 2022 Long-Term Servicing Channel with the OS-level container networking patch KB5012637.
Create a new VM in the vSphere client using the VM golden image with a compatible Windows Server version. For more information about compatible versions, see the "Windows Machine Config Operator prerequisites" section of the "Red Hat OpenShift support for Windows Containers release notes."
Important: The virtual hardware version for your VM must meet the infrastructure requirements for OpenShift Container Platform. For more information, see the "VMware vSphere infrastructure requirements" section in the OpenShift Container Platform documentation. Also, you can refer to VMware’s documentation on virtual machine hardware versions.
- Install and configure VMware Tools version 11.0.6 or greater on the Windows VM. See the VMware Tools documentation for more information.
After installing VMware Tools on the Windows VM, verify the following:
The
C:\ProgramData\VMware\VMware Tools\tools.conf
file exists with the following entry:
exclude-nics=
If the
tools.conf
file does not exist, create it with the exclude-nics option uncommented and set as an empty value. This entry ensures that the cloned vNIC generated on the Windows VM by the hybrid-overlay is not ignored.
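If you need to create the file yourself, a PowerShell sketch such as the following writes the entry shown above; it assumes the default VMware Tools configuration path.
PS C:\> New-Item -Path "C:\ProgramData\VMware\VMware Tools" -ItemType Directory -Force
PS C:\> Set-Content -Path "C:\ProgramData\VMware\VMware Tools\tools.conf" -Value "exclude-nics="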
The Windows VM has a valid IP address in vCenter:
C:\> ipconfig
The VMTools Windows service is running:
PS C:\> Get-Service -Name VMTools | Select Status, StartType
- Install and configure the OpenSSH Server on the Windows VM. See Microsoft’s documentation on installing OpenSSH for more details.
Set up SSH access for an administrative user. See Microsoft’s documentation on the Administrative user to do this.
Important: The public key used in the instructions must correspond to the private key you create later in the WMCO namespace that holds your secret. See the "Configuring a secret for the Windows Machine Config Operator" section for more details.
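One common approach, following Microsoft's documented pattern for administrative users, is to append the public key to the administrators_authorized_keys file and then restrict its permissions. This is a sketch only; the public key contents placeholder is illustrative.
PS C:\> Add-Content -Path "C:\ProgramData\ssh\administrators_authorized_keys" -Value "<public_key_contents>"
PS C:\> icacls.exe "C:\ProgramData\ssh\administrators_authorized_keys" /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"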
You must create a new firewall rule in the Windows VM that allows incoming connections for container logs. Run the following PowerShell command to create the firewall rule on TCP port 10250:
PS C:\> New-NetFirewallRule -DisplayName "ContainerLogsPort" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow
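Optionally, you can confirm that the rule exists, for example:
PS C:\> Get-NetFirewallRule -DisplayName "ContainerLogsPort"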
- Clone the Windows VM so it is a reusable image. Follow the VMware documentation on how to clone an existing virtual machine for more details.
In the cloned Windows VM, run the Windows Sysprep tool:
C:\> C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1
- 1
- Specify the path to your
unattend.xml
file.
Note: There is a limit on how many times you can run the sysprep command on a Windows image. Consult Microsoft’s documentation for more information.
An example unattend.xml is provided, which maintains all the changes needed for the WMCO. You must modify this example; it cannot be used directly.
Example 6.1. Example unattend.xml
<?xml version="1.0" encoding="UTF-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="specialize"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass="oobeSystem"> <component xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>
- 1
- Specify the
ComputerName
, which must follow the Kubernetes' names specification. These specifications also apply to Guest OS customization performed on the resulting template while creating new VMs. - 2
- Disable the automatic logon to avoid the security issue of leaving an open terminal with Administrator privileges at boot. This is the default value and must not be changed.
- 3
- Replace the
MyPassword
placeholder with the password for the Administrator account. This prevents the built-in Administrator account from having a blank password by default. Follow Microsoft’s best practices for choosing a password.
After the Sysprep tool has completed, the Windows VM will power off. You must not use or power on this VM anymore.
- Convert the Windows VM to a template in vCenter.
6.5.2.1.1. Additional resources
6.5.2.2. Enabling communication with the internal API server for the WMCO on vSphere
The Windows Machine Config Operator (WMCO) downloads the Ignition config files from the internal API server endpoint. You must enable communication with the internal API server so that your Windows virtual machine (VM) can download the Ignition config files. The kubelet on the configured VM can communicate only with the internal API server.
Prerequisites
- You have installed a cluster on vSphere.
Procedure
-
Add a new DNS entry for
api-int.<cluster_name>.<base_domain>
that points to the external API server URL api.<cluster_name>.<base_domain>
. This can be a CNAME or an additional A record.
The external API endpoint was already created as part of the initial cluster installation on vSphere.
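For illustration only, the api-int DNS entry described above might look like one of the following in a BIND-style zone file; the cluster name, base domain, and address are placeholders.
; CNAME option
api-int.mycluster.example.com.  IN  CNAME  api.mycluster.example.com.
; or an additional A record that resolves to the same address as the external API endpoint
api-int.mycluster.example.com.  IN  A      192.0.2.10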
6.5.3. Sample YAML for a Windows MachineSet object on vSphere
This sample YAML defines a Windows MachineSet
object running on VMware vSphere that the Windows Machine Config Operator (WMCO) can use.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <windows_machine_set_name> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6
        machine.openshift.io/os-id: Windows 7
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" 8
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 128 9
          kind: VSphereMachineProviderSpec
          memoryMiB: 16384
          network:
            devices:
            - networkName: "<vm_network_name>" 10
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: <windows_vm_template_name> 11
          userDataSecret:
            name: windows-user-data 12
          workspace:
            datacenter: <vcenter_datacenter_name> 13
            datastore: <vcenter_datastore_name> 14
            folder: <vcenter_vm_folder_path> 15
            resourcePool: <vsphere_resource_pool> 16
            server: <vcenter_server_ip> 17
- 1 3 5
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 6
- Specify the Windows compute machine set name. The compute machine set name cannot be more than 9 characters long, due to the way machine names are generated in vSphere.
- 7
- Configure the compute machine set as a Windows machine.
- 8
- Configure the Windows node as a compute machine.
- 9
- Specify the size of the vSphere Virtual Machine Disk (VMDK). Note:
This parameter does not set the size of the Windows partition. You can resize the Windows partition by using the
unattend.xml
file or by creating the vSphere Windows virtual machine (VM) golden image with the required disk size. - 10
- Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other Linux compute machines reside in the cluster.
- 11
- Specify the full path of the Windows vSphere VM template to use, such as
golden-images/windows-server-template
. The name must be unique. Important: Do not specify the original VM template. The VM template must remain off and must be cloned for new Windows machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to.
- 12
- The
windows-user-data
is created by the WMCO when the first Windows machine is configured. After that, thewindows-user-data
is available for all subsequent compute machine sets to consume. - 13
- Specify the vCenter Datacenter to deploy the compute machine set on.
- 14
- Specify the vCenter Datastore to deploy the compute machine set on.
- 15
- Specify the path to the vSphere VM folder in vCenter, such as
/dc1/vm/user-inst-5ddjd
. - 16
- Optional: Specify the vSphere resource pool for your Windows VMs.
- 17
- Specify the vCenter server IP or fully qualified domain name.
6.5.4. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user with cluster-admin
permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label. Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the
DESIRED
and CURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
6.5.5. Additional resources
Chapter 7. Scheduling Windows container workloads
You can schedule Windows workloads to Windows compute nodes.
Prerequisites
- You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
- You are using a Windows container as the OS image.
- You have created a Windows compute machine set.
7.1. Windows pod placement
Before deploying your Windows workloads to the cluster, you must configure your Windows node scheduling so pods are assigned correctly. Because the machine hosting your Windows node is managed the same way as a Linux-based node, scheduling a Windows pod to the appropriate Windows node works similarly, using mechanisms such as taints, tolerations, and node selectors.
With multiple operating systems, and the ability to run multiple Windows OS variants in the same cluster, you must map your Windows pods to a base Windows OS variant by using a RuntimeClass
object. For example, if you have multiple Windows nodes running on different Windows Server container versions, the cluster could schedule your Windows pods to an incompatible Windows OS variant. You must have RuntimeClass
objects configured for each Windows OS variant on your cluster. Using a RuntimeClass
object is also recommended if you have only one Windows OS variant available in your cluster.
For more information, see Microsoft’s documentation on Host and container version compatibility.
Also, it is recommended that you set the spec.os.name.windows
parameter in your workload pods. The Windows Machine Config Operator (WMCO) uses this field to authoritatively identify the pod operating system for validation and to enforce Windows-specific pod security context constraints (SCCs). Currently, this parameter has no effect on pod scheduling. For more information about this parameter, see the Kubernetes Pods documentation.
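For example, a minimal sketch of a Windows pod spec that sets this field might look like the following; the pod name, image tag, and runtime class name are illustrative and must match your environment.
apiVersion: v1
kind: Pod
metadata:
  name: sample-windows-pod          # hypothetical name
spec:
  os:
    name: windows                   # identifies the pod operating system for validation
  runtimeClassName: windows2019     # runtime class created for your Windows OS variant
  containers:
  - name: sample
    image: mcr.microsoft.com/windows/servercore:ltsc2019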
The container base image must be the same Windows OS version and build number that is running on the node where the container is to be scheduled.
Also, if you upgrade the Windows nodes from one version to another, for example going from 20H2 to 2022, you must upgrade your container base image to match the new version. For more information, see Windows container version compatibility.
Additional resources
7.2. Creating a RuntimeClass object to encapsulate scheduling mechanisms
Using a RuntimeClass
object simplifies the use of scheduling mechanisms like taints and tolerations; you deploy a runtime class that encapsulates your taints and tolerations and then apply it to your pods to schedule them to the appropriate node. Creating a runtime class is also necessary in clusters that support multiple operating system variants.
Procedure
Create a
RuntimeClass
object YAML file. For example, runtime-class.yaml:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows2019 1
handler: 'runhcs-wcow-process'
scheduling:
  nodeSelector: 2
    kubernetes.io/os: 'windows'
    kubernetes.io/arch: 'amd64'
    node.kubernetes.io/windows-build: '10.0.17763'
  tolerations: 3
  - effect: NoSchedule
    key: os
    operator: Equal
    value: "windows"
  - effect: NoSchedule
    key: os
    operator: Equal
    value: "Windows"
- 1
- Specify the
RuntimeClass
object name, which is defined in the pods you want to be managed by this runtime class. - 2
- Specify labels that must be present on nodes that support this runtime class. Pods using this runtime class can only be scheduled to a node matched by this selector. The node selector of the runtime class is merged with the existing node selector of the pod. Any conflicts prevent the pod from being scheduled to the node.
-
For Windows 2019, specify the
node.kubernetes.io/windows-build: '10.0.17763'
label. -
For Windows 2022, specify the
node.kubernetes.io/windows-build: '10.0.20348'
label.
-
For Windows 2019, specify the
- 3
- Specify tolerations to append to pods, excluding duplicates, running with this runtime class during admission. This combines the set of nodes tolerated by the pod and the runtime class.
Create the
RuntimeClass
object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f runtime-class.yaml
Apply the
RuntimeClass
object to your pod to ensure it is scheduled to the appropriate operating system variant:
apiVersion: v1
kind: Pod
metadata:
  name: my-windows-pod
spec:
  runtimeClassName: windows2019 1
# ...
- 1
- Specify the runtime class to manage the scheduling of your pod.
7.3. Sample Windows container workload deployment
You can deploy Windows container workloads to your cluster once you have a Windows compute node available.
This sample deployment is provided for reference only.
Example Service
object
apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    app: win-webserver
  type: LoadBalancer
Example Deployment
object
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver
  name: win-webserver
spec:
  selector:
    matchLabels:
      app: win-webserver
  replicas: 1
  template:
    metadata:
      labels:
        app: win-webserver
      name: win-webserver
    spec:
      containers:
      - name: windowswebserver
        image: mcr.microsoft.com/windows/servercore:ltsc2019 1
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe 2
        - -command
        - $listener = New-Object System.Net.HttpListener; $listener.Prefixes.Add('http://*:80/'); $listener.Start();Write-Host('Listening at http://*:80/'); while ($listener.IsListening) { $context = $listener.GetContext(); $response = $context.Response; $content='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; $buffer = [System.Text.Encoding]::UTF8.GetBytes($content); $response.ContentLength64 = $buffer.Length; $response.OutputStream.Write($buffer, 0, $buffer.Length); $response.Close(); };
      securityContext:
        runAsNonRoot: false
        windowsOptions:
          runAsUserName: "ContainerAdministrator"
      os:
        name: "windows"
      runtimeClassName: windows2019 3
- 1
- Specify the container image to use:
mcr.microsoft.com/powershell:<tag>
ormcr.microsoft.com/windows/servercore:<tag>
. The container image must match the Windows version running on the node.-
For Windows 2019, use the
ltsc2019
tag. -
For Windows 2022, use the
ltsc2022
tag.
-
For Windows 2019, use the
- 2
- Specify the commands to execute on the container.
-
For the
mcr.microsoft.com/powershell:<tag>
container image, you must define the command aspwsh.exe
. -
For the
mcr.microsoft.com/windows/servercore:<tag>
container image, you must define the command aspowershell.exe
.
-
For the
- 3
- Specify the runtime class you created for the Windows operating system variant on your cluster.
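Assuming you save the Service and Deployment manifests above to a file, for example win-webserver.yaml (a hypothetical file name), you can create and verify the workload as follows:
$ oc create -f win-webserver.yaml
$ oc get pods -o wide
$ oc get service win-webserver
The -o wide output shows the node each pod was scheduled to, which lets you confirm placement on a Windows node.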
7.4. Support for Windows CSI drivers
Red Hat OpenShift support for Windows Containers installs CSI Proxy on all Windows nodes in the cluster. CSI Proxy is a plug-in that enables CSI drivers to perform storage operations on the node.
To use persistent storage with Windows workloads, you must deploy a specific Windows CSI driver daemon set, as described in your storage provider’s documentation. By default, the WMCO does not automatically create the Windows CSI driver daemon set. See the list of supported production drivers in the Kubernetes Container Storage Interface (CSI) Documentation.
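Once a Windows-capable CSI driver and its storage class are installed, Windows workloads request storage in the usual Kubernetes way. The following persistent volume claim is a sketch only; the claim name is hypothetical and the storage class name depends on your storage provider.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-webserver-data          # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <windows_capable_storage_class>   # provided by your storage provider's CSI driver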
7.5. Scaling a compute machine set manually
To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set.
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets.
Prerequisites
-
Install an OpenShift Container Platform cluster and the
oc
command line. -
Log in to
oc
as a user with cluster-admin
permission.
Procedure
View the compute machine sets that are in the cluster by running the following command:
$ oc get machinesets -n openshift-machine-api
The compute machine sets are listed in the form of
<clusterid>-worker-<aws-region-az>
.
View the compute machines that are in the cluster by running the following command:
$ oc get machine -n openshift-machine-api
Set the annotation on the compute machine that you want to delete by running the following command:
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true"
Scale the compute machine set by running one of the following commands:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
Or:
$ oc edit machineset <machineset> -n openshift-machine-api
Tip: You can alternatively apply the following YAML to scale the compute machine set:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset>
  namespace: openshift-machine-api
spec:
  replicas: 2
You can scale the compute machine set up or down. It takes several minutes for the new machines to be available.
Important: By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine.
You can skip draining the node by annotating
machine.openshift.io/exclude-node-draining
in a specific machine.
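For example, a sketch of setting that annotation on a machine might look like the following; the empty value shown is illustrative.
$ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""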
Verification
Verify the deletion of the intended machine by running the following command:
$ oc get machines
Chapter 8. Windows node upgrades
You can ensure your Windows nodes have the latest updates by upgrading the Windows Machine Config Operator (WMCO).
8.1. Windows Machine Config Operator upgrades
When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is upgraded based on the upgrade channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO upgrade results in the Kubernetes components in the Windows machine being upgraded.
If you are upgrading to a new version of the WMCO and want to use cluster monitoring, you must have the openshift.io/cluster-monitoring=true
label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display.
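For example, you can add the label with the following command and then restart the WMCO pod, such as by deleting it so that it is recreated; the pod name is a placeholder.
$ oc label namespace openshift-windows-machine-config-operator openshift.io/cluster-monitoring=true
$ oc delete pod <wmco_pod_name> -n openshift-windows-machine-config-operator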
For a non-disruptive upgrade, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the Machine
object, which results in the drain and deletion of the Windows node. To facilitate an upgrade, the WMCO adds a version annotation to all the configured nodes. During an upgrade, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an upgrade, the WMCO only updates one Windows machine at a time.
After the update, it is recommended that you set the spec.os.name.windows
parameter in your workload pods. The WMCO uses this field to authoritatively identify the pod operating system for validation and to enforce Windows-specific pod security context constraints (SCCs).
The WMCO is only responsible for updating Kubernetes components, not for Windows operating system updates. You provide the Windows image when creating the VMs; therefore, you are responsible for providing an updated image. You can provide an updated Windows image by changing the image configuration in the MachineSet
spec.
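For example, on vSphere you might point the compute machine set at a newly prepared golden image template with a patch along these lines. This is a sketch only: the field path follows the vSphere sample in this guide, and the template name is a placeholder. Machines created after the change use the updated image.
$ oc patch machineset <windows_machine_set_name> -n openshift-machine-api \
    --type=merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"template":"<new_windows_vm_template_name>"}}}}}}'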
For more information on Operator upgrades using the Operator Lifecycle Manager (OLM), see Updating installed Operators.
Chapter 9. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes
Bring-Your-Own-Host (BYOH) allows users to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users looking to mitigate major disruptions in the event that a Windows server goes offline.
9.1. Configuring a BYOH Windows instance
Creating a BYOH Windows instance requires creating a config map in the Windows Machine Config Operator (WMCO) namespace.
Prerequisites
Any Windows instances that are to be attached to the cluster as a node must fulfill the following requirements:
- The instance must be on the same network as the Linux worker nodes in the cluster.
- Port 22 must be open and running an SSH server.
-
The default shell for the SSH server must be the Windows Command shell, or
cmd.exe
. - Port 10250 must be open for log collection.
- An administrator user is present with the private key used in the secret set as an authorized SSH key.
-
If you are creating a BYOH Windows instance for an installer-provisioned infrastructure (IPI) AWS cluster, you must add a tag to the AWS instance that matches the
spec.template.spec.value.tag
value in the compute machine set for your worker nodes. For example, kubernetes.io/cluster/<cluster_id>: owned or kubernetes.io/cluster/<cluster_id>: shared
. - If you are creating a BYOH Windows instance on vSphere, communication with the internal API server must be enabled.
The hostname of the instance must follow the RFC 1123 DNS label requirements, which include the following standards:
- Contains only lowercase alphanumeric characters or '-'.
- Starts with an alphanumeric character.
- Ends with an alphanumeric character.
Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you not manually install containerd on nodes.
Procedure
Create a ConfigMap named
windows-instances
in the WMCO namespace that describes the Windows instances to be added. Note: Format each entry in the config map’s data section by using the address as the key while formatting the value as username=<username>.
Example config map
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  10.1.42.1: |- 1
    username=Administrator 2
  instance.example.com: |-
    username=core
- 1
- The address that the WMCO uses to reach the instance over SSH, either a DNS name or an IPv4 address. A DNS PTR record must exist for this address. It is recommended that you use a DNS name with your BYOH instance if your organization uses DHCP to assign IP addresses. If not, you need to update the
windows-instances
ConfigMap whenever the instance is assigned a new IP address. - 2
- The name of the administrator user created in the prerequisites.
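For example, if you save the config map above as windows-instances.yaml (a hypothetical file name), create it by running:
$ oc create -f windows-instances.yaml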
9.2. Removing BYOH Windows instances
You can remove BYOH instances attached to the cluster by deleting the instance’s entry in the config map. Deleting an instance reverts that instance back to its state prior to adding to the cluster. Any logs and container runtime artifacts are not added to these instances.
For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. For example, to remove the 10.1.42.1
instance from the previous example, the config map would be changed to the following:
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  instance.example.com: |-
    username=core
Deleting windows-instances
is viewed as a request to deconstruct all Windows instances added as nodes.
Chapter 10. Removing Windows nodes
You can remove a Windows node by deleting its host Windows machine.
10.1. Deleting a specific machine
You can delete a specific machine.
Do not delete a control plane machine unless your cluster uses a control plane machine set.
Prerequisites
- Install an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user with cluster-admin
permission.
Procedure
View the machines that are in the cluster by running the following command:
$ oc get machine -n openshift-machine-api
The command output contains a list of machines in the
<clusterid>-<role>-<cloud_region>
format.
- Identify the machine that you want to delete.
Delete the machine by running the following command:
$ oc delete machine <machine> -n openshift-machine-api
Important: By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine.
You can skip draining the node by annotating
machine.openshift.io/exclude-node-draining
in a specific machine.
If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas.
Chapter 11. Disabling Windows container workloads
You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO.
11.1. Uninstalling the Windows Machine Config Operator
You can uninstall the Windows Machine Config Operator (WMCO) from your cluster.
Prerequisites
-
Delete the Windows
Machine
objects hosting your Windows workloads.
Procedure
-
From the Operators → OperatorHub page, use the Filter by keyword box to search for
Red Hat Windows Machine Config Operator
. - Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed.
- In the Windows Machine Config Operator descriptor page, click Uninstall.
11.2. Deleting the Windows Machine Config Operator namespace
You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default.
Prerequisites
- The WMCO is removed from your cluster.
Procedure
Remove all Windows workloads that were created in the
openshift-windows-machine-config-operator
namespace:
$ oc delete --all pods --namespace=openshift-windows-machine-config-operator
Verify that all pods in the
openshift-windows-machine-config-operator
namespace are deleted or are reporting a terminating state:
$ oc get pods --namespace openshift-windows-machine-config-operator
Delete the
openshift-windows-machine-config-operator
namespace:
$ oc delete namespace openshift-windows-machine-config-operator