Chapter 12. Managing control plane machines


12.1. About control plane machine sets

With control plane machine sets, you can automate management of the control plane machine resources within your OpenShift Container Platform cluster.

Important

Control plane machine sets cannot manage compute machines, and compute machine sets cannot manage control plane machines.

Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. However, these two types of machine sets are separate custom resources defined within the Machine API and have several fundamental differences in their architecture and functionality.

12.1.1. Control Plane Machine Set Operator overview

The Control Plane Machine Set Operator uses the ControlPlaneMachineSet custom resource (CR) to automate management of the control plane machine resources within your OpenShift Container Platform cluster.

When the state of the cluster control plane machine set is set to Active, the Operator ensures that the cluster has the correct number of control plane machines with the specified configuration. This allows the automated replacement of degraded control plane machines and rollout of changes to the control plane.

A cluster has only one control plane machine set, and the Operator only manages objects in the openshift-machine-api namespace.

12.1.1.1. Control Plane Machine Set Operator limitations

The Control Plane Machine Set Operator has the following limitations:

  • Only Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Power® Virtual Server, Microsoft Azure, Nutanix, VMware vSphere, and Red Hat OpenStack Platform (RHOSP) clusters are supported.
  • Clusters that do not have preexisting machines representing the control plane nodes cannot use a control plane machine set, and cannot enable the use of one after installation. Generally, preexisting control plane machines are present only if the cluster was installed using infrastructure provisioned by the installation program.

    To determine if a cluster has the required preexisting control plane machines, run the following command as a user with administrator privileges:

    $ oc get machine \
      -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master

    Example output showing preexisting control plane machines

    NAME                           PHASE     TYPE         REGION      ZONE         AGE
    <infrastructure_id>-master-0   Running   m6i.xlarge   us-west-1   us-west-1a   5h19m
    <infrastructure_id>-master-1   Running   m6i.xlarge   us-west-1   us-west-1b   5h19m
    <infrastructure_id>-master-2   Running   m6i.xlarge   us-west-1   us-west-1a   5h19m

    Example output missing preexisting control plane machines

    No resources found in openshift-machine-api namespace.

  • The Operator requires the Machine API Operator to be operational and is therefore not supported on clusters with manually provisioned machines. When installing an OpenShift Container Platform cluster with manually provisioned machines for a platform that creates an active generated ControlPlaneMachineSet custom resource (CR), you must remove the Kubernetes manifest files that define the control plane machine set, as instructed in the installation process.
  • Only clusters with three control plane machines are supported.
  • Horizontal scaling of the control plane is not supported.
  • Deploying Azure control plane machines on Ephemeral OS disks increases risk for data loss and is not supported.
  • Deploying control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs is not supported.

    Important

    Attempting to deploy control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs might cause the cluster to lose etcd quorum. A cluster that loses all control plane machines simultaneously is unrecoverable.

  • Making changes to the control plane machine set during or prior to installation is not supported. You must make any changes to the control plane machine set only after installation.

12.1.2. Additional resources

12.2. Getting started with control plane machine sets

The process for getting started with control plane machine sets depends on the state of the ControlPlaneMachineSet custom resource (CR) in your cluster.

Clusters with an active generated CR
Clusters that have a generated CR with an active state use the control plane machine set by default. No administrator action is required.
Clusters with an inactive generated CR
For clusters that include an inactive generated CR, you must review the CR configuration and activate the CR.
Clusters without a generated CR
For clusters that do not include a generated CR, you must create and activate a CR with the appropriate configuration for your cluster.

If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status.

12.2.1. Supported cloud providers

In OpenShift Container Platform 4.17, the control plane machine set is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere clusters.

The status of the control plane machine set after installation depends on your cloud provider and the version of OpenShift Container Platform that you installed on your cluster.

Table 12.1. Control plane machine set implementation for OpenShift Container Platform 4.17

Cloud provider                       Active by default   Generated CR   Manual CR required
Amazon Web Services (AWS)            X [1]               X
Google Cloud Platform (GCP)          X [2]               X
Microsoft Azure                      X [2]               X
Nutanix                              X [3]               X
Red Hat OpenStack Platform (RHOSP)   X [3]               X
VMware vSphere                       X [4]               X

  1. AWS clusters that are upgraded from version 4.11 or earlier require CR activation.
  2. GCP and Azure clusters that are upgraded from version 4.12 or earlier require CR activation.
  3. Nutanix and RHOSP clusters that are upgraded from version 4.13 or earlier require CR activation.
  4. vSphere clusters that are upgraded from version 4.15 or earlier require CR activation.

12.2.2. Checking the control plane machine set custom resource state

You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR).

Procedure

  • Determine the state of the CR by running the following command:

    $ oc get controlplanemachineset.machine.openshift.io cluster \
      --namespace openshift-machine-api
    • A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required.
    • A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated.
    • A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR.
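
    Example output for a cluster with an active CR (the values shown are illustrative, and the exact columns can vary between OpenShift Container Platform versions)

    NAME      DESIRED   CURRENT   READY   UPDATED   UNAVAILABLE   STATE    AGE
    cluster   3         3         3       3                       Active   5h37m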

Next steps

To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists.

  • If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster.
  • If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster.

12.2.3. Activating the control plane machine set custom resource

To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster with a generated CR, you must verify that the configuration in the CR is correct for your cluster and activate it.

Note

For more information about the parameters in the CR, see "Control plane machine set configuration".

Procedure

  1. View the configuration of the CR by running the following command:

    $ oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster
  2. Change the values of any fields that are incorrect for your cluster configuration.
  3. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes.

    Important

    To activate the CR, you must change the .spec.state field to Active in the same oc edit session that you use to update the CR configuration. If the CR is saved with the state left as Inactive, the control plane machine set generator resets the CR to its original settings.
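
    For reference, the relevant portion of an activated CR looks similar to the following minimal excerpt:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      state: Active
    # ...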

12.2.4. Creating a control plane machine set custom resource

To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster without a generated CR, you must create the CR manually and activate it.

Note

For more information about the structure and parameters of the CR, see "Control plane machine set configuration".

Procedure

  1. Create a YAML file using the following template:

    Control plane machine set CR YAML file template

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    metadata:
      name: cluster
      namespace: openshift-machine-api
    spec:
      replicas: 3
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: <cluster_id> 1
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
      state: Active 2
      strategy:
        type: RollingUpdate 3
      template:
        machineType: machines_v1beta1_machine_openshift_io
        machines_v1beta1_machine_openshift_io:
          failureDomains:
            platform: <platform> 4
            <platform_failure_domains> 5
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <cluster_id> 6
              machine.openshift.io/cluster-api-machine-role: master
              machine.openshift.io/cluster-api-machine-type: master
          spec:
            providerSpec:
              value:
                <platform_provider_spec> 7

    1
    Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2
    Specify the state of the Operator. When the state is Inactive, the Operator is not operational. You can activate the Operator by setting the value to Active.
    Important

    Before you activate the CR, you must ensure that its configuration is correct for your cluster requirements.

    3
    Specify the update strategy for the cluster. Valid values are OnDelete and RollingUpdate. The default value is RollingUpdate. For more information about update strategies, see "Updating the control plane configuration".
    4
    Specify your cloud provider platform name. Valid values are AWS, Azure, GCP, Nutanix, VSphere, and OpenStack.
    5
    Add the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider.
    6
    Specify the infrastructure ID.
    7
    Add the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider.
  2. Refer to the sample YAML for a control plane machine set CR and populate your file with values that are appropriate for your cluster configuration.
  3. Refer to the sample failure domain configuration and sample provider specification for your cloud provider and update those sections of your file with the appropriate values.
  4. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes.
  5. Create the CR from your YAML file by running the following command:

    $ oc create -f <control_plane_machine_set>.yaml

    where <control_plane_machine_set> is the name of the YAML file that contains the CR configuration.
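
Verification

  • You can confirm that the CR exists and is activated by checking its state, as described in "Checking the control plane machine set custom resource state". For example:

    $ oc get controlplanemachineset.machine.openshift.io cluster \
      --namespace openshift-machine-api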

12.3. Managing control plane machines with control plane machine sets

Control plane machine sets automate several essential aspects of control plane management.

12.3.1. Updating the control plane configuration

You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR).

The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement.

Note

For more information about the parameters in the CR, see "Control plane machine set configuration".

Prerequisites

  • Your cluster has an activated and functioning Control Plane Machine Set Operator.

Procedure

  1. Edit your control plane machine set CR by running the following command:

    $ oc edit controlplanemachineset.machine.openshift.io cluster \
      -n openshift-machine-api
  2. Change the values of any fields that you want to update in your cluster configuration.
  3. Save your changes.

Next steps

  • For clusters that use the default RollingUpdate update strategy, the control plane machine set propagates changes to your control plane configuration automatically.
  • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

12.3.1.1. Automatic updates to the control plane configuration

The RollingUpdate update strategy automatically propagates changes to your control plane configuration. This update strategy is the default configuration for the control plane machine set.

For clusters that use the RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane.

If multiple control plane machines are marked for replacement, the Operator protects etcd health during replacement by repeating this replacement process one machine at a time until it has replaced each machine.

12.3.1.2. Manual updates to the control plane configuration

You can use the OnDelete update strategy to propagate changes to your control plane configuration by replacing machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly.

For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane.

If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously. The Operator maintains etcd health by preventing more than one machine from being removed from the control plane at a time.

12.3.2. Replacing a control plane machine

To replace a control plane machine in a cluster that has a control plane machine set, you delete the machine manually. The control plane machine set replaces the deleted machine with one using the specification in the control plane machine set custom resource (CR).

Prerequisites

  • If your cluster runs on Red Hat OpenStack Platform (RHOSP) and you need to evacuate a compute server, such as for an upgrade, you must disable the RHOSP compute node that the machine runs on by running the following command:

    $ openstack compute service set <target_node_host_name> nova-compute --disable

    For more information, see Preparing to migrate in the RHOSP documentation.

Procedure

  1. List the control plane machines in your cluster by running the following command:

    $ oc get machines \
      -l machine.openshift.io/cluster-api-machine-role==master \
      -n openshift-machine-api
  2. Delete a control plane machine by running the following command:

    $ oc delete machine \
      -n openshift-machine-api \
      <control_plane_machine_name> 1
    1
    Specify the name of the control plane machine to delete.
    Note

    If you delete multiple control plane machines, the control plane machine set replaces them according to the configured update strategy:

    • For clusters that use the default RollingUpdate update strategy, the Operator replaces one machine at a time until each machine is replaced.
    • For clusters that are configured to use the OnDelete update strategy, the Operator creates all of the required replacement machines simultaneously.

    Both strategies maintain etcd health during control plane machine replacement.
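
Verification

  • Optionally, watch the control plane machines until the replacement machine reaches the Running phase by adding the --watch flag to the listing command from the first step, for example:

    $ oc get machines \
      -l machine.openshift.io/cluster-api-machine-role==master \
      -n openshift-machine-api \
      --watch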

12.3.3. Additional resources

12.4. Control plane machine set configuration

This example YAML snippet shows the base structure for a control plane machine set custom resource (CR).

12.4.1. Sample YAML for a control plane machine set custom resource

The base of the ControlPlaneMachineSet CR is structured the same way for all platforms.

Sample ControlPlaneMachineSet CR YAML file

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster 1
  namespace: openshift-machine-api
spec:
  replicas: 3 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster_id> 3
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  state: Active 4
  strategy:
    type: RollingUpdate 5
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: <platform> 6
        <platform_failure_domains> 7
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <cluster_id>
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
      spec:
        providerSpec:
          value:
            <platform_provider_spec> 8

1
Specifies the name of the ControlPlaneMachineSet CR, which is cluster. Do not change this value.
2
Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3. Horizontal scaling is not supported. Do not change this value.
3
Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
4
Specifies the state of the Operator. When the state is Inactive, the Operator is not operational. You can activate the Operator by setting the value to Active.
Important

Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets".

5
Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate. The default value is RollingUpdate. For more information about update strategies, see "Updating the control plane configuration".
6
Specifies the cloud provider platform name. Do not change this value.
7
Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider.
8
Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider.

12.4.2. Provider-specific configuration options

The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set manifests are provider specific. For provider-specific configuration options for your cluster, see the following resources:

12.5. Configuration options for control plane machines

12.5.1. Control plane configuration options for Amazon Web Services

You can change the configuration of your Amazon Web Services (AWS) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.1.1. Sample YAML for configuring Amazon Web Services clusters

The following example YAML snippets show provider specification and failure domain configurations for an AWS cluster.

12.5.1.1.1. Sample AWS provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
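
Example output (the value is specific to each cluster; the one shown here is purely illustrative)

mycluster-2dhqz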

Sample AWS providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            ami:
              id: ami-<ami_id_string> 1
            apiVersion: machine.openshift.io/v1beta1
            blockDevices:
            - ebs: 2
                encrypted: true
                iops: 0
                kmsKey:
                  arn: ""
                volumeSize: 120
                volumeType: gp3
            credentialsSecret:
              name: aws-cloud-credentials 3
            deviceIndex: 0
            iamInstanceProfile:
              id: <cluster_id>-master-profile 4
            instanceType: m6i.xlarge 5
            kind: AWSMachineProviderConfig 6
            loadBalancers: 7
            - name: <cluster_id>-int
              type: network
            - name: <cluster_id>-ext
              type: network
            metadata:
              creationTimestamp: null
            metadataServiceOptions: {}
            placement: 8
              region: <region> 9
              availabilityZone: "" 10
              tenancy: 11
            securityGroups:
            - filters:
              - name: tag:Name
                values:
                - <cluster_id>-master-sg 12
            subnet: {} 13
            userDataSecret:
              name: master-user-data 14

1
Specifies the Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region.
2
Specifies the configuration of an encrypted EBS volume.
3
Specifies the secret name for the cluster. Do not change this value.
4
Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value.
5
Specifies the AWS instance type for the control plane.
6
Specifies the cloud provider platform type. Do not change this value.
7
Specifies the internal (int) and external (ext) load balancers for the cluster.
Note

You can omit the external (ext) load balancer parameters on private OpenShift Container Platform clusters.

8
Specifies where to create the control plane instance in AWS.
9
Specifies the AWS region for the cluster.
10
This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.
11
Specifies the AWS Dedicated Instance configuration for the control plane. For more information, see AWS documentation about Dedicated Instances. The following values are valid:
  • default: The Dedicated Instance runs on shared hardware.
  • dedicated: The Dedicated Instance runs on single-tenant hardware.
  • host: The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control.
12
Specifies the security group for the control plane machines.
13
This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.
Note

If the failure domain configuration does not specify a value, the value in the provider specification is used. Configuring a subnet in the failure domain overwrites the subnet value in the provider specification.

14
Specifies the control plane user data secret. Do not change this value.
12.5.1.1.2. Sample AWS failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone (AZ). The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.

Sample AWS failure domain values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        aws:
        - placement:
            availabilityZone: <aws_zone_a> 1
          subnet: 2
            filters:
            - name: tag:Name
              values:
              - <cluster_id>-private-<aws_zone_a> 3
            type: Filters 4
        - placement:
            availabilityZone: <aws_zone_b> 5
          subnet:
            filters:
            - name: tag:Name
              values:
              - <cluster_id>-private-<aws_zone_b> 6
            type: Filters
        platform: AWS 7
# ...

1
Specifies an AWS availability zone for the first failure domain.
2
Specifies a subnet configuration. In this example, the subnet type is Filters, so there is a filters stanza.
3
Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone.
4
Specifies the subnet type. The allowed values are: ARN, Filters and ID. The default value is Filters.
5
Specifies an AWS availability zone for an additional failure domain.
6
Specifies the subnet name for the additional failure domain, using the infrastructure ID and the AWS availability zone.
7
Specifies the cloud provider platform name. Do not change this value.

12.5.1.2. Enabling Amazon Web Services features for control plane machines

You can enable features by updating values in the control plane machine set.

12.5.1.2.1. Restricting the API server to private

After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Have access to the web console as a user with admin privileges.

Procedure

  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component:

      • For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
    2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.
  2. Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource:

    # ...
    providerSpec:
      value:
    # ...
        loadBalancers:
        - name: lk4pj-ext 1
          type: network 2
        - name: lk4pj-int
          type: network
    # ...
    1
    Delete the name value for the external load balancer, which ends in -ext.
    2
    Delete the type value for the external load balancer.
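
    After you delete those lines and save the resource, only the internal load balancer entry remains. A minimal sketch of the resulting section:

    # ...
    providerSpec:
      value:
    # ...
        loadBalancers:
        - name: lk4pj-int
          type: network
    # ...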
12.5.1.2.2. Changing the Amazon Web Services instance type by using a control plane machine set

You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR).

Prerequisites

  • Your AWS cluster uses a control plane machine set.

Procedure

  1. Edit the following line under the providerSpec field:

    providerSpec:
      value:
        ...
        instanceType: <compatible_aws_instance_type> 1
    1
    Specify a larger AWS instance type with the same base as the previous selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge.
  2. Save your changes.
12.5.1.2.3. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets

You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group.

EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group.

Prerequisites

  • You created a placement group in the AWS console.

    Note

    Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. The control plane machine set spreads the control plane machines across multiple failure domains when possible. To use placement groups for the control plane, you must use a placement group type that can span multiple Availability Zones.

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following lines under the providerSpec field:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              instanceType: <supported_instance_type> 1
              networkInterfaceType: EFA 2
              placement:
                availabilityZone: <zone> 3
                region: <region> 4
              placementGroupName: <placement_group> 5
              placementGroupPartition: <placement_group_partition_number> 6
    # ...
    1
    Specify an instance type that supports EFAs.
    2
    Specify the EFA network interface type.
    3
    Specify the zone, for example, us-east-1a.
    4
    Specify the region, for example, us-east-1.
    5
    Specify the name of the existing AWS placement group to deploy machines in.
    6
    Optional: Specify the partition number of the existing AWS placement group to deploy machines in.

Verification

  • In the AWS console, find a machine that the machine set created and verify the following in the machine properties:

    • The placement group field has the value that you specified for the placementGroupName parameter in the machine set.
    • The partition number field has the value that you specified for the placementGroupPartition parameter in the machine set.
    • The interface type field indicates that it uses an EFA.
12.5.1.2.4. Machine set options for the Amazon EC2 Instance Metadata Service

You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.

Note

Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later.

Important

Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.

12.5.1.2.4.1. Configuring IMDS by using machine sets

You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.

Prerequisites

  • To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later.

Procedure

  • Add or edit the following lines under the providerSpec field:

    providerSpec:
      value:
        metadataServiceOptions:
          authentication: Required 1
    1
    To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
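
    If you want to spot-check the setting on a resulting instance and have the AWS CLI configured, one option is to query the instance metadata options; HttpTokens is required when IMDSv2 is enforced and optional otherwise. The <instance_id> value is a placeholder:

    $ aws ec2 describe-instances \
      --instance-ids <instance_id> \
      --query 'Reservations[].Instances[].MetadataOptions.HttpTokens'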
12.5.1.2.5. Machine sets that deploy machines as Dedicated Instances

You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.

Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.

12.5.1.2.5.1. Creating Dedicated Instances by using machine sets

You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.

Procedure

  • Specify a dedicated tenancy under the providerSpec field:

    providerSpec:
      value:
        placement:
          tenancy: dedicated

12.5.2. Control plane configuration options for Microsoft Azure

You can change the configuration of your Microsoft Azure control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.2.1. Sample YAML for configuring Microsoft Azure clusters

The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster.

12.5.2.1.1. Sample Azure provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Sample Azure providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            acceleratedNetworking: true
            apiVersion: machine.openshift.io/v1beta1
            credentialsSecret:
              name: azure-cloud-credentials 1
              namespace: openshift-machine-api
            diagnostics: {}
            image: 2
              offer: ""
              publisher: ""
              resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3
              sku: ""
              version: ""
            internalLoadBalancer: <cluster_id>-internal 4
            kind: AzureMachineProviderSpec 5
            location: <region> 6
            managedIdentity: <cluster_id>-identity
            metadata:
              creationTimestamp: null
              name: <cluster_id>
            networkResourceGroup: <cluster_id>-rg
            osDisk: 7
              diskSettings: {}
              diskSizeGB: 1024
              managedDisk:
                storageAccountType: Premium_LRS
              osType: Linux
            publicIP: false
            publicLoadBalancer: <cluster_id> 8
            resourceGroup: <cluster_id>-rg
            subnet: <cluster_id>-master-subnet 9
            userDataSecret:
              name: master-user-data 10
            vmSize: Standard_D8s_v3
            vnet: <cluster_id>-vnet
            zone: "1" 11

1
Specifies the secret name for the cluster. Do not change this value.
2
Specifies the image details for your control plane machine set.
3
Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
4
Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs.
5
Specifies the cloud provider platform type. Do not change this value.
6
Specifies the region to place control plane machines on.
7
Specifies the disk configuration for the control plane.
8
Specifies the public load balancer for the control plane.
Note

You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing.

9
Specifies the subnet for the control plane.
10
Specifies the control plane user data secret. Do not change this value.
11
Specifies the zone configuration for clusters that use a single zone for all failure domains.
Note

If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it.

12.5.2.1.2. Sample Azure failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing Azure concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones.

Sample Azure failure domain values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        azure:
        - zone: "1" 1
        - zone: "2"
        - zone: "3"
        platform: Azure 2
# ...

1
Each instance of zone specifies an Azure availability zone for a failure domain.
Note

If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration.

2
Specifies the cloud provider platform name. Do not change this value.

12.5.2.2. Enabling Microsoft Azure features for control plane machines

You can enable features by updating values in the control plane machine set.

12.5.2.2.1. Restricting the API server to private

After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Have access to the web console as a user with admin privileges.

Procedure

  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component.
    2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.
  2. Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource:

    # ...
    providerSpec:
      value:
    # ...
        loadBalancers:
        - name: lk4pj-ext 1
          type: network 2
        - name: lk4pj-int
          type: network
    # ...
    1
    Delete the name value for the external load balancer, which ends in -ext.
    2
    Delete the type value for the external load balancer.
12.5.2.2.2. Using the Azure Marketplace offering

You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:

  • While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
  • The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Important

Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances.

Prerequisites

  • You have installed the Azure CLI client (az).
  • Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.

Procedure

  1. Display all of the available OpenShift Container Platform images by running one of the following commands:

    • North America:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table

      Example output

      Offer          Publisher       Sku                 Urn                                                             Version
      -------------  --------------  ------------------  --------------------------------------------------------------  -----------------
      rh-ocp-worker  RedHat          rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
      rh-ocp-worker  RedHat          rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409

    • EMEA:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table

      Example output

      Offer          Publisher       Sku                 Urn                                                                     Version
      -------------  --------------  ------------------  --------------------------------------------------------------          -----------------
      rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
      rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409

    Note

    Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process.

  2. Inspect the image for your offer by running one of the following commands:

    • North America:

      $ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  3. Review the terms of the offer by running one of the following commands:

    • North America:

      $ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  4. Accept the terms of the offering by running one of the following commands:

    • North America:

      $ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  5. Record the image details of your offer, specifically the values for publisher, offer, sku, and version.
  6. Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:

    Sample providerSpec image values for Azure Marketplace machines

    providerSpec:
      value:
        image:
          offer: rh-ocp-worker
          publisher: redhat
          resourceID: ""
          sku: rh-ocp-worker
          type: MarketplaceWithPlan
          version: 413.92.2023101700

12.5.2.2.3. Enabling Azure boot diagnostics

You can enable boot diagnostics on Azure machines that your machine set creates.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  • Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

    • For an Azure Managed storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: AzureManaged 1
      1
      Specifies an Azure Managed storage account.
    • For an Azure Unmanaged storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: CustomerManaged 1
            customerManaged:
              storageAccountURI: https://<storage-account>.blob.core.windows.net 2
      1
      Specifies an Azure Unmanaged storage account.
      2
      Replace <storage-account> with the name of your storage account.
      Note

      Only the Azure Blob Storage data service is supported.

Verification

  • On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
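
  • Alternatively, if you have the Azure CLI (az) installed and are logged in, you can retrieve the serial console log for a machine with a command similar to the following, where <vm_name> and <resource_group> are placeholders for your environment:

    $ az vm boot-diagnostics get-boot-log \
      --name <vm_name> \
      --resource-group <resource_group>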
12.5.2.2.4. Machine sets that deploy machines with ultra disks as data disks

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

12.5.2.2.4.1. Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command:

    $ oc -n openshift-machine-api \
    get secret <role>-user-data \ 1
    --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2
    1
    Replace <role> with master.
    2
    Specify userData.txt as the name of the file that stores the extracted user data.
  2. In a text editor, open the userData.txt file and locate the final } character in the file.

    1. On the immediately preceding line, add a ,.
    2. Create a new line after the , and add the following configuration details:

      "storage": {
        "disks": [ 1
          {
            "device": "/dev/disk/azure/scsi1/lun0", 2
            "partitions": [ 3
              {
                "label": "lun0p1", 4
                "sizeMiB": 1024, 5
                "startMiB": 0
              }
            ]
          }
        ],
        "filesystems": [ 6
          {
            "device": "/dev/disk/by-partlabel/lun0p1",
            "format": "xfs",
            "path": "/var/lib/lun0p1"
          }
        ]
      },
      "systemd": {
        "units": [ 7
          {
            "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8
            "enabled": true,
            "name": "var-lib-lun0p1.mount"
          }
        ]
      }
      1
      The configuration details for the disk that you want to attach to a node as an ultra disk.
      2
      Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set.
      3
      The configuration details for a new partition on the disk.
      4
      Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0.
      5
      Specify the total size in MiB of the partition.
      6
      Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition.
      7
      Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each.
      8
      For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device.
  3. Extract the disabling template value to a file called disableTemplating.txt by running the following command:

    $ oc -n openshift-machine-api get secret <role>-user-data \ 1
    --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
    1
    Replace <role> with master.
  4. Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command:

    $ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1
    --from-file=userData=userData.txt \
    --from-file=disableTemplating=disableTemplating.txt
    1
    For <role>-user-data-x5, specify the name of the secret. Replace <role> with master.
  5. Edit your control plane machine set CR by running the following command:

    $ oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster
  6. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    spec:
      template:
        spec:
          metadata:
            labels:
              disk: ultrassd 1
          providerSpec:
            value:
              ultraSSDCapability: Enabled 2
              dataDisks: 3
              - nameSuffix: ultrassd
                lun: 0
                diskSizeGB: 4
                deletionPolicy: Delete
                cachingType: None
                managedDisk:
                  storageAccountType: UltraSSD_LRS
              userDataSecret:
                name: <role>-user-data-x5 4
    1
    Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2 3
    These lines enable the use of ultra disks. For dataDisks, include the entire stanza.
    4
    Specify the user data secret created earlier. Replace <role> with master.
  7. Save your changes.

    • For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration.
    • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
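
    Example output excerpt (illustrative; device names depend on your instance, but the 4 GiB ultra disk configured in this procedure appears with a 1 GiB partition mounted at /var/lib/lun0p1)

    NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    ...
    sdb        8:16   0    4G  0 disk
    └─sdb1     8:17   0    1G  0 part /var/lib/lun0p1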

Next steps

  • To use an ultra disk on the control plane, reconfigure your workload to use the control plane’s ultra disk mount point.
12.5.2.2.4.2. Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

12.5.2.2.4.2.1. Incorrect ultra disk configuration

If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.

For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
  • To resolve this issue, verify that your machine set configuration is correct.
12.5.2.2.4.2.2. Unsupported disk parameters

If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:

failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."
  • To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.
12.5.2.2.4.2.3. Unable to delete disks

If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. If you want to remove the orphaned disks, you must delete them manually.

12.5.2.2.5. Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set.

Procedure

  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    providerSpec:
      value:
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS
12.5.2.2.6. Configuring trusted launch for Azure virtual machines by using machine sets
Important

Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform 4.17 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.

Note

Some feature combinations result in an invalid configuration.

Table 12.2. UEFI feature combination compatibility
Secure Boot [1]    vTPM [2]     Valid configuration

Enabled            Enabled      Yes
Enabled            Disabled     Yes
Enabled            Omitted      Yes
Disabled           Enabled      Yes
Omitted            Enabled      Yes
Disabled           Disabled     No
Omitted            Disabled     No
Omitted            Omitted      No

  1. Using the secureBoot field.
  2. Using the virtualizedTrustedPlatformModule field.

For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines.

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following section under the providerSpec field to provide a valid configuration:

    Sample valid configuration with UEFI Secure Boot and vTPM enabled

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            providerSpec:
              value:
                securityProfile:
                  settings:
                    securityType: TrustedLaunch 1
                    trustedLaunch:
                      uefiSettings: 2
                        secureBoot: Enabled 3
                        virtualizedTrustedPlatformModule: Enabled 4
    # ...

    1
    Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations.
    2
    Specifies which UEFI security features to use. This section is required for all valid configurations.
    3
    Enables UEFI Secure Boot.
    4
    Enables the use of a vTPM.

Verification

  • On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured.
12.5.2.2.7. Configuring Azure confidential virtual machines by using machine sets
Important

Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform 4.17 supports Azure confidential virtual machines (VMs).

Note

Confidential VMs are currently not supported on 64-bit ARM architectures.

By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.

Warning

Not all instance types support confidential VMs. Do not change the instance type for a control plane machine set that is configured to use confidential VMs to a type that is incompatible. Using an incompatible instance type can cause your cluster to become unstable.

For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines.

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following section under the providerSpec field:

    Sample configuration

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              osDisk:
                # ...
                managedDisk:
                  securityProfile: 1
                    securityEncryptionType: VMGuestStateOnly 2
                # ...
              securityProfile: 3
                settings:
                    securityType: ConfidentialVM 4
                    confidentialVM:
                      uefiSettings: 5
                        secureBoot: Disabled 6
                        virtualizedTrustedPlatformModule: Enabled 7
              vmSize: Standard_DC16ads_v5 8
    # ...

    1
    Specifies security profile settings for the managed disk when using a confidential VM.
    2
    Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM.
    3
    Specifies security profile settings for the confidential VM.
    4
    Enables the use of confidential VMs. This value is required for all valid configurations.
    5
    Specifies which UEFI security features to use. This section is required for all valid configurations.
    6
    Disables UEFI Secure Boot.
    7
    Enables the use of a vTPM.
    8
    Specifies an instance type that supports confidential VMs.

Verification

  • On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.
12.5.2.2.8. Accelerated Networking for Microsoft Azure VMs

Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation.

12.5.2.2.8.1. Limitations

Consider the following limitations when deciding whether to use Accelerated Networking:

  • Accelerated Networking is only supported on clusters where the Machine API is operational.
  • Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.

12.5.2.2.9. Configuring Capacity Reservation by using machine sets

OpenShift Container Platform version 4.17 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters.

You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds.

For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation.

Note

You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the previous machine set deployed.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You installed the OpenShift CLI (oc).
  • You created a Capacity Reservation group.

    For more information, see the Microsoft Azure documentation Create a Capacity Reservation. A sample CLI invocation follows these prerequisites.
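
If you have not yet created a Capacity Reservation group, the following Azure CLI command is a minimal sketch with placeholder names; verify the exact command and options against the Microsoft Azure documentation for your CLI version.

$ az capacity reservation group create \
  --name <capacity_reservation_group> \
  --resource-group <resource_group_name> \
  --location <region>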

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following section under the providerSpec field:

    Sample configuration

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            providerSpec:
              value:
                capacityReservationGroupID: <capacity_reservation_group> 1
    # ...

    1
    Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on.

Verification

  • To verify machine deployment, list the machines that the machine set created by running the following command:

    $ oc get machine \
      -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master

    In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation.
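
    Optionally, you can confirm the Capacity Reservation group ID directly from a machine object. The following jsonpath query is a suggested additional check, not part of the official verification steps:

    $ oc get machine <machine_name> \
      -n openshift-machine-api \
      -o jsonpath='{.spec.providerSpec.value.capacityReservationGroupID}{"\n"}'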

12.5.2.2.9.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster

You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster where the Machine API is operational.

Procedure

  • Add the following to the providerSpec field:

    providerSpec:
      value:
        acceleratedNetworking: true 1
        vmSize: <azure-vm-size> 2
    1
    This line enables Accelerated Networking.
    2
    Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.

Verification

  • On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.

12.5.3. Control plane configuration options for Google Cloud Platform

You can change the configuration of your Google Cloud Platform (GCP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.3.1. Sample YAML for configuring Google Cloud Platform clusters

The following example YAML snippets show provider specification and failure domain configurations for a GCP cluster.

12.5.3.1.1. Sample GCP provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Image path

The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \
  get ControlPlaneMachineSet/cluster

Sample GCP providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1beta1
            canIPForward: false
            credentialsSecret:
              name: gcp-cloud-credentials 1
            deletionProtection: false
            disks:
            - autoDelete: true
              boot: true
              image: <path_to_image> 2
              labels: null
              sizeGb: 200
              type: pd-ssd
            kind: GCPMachineProviderSpec 3
            machineType: e2-standard-4
            metadata:
              creationTimestamp: null
            metadataServiceOptions: {}
            networkInterfaces:
            - network: <cluster_id>-network
              subnetwork: <cluster_id>-master-subnet
            projectID: <project_name> 4
            region: <region> 5
            serviceAccounts: 6
            - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com
              scopes:
              - https://www.googleapis.com/auth/cloud-platform
            shieldedInstanceConfig: {}
            tags:
            - <cluster_id>-master
            targetPools:
            - <cluster_id>-api
            userDataSecret:
              name: master-user-data 7
            zone: "" 8

1
Specifies the secret name for the cluster. Do not change this value.
2
Specifies the path to the image that was used to create the disk.

To use a GCP Marketplace image, specify the offer to use:

  • OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736
  • OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736
  • OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736
3
Specifies the cloud provider platform type. Do not change this value.
4
Specifies the name of the GCP project that you use for your cluster.
5
Specifies the GCP region for the cluster.
6
Specifies a single service account. Multiple service accounts are not supported.
7
Specifies the control plane user data secret. Do not change this value.
8
This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.
12.5.3.1.2. Sample GCP failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use.

Sample GCP failure domain values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        gcp:
        - zone: <gcp_zone_a> 1
        - zone: <gcp_zone_b> 2
        - zone: <gcp_zone_c>
        - zone: <gcp_zone_d>
        platform: GCP 3
# ...

1
Specifies a GCP zone for the first failure domain.
2
Specifies an additional failure domain. Further failure domains are added the same way.
3
Specifies the cloud provider platform name. Do not change this value.

12.5.3.2. Enabling Google Cloud Platform features for control plane machines

You can enable features by updating values in the control plane machine set.

12.5.3.2.1. Configuring persistent disk types by using machine sets

You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file.

For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks.

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following line under the providerSpec field:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              disks:
                type: pd-ssd 1
    1
    Control plane nodes must use the pd-ssd disk type.

Verification

  • Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type.
12.5.3.2.2. Configuring Confidential VM by using machine sets

By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys.

For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM.

Note

Confidential VMs are currently not supported on 64-bit ARM architectures.

Important

OpenShift Container Platform 4.17 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP).

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following section under the providerSpec field:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              confidentialCompute: Enabled 1
              onHostMaintenance: Terminate 2
              machineType: n2d-standard-8 3
    ...
    1
    Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled.
    2
    Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate, which stops the VM. Confidential VM does not support live VM migration.
    3
    Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types.

Verification

  • On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured.
12.5.3.2.3. Configuring Shielded VM options by using machine sets

By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys.

For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM.

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.
  2. Edit the following section under the providerSpec field:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              shieldedInstanceConfig: 1
                integrityMonitoring: Enabled 2
                secureBoot: Disabled 3
                virtualizedTrustedPlatformModule: Enabled 4
    # ...
    1
    In this section, specify any Shielded VM options that you want.
    2
    Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled.
    Note

    When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM).

    3
    Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled.
    4
    Specify whether vTPM is enabled. Valid values are Disabled or Enabled.

Verification

  • Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured.
12.5.3.2.4. Enabling customer-managed encryption keys for a machine set

Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.

You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.

Note

If you do not use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead, and you must grant it permission to access the keys. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern.
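
If you have not yet created the key, the following gcloud commands are a minimal sketch with placeholder names; adjust the location and names for your environment.

$ gcloud kms keyrings create <key_ring_name> \
  --location <key_ring_location>

$ gcloud kms keys create <key_name> \
  --keyring <key_ring_name> \
  --location <key_ring_location> \
  --purpose encryption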

Procedure

  1. To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location:

    $ gcloud kms keys add-iam-policy-binding <key_name> \
      --keyring <key_ring_name> \
      --location <key_ring_location> \
      --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com” \
      --role roles/cloudkms.cryptoKeyEncrypterDecrypter
  2. Configure the encryption key under the providerSpec field in your machine set YAML file. For example:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              disks:
              - type:
                encryptionKey:
                  kmsKey:
                    name: machine-encryption-key 1
                    keyRing: openshift-encryption-ring 2
                    location: global 3
                    projectID: openshift-gcp-project 4
                  kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5
    1
    The name of the customer-managed encryption key that is used for the disk encryption.
    2
    The name of the KMS key ring that the KMS key belongs to.
    3
    The GCP location in which the KMS key ring exists.
    4
    Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the project ID of the project in which the machine set was created is used.
    5
    Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.

    When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key.

12.5.4. Control plane configuration options for Nutanix

You can change the configuration of your Nutanix control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.4.1. Sample YAML for configuring Nutanix clusters

The following example YAML snippet shows a provider specification configuration for a Nutanix cluster.

12.5.4.1.1. Sample Nutanix provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Sample Nutanix providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1
            bootType: "" 1
            categories: 2
            - key: <category_name>
              value: <category_value>
            cluster: 3
              type: uuid
              uuid: <cluster_uuid>
            credentialsSecret:
              name: nutanix-credentials 4
            image: 5
              name: <cluster_id>-rhcos
              type: name
            kind: NutanixMachineProviderConfig 6
            memorySize: 16Gi 7
            metadata:
              creationTimestamp: null
            project: 8
              type: name
              name: <project_name>
            subnets: 9
            - type: uuid
              uuid: <subnet_uuid>
            systemDiskSize: 120Gi 10
            userDataSecret:
              name: master-user-data 11
            vcpuSockets: 8 12
            vcpusPerSocket: 1 13

1
Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.
Note

You must use the Legacy boot type in OpenShift Container Platform 4.17.

2
Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management.
3
Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.
Note

Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations.

If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.

4
Specifies the secret name for the cluster. Do not change this value.
5
Specifies the image that was used to create the disk.
6
Specifies the cloud provider platform type. Do not change this value.
7
Specifies the memory allocated for the control plane machines.
8
Specifies the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza.
9
Specifies a subnet configuration. In this example, the subnet type is uuid, so there is a uuid stanza.
Note

Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations.

If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.

10
Specifies the VM disk size for the control plane machines.
11
Specifies the control plane user data secret. Do not change this value.
12
Specifies the number of vCPU sockets allocated for the control plane machines.
13
Specifies the number of vCPUs for each control plane vCPU socket.
12.5.4.1.2. Failure domains for Nutanix clusters

To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required:

  1. Modify the cluster infrastructure custom resource (CR).
  2. Modify the cluster control plane machine set CR.
  3. Modify or replace the compute machine set CRs.

For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content.
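
For the control plane machine set CR, the failureDomains stanza references failure domains by the names that are defined in the cluster infrastructure CR. The following is a sketch with placeholder names; see the referenced content for the complete, coordinated changes across all three resource types.

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
# ...
spec:
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: Nutanix
        nutanix:
        - name: <failure_domain_name_1>
        - name: <failure_domain_name_2>
        - name: <failure_domain_name_3>
# ...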

12.5.5. Control plane configuration options for Red Hat OpenStack Platform

You can change the configuration of your Red Hat OpenStack Platform (RHOSP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.5.1. Sample YAML for configuring Red Hat OpenStack Platform (RHOSP) clusters

The following example YAML snippets show provider specification and failure domain configurations for an RHOSP cluster.

12.5.5.1.1. Sample RHOSP provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample OpenStack providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1alpha1
            cloudName: openstack
            cloudsSecret:
              name: openstack-cloud-credentials 1
              namespace: openshift-machine-api
            flavor: m1.xlarge 2
            image: ocp1-2g2xs-rhcos
            kind: OpenstackProviderSpec 3
            metadata:
              creationTimestamp: null
            networks:
            - filter: {}
              subnets:
              - filter:
                  name: ocp1-2g2xs-nodes
                  tags: openshiftClusterID=ocp1-2g2xs
            securityGroups:
            - filter: {}
              name: ocp1-2g2xs-master 4
            serverGroupName: ocp1-2g2xs-master
            serverMetadata:
              Name: ocp1-2g2xs-master
              openshiftClusterID: ocp1-2g2xs
            tags:
            - openshiftClusterID=ocp1-2g2xs
            trunk: true
            userDataSecret:
              name: master-user-data

1
The secret name for the cluster. Do not change this value.
2
The RHOSP flavor type for the control plane.
3
The RHOSP cloud provider platform type. Do not change this value.
4
The control plane machines security group.
12.5.5.1.2. Sample RHOSP failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing Red Hat OpenStack Platform (RHOSP) concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones.

Sample OpenStack failure domain values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: OpenStack
        openstack:
        - availabilityZone: nova-az0
          rootVolume:
            availabilityZone: cinder-az0
        - availabilityZone: nova-az1
          rootVolume:
            availabilityZone: cinder-az1
        - availabilityZone: nova-az2
          rootVolume:
            availabilityZone: cinder-az2
# ...

12.5.5.2. Enabling Red Hat OpenStack Platform (RHOSP) features for control plane machines

You can enable features by updating values in the control plane machine set.

12.5.5.2.1. Changing the RHOSP compute flavor by using a control plane machine set

You can change the Red Hat OpenStack Platform (RHOSP) compute service (Nova) flavor that your control plane machines use by updating the specification in the control plane machine set custom resource.

In RHOSP, flavors define the compute, memory, and storage capacity of computing instances. By increasing or decreasing the flavor size, you can scale your control plane vertically.

Prerequisites

  • Your RHOSP cluster uses a control plane machine set.

Procedure

  1. Edit the following line under the providerSpec field:

    providerSpec:
      value:
    # ...
        flavor: m1.xlarge 1
    1
    Specify an RHOSP flavor type that has the same base as the existing selection. For example, you can change a flavor such as m1.xlarge to a larger flavor that is defined in your RHOSP environment. You can choose larger or smaller flavors depending on your vertical scaling needs.
  2. Save your changes.

After you save your changes, machines are replaced with ones that use the flavor you chose.

12.5.6. Control plane configuration options for VMware vSphere

You can change the configuration of your VMware vSphere control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

12.5.6.1. Sample YAML for configuring VMware vSphere clusters

The following example YAML snippets show provider specification and failure domain configurations for a vSphere cluster.

12.5.6.1.1. Sample VMware vSphere provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample vSphere providerSpec values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1beta1
            credentialsSecret:
              name: vsphere-cloud-credentials 1
            diskGiB: 120 2
            kind: VSphereMachineProviderSpec 3
            memoryMiB: 16384 4
            metadata:
              creationTimestamp: null
            network: 5
              devices:
              - networkName: <vm_network_name>
            numCPUs: 4 6
            numCoresPerSocket: 4 7
            snapshot: ""
            template: <vm_template_name> 8
            userDataSecret:
              name: master-user-data 9
            workspace: 10
              datacenter: <vcenter_data_center_name> 11
              datastore: <vcenter_datastore_name> 12
              folder: <path_to_vcenter_vm_folder> 13
              resourcePool: <vsphere_resource_pool> 14
              server: <vcenter_server_ip> 15

1
Specifies the secret name for the cluster. Do not change this value.
2
Specifies the VM disk size for the control plane machines.
3
Specifies the cloud provider platform type. Do not change this value.
4
Specifies the memory allocated for the control plane machines.
5
Specifies the network on which the control plane is deployed.
Note

If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.

6
Specifies the number of CPUs allocated for the control plane machines.
7
Specifies the number of cores for each control plane CPU.
8
Specifies the vSphere VM template to use, such as user-5ddjd-rhcos.
Note

If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.

9
Specifies the control plane user data secret. Do not change this value.
10
Specifies the workspace details for the control plane.
Note

If the cluster is configured to use a failure domain, these parameters are configured in the failure domain. If you specify these values in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores them.

11
Specifies the vCenter data center for the control plane.
12
Specifies the vCenter datastore for the control plane.
13
Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
14
Specifies the vSphere resource pool for your VMs.
15
Specifies the vCenter server IP or fully qualified domain name.
12.5.6.1.2. Sample VMware vSphere failure domain configuration

On VMware vSphere infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), infrastructures.config.openshift.io, defines failure domains for your cluster. The providerSpec in the ControlPlaneMachineSet custom resource (CR) specifies names for failure domains that the control plane machine set uses to ensure control plane nodes are deployed to the appropriate failure domain. A failure domain is an infrastructure resource made up of a control plane machine set, a vCenter data center, vCenter datastore, and a network.

By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on separate clusters or data centers. A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure.

Note

If you modify the ProviderSpec configuration in the ControlPlaneMachineSet CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure.

Sample VMware vSphere failure domain values

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains: 1
        platform: VSphere
        vsphere: 2
        - name: <failure_domain_name1>
        - name: <failure_domain_name2>
# ...

1
Specifies the vCenter location for OpenShift Container Platform cluster nodes.
2
Specifies failure domains by name for the control plane machine set.
Important

Each name field value in this section must match the corresponding value in the failureDomains.name field of the cluster-wide infrastructure CRD. You can find the value of the failureDomains.name field by running the following command:

$ oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}

The name field is the only supported failure domain field that you can specify in the ControlPlaneMachineSet CR.

For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on vSphere."

12.5.6.2. Enabling VMware vSphere features for control plane machines

You can enable features by updating values in the control plane machine set.

12.5.6.2.1. Adding tags to machines by using machine sets

OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster.

In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions.

Prerequisites

  • You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions.
  • You have access to the VMware vCenter console associated with your cluster.
  • You have created a tag in the vCenter console.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Use the vCenter console to find the tag ID for any tag that you want to add to your machines:

    1. Log in to the vCenter console.
    2. From the Home menu, click Tags & Custom Attributes.
    3. Select a tag that you want to add to your machines.
    4. Use the browser URL for the tag that you select to identify the tag ID.

      Example tag URL

      https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions

      Example tag ID

      urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL

  2. In a text editor, open the YAML file for an existing machine set or create a new one.
  3. Edit the following lines under the providerSpec field:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              tagIDs: 1
              - <tag_id_value> 2
    # ...
    1
    Specify a list of up to 10 tags to add to the machines that this machine set provisions.
    2
    Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL.

12.6. Control plane resiliency and recovery

You can use the control plane machine set to improve the resiliency of the control plane for your OpenShift Container Platform cluster.

12.6.1. High availability and fault tolerance with failure domains

When possible, the control plane machine set spreads the control plane machines across multiple failure domains. This configuration provides high availability and fault tolerance within the control plane. This strategy can help protect the control plane when issues arise within the infrastructure provider.

12.6.1.1. Failure domain platform support and configuration

The control plane machine set concept of a failure domain is analogous to existing concepts on cloud providers. Not all platforms support the use of failure domains.

Table 12.3. Failure domain support matrix
Cloud provider                          Support for failure domains   Provider nomenclature

Amazon Web Services (AWS)               X                             Availability Zone (AZ)
Google Cloud Platform (GCP)             X                             zone
Microsoft Azure                         X                             Azure availability zone
Nutanix                                 X                             failure domain
Red Hat OpenStack Platform (RHOSP)      X                             OpenStack Nova availability zones and OpenStack Cinder availability zones
VMware vSphere                          X                             failure domain mapped to a vSphere Zone [1]

  1. For more information, see "Regions and zones for a VMware vCenter".

The failure domain configuration in the control plane machine set custom resource (CR) is platform-specific. For more information about failure domain parameters in the CR, see the sample failure domain configuration for your provider.

12.6.1.2. Balancing control plane machines

The control plane machine set balances control plane machines across the failure domains that are specified in the custom resource (CR).

When possible, the control plane machine set uses each failure domain equally to ensure appropriate fault tolerance. If there are fewer failure domains than control plane machines, failure domains are selected for reuse alphabetically by name. For clusters with no failure domains specified, all control plane machines are placed within a single failure domain.

Some changes to the failure domain configuration cause the control plane machine set to rebalance the control plane machines. For example, if you add failure domains to a cluster with fewer failure domains than control plane machines, the control plane machine set rebalances the machines across all available failure domains.

12.6.2. Recovery of failed control plane machines

The Control Plane Machine Set Operator automates the recovery of control plane machines. When a control plane machine is deleted, the Operator creates a replacement with the configuration that is specified in the ControlPlaneMachineSet custom resource (CR).

For clusters that use control plane machine sets, you can configure a machine health check. The machine health check deletes unhealthy control plane machines so that they are replaced.

Important

If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1.

This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress.

If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish.
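
The following MachineHealthCheck resource is a minimal sketch of such a configuration; the name, selector labels, and timeout values are examples that you might need to adjust for your cluster.

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: control-plane-health
  namespace: openshift-machine-api
spec:
  maxUnhealthy: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  unhealthyConditions:
  - type: Ready
    status: "Unknown"
    timeout: "300s"
  - type: Ready
    status: "False"
    timeout: "300s"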

12.6.3. Quorum protection with machine lifecycle hooks

For OpenShift Container Platform clusters that use the Machine API Operator, the etcd Operator uses lifecycle hooks for the machine deletion phase to implement a quorum protection mechanism.

By using a preDrain lifecycle hook, the etcd Operator can control when the pods on a control plane machine are drained and removed. To protect etcd quorum, the etcd Operator prevents the removal of an etcd member until it migrates that member onto a new node within the cluster.

This mechanism allows the etcd Operator precise control over the members of the etcd quorum and allows the Machine API Operator to safely create and remove control plane machines without specific operational knowledge of the etcd cluster.

12.6.3.1. Control plane deletion with quorum protection processing order

When a control plane machine is replaced on a cluster that uses a control plane machine set, the cluster temporarily has four control plane machines. When the fourth control plane node joins the cluster, the etcd Operator starts a new etcd member on the replacement node. When the etcd Operator observes that the old control plane machine is marked for deletion, it stops the etcd member on the old node and promotes the replacement etcd member to join the quorum of the cluster.

The control plane machine Deleting phase proceeds in the following order:

  1. A control plane machine is slated for deletion.
  2. The control plane machine enters the Deleting phase.
  3. To satisfy the preDrain lifecycle hook, the etcd Operator takes the following actions:

    1. The etcd Operator waits until a fourth control plane machine is added to the cluster as an etcd member. This new etcd member has a state of Running, but it is not ready until it receives the full database update from the etcd leader.
    2. When the new etcd member receives the full database update, the etcd Operator promotes the new etcd member to a voting member and removes the old etcd member from the cluster.

    After this transition is complete, it is safe for the old etcd pod and its data to be removed, so the preDrain lifecycle hook is removed.

  4. The control plane machine status condition Drainable is set to True.
  5. The machine controller attempts to drain the node that is backed by the control plane machine.

    • If draining fails, Drained is set to False and the machine controller attempts to drain the node again.
    • If draining succeeds, Drained is set to True.
  6. The control plane machine status condition Drained is set to True.
  7. If no other Operators have added a preTerminate lifecycle hook, the control plane machine status condition Terminable is set to True.
  8. The machine controller removes the instance from the infrastructure provider.
  9. The machine controller deletes the Node object.

YAML snippet demonstrating the etcd quorum protection preDrain lifecycle hook

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  ...
spec:
  lifecycleHooks:
    preDrain:
    - name: EtcdQuorumOperator 1
      owner: clusteroperator/etcd 2
  ...

1
The name of the preDrain lifecycle hook.
2
The hook-implementing controller that manages the preDrain lifecycle hook.

12.7. Troubleshooting the control plane machine set

Use the information in this section to understand and recover from issues you might encounter.

12.7.1. Checking the control plane machine set custom resource state

You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR).

Procedure

  • Determine the state of the CR by running the following command:

    $ oc get controlplanemachineset.machine.openshift.io cluster \
      --namespace openshift-machine-api
    • A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required.
    • A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated.
    • A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR.
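
    For reference, the output for an active control plane machine set resembles the following; the exact columns and values depend on your cluster and OpenShift Container Platform version, and the values shown here are examples only:

    NAME      DESIRED   CURRENT   READY   UPDATED   UNAVAILABLE   STATE    AGE
    cluster   3         3         3       3         0             Active   5h37m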

Next steps

To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists.

  • If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster.
  • If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster.

12.7.2. Adding a missing Azure internal load balancer

The internalLoadBalancer parameter is required in both the ControlPlaneMachineSet and control plane Machine custom resources (CRs) for Azure. If this parameter is not preconfigured on your cluster, you must add it to both CRs.

For more information about where this parameter is located in the Azure provider specification, see the sample Azure provider specification. The placement in the control plane Machine CR is similar.
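
As a reference, the parameter appears under the providerSpec value alongside the other Azure network settings. The following is a sketch only; the value typically follows the <cluster_id>-internal naming that the installation program uses, but confirm the actual internal load balancer name for your cluster before adding it.

providerSpec:
  value:
    # ...
    internalLoadBalancer: <cluster_id>-internal
    # ...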

Procedure

  1. List the control plane machines in your cluster by running the following command:

    $ oc get machines \
      -l machine.openshift.io/cluster-api-machine-role==master \
      -n openshift-machine-api
  2. For each control plane machine, edit the CR by running the following command:

    $ oc edit machine <control_plane_machine_name>
  3. Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes.
  4. Edit your control plane machine set CR by running the following command:

    $ oc edit controlplanemachineset.machine.openshift.io cluster \
      -n openshift-machine-api
  5. Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes.

Next steps

  • For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration.
  • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

12.7.3. Recovering a degraded etcd Operator

Certain situations can cause the etcd Operator to become degraded.

For example, while performing remediation, the machine health check might delete a control plane machine that is hosting etcd. If the etcd member is not reachable at that time, the etcd Operator becomes degraded.

When the etcd Operator is degraded, manual intervention is required to force the Operator to remove the failed member and restore the cluster state.

Procedure

  1. List the control plane machines in your cluster by running the following command:

    $ oc get machines \
      -l machine.openshift.io/cluster-api-machine-role==master \
      -n openshift-machine-api \
      -o wide

    Any of the following conditions might indicate a failed control plane machine:

    • The STATE value is stopped.
    • The PHASE value is Failed.
    • The PHASE value is Deleting for more than ten minutes.
    Important

    Before continuing, ensure that your cluster has two healthy control plane machines. Performing the actions in this procedure on more than one control plane machine risks losing etcd quorum and can cause data loss.

    If you have lost the majority of your control plane hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure "Restoring to a previous cluster state" instead of this procedure.

  2. Edit the machine CR for the failed control plane machine by running the following command:

    $ oc edit machine <control_plane_machine_name>
  3. Remove the contents of the lifecycleHooks parameter from the failed control plane machine and save your changes.

    The etcd Operator removes the failed machine from the cluster and can then safely add new etcd members.

12.7.4. Upgrading clusters that run on RHOSP

For clusters that run on Red Hat OpenStack Platform (RHOSP) that were created with OpenShift Container Platform 4.13 or earlier, you might have to perform post-upgrade tasks before you can use control plane machine sets.

12.7.4.1. Configuring RHOSP clusters that have machines with root volume availability zones after an upgrade

For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true:

  • The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier.
  • The cluster infrastructure is installer-provisioned.
  • Machines were distributed across multiple availability zones.
  • Machines were configured to use root volumes for which block storage availability zones were not defined.

To understand why this procedure is necessary, see Solution #7024383.

Procedure

  1. Edit the provider spec for all control plane machines that match the environment. For example, to edit the machine master-0, enter the following command:

    $ oc edit machine/<cluster_id>-master-0 -n openshift-machine-api

    where:

    <cluster_id>
    Specifies the ID of the upgraded cluster.
  2. In the provider spec, set the value of the rootVolume.availabilityZone property to the name of the volume availability zone that you want to use.

    An example RHOSP provider spec

    providerSpec:
      value:
        apiVersion: machine.openshift.io/v1alpha1
        availabilityZone: az0
        cloudName: openstack
        cloudsSecret:
          name: openstack-cloud-credentials
          namespace: openshift-machine-api
        flavor: m1.xlarge
        image: rhcos-4.14
        kind: OpenstackProviderSpec
        metadata:
          creationTimestamp: null
        networks:
        - filter: {}
          subnets:
          - filter:
              name: refarch-lv7q9-nodes
              tags: openshiftClusterID=refarch-lv7q9
        rootVolume:
            availabilityZone: nova 1
            diskSize: 30
            sourceUUID: rhcos-4.12
            volumeType: fast-0
        securityGroups:
        - filter: {}
          name: refarch-lv7q9-master
        serverGroupName: refarch-lv7q9-master
        serverMetadata:
          Name: refarch-lv7q9-master
          openshiftClusterID: refarch-lv7q9
        tags:
        - openshiftClusterID=refarch-lv7q9
        trunk: true
        userDataSecret:
          name: master-user-data

    1
    Set the zone name as this value.
    Note

    If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration.

    In your RHOSP cluster, find the availability zone of the root volumes for your machines and use that as the value.

  3. Run the following command to retrieve information about the control plane machine set resource:

    $ oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api
  4. Run the following command to edit the resource:

    $ oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api
  5. For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster.

Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator.

12.7.4.2. Configuring RHOSP clusters that have control plane machines with availability zones after an upgrade

For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true:

  • The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier.
  • The cluster infrastructure is installer-provisioned.
  • Control plane machines were distributed across multiple compute availability zones.

To understand why this procedure is necessary, see Solution #7013893.

Procedure

  1. For the master-1 and master-2 control plane machines, open the provider specs for editing. For example, to edit the first machine, enter the following command:

    $ oc edit machine/<cluster_id>-master-1 -n openshift-machine-api

    where:

    <cluster_id>
    Specifies the ID of the upgraded cluster.
  2. For the master-1 and master-2 control plane machines, edit the value of the serverGroupName property in their provider specs to match that of the machine master-0.

    An example RHOSP provider spec

    providerSpec:
      value:
        apiVersion: machine.openshift.io/v1alpha1
        availabilityZone: az0
        cloudName: openstack
        cloudsSecret:
          name: openstack-cloud-credentials
          namespace: openshift-machine-api
        flavor: m1.xlarge
        image: rhcos-4.17
        kind: OpenstackProviderSpec
        metadata:
          creationTimestamp: null
        networks:
        - filter: {}
          subnets:
          - filter:
              name: refarch-lv7q9-nodes
              tags: openshiftClusterID=refarch-lv7q9
        securityGroups:
        - filter: {}
          name: refarch-lv7q9-master
        serverGroupName: refarch-lv7q9-master-az0 1
        serverMetadata:
          Name: refarch-lv7q9-master
          openshiftClusterID: refarch-lv7q9
        tags:
        - openshiftClusterID=refarch-lv7q9
        trunk: true
        userDataSecret:
          name: master-user-data

    1
    This value must match for machines master-0, master-1, and master-2.
    Note

    If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration.

    In your RHOSP cluster, find the server group that your control plane instances are in and use that as the value.

  3. Run the following command to retrieve information about the control plane machine set resource:

    $ oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api
  4. Run the following command to edit the resource:

    $ oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api
  5. For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster.

Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator.

12.8. Disabling the control plane machine set

The .spec.state field in an activated ControlPlaneMachineSet custom resource (CR) cannot be changed from Active to Inactive. To disable the control plane machine set, you must delete the CR so that it is removed from the cluster.

When you delete the CR, the Control Plane Machine Set Operator performs cleanup operations and disables the control plane machine set. The Operator then removes the CR from the cluster and creates an inactive control plane machine set with default settings.

12.8.1. Deleting the control plane machine set

To stop managing control plane machines with the control plane machine set on your cluster, you must delete the ControlPlaneMachineSet custom resource (CR).

Procedure

  • Delete the control plane machine set CR by running the following command:

    $ oc delete controlplanemachineset.machine.openshift.io cluster \
      -n openshift-machine-api

Verification

  • Check the control plane machine set custom resource state. A result of Inactive indicates that the removal and replacement process is successful. A ControlPlaneMachineSet CR exists but is not activated.

12.8.2. Checking the control plane machine set custom resource state

You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR).

Procedure

  • Determine the state of the CR by running the following command:

    $ oc get controlplanemachineset.machine.openshift.io cluster \
      --namespace openshift-machine-api
    • A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required.
    • A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated.
    • A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR.

12.8.3. Re-enabling the control plane machine set

To re-enable the control plane machine set, you must ensure that the configuration in the CR is correct for your cluster and activate it.
