Chapter 2. Managing compute machines with the Machine API


2.1. Creating a compute machine set on Alibaba Cloud

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Alibaba Cloud. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud

This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role>-<zone> 4
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          credentialsSecret:
            name: alibabacloud-credentials
          imageId: <image_id> 11
          instanceType: <instance_type> 12
          kind: AlibabaCloudMachineProviderConfig
          ramRoleName: <infrastructure_id>-role-worker 13
          regionId: <region> 14
          resourceGroup: 15
            id: <resource_group_id>
            type: ID
          securityGroups:
          - tags: 16
            - Key: Name
              Value: <infrastructure_id>-sg-<role>
            type: Tags
          systemDisk: 17
            category: cloud_essd
            size: <disk_size>
          tag: 18
          - Key: kubernetes.io/cluster/<infrastructure_id>
            Value: owned
          userDataSecret:
            name: <user_data_secret> 19
          vSwitch:
            tags: 20
            - Key: Name
              Value: <infrastructure_id>-vswitch-<zone>
            type: Tags
          vpcId: ""
          zoneId: <zone> 21
1 5 7
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 3 8 9
Specify the node label to add.
4 6 10
Specify the infrastructure ID, node label, and zone.
11
Specify the image to use. Use an image from an existing default compute machine set for the cluster.
12
Specify the instance type you want to use for the compute machine set.
13
Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set.
14
Specify the region to place machines on.
15
Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one.
16 18 20
Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed.
17
Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size.
19
Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set.
21
Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.

2.1.1.1. Machine set parameters for Alibaba Cloud usage statistics

The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups, tag, and vSwitch parameters of the spec.template.spec.providerSpec.value list.

When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed.

The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required.

Tags in spec.template.spec.providerSpec.value.securityGroups

spec:
  template:
    spec:
      providerSpec:
        value:
          securityGroups:
          - tags:
            - Key: kubernetes.io/cluster/<infrastructure_id> 1
              Value: owned
            - Key: GISV
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2
              Value: ocp
            - Key: Name
              Value: <infrastructure_id>-sg-<role> 3
            type: Tags

1 2
Optional: This tag is applied even when not specified in the compute machine set.
3
Required.

where:

  • <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
  • <role> is the node label to add.

Tags in spec.template.spec.providerSpec.value.tag

spec:
  template:
    spec:
      providerSpec:
        value:
          tag:
          - Key: kubernetes.io/cluster/<infrastructure_id> 1
            Value: owned
          - Key: GISV 2
            Value: ocp
          - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3
            Value: ocp

2 3
Optional: This tag is applied even when not specified in the compute machine set.
1
Required.

where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.

Tags in spec.template.spec.providerSpec.value.vSwitch

spec:
  template:
    spec:
      providerSpec:
        value:
          vSwitch:
            tags:
            - Key: kubernetes.io/cluster/<infrastructure_id> 1
              Value: owned
            - Key: GISV 2
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3
              Value: ocp
            - Key: Name
              Value: <infrastructure_id>-vswitch-<zone> 4
            type: Tags

1 2 3
Optional: This tag is applied even when not specified in the compute machine set.
4
Required.

where:

  • <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
  • <zone> is the zone within your region to place machines on.

2.1.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
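
    For example, you can also confirm that the machines that the new compute machine set created reach the Running phase. In this command, <machineset_name> is a placeholder for the name of your compute machine set:

    $ oc get machines -n openshift-machine-api | grep <machineset_name>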

2.2. Creating a compute machine set on AWS

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.2.1. Sample YAML for a compute machine set custom resource on AWS

This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role>-<zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 10
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile 11
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> 12
            region: <region> 13
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg 14
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-<zone> 15
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id> 16
              value: owned
            - name: <custom_tag_name> 17
              value: <custom_tag_value> 18
          userDataSecret:
            name: worker-user-data
1 3 5 11 14 16
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 4 8
Specify the infrastructure ID, role node label, and zone.
6 7 9
Specify the role node label to add.
10
Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region.
$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
    get machineset/<infrastructure_id>-<role>-<zone>
17 18
Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.
Note

Custom tags can also be specified during installation in the install-config.yaml file. If the install-config.yaml file and the machine set include a tag with the same name, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yaml file. An illustrative install-config.yaml excerpt follows these callout descriptions.

12
Specify the zone, for example, us-east-1a.
13
Specify the region, for example, us-east-1.
15
Specify the infrastructure ID and zone.
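
As mentioned in the note about custom tags, you can also define tags at installation time in the AWS platform section of the installation configuration. The following excerpt is illustrative only; it assumes the platform.aws.userTags field of the install-config.yaml file and reuses the admin contact tag from the callout example, so verify the field against the installation documentation for your version:

apiVersion: v1
baseDomain: example.com
platform:
  aws:
    region: us-east-1
    userTags:
      Email: admin-email@example.com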

2.2.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml
  4. If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets.

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

2.2.3. Machine set options for the Amazon EC2 Instance Metadata Service

You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.

Note

Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later.

To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up.

Important

Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.

2.2.3.1. Configuring IMDS by using machine sets

You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.

Prerequisites

  • To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later.

Procedure

  • Add or edit the following lines under the providerSpec field:

    providerSpec:
      value:
        metadataServiceOptions:
          authentication: Required 1
    1
    To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
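
    To spot-check the resulting behavior from a node that the machine set created, you can query the instance metadata service directly, for example from a debug shell that you open with oc debug node/<node_name> and chroot /host. This check is illustrative and is not part of the configuration itself; 169.254.169.254 is the standard Amazon EC2 metadata endpoint. With authentication set to Required, only the token-based IMDSv2 request succeeds:

    $ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    $ curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/instance-id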

2.2.4. Machine sets that deploy machines as Dedicated Instances

You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.

Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.

2.2.4.1. Creating Dedicated Instances by using machine sets

You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.

Procedure

  • Specify a dedicated tenancy under the providerSpec field:

    providerSpec:
      value:
        placement:
          tenancy: dedicated

2.2.5. Machine sets that deploy machines as Spot Instances

You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning.

Interruptions can occur when using Spot Instances for the following reasons:

  • The instance price exceeds your maximum price
  • The demand for Spot Instances increases
  • The supply of Spot Instances decreases

When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance.

2.2.5.1. Creating Spot Instances by using compute machine sets

You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        spotMarketOptions: {}

    You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example, you can set maxPrice: '2.50'.

    If maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to the On-Demand Instance price, and you are charged up to that amount.
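
    For reference, the provider specification with a maximum price set looks like the following snippet; the price shown is only an example value:

    providerSpec:
      value:
        spotMarketOptions:
          maxPrice: '2.50'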

    Note

    It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances.

2.2.6. Adding a GPU node to an existing OpenShift Container Platform cluster

You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider.

For more information about the supported instance types, see the NVIDIA documentation.

Procedure

  1. View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role.

    $ oc get nodes

    Example output

    NAME                                        STATUS   ROLES                  AGE     VERSION
    ip-10-0-52-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.25.4+86bd4ff
    ip-10-0-58-24.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.25.4+86bd4ff
    ip-10-0-68-148.us-east-2.compute.internal   Ready    worker                 3d17h   v1.25.4+86bd4ff
    ip-10-0-68-68.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.25.4+86bd4ff
    ip-10-0-72-170.us-east-2.compute.internal   Ready    control-plane,master   3d17h   v1.25.4+86bd4ff
    ip-10-0-74-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.25.4+86bd4ff

  2. View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones.

    $ oc get machinesets -n openshift-machine-api

    Example output

    NAME                                        DESIRED   CURRENT   READY   AVAILABLE   AGE
    preserve-dsoc12r4-ktjfc-worker-us-east-2a   1         1         1       1           3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b   2         2         2       2           3d11h

  3. View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone.

    $ oc get machines -n openshift-machine-api | grep worker

    Example output

    preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r      Running   m5.xlarge   us-east-2   us-east-2a   3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w      Running   m5.xlarge   us-east-2   us-east-2b   3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw      Running   m5.xlarge   us-east-2   us-east-2b   3d11h

  4. Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

    $ oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>
  5. Edit the JSON file and make the following changes to the new MachineSet definition:

    • Replace worker with gpu. This will be the name of the new machine set.
    • Change the instance type of the new MachineSet definition to g4dn, which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing.

      $ jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json
      
      "g4dn.xlarge"

      The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json.

  6. Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json (a jq sketch that applies all of these edits in one pass follows the list):

    • .metadata.name to a name containing gpu.
    • .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge.
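
    As an alternative to editing the file by hand, you can apply all four edits in a single pass with a jq filter similar to the following sketch, which reads the original copy and writes the new GPU machine set definition. The file names and the worker-gpu machine set name are the example values from this procedure:

    $ jq '.metadata.name = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
      | .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
      | .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
      | .spec.template.spec.providerSpec.value.instanceType = "g4dn.xlarge"' \
      <output_file.json> > preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json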
  7. To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

    $ oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -

    Example output

    10c10
    < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a",
    ---
    > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a",
    21c21
    < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
    31c31
    < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
    60c60
    < "instanceType": "g4dn.xlarge",
    ---
    > "instanceType": "m5.xlarge",

  8. Create the GPU-enabled compute machine set from the definition by running the following command:

    $ oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json

    Example output

    machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created

Verification

  1. View the machine set you created by running the following command:

    $ oc -n openshift-machine-api get machinesets | grep gpu

    The MachineSet replica count is set to 1 so a new Machine object is created automatically.

    Example output

    preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a   1         1         1       1           4m21s

  2. View the Machine object that the machine set created by running the following command:

    $ oc -n openshift-machine-api get machines | grep gpu

    Example output

    preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a    running    g4dn.xlarge   us-east-2   us-east-2a  4m36s

Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.

2.2.7. Deploying the Node Feature Discovery Operator

After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
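
If you prefer the CLI to the web console for the installation step below, Operators are typically installed by creating Namespace, OperatorGroup, and Subscription objects. The following manifest is a sketch only; the channel and catalog source values are assumptions, so confirm them against the entry for the Operator in OperatorHub before applying the manifest with oc apply -f:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  targetNamespaces:
  - openshift-nfd
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace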

Procedure

  1. Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
  2. After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
  3. Verify that the Operator is installed and running by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS     AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d

  4. Browse to the installed Operator in the console and select Create Node Feature Discovery.
  5. Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.

Verification

  1. After a successful build, verify that an NFD pod is running on each node by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                       READY   STATUS      RESTARTS        AGE
    nfd-controller-manager-8646fcbb65-x5qgk    2/2     Running     7 (8h ago)      12d
    nfd-master-769656c4cb-w9vrv                1/1     Running     0               12d
    nfd-worker-qjxb2                           1/1     Running     3 (3d14h ago)   12d
    nfd-worker-xtz9b                           1/1     Running     5 (3d14h ago)   12d

    The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.

  2. View the NVIDIA GPU discovered by the NFD Operator by running the following command:

    $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'

    Example output

    Roles: worker
    feature.node.kubernetes.io/pci-1013.present=true
    feature.node.kubernetes.io/pci-10de.present=true
    feature.node.kubernetes.io/pci-1d0f.present=true

    10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
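
    Because NFD publishes its findings as node labels, you can also list the GPU-enabled nodes directly by selecting on the 10de label shown above:

    $ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true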

2.3. Creating a compute machine set on Azure

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.3.1. Sample YAML for a compute machine set custom resource on Azure

This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role>
  name: <infrastructure_id>-<role>-<region> 3
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name>
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image: 4
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> 6
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          tags:
            - name: <custom_tag_name> 7
              value: <custom_tag_value>
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet
          zone: "1" 8
1
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

You can obtain the subnet by running the following command:

$  oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1

You can obtain the vnet by running the following command:

$  oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1
2
Specify the node label to add.
3
Specify the infrastructure ID, node label, and region.
4
Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image".
5
Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. A command for copying this value from an existing compute machine set follows these callout descriptions.
6
Specify the region to place machines on.
7
Optional: Specify custom tags in your machine set. Provide the tag name in the <custom_tag_name> field and the corresponding tag value in the <custom_tag_value> field.
8
Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
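
You can copy the image resourceID value from an existing default compute machine set by following the same pattern as the subnet and vnet commands above. In this command, <machineset_name> is a placeholder for the name of one of the compute machine sets in your cluster:

$  oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.image.resourceID}{"\n"}' \
    get machineset/<machineset_name>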

2.3.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

2.3.3. Selecting an Azure Marketplace image

You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:

  • While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
  • The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Important

Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances.

Prerequisites

  • You have installed the Azure CLI client (az).
  • Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.

Procedure

  1. Display all of the available OpenShift Container Platform images by running one of the following commands:

    • North America:

      $  az vm image list --all --offer rh-ocp-worker --publisher redhat -o table

      Example output

      Offer          Publisher       Sku                 Urn                                                             Version
      -------------  --------------  ------------------  --------------------------------------------------------------  --------------
rh-ocp-worker  RedHat          rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100              4.8.2021122100
      rh-ocp-worker  RedHat          rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100         4.8.2021122100

    • EMEA:

      $  az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table

      Example output

      Offer          Publisher       Sku                 Urn                                                             Version
      -------------  --------------  ------------------  --------------------------------------------------------------  --------------
      rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100       4.8.2021122100
      rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100  4.8.2021122100

    Note

    Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process.

  2. Inspect the image for your offer by running one of the following commands:

    • North America:

      $ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  3. Review the terms of the offer by running one of the following commands:

    • North America:

      $ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  4. Accept the terms of the offering by running one of the following commands:

    • North America:

      $ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  5. Record the image details of your offer, specifically the values for publisher, offer, sku, and version.
  6. Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:

    Sample providerSpec image values for Azure Marketplace machines

    providerSpec:
      value:
        image:
          offer: rh-ocp-worker
          publisher: redhat
          resourceID: ""
          sku: rh-ocp-worker
          type: MarketplaceWithPlan
          version: 4.8.2021122100

2.3.4. Enabling Azure boot diagnostics

You can enable boot diagnostics on Azure machines that your machine set creates.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  • Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

    • For an Azure Managed storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: AzureManaged 1
      1
      Specifies an Azure Managed storage account.
    • For an Azure Unmanaged storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: CustomerManaged 1
            customerManaged:
              storageAccountURI: https://<storage-account>.blob.core.windows.net 2
      1
      Specifies an Azure Unmanaged storage account.
      2
      Replace <storage-account> with the name of your storage account.
      Note

      Only the Azure Blob Storage data service is supported.

Verification

  • On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.

2.3.5. Machine sets that deploy machines as Spot VMs

You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning.

Interruptions can occur when using Spot VMs for the following reasons:

  • The instance price exceeds your maximum price
  • The supply of Spot VMs decreases
  • Azure needs capacity back

When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM.

2.3.5.1. Creating Spot VMs by using compute machine sets

You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        spotVMOptions: {}

    You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example, you can set maxPrice: '0.98765'. If maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and you are charged up to the standard VM price.

    Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice. However, an instance can still be evicted due to capacity restrictions.
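
    With a maximum price set, the provider specification looks like the following snippet; the price is the example value from above:

    providerSpec:
      value:
        spotVMOptions:
          maxPrice: '0.98765'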

Note

It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs.

2.3.6. Machine sets that deploy machines on Ephemeral OS disks

You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging.

2.3.6.1. Creating machines on Ephemeral OS disks by using compute machine sets

You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Edit the custom resource (CR) by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks.

  2. Add the following to the providerSpec field:

    providerSpec:
      value:
        ...
        osDisk:
           ...
           diskSettings: 1
             ephemeralStorageLocation: Local 2
           cachingType: ReadOnly 3
           managedDisk:
             storageAccountType: Standard_LRS 4
           ...
    1 2 3
    These lines enable the use of Ephemeral OS disks.
    4
    Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type.
    Important

    The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting.

  3. Create a compute machine set using the updated configuration:

    $ oc create -f <machine-set-config>.yaml

Verification

  • On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement.

2.3.7. Machine sets that deploy machines with ultra disks as data disks

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods.

Note

Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs.
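
For example, a storage class and persistent volume claim along the following lines can request specific IOPS and throughput values for an ultra disk. This is a sketch only: the provisioner and parameter names follow the Azure Disk CSI driver (disk.csi.azure.com), and the values are illustrative, so verify them against the storage documentation for your cluster version before use:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS
  cachingMode: None
  DiskIOPSReadWrite: "2000"
  DiskMBpsReadWrite: "320"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ultra-disk-sc
  resources:
    requests:
      storage: 4Gi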

2.3.7.1. Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command:

    $ oc -n openshift-machine-api \
    get secret <role>-user-data \ 1
    --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2
    1
    Replace <role> with worker.
    2
Specify userData.txt as the name of the local file that stores the extracted user data.
  2. In a text editor, open the userData.txt file and locate the final } character in the file.

    1. On the line immediately before the final } character, add a comma (,).
    2. Create a new line after the comma and add the following configuration details:

      "storage": {
        "disks": [ 1
          {
            "device": "/dev/disk/azure/scsi1/lun0", 2
            "partitions": [ 3
              {
                "label": "lun0p1", 4
                "sizeMiB": 1024, 5
                "startMiB": 0
              }
            ]
          }
        ],
        "filesystems": [ 6
          {
            "device": "/dev/disk/by-partlabel/lun0p1",
            "format": "xfs",
            "path": "/var/lib/lun0p1"
          }
        ]
      },
      "systemd": {
        "units": [ 7
          {
            "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8
            "enabled": true,
            "name": "var-lib-lun0p1.mount"
          }
        ]
      }
      1
      The configuration details for the disk that you want to attach to a node as an ultra disk.
      2
      Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set.
      3
      The configuration details for a new partition on the disk.
      4
      Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0.
      5
      Specify the total size in MiB of the partition.
      6
      Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition.
      7
      Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each.
      8
      For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device.
  3. Extract the disabling template value to a file called disableTemplating.txt by running the following command:

    $ oc -n openshift-machine-api get secret <role>-user-data \ 1
    --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
    1
    Replace <role> with worker.
  4. Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command:

    $ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1
    --from-file=userData=userData.txt \
    --from-file=disableTemplating=disableTemplating.txt
    1
    For <role>-user-data-x5, specify the name of the secret. Replace <role> with worker.
  5. Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the machine set that you want to provision machines with ultra disks.

  6. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    spec:
      template:
        spec:
          metadata:
            labels:
              disk: ultrassd 1
          providerSpec:
            value:
              ultraSSDCapability: Enabled 2
              dataDisks: 3
              - nameSuffix: ultrassd
                lun: 0
                diskSizeGB: 4
                deletionPolicy: Delete
                cachingType: None
                managedDisk:
                  storageAccountType: UltraSSD_LRS
              userDataSecret:
                name: <role>-user-data-x5 4
    1
    Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2 3
    These lines enable the use of ultra disks. For dataDisks, include the entire stanza.
    4
    Specify the user data secret created earlier. Replace <role> with worker.
  7. Create a machine set using the updated configuration by running the following command:

    $ oc create -f <machine-set-name>.yaml

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps

  • To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ssd-benchmark1
    spec:
      containers:
      - name: ssd-benchmark1
        image: nginx
        ports:
          - containerPort: 80
            name: "http-server"
        volumeMounts:
        - name: lun0p1
          mountPath: "/tmp"
      volumes:
        - name: lun0p1
          hostPath:
            path: /var/lib/lun0p1
            type: DirectoryOrCreate
      nodeSelector:
        disk: ultrassd

2.3.7.2. Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

2.3.7.2.1. Incorrect ultra disk configuration

If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.

For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
  • To resolve this issue, verify that your machine set configuration is correct.
2.3.7.2.2. Unsupported disk parameters

If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:

failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."
  • To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.
2.3.7.2.3. Unable to delete disks

If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired.
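
For example, you can list and remove orphaned ultra disks with the Azure CLI. This is a minimal sketch; the resource group and disk names are placeholders for your values:

$ az disk list --query "[?diskState=='Unattached'].{Name:name, ResourceGroup:resourceGroup}" -o table

$ az disk delete --resource-group <resource_group> --name <disk_name> --yes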

2.3.8. Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

To use a customer-managed key, you must have an Azure Key Vault, a disk encryption set, and an encryption key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set.

Procedure

  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    providerSpec:
      value:
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS

2.3.9. Accelerated Networking for Microsoft Azure VMs

Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation.

2.3.9.1. Limitations

Consider the following limitations when deciding whether to use Accelerated Networking:

  • Accelerated Networking is only supported on clusters where the Machine API is operational.
  • Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.

  • When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. You can do this for each machine individually, or by scaling the replicas down to zero and then scaling back up to your desired number of replicas. An example of scaling a compute machine set follows this list.
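
For example, you can replace all machines in a compute machine set by scaling it to zero and back. This is a sketch; <machineset_name> and <replica_count> are placeholders for your values:

$ oc scale machineset <machineset_name> --replicas=0 -n openshift-machine-api

$ oc scale machineset <machineset_name> --replicas=<replica_count> -n openshift-machine-api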

2.3.10. Adding a GPU node to an existing OpenShift Container Platform cluster

You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider.

The following table lists the validated instance types:

vmSize                  NVIDIA GPU accelerator   Maximum number of GPUs   Architecture

Standard_NC24s_v3       V100                     4                        x86
Standard_NC4as_T4_v3    T4                       1                        x86
ND A100 v4              A100                     8                        x86

Note

By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. You must request a quota increase for the Azure instance families listed in the preceding table.
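
One way to review your current usage and limits for a VM family before requesting an increase is the Azure CLI; for example:

$ az vm list-usage --location <region> -o table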

Procedure

  1. View the compute machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                              DESIRED   CURRENT   READY   AVAILABLE   AGE
    myclustername-worker-centralus1   1         1         1       1           6h9m
    myclustername-worker-centralus2   1         1         1       1           6h9m
    myclustername-worker-centralus3   1         1         1       1           6h9m

  2. Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

    $ oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml
  3. View the content of the machineset:

    $ cat machineset-azure.yaml

    Example machineset-azure.yaml file

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      annotations:
        machine.openshift.io/GPU: "0"
        machine.openshift.io/memoryMb: "16384"
        machine.openshift.io/vCPU: "4"
      creationTimestamp: "2023-02-06T14:08:19Z"
      generation: 1
      labels:
        machine.openshift.io/cluster-api-cluster: myclustername
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
      name: myclustername-worker-centralus1
      namespace: openshift-machine-api
      resourceVersion: "23601"
      uid: acd56e0c-7612-473a-ae37-8704f34b80de
    spec:
      replicas: 1
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: myclustername
          machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: myclustername
            machine.openshift.io/cluster-api-machine-role: worker
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
        spec:
          lifecycleHooks: {}
          metadata: {}
          providerSpec:
            value:
              acceleratedNetworking: true
              apiVersion: machine.openshift.io/v1beta1
              credentialsSecret:
                name: azure-cloud-credentials
                namespace: openshift-machine-api
              diagnostics: {}
              image:
                offer: ""
                publisher: ""
                resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest
                sku: ""
                version: ""
              kind: AzureMachineProviderSpec
              location: centralus
              managedIdentity: myclustername-identity
              metadata:
                creationTimestamp: null
              networkResourceGroup: myclustername-rg
              osDisk:
                diskSettings: {}
                diskSizeGB: 128
                managedDisk:
                  storageAccountType: Premium_LRS
                osType: Linux
              publicIP: false
              publicLoadBalancer: myclustername
              resourceGroup: myclustername-rg
              spotVMOptions: {}
              subnet: myclustername-worker-subnet
              userDataSecret:
                name: worker-user-data
              vmSize: Standard_D4s_v3
              vnet: myclustername-vnet
              zone: "1"
    status:
      availableReplicas: 1
      fullyLabeledReplicas: 1
      observedGeneration: 1
      readyReplicas: 1
      replicas: 1

  4. Make a copy of the machineset-azure.yaml file by running the following command:

    $ cp machineset-azure.yaml machineset-azure-gpu.yaml
  5. Update the following fields in machineset-azure-gpu.yaml:

    • Change .metadata.name to a name containing gpu.
    • Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3.

      Example machineset-azure-gpu.yaml file

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        annotations:
          machine.openshift.io/GPU: "1"
          machine.openshift.io/memoryMb: "28672"
          machine.openshift.io/vCPU: "4"
        creationTimestamp: "2023-02-06T20:27:12Z"
        generation: 1
        labels:
          machine.openshift.io/cluster-api-cluster: myclustername
          machine.openshift.io/cluster-api-machine-role: worker
          machine.openshift.io/cluster-api-machine-type: worker
        name: myclustername-nc4ast4-gpu-worker-centralus1
        namespace: openshift-machine-api
        resourceVersion: "166285"
        uid: 4eedce7f-6a57-4abe-b529-031140f02ffa
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: myclustername
            machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: myclustername
              machine.openshift.io/cluster-api-machine-role: worker
              machine.openshift.io/cluster-api-machine-type: worker
              machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
          spec:
            lifecycleHooks: {}
            metadata: {}
            providerSpec:
              value:
                acceleratedNetworking: true
                apiVersion: machine.openshift.io/v1beta1
                credentialsSecret:
                  name: azure-cloud-credentials
                  namespace: openshift-machine-api
                diagnostics: {}
                image:
                  offer: ""
                  publisher: ""
                  resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest
                  sku: ""
                  version: ""
                kind: AzureMachineProviderSpec
                location: centralus
                managedIdentity: myclustername-identity
                metadata:
                  creationTimestamp: null
                networkResourceGroup: myclustername-rg
                osDisk:
                  diskSettings: {}
                  diskSizeGB: 128
                  managedDisk:
                    storageAccountType: Premium_LRS
                  osType: Linux
                publicIP: false
                publicLoadBalancer: myclustername
                resourceGroup: myclustername-rg
                spotVMOptions: {}
                subnet: myclustername-worker-subnet
                userDataSecret:
                  name: worker-user-data
                vmSize: Standard_NC4as_T4_v3
                vnet: myclustername-vnet
                zone: "1"
      status:
        availableReplicas: 1
        fullyLabeledReplicas: 1
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1

  6. To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

    $ diff machineset-azure.yaml machineset-azure-gpu.yaml

    Example output

    14c14
    <   name: myclustername-worker-centralus1
    ---
    >   name: myclustername-nc4ast4-gpu-worker-centralus1
    23c23
    <       machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
    ---
    >       machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
    30c30
    <         machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
    ---
    >         machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
    67c67
    <           vmSize: Standard_D4s_v3
    ---
    >           vmSize: Standard_NC4as_T4_v3

  7. Create the GPU-enabled compute machine set from the definition file by running the following command:

    $ oc create -f machineset-azure-gpu.yaml

    Example output

    machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created

  8. View the compute machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                               DESIRED   CURRENT   READY   AVAILABLE   AGE
    clustername-n6n4r-nc4ast4-gpu-worker-centralus1    1         1         1       1           122m
    clustername-n6n4r-worker-centralus1                1         1         1       1           8h
    clustername-n6n4r-worker-centralus2                1         1         1       1           8h
    clustername-n6n4r-worker-centralus3                1         1         1       1           8h

  9. View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.

    $ oc get machines -n openshift-machine-api

    Example output

    NAME                                                PHASE     TYPE                   REGION      ZONE   AGE
    myclustername-master-0                              Running   Standard_D8s_v3        centralus   2      6h40m
    myclustername-master-1                              Running   Standard_D8s_v3        centralus   1      6h40m
    myclustername-master-2                              Running   Standard_D8s_v3        centralus   3      6h40m
    myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Running      centralus   1      21m
    myclustername-worker-centralus1-rbh6b               Running   Standard_D4s_v3        centralus   1      6h38m
    myclustername-worker-centralus2-dbz7w               Running   Standard_D4s_v3        centralus   2      6h38m
    myclustername-worker-centralus3-p9b8c               Running   Standard_D4s_v3        centralus   3      6h38m

  10. View the existing nodes by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role.

    $ oc get nodes

    Example output

    NAME                                                STATUS   ROLES                  AGE     VERSION
    myclustername-master-0                              Ready    control-plane,master   6h39m   v1.25.4+a34b9e9
    myclustername-master-1                              Ready    control-plane,master   6h41m   v1.25.4+a34b9e9
    myclustername-master-2                              Ready    control-plane,master   6h39m   v1.25.4+a34b9e9
    myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Ready    worker                 14m     v1.25.4+a34b9e9
    myclustername-worker-centralus1-rbh6b               Ready    worker                 6h29m   v1.25.4+a34b9e9
    myclustername-worker-centralus2-dbz7w               Ready    worker                 6h29m   v1.25.4+a34b9e9
    myclustername-worker-centralus3-p9b8c               Ready    worker                 6h31m   v1.25.4+a34b9e9

  11. View the list of compute machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                   DESIRED   CURRENT   READY   AVAILABLE   AGE
    myclustername-worker-centralus1        1         1         1       1           8h
    myclustername-worker-centralus2        1         1         1       1           8h
    myclustername-worker-centralus3        1         1         1       1           8h

  12. Create the GPU-enabled compute machine set from the definition file by running the following command:

    $ oc create -f machineset-azure-gpu.yaml
  13. View the list of compute machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                          DESIRED   CURRENT   READY   AVAILABLE   AGE
    myclustername-nc4ast4-gpu-worker-centralus1   1         1         1       1           121m
    myclustername-worker-centralus1               1         1         1       1           8h
    myclustername-worker-centralus2               1         1         1       1           8h
    myclustername-worker-centralus3               1         1         1       1           8h

Verification

  1. View the machine set you created by running the following command:

    $ oc get machineset -n openshift-machine-api | grep gpu

    The MachineSet replica count is set to 1 so a new Machine object is created automatically.

    Example output

    myclustername-nc4ast4-gpu-worker-centralus1   1         1         1       1           121m

  2. View the Machine object that the machine set created by running the following command:

    $ oc -n openshift-machine-api get machines | grep gpu

    Example output

    myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Running   Standard_NC4as_T4_v3   centralus   1      21m

Note

There is no need to specify a namespace for the node. The node definition is cluster scoped.

2.3.11. Deploying the Node Feature Discovery Operator

After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
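
If you prefer to install the Operator from the CLI instead of the console steps that follow, the following is a minimal sketch of the OLM objects involved. The channel name, package name (nfd), and catalog source are assumptions; verify them for your cluster version, for example with oc get packagemanifests -n openshift-marketplace, before applying.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nfd
  namespace: openshift-nfd
spec:
  targetNamespaces:
  - openshift-nfd
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: stable                    # assumed channel name; confirm for your version
  installPlanApproval: Automatic
  name: nfd                          # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Save the objects to a file, for example nfd-operator-install.yaml (a hypothetical name), and apply it:

$ oc apply -f nfd-operator-install.yaml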

Procedure

  1. Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
  2. After the NFD Operator is installed, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
  3. Verify that the Operator is installed and running by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                       READY   STATUS    RESTARTS     AGE
    nfd-controller-manager-8646fcbb65-x5qgk    2/2     Running   7 (8h ago)   1d

  4. Browse to the installed Operator in the console and select Create Node Feature Discovery.
  5. Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.

Verification

  1. After a successful build, verify that an NFD pod is running on each node by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                       READY   STATUS      RESTARTS        AGE
    nfd-controller-manager-8646fcbb65-x5qgk    2/2     Running     7 (8h ago)      12d
    nfd-master-769656c4cb-w9vrv                1/1     Running     0               12d
    nfd-worker-qjxb2                           1/1     Running     3 (3d14h ago)   12d
    nfd-worker-xtz9b                           1/1     Running     5 (3d14h ago)   12d

    The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.

  2. View the NVIDIA GPU discovered by the NFD Operator by running the following command:

    $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'

    Example output

    Roles: worker
    feature.node.kubernetes.io/pci-1013.present=true
    feature.node.kubernetes.io/pci-10de.present=true
    feature.node.kubernetes.io/pci-1d0f.present=true

    10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.

2.3.11.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster

You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster where the Machine API is operational.

Procedure

  • Add the following to the providerSpec field:

    providerSpec:
      value:
        acceleratedNetworking: true 1
        vmSize: <azure-vm-size> 2
    1
    This line enables Accelerated Networking.
    2
    Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.

Next steps

  • To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.

Verification

  • On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.

2.4. Creating a compute machine set on Azure Stack Hub

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.4.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub

This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role>-<region> 4
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" 11
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          availabilitySet: <availability_set> 12
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> 14
          managedIdentity: <infrastructure_id>-identity 15
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg 16
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructure_id>-<role>-subnet 17 18
          userDataSecret:
            name: worker-user-data 19
          vmSize: Standard_DS4_v2
          vnet: <infrastructure_id>-vnet 20
          zone: "1" 21
1 5 7 13 15 16 17 20
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

You can obtain the subnet by running the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1

You can obtain the vnet by running the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1
2 3 8 9 11 18 19
Specify the node label to add.
4 6 10
Specify the infrastructure ID, node label, and region.
14
Specify the region to place machines on.
21
Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
12
Specify the availability set for the cluster.

2.4.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.
  • Create an availability set in which to deploy Azure Stack Hub compute machines. A CLI sketch for creating the availability set follows this list.
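
For example, you might create the availability set with the Azure CLI configured for your Azure Stack Hub environment. This is a minimal sketch; all values are placeholders:

$ az vm availability-set create \
  --resource-group <resource_group> \
  --name <availability_set_name> \
  --location <region> \
  --platform-fault-domain-count 2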

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <availabilitySet>, <clusterID>, and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
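
    Alternatively, you can watch the compute machine set until it becomes available; <machineset_name> is a placeholder for your machine set name:

    $ oc get machineset <machineset_name> -n openshift-machine-api -w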

2.4.3. Enabling Azure boot diagnostics

You can enable boot diagnostics on Azure machines that your machine set creates.

Prerequisites

  • Have an existing Microsoft Azure Stack Hub cluster.

Procedure

  • Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

    • For an Azure Managed storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: AzureManaged 1
      1
      Specifies an Azure Managed storage account.
    • For an Azure Unmanaged storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: CustomerManaged 1
            customerManaged:
              storageAccountURI: https://<storage-account>.blob.core.windows.net 2
      1
      Specifies an Azure Unmanaged storage account.
      2
      Replace <storage-account> with the name of your storage account.
      Note

      Only the Azure Blob Storage data service is supported.

Verification

  • On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.

2.4.4. Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

To use a customer-managed key, you must have an Azure Key Vault, a disk encryption set, and an encryption key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set.

Procedure

  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    providerSpec:
      value:
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS

2.5. Creating a compute machine set on GCP

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.5.1. Sample YAML for a compute machine set custom resource on GCP

This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role> 2
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image> 3
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata: 4
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name> 5
          region: us-central1
          serviceAccounts: 6
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
            - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
1
For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2
For <node>, specify the node label to add.
3
Specify the path to the image that is used in current compute machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
    get machineset/<infrastructure_id>-worker-a

To use a GCP Marketplace image, specify the offer to use:

  • OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145
  • OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145
  • OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145
4
Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
5
For <project_name>, specify the name of the GCP project that you use for your cluster.
6
Specifies a single service account. Multiple service accounts are not supported.

2.5.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

2.5.3. Configuring persistent disk types by using compute machine sets

You can configure the type of persistent disk that a compute machine set deploys machines on by editing the compute machine set YAML file.

For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks.

Procedure

  1. In a text editor, open the YAML file for an existing compute machine set or create a new one.
  2. Edit the following line under the providerSpec field:

    providerSpec:
      value:
        disks:
          type: <pd-disk-type> 1
    1
    Specify the persistent disk type. Valid values are pd-ssd, pd-standard, and pd-balanced. The default value is pd-standard.

Verification

  • On the Google Cloud console, review the details for a machine deployed by the compute machine set and verify that the Type field matches the configured disk type.
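
  • Alternatively, check the disk type from the CLI. This is a sketch that assumes the gcloud CLI is configured for your project; the disk name and zone are placeholders:

    $ gcloud compute disks describe <disk_name> --zone <zone> --format="value(type)"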

2.5.4. Machine sets that deploy machines as preemptible VM instances

You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine.

Interruptions can occur when using preemptible VM instances for the following reasons:

  • There is a system or maintenance event
  • The supply of preemptible VM instances decreases
  • The instance reaches the end of the allotted 24-hour period for preemptible VM instances

When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance.

2.5.4.1. Creating preemptible VM instances by using compute machine sets

You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        preemptible: true

    If preemptible is set to true, the machine is labeled as an interruptable-instance after the instance is launched.
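
    To list the machines that carry this label, you can use a label selector. This sketch assumes the label key is machine.openshift.io/interruptible-instance; confirm the exact key on one of your Machine resources before relying on it:

    $ oc get machines -n openshift-machine-api -l machine.openshift.io/interruptible-instance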

2.5.5. Enabling customer-managed encryption keys for a compute machine set

Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.

You can enable encryption with a customer-managed key by using the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.

Note

If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern.
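
If you have not yet created the key ring and key, a minimal gcloud sketch follows; the names and location are placeholders for your values:

$ gcloud kms keyrings create <key_ring_name> --location <key_ring_location>

$ gcloud kms keys create <key_name> \
  --keyring <key_ring_name> \
  --location <key_ring_location> \
  --purpose encryption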

Procedure

  1. Run the following command with your KMS key name, key ring name, and location to allow a specific service account to use your KMS key and to grant the service account the correct IAM role:

    $ gcloud kms keys add-iam-policy-binding <key_name> \
      --keyring <key_ring_name> \
      --location <key_ring_location> \
      --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \
      --role roles/cloudkms.cryptoKeyEncrypterDecrypter
  2. Configure the encryption key under the providerSpec field in your compute machine set YAML file. For example:

    providerSpec:
      value:
        # ...
        disks:
        - type:
          # ...
          encryptionKey:
            kmsKey:
              name: machine-encryption-key 1
              keyRing: openshift-encryption-ring 2
              location: global 3
              projectID: openshift-gcp-project 4
            kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5
    1
    The name of the customer-managed encryption key that is used for the disk encryption.
    2
    The name of the KMS key ring that the KMS key belongs to.
    3
    The GCP location in which the KMS key ring exists.
    4
    Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the compute machine set projectID in which the compute machine set was created is used.
    5
    Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.

    After a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key.

2.5.6. Enabling GPU support for a compute machine set

Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series.

Table 2.1. Supported GPU configurations
Model name     GPU type             Machine types [1]

NVIDIA A100    nvidia-tesla-a100
  • a2-highgpu-1g
  • a2-highgpu-2g
  • a2-highgpu-4g
  • a2-highgpu-8g
  • a2-megagpu-16g

NVIDIA K80     nvidia-tesla-k80
  • n1-standard-1
  • n1-standard-2
  • n1-standard-4
  • n1-standard-8
  • n1-standard-16
  • n1-standard-32
  • n1-standard-64
  • n1-standard-96
  • n1-highmem-2
  • n1-highmem-4
  • n1-highmem-8
  • n1-highmem-16
  • n1-highmem-32
  • n1-highmem-64
  • n1-highmem-96
  • n1-highcpu-2
  • n1-highcpu-4
  • n1-highcpu-8
  • n1-highcpu-16
  • n1-highcpu-32
  • n1-highcpu-64
  • n1-highcpu-96

NVIDIA P100    nvidia-tesla-p100

NVIDIA P4      nvidia-tesla-p4

NVIDIA T4      nvidia-tesla-t4

NVIDIA V100    nvidia-tesla-v100

The NVIDIA P100, P4, T4, and V100 models support the same N1 machine types that are listed for the NVIDIA K80 model.

  1. For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series, A2 machine series, and GPU regions and zones availability.

You can define which supported GPU to use for an instance by using the Machine API.

You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators.

Note

GPUs for graphics workloads are not supported.

Procedure

  1. In a text editor, open the YAML file for an existing compute machine set or create a new one.
  2. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. See the following examples of valid configurations:

    Example configuration for the A2 machine series:

      providerSpec:
        value:
          machineType: a2-highgpu-1g 1
          onHostMaintenance: Terminate 2
          restartPolicy: Always 3

    1
    Specify the machine type. Ensure that the machine type is included in the A2 machine series.
    2
    When using GPU support, you must set onHostMaintenance to Terminate.
    3
    Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.

    Example configuration for the N1 machine series:

    providerSpec:
      value:
        gpus:
        - count: 1 1
          type: nvidia-tesla-p100 2
        machineType: n1-standard-1 3
        onHostMaintenance: Terminate 4
        restartPolicy: Always 5

    1
    Specify the number of GPUs to attach to the machine.
    2
    Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible.
    3
    Specify the machine type. Ensure that the machine type and GPU type are compatible.
    4
    When using GPU support, you must set onHostMaintenance to Terminate.
    5
    Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.

2.5.7. Adding a GPU node to an existing OpenShift Container Platform cluster

You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider.

The following table lists the validated instance types:

Instance type    NVIDIA GPU accelerator   Maximum number of GPUs   Architecture

a2-highgpu-1g    A100                     1                        x86
n1-standard-4    T4                       1                        x86

Procedure

  1. Make a copy of an existing MachineSet.
  2. In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
  3. Change the instance type by adding the following two lines to the newly copied MachineSet:

    machineType: a2-highgpu-1g
    onHostMaintenance: Terminate

    Example a2-highgpu-1g.json file

    {
        "apiVersion": "machine.openshift.io/v1beta1",
        "kind": "MachineSet",
        "metadata": {
            "annotations": {
                "machine.openshift.io/GPU": "0",
                "machine.openshift.io/memoryMb": "16384",
                "machine.openshift.io/vCPU": "4"
            },
            "creationTimestamp": "2023-01-13T17:11:02Z",
            "generation": 1,
            "labels": {
                "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p"
            },
            "name": "myclustername-2pt9p-worker-gpu-a",
            "namespace": "openshift-machine-api",
            "resourceVersion": "20185",
            "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd"
        },
        "spec": {
            "replicas": 1,
            "selector": {
                "matchLabels": {
                    "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
                    "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
                }
            },
            "template": {
                "metadata": {
                    "labels": {
                        "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
                        "machine.openshift.io/cluster-api-machine-role": "worker",
                        "machine.openshift.io/cluster-api-machine-type": "worker",
                        "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
                    }
                },
                "spec": {
                    "lifecycleHooks": {},
                    "metadata": {},
                    "providerSpec": {
                        "value": {
                            "apiVersion": "machine.openshift.io/v1beta1",
                            "canIPForward": false,
                            "credentialsSecret": {
                                "name": "gcp-cloud-credentials"
                            },
                            "deletionProtection": false,
                            "disks": [
                                {
                                    "autoDelete": true,
                                    "boot": true,
                                    "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64",
                                    "labels": null,
                                    "sizeGb": 128,
                                    "type": "pd-ssd"
                                }
                            ],
                            "kind": "GCPMachineProviderSpec",
                            "machineType": "a2-highgpu-1g",
                            "onHostMaintenance": "Terminate",
                            "metadata": {
                                "creationTimestamp": null
                            },
                            "networkInterfaces": [
                                {
                                    "network": "myclustername-2pt9p-network",
                                    "subnetwork": "myclustername-2pt9p-worker-subnet"
                                }
                            ],
                            "preemptible": true,
                            "projectID": "myteam",
                            "region": "us-central1",
                            "serviceAccounts": [
                                {
                                    "email": "myclustername-2pt9p-w@myteam.iam.gserviceaccount.com",
                                    "scopes": [
                                        "https://www.googleapis.com/auth/cloud-platform"
                                    ]
                                }
                            ],
                            "tags": [
                                "myclustername-2pt9p-worker"
                            ],
                            "userDataSecret": {
                                "name": "worker-user-data"
                            },
                            "zone": "us-central1-a"
                        }
                    }
                }
            }
        },
        "status": {
            "availableReplicas": 1,
            "fullyLabeledReplicas": 1,
            "observedGeneration": 1,
            "readyReplicas": 1,
            "replicas": 1
        }
    }

  4. View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role.

    $ oc get nodes

    Example output

    NAME                                                             STATUS     ROLES                  AGE     VERSION
    myclustername-2pt9p-master-0.c.openshift-qe.internal             Ready      control-plane,master   8h      v1.25.4+77bec7a
    myclustername-2pt9p-master-1.c.openshift-qe.internal             Ready      control-plane,master   8h      v1.25.4+77bec7a
    myclustername-2pt9p-master-2.c.openshift-qe.internal             Ready      control-plane,master   8h      v1.25.4+77bec7a
    myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal       Ready      worker                 8h      v1.25.4+77bec7a
    myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal       Ready      worker                 8h      v1.25.4+77bec7a
    myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal       Ready      worker                 8h      v1.25.4+77bec7a
    myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal   Ready      worker                 4h35m   v1.25.4+77bec7a

  5. View the compute machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones.

    $ oc get machinesets -n openshift-machine-api

    Example output

    NAME                               DESIRED   CURRENT   READY   AVAILABLE   AGE
    myclustername-2pt9p-worker-a       1         1         1       1           8h
    myclustername-2pt9p-worker-b       1         1         1       1           8h
    myclustername-2pt9p-worker-c       1         1                             8h
    myclustername-2pt9p-worker-f       0         0                             8h

  6. View the machines that exist in the openshift-machine-api namespace by running the following command. At this point, there is only one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.

    $ oc get machines -n openshift-machine-api | grep worker

    Example output

    myclustername-2pt9p-worker-a-mxtnz       Running   n2-standard-4   us-central1   us-central1-a   8h
    myclustername-2pt9p-worker-b-9pzzn       Running   n2-standard-4   us-central1   us-central1-b   8h
    myclustername-2pt9p-worker-c-6pbg6       Running   n2-standard-4   us-central1   us-central1-c   8h

  7. Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

    $ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json  > <output_file.json>
  8. Edit the JSON file to make the following changes to the new MachineSet definition:

    • Rename the machine set by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
    • Change the machineType of the new MachineSet definition to a2-highgpu-1g, which includes an NVIDIA A100 GPU. In this example, the <output_file.json> file is saved as ocp_4.12_machineset-a2-highgpu-1g.json. To confirm the change, you can query the field:

      $ jq .spec.template.spec.providerSpec.value.machineType ocp_4.12_machineset-a2-highgpu-1g.json

      Example output

      "a2-highgpu-1g"

  9. Update the following fields in ocp_4.12_machineset-a2-highgpu-1g.json:

    • Change .metadata.name to a name containing gpu.
    • Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
    • Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g.
    • Add the following line under machineType: "onHostMaintenance": "Terminate". For example:

      "machineType": "a2-highgpu-1g",
      "onHostMaintenance": "Terminate",
  10. To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

    $ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.12_machineset-a2-highgpu-1g.json -

    Example output

    15c15
    <         "name": "myclustername-2pt9p-worker-gpu-a",
    ---
    >         "name": "myclustername-2pt9p-worker-a",
    25c25
    <                 "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
    ---
    >                 "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
    34c34
    <                     "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
    ---
    >                     "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
    59,60c59
    <                         "machineType": "a2-highgpu-1g",
    <                         "onHostMaintenance": "Terminate",
    ---
    >                         "machineType": "n2-standard-4",

  11. Create the GPU-enabled compute machine set from the definition file by running the following command:

    $ oc create -f ocp_4.12_machineset-a2-highgpu-1g.json

    Example output

    machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created

Verification

  1. View the machine set you created by running the following command:

    $ oc -n openshift-machine-api get machinesets | grep gpu

    The MachineSet replica count is set to 1 so a new Machine object is created automatically.

    Example output

    myclustername-2pt9p-worker-gpu-a   1         1         1       1           5h24m

  2. View the Machine object that the machine set created by running the following command:

    $ oc -n openshift-machine-api get machines | grep gpu

    Example output

    myclustername-2pt9p-worker-gpu-a-wxcr6   Running   a2-highgpu-1g   us-central1   us-central1-a   5h25m

Note

There is no need to specify a namespace for the node because the node definition is cluster scoped.

2.5.8. Deploying the Node Feature Discovery Operator

After the GPU-enabled node is created, you need to discover it so that workloads can be scheduled on it. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so that they can be made available to OpenShift Container Platform.

Procedure

  1. Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
  2. After installing the NFD Operator from OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
  3. Verify that the Operator is installed and running by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS     AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d

  4. Browse to the installed Operator in the console and select Create Node Feature Discovery.
  5. Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.

Verification

  1. After a successful build, verify that an NFD pod is running on each node by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                       READY   STATUS      RESTARTS        AGE
    nfd-controller-manager-8646fcbb65-x5qgk    2/2     Running     7 (8h ago)      12d
    nfd-master-769656c4cb-w9vrv                1/1     Running     0               12d
    nfd-worker-qjxb2                           1/1     Running     3 (3d14h ago)   12d
    nfd-worker-xtz9b                           1/1     Running     5 (3d14h ago)   12d

    The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.

  2. View the NVIDIA GPU discovered by the NFD Operator by running the following command:

    $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'

    Example output

    Roles: worker
    feature.node.kubernetes.io/pci-1013.present=true
    feature.node.kubernetes.io/pci-10de.present=true
    feature.node.kubernetes.io/pci-1d0f.present=true

    10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
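
    As an additional check, you can list the nodes that carry the NVIDIA PCI label directly. This assumes the NFD labels shown in the previous output:

    $ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true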

2.6. Creating a compute machine set on IBM Cloud

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.6.1. Sample YAML for a compute machine set custom resource on IBM Cloud

This sample YAML defines a compute machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role>-<region> 4
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: ibmcloud-credentials
          image: <infrastructure_id>-rhcos 11
          kind: IBMCloudMachineProviderSpec
          primaryNetworkInterface:
              securityGroups:
              - <infrastructure_id>-sg-cluster-wide
              - <infrastructure_id>-sg-openshift-net
              subnet: <infrastructure_id>-subnet-compute-<zone> 12
          profile: <instance_profile> 13
          region: <region> 14
          resourceGroup: <resource_group> 15
          userDataSecret:
              name: <role>-user-data 16
          vpc: <vpc_name> 17
          zone: <zone> 18
1 5 7
The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 3 8 9 16
The node label to add.
4 6 10
The infrastructure ID, node label, and region.
11
The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation.
12
The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify.
13
Specify the instance profile.
14
Specify the region to place machines on.
15
The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID.
17
The VPC name.
18
Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
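
To see which instance profiles are available for the profile value, you can query IBM Cloud directly. This assumes that the IBM Cloud CLI and its VPC infrastructure plugin are installed and configured:

$ ibmcloud is instance-profiles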

2.6.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
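
    After the compute machine set is available, you can change the number of machines that it manages by scaling it. The machine set name and replica count in this sketch are placeholders:

    $ oc scale machineset <machineset_name> --replicas=2 -n openshift-machine-api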

2.7. Creating a compute machine set on Nutanix

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.7.1. Sample YAML for a compute machine set custom resource on Nutanix

This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role>-<zone> 4
  namespace: openshift-machine-api
  annotations: 5
    machine.openshift.io/memoryMb: "16384"
    machine.openshift.io/vCPU: "4"
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 6
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 8
        machine.openshift.io/cluster-api-machine-role: <role> 9
        machine.openshift.io/cluster-api-machine-type: <role> 10
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 11
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          cluster:
            type: uuid
            uuid: <cluster_uuid>
          credentialsSecret:
            name: nutanix-credentials
          image:
            name: <infrastructure_id>-rhcos 12
            type: name
          kind: NutanixMachineProviderConfig
          memorySize: 16Gi 13
          subnets:
          - type: uuid
            uuid: <subnet_uuid>
          systemDiskSize: 120Gi 14
          userDataSecret:
            name: <user_data_secret> 15
          vcpuSockets: 4 16
          vcpusPerSocket: 1 17
1 6 8
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 3 9 10
Specify the node label to add.
4 7 11
Specify the infrastructure ID, node label, and zone.
5
Annotations for the cluster autoscaler.
12
Specify the image to use. Use an image from an existing default compute machine set for the cluster.
13
Specify the amount of memory for the machine in Gi.
14
Specify the size of the system disk in Gi.
15
Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set.
16
Specify the number of vCPU sockets.
17
Specify the number of vCPUs per socket.
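
If you are unsure which cluster and subnet UUIDs to use, you can copy them from an existing compute machine set that the installer created. The machine set name in this sketch is a placeholder:

$ oc get machineset <machineset_name> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.cluster.uuid}{"\n"}'
$ oc get machineset <machineset_name> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnets[0].uuid}{"\n"}'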

2.7.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

2.8. Creating a compute machine set on OpenStack

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.8.1. Sample YAML for a compute machine set custom resource on RHOSP

This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role> 4
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10
    spec:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group> 11
          kind: OpenstackProviderSpec
          networks: 12
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_id> 13
          primarySubnet: <rhosp_subnet_UUID> 14
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-worker 15
          serverMetadata:
            Name: <infrastructure_id>-worker 16
            openshiftClusterID: <infrastructure_id> 17
          tags:
          - openshiftClusterID=<infrastructure_id> 18
          trunk: true
          userDataSecret:
            name: worker-user-data 19
          availabilityZone: <optional_openstack_availability_zone>
1 5 7 13 15 16 17 18
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 3 8 9 19
Specify the node label to add.
4 6 10
Specify the infrastructure ID and node label.
11
To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended.
12
Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that contains the subnet that is used as the primarySubnet value.
14
Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.
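
If you need to look up the UUID of that subnet and the OpenStack CLI is configured for your cloud, you can query it by name. The subnet name is a placeholder:

$ openstack subnet show <subnet_name> -c id -f value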

2.8.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP

If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology.

This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.kubernetes.io/<node_role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <node_role> is the node label to add.

The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list.

Note

Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP".

An example compute machine set that uses SR-IOV networks

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_id>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
    spec:
      metadata:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group>
          kind: OpenstackProviderSpec
          networks:
            - subnets:
              - UUID: <machines_subnet_UUID>
          ports:
            - networkID: <radio_network_UUID> 1
              nameSuffix: radio
              fixedIPs:
                - subnetID: <radio_subnet_UUID> 2
              tags:
                - sriov
                - radio
              vnicType: direct 3
              portSecurity: false 4
            - networkID: <uplink_network_UUID> 5
              nameSuffix: uplink
              fixedIPs:
                - subnetID: <uplink_subnet_UUID> 6
              tags:
                - sriov
                - uplink
              vnicType: direct 7
              portSecurity: false 8
          primarySubnet: <machines_subnet_UUID>
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-<node_role>
          serverMetadata:
            Name: <infrastructure_id>-<node_role>
            openshiftClusterID: <infrastructure_id>
          tags:
          - openshiftClusterID=<infrastructure_id>
          trunk: true
          userDataSecret:
            name: <node_role>-user-data
          availabilityZone: <optional_openstack_availability_zone>

1 5
Enter a network UUID for each port.
2 6
Enter a subnet UUID for each port.
3 7
The value of the vnicType parameter must be direct for each port.
4 8
The value of the portSecurity parameter must be false for each port.

You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it.

Important

After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter:

$ oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true"
Note

Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.

You can enable trunking for each port.

Optionally, you can add tags to ports as part of their tags lists.
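
To fill in the network and subnet UUIDs for the port definitions, you can look them up with the OpenStack CLI, assuming it is configured for your cloud. The network name radio comes from this sample; repeat the commands for uplink:

$ openstack network show radio -c id -f value
$ openstack subnet list --network radio -c ID -f value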

2.8.3. Sample YAML for SR-IOV deployments where port security is disabled

To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces.

Ports that you define for machines subnets require:

  • Allowed address pairs for the API and ingress virtual IP ports
  • The compute security group
  • Attachment to the machines network and subnet
Note

Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP".

An example compute machine set that uses SR-IOV networks and has port security disabled

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_id>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          kind: OpenstackProviderSpec
          ports:
            - allowedAddressPairs: 1
              - ipAddress: <API_VIP_port_IP>
              - ipAddress: <ingress_VIP_port_IP>
              fixedIPs:
                - subnetID: <machines_subnet_UUID> 2
              nameSuffix: nodes
              networkID: <machines_network_UUID> 3
              securityGroups:
                  - <compute_security_group_UUID> 4
            - networkID: <SRIOV_network_UUID>
              nameSuffix: sriov
              fixedIPs:
                - subnetID: <SRIOV_subnet_UUID>
              tags:
                - sriov
              vnicType: direct
              portSecurity: false
          primarySubnet: <machines_subnet_UUID>
          serverMetadata:
            Name: <infrastructure_id>-<node_role>
            openshiftClusterID: <infrastructure_id>
          tags:
          - openshiftClusterID=<infrastructure_id>
          trunk: false
          userDataSecret:
            name: worker-user-data

1
Specify allowed address pairs for the API and ingress ports.
2 3
Specify the machines network and subnet.
4
Specify the compute machines security group.
Note

Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.

You can enable trunking for each port.

Optionally, you can add tags to ports as part of their tags lists.

If your cluster uses Kuryr and the RHOSP SR-IOV network has port security disabled, the primary port for compute machines must have:

  • The value of the spec.template.spec.providerSpec.value.networks.portSecurityEnabled parameter set to false.
  • For each subnet, the value of the spec.template.spec.providerSpec.value.networks.subnets.portSecurityEnabled parameter set to false.
  • The value of spec.template.spec.providerSpec.value.securityGroups set to empty: [].

An example section of a compute machine set for a cluster on Kuryr that uses SR-IOV and has port security disabled

...
          networks:
            - subnets:
              - uuid: <machines_subnet_UUID>
                portSecurityEnabled: false
              portSecurityEnabled: false
          securityGroups: []
...

In that case, you can apply the compute security group to the primary VM interface after the VM is created. For example, from a command line:

$ openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>
Important

After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter:

$ oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true"

2.8.4. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

2.9. Creating a compute machine set on RHV

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat Virtualization (RHV). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.9.1. Sample YAML for a compute machine set custom resource on RHV

This sample YAML defines a compute machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role> 4
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas> 5
  selector: 6
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9
        machine.openshift.io/cluster-api-machine-role: <role> 10
        machine.openshift.io/cluster-api-machine-type: <role> 11
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 13
      providerSpec:
        value:
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          cluster_id: <ovirt_cluster_id> 14
          template_name: <ovirt_template_name> 15
          sparse: <boolean_value> 16
          format: <raw_or_cow> 17
          cpu: 18
            sockets: <number_of_sockets> 19
            cores: <number_of_cores> 20
            threads: <number_of_threads> 21
          memory_mb: <memory_size> 22
          guaranteed_memory_mb:  <memory_size> 23
          os_disk: 24
            size_gb: <disk_size> 25
            storage_domain_id: <storage_domain_UUID> 26
          network_interfaces: 27
            vnic_profile_id:  <vnic_profile_id> 28
          credentialsSecret:
            name: ovirt-credentials 29
          kind: OvirtMachineProviderSpec
          type: <workload_type> 30
          auto_pinning_policy: <auto_pinning_policy> 31
          hugepages: <hugepages> 32
          affinityGroupsNames:
            - compute 33
          userDataSecret:
            name: worker-user-data
1 7 9
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 3 10 11 13
Specify the node label to add.
4 8 12
Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters; a quick way to check the combined length is shown after these parameter descriptions.
5
Specify the number of machines to create.
6
Selector for the machines.
14
Specify the UUID for the RHV cluster to which this VM instance belongs.
15
Specify the RHV VM template to use to create the machine.
16
Setting this option to false enables preallocation of disks. The default is true. Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk.
17
Can be set to cow or raw. The default is cow. The cow format is optimized for virtual machines.
Note

Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage.

18
Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads.
19
Optional: Specify the number of sockets for a VM.
20
Optional: Specify the number of cores per socket.
21
Optional: Specify the number of threads per core.
22
Optional: Specify the size of a VM’s memory in MiB.
23
Optional: Specify the size of a virtual machine’s guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained.
Note

If you are using a version earlier than RHV 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters.

24
Optional: Root disk of the node.
25
Optional: Specify the size of the bootable disk in GiB.
26
Optional: Specify the UUID of the storage domain for the compute node’s disks. If none is provided, the compute node is created on the same storage domain as the control nodes by default.
27
Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones.
28
Optional: Specify the vNIC profile ID.
29
Specify the name of the secret object that holds the RHV credentials.
30
Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop, server (default), high_performance. high_performance improves performance on the VM. Limitations exist; for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide.
31
Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none, resize_and_pin. For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide.
32
Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576. For more information, see Configuring Huge Pages in the Virtual Machine Management Guide.
33
Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt.
Note

Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template.
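
Because the combined <infrastructure_id>-<role> string cannot be longer than 35 characters, it can be useful to check the length before you create the compute machine set. This sketch uses worker as an example role:

$ infra_id=$(oc get -o jsonpath='{.status.infrastructureName}' infrastructure cluster)
$ echo -n "${infra_id}-worker" | wc -c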

2.9.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
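
    To also confirm that the corresponding machines were created, you can list the Machine resources in the openshift-machine-api namespace. This is a standard query and does not assume any values beyond those already used in this procedure:

    $ oc get machines -n openshift-machine-api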

2.10. Creating a compute machine set on vSphere

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.10.1. Sample YAML for a compute machine set custom resource on vSphere

This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 120
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: "<vm_network_name>" 10
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: <vm_template_name> 11
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: <vcenter_datacenter_name> 12
            datastore: <vcenter_datastore_name> 13
            folder: <vcenter_vm_folder_path> 14
            resourcepool: <vsphere_resource_pool> 15
            server: <vcenter_server_ip> 16
1 3 5
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 4 8
Specify the infrastructure ID and node label.
6 7 9
Specify the node label to add.
10
Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster.
11
Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
12
Specify the vCenter Datacenter to deploy the compute machine set on.
13
Specify the vCenter Datastore to deploy the compute machine set on.
14
Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
15
Specify the vSphere resource pool for your VMs.
16
Specify the vCenter server IP or fully qualified domain name.
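
Before you substitute the workspace values, you can confirm that the network, datastore, and folder names exist in your vCenter inventory. The following is a minimal sketch that uses the community govc CLI, which is not part of OpenShift Container Platform; the environment variable values and inventory paths are placeholders for your own values.

$ export GOVC_URL='<vcenter_server_ip>'
$ export GOVC_USERNAME='<openshift-user>'
$ export GOVC_PASSWORD='<openshift-user-password>'
$ export GOVC_INSECURE=1    # only if vCenter uses a self-signed certificate
$ # List the networks, datastores, and VM folders in the datacenter that the
$ # providerSpec references.
$ govc ls /<vcenter_datacenter_name>/network
$ govc ls /<vcenter_datacenter_name>/datastore
$ govc ls /<vcenter_datacenter_name>/vm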

2.10.2. Minimum required vCenter privileges for compute machine set management

To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.

If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster.

Example 2.1. Minimum vCenter roles and privileges required for compute machine set management

vSphere object for role | When required | Required privileges

vSphere vCenter

Always

InventoryService.Tagging.AttachTag
InventoryService.Tagging.CreateCategory
InventoryService.Tagging.CreateTag
InventoryService.Tagging.DeleteCategory
InventoryService.Tagging.DeleteTag
InventoryService.Tagging.EditCategory
InventoryService.Tagging.EditTag
Sessions.ValidateSession
StorageProfile.Update [1]
StorageProfile.View [1]

vSphere vCenter Cluster

Always

Resource.AssignVMToPool

vSphere Datastore

Always

Datastore.AllocateSpace
Datastore.Browse

vSphere Port Group

Always

Network.Assign

Virtual Machine Folder

Always

VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.Memory
VirtualMachine.Config.Settings
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

Resource.AssignVMToPool
VirtualMachine.Provisioning.DeployTemplate

[1] The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI).

The following table details the permissions and propagation settings that are required for compute machine set management.

Example 2.2. Required permissions and propagation settings

vSphere object | When required | Propagate to children | Permissions required

vSphere vCenter

Always

Not required

Listed required privileges

vSphere vCenter Datacenter

Existing folder

Not required

ReadOnly permission

Installation program creates the folder

Required

Listed required privileges

vSphere vCenter Cluster

Always

Required

Listed required privileges

vSphere vCenter Datastore

Always

Not required

Listed required privileges

vSphere Switch

Always

Not required

ReadOnly permission

vSphere Port Group

Always

Not required

Listed required privileges

vSphere vCenter Virtual Machine Folder

Existing folder

Required

Listed required privileges

For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
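
If you prefer to script the role creation, the following is a minimal sketch that uses the community govc CLI, which is not part of OpenShift Container Platform. The role name and the privilege subset shown are illustrative; extend the privilege list to match the full tables above for each vSphere object type.

$ # Create a custom role that carries vCenter-level privileges from Example 2.1.
$ govc role.create openshift-machineset-vcenter \
    InventoryService.Tagging.AttachTag \
    InventoryService.Tagging.CreateCategory \
    InventoryService.Tagging.CreateTag \
    Sessions.ValidateSession
$ # Grant the role to the OpenShift Container Platform service account on the
$ # vCenter root object without propagation, as listed in Example 2.2.
$ govc permissions.set -principal '<openshift-user>' \
    -role openshift-machineset-vcenter -propagate=false /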

2.10.3. Requirements for clusters with user-provisioned infrastructure to use compute machine sets

To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API.

Obtaining the infrastructure ID

To create compute machine sets, you must be able to supply the infrastructure ID for your cluster.

Procedure

  • To obtain the infrastructure ID for your cluster, run the following command:

    $ oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'
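
    If you script the creation of compute machine set manifests, you can capture this value in a shell variable and substitute it into the sample YAML. The following is a minimal sketch; the template file name is an assumption for illustration.

    $ INFRA_ID=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
    $ # Replace every <infrastructure_id> placeholder in a local copy of the sample YAML.
    $ sed "s/<infrastructure_id>/${INFRA_ID}/g" machineset-template.yaml > machineset.yaml
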
Satisfying vSphere credentials requirements

To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace.

Procedure

  1. To determine whether the required credentials exist, run the following command:

    $ oc get secret \
      -n openshift-machine-api vsphere-cloud-credentials \
      -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

    Sample output

    <vcenter-server>.password: <openshift-user-password>
    <vcenter-server>.username: <openshift-user>

    where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use.

  2. If the secret does not exist, create it by running the following command:

    $ oc create secret generic vsphere-cloud-credentials \
      -n openshift-machine-api \
      --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>
Satisfying Ignition configuration requirements

Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator.

By default, this configuration is stored in the worker-user-data secret in the machine-api-operator namespace. Compute machine sets reference the secret during the machine creation process.

Procedure

  1. To determine whether the required secret exists, run the following command:

    $ oc get secret \
      -n openshift-machine-api worker-user-data \
      -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

    Sample output

    disableTemplating: false
    userData: 1
      {
        "ignition": {
          ...
          },
        ...
      }

    1
    The full output is omitted here, but should have this format.
  2. If the secret does not exist, create it by running the following command:

    $ oc create secret generic worker-user-data \
      -n openshift-machine-api \
      --from-file=<installation_directory>/worker.ign

    where <installation_directory> is the directory that was used to store your installation assets during cluster installation.

2.10.4. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Note

Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program. As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.
  • Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified.
  • If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
    3. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values:

      Example vSphere providerSpec values

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      ...
      template:
        ...
        spec:
          providerSpec:
            value:
              apiVersion: machine.openshift.io/v1beta1
              credentialsSecret:
                name: vsphere-cloud-credentials 1
              diskGiB: 120
              kind: VSphereMachineProviderSpec
              memoryMiB: 16384
              network:
                devices:
                  - networkName: "<vm_network_name>"
              numCPUs: 4
              numCoresPerSocket: 4
              snapshot: ""
              template: <vm_template_name> 2
              userDataSecret:
                name: worker-user-data 3
              workspace:
                datacenter: <vcenter_datacenter_name>
                datastore: <vcenter_datastore_name>
                folder: <vcenter_vm_folder_path>
                resourcepool: <vsphere_resource_pool>
                server: <vcenter_server_address> 4

      1
      The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials.
      2
      The name of the RHCOS VM template for your cluster that was created during installation.
      3
      The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials.
      4
      The IP address or fully qualified domain name (FQDN) of the vCenter server.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
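
    When the machines are provisioned and join the cluster, you can also confirm that the new nodes carry the expected role label. The label key matches the node-role.kubernetes.io/<role> label from the sample YAML:

    $ oc get nodes -l node-role.kubernetes.io/<role>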

2.11. Creating a compute machine set on bare metal

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

Important

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

2.11.1. Sample YAML for a compute machine set custom resource on bare metal

This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          apiVersion: baremetal.cluster.k8s.io/v1alpha1
          hostSelector: {}
          image:
            checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10
            url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11
          kind: BareMetalMachineProviderSpec
          metadata:
            creationTimestamp: null
          userData:
            name: worker-user-data
1 3 5
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 4 8
Specify the infrastructure ID and node label.
6 7 9
Specify the node label to add.
10
Edit the checksum URL to use the API VIP address.
11
Edit the url URL to use the API VIP address.
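
For example, after you edit both values, the image section might look like the following, where <api_vip> is the API virtual IP address for your cluster. Only the host changes in this sketch; the port and file name placeholders mirror the sample above and might differ in your environment.

          image:
            checksum: http://<api_vip>:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum>
            url: http://<api_vip>:6181/images/rhcos-<version>.<architecture>.qcow2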

2.11.2. Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m

    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
        name: <infrastructure_id>-<role> 2
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: 3
              ...

      1
      The cluster infrastructure ID.
      2
      A default node label.
      Note

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3
      The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
