Chapter 2. Managing compute machines with the Machine API
2.1. Creating a compute machine set on Alibaba Cloud
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Alibaba Cloud. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
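For example, the command returns AWS for a cluster that is installed on AWS. A cluster with the platform type none returns None and cannot use the Machine API.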
2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud
This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructure_id>-<role>-<zone> 4
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          credentialsSecret:
            name: alibabacloud-credentials
          imageId: <image_id> 11
          instanceType: <instance_type> 12
          kind: AlibabaCloudMachineProviderConfig
          ramRoleName: <infrastructure_id>-role-worker 13
          regionId: <region> 14
          resourceGroup: 15
            id: <resource_group_id>
            type: ID
          securityGroups:
          - tags: 16
            - Key: Name
              Value: <infrastructure_id>-sg-<role>
            type: Tags
          systemDisk: 17
            category: cloud_essd
            size: <disk_size>
          tag: 18
          - Key: kubernetes.io/cluster/<infrastructure_id>
            Value: owned
          userDataSecret:
            name: <user_data_secret> 19
          vSwitch:
            tags: 20
            - Key: Name
              Value: <infrastructure_id>-vswitch-<zone>
            type: Tags
          vpcId: ""
          zoneId: <zone> 21
- 1 5 7
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9
- Specify the node label to add.
- 4 6 10
- Specify the infrastructure ID, node label, and zone.
- 11
- Specify the image to use. Use an image from an existing default compute machine set for the cluster.
- 12
- Specify the instance type you want to use for the compute machine set.
- 13
- Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set.
- 14
- Specify the region to place machines on.
- 15
- Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one.
- 16 18 20
- Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed.
- 17
- Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size.
- 19
- Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set.
- 21
- Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
2.1.1.1. Machine set parameters for Alibaba Cloud usage statistics
The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups, tag, and vSwitch parameters of the spec.template.spec.providerSpec.value list.
When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed.
The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required.
Tags in spec.template.spec.providerSpec.value.securityGroups
spec:
  template:
    spec:
      providerSpec:
        value:
          securityGroups:
          - tags:
            - Key: kubernetes.io/cluster/<infrastructure_id> 1
              Value: owned
            - Key: GISV
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2
              Value: ocp
            - Key: Name
              Value: <infrastructure_id>-sg-<role> 3
            type: Tags
Tags in spec.template.spec.providerSpec.value.tag
spec:
  template:
    spec:
      providerSpec:
        value:
          tag:
          - Key: kubernetes.io/cluster/<infrastructure_id> 1
            Value: owned
          - Key: GISV 2
            Value: ocp
          - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3
            Value: ocp
Tags in spec.template.spec.providerSpec.value.vSwitch
spec:
  template:
    spec:
      providerSpec:
        value:
          vSwitch:
            tags:
            - Key: kubernetes.io/cluster/<infrastructure_id> 1
              Value: owned
            - Key: GISV 2
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3
              Value: ocp
            - Key: Name
              Value: <infrastructure_id>-vswitch-<zone> 4
            type: Tags
2.1.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
    -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: 3
        ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.

Note

For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.

Create a MachineSet CR by running the following command:

$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.2. Creating a compute machine set on AWS
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.2.1. Sample YAML for a compute machine set custom resource on AWS
This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role>-<zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 10
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 120
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile 11
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> 12
            region: <region> 13
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-worker-sg 14
          subnet:
            filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-private-<zone> 15
          tags:
          - name: kubernetes.io/cluster/<infrastructure_id> 16
            value: owned
          - name: <custom_tag_name> 17
            value: <custom_tag_value> 18
          userDataSecret:
            name: worker-user-data
- 1 3 5 11 14 16
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8
- Specify the infrastructure ID, role node label, and zone.
- 6 7 9
- Specify the role node label to add.
- 10
- Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. You can view the AMI ID that an existing machine set uses by running the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
    get machineset/<infrastructure_id>-<role>-<zone>
- 17 18
- Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.

Note

Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file.

- 12
- Specify the zone, for example, us-east-1a.
- 13
- Specify the region, for example, us-east-1.
- 15
- Specify the infrastructure ID and zone.
2.2.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
    -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: 3
        ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.

Note

For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.

Create a MachineSet CR by running the following command:

$ oc create -f <file_name>.yaml
- If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets.
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.2.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: machine-set-name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.

Note

You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
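For reference, a minimal ClusterAutoscaler CR that consumes this label might look like the following sketch. The min and max values are illustrative placeholders; set limits that match your own capacity planning.

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the cluster-api/accelerator label value
        min: 0
        max: 4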
Additional resources
2.2.4. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets
You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group.
EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group.
Prerequisites
You created a placement group in the AWS console.

Note

Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case.
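If you use the AWS CLI, you can list the placement groups in your account to confirm the name to supply later for the placementGroupName parameter. This is a hedged example; the query fields assume the standard describe-placement-groups output:

$ aws ec2 describe-placement-groups \
    --query 'PlacementGroups[].{Name:GroupName,Strategy:Strategy,State:State}' \
    --output table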
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following lines under the providerSpec field:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      providerSpec:
        value:
          instanceType: <supported_instance_type> 1
          networkInterfaceType: EFA 2
          placement:
            availabilityZone: <zone> 3
            region: <region> 4
          placementGroupName: <placement_group> 5
# ...
Verification
In the AWS console, find a machine that the machine set created and verify the following in the machine properties:
- The placement group field has the value that you specified for the placementGroupName parameter in the machine set.
- The interface type field indicates that it uses an EFA.
2.2.5. Machine set options for the Amazon EC2 Instance Metadata Service
You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.
Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later.
To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up.
Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.
2.2.5.1. Configuring IMDS by using machine sets
You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.
Prerequisites
- To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later.
Procedure
Add or edit the following lines under the providerSpec field:

providerSpec:
  value:
    metadataServiceOptions:
      authentication: Required 1
- 1
- To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
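After the machine set is updated, you can confirm the configured value with a jsonpath query. This is a hedged example that assumes a machine set named <machineset_name>:

$ oc -n openshift-machine-api get machineset <machineset_name> \
    -o jsonpath='{.spec.template.spec.providerSpec.value.metadataServiceOptions.authentication}{"\n"}'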
2.2.6. Machine sets that deploy machines as Dedicated Instances
You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.
Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.
2.2.6.1. Creating Dedicated Instances by using machine sets
You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.
Procedure
Specify a dedicated tenancy under the providerSpec field:

providerSpec:
  placement:
    tenancy: dedicated
2.2.7. Machine sets that deploy machines as Spot Instances
You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning.
Interruptions can occur when using Spot Instances for the following reasons:
- The instance price exceeds your maximum price
- The demand for Spot Instances increases
- The supply of Spot Instances decreases
When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance.
2.2.7.1. Creating Spot Instances by using compute machine sets
You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file.
Procedure
Add the following line under the providerSpec field:

providerSpec:
  value:
    spotMarketOptions: {}
You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example, you can set maxPrice: '2.50'.
If maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price.

Note

It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances.
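For illustration only, a machine set that sets an explicit hourly cap, against the recommendation above, would include a stanza like the following. The price is a placeholder:

providerSpec:
  value:
    spotMarketOptions:
      maxPrice: '2.50'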
2.2.8. Adding a GPU node to an existing OpenShift Container Platform cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider.
For more information about the supported instance types, see the following NVIDIA documentation:
Procedure
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role.
$ oc get nodes
Example output
NAME                                        STATUS   ROLES                  AGE     VERSION
ip-10-0-52-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.27.3
ip-10-0-58-24.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.27.3
ip-10-0-68-148.us-east-2.compute.internal   Ready    worker                 3d17h   v1.27.3
ip-10-0-68-68.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.27.3
ip-10-0-72-170.us-east-2.compute.internal   Ready    control-plane,master   3d17h   v1.27.3
ip-10-0-74-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.27.3
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones.

$ oc get machinesets -n openshift-machine-api
Example output
NAME                                        DESIRED   CURRENT   READY   AVAILABLE   AGE
preserve-dsoc12r4-ktjfc-worker-us-east-2a   1         1         1       1           3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b   2         2         2       2           3d11h
View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone.

$ oc get machines -n openshift-machine-api | grep worker
Example output
preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r   Running   m5.xlarge   us-east-2   us-east-2a   3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w   Running   m5.xlarge   us-east-2   us-east-2b   3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw   Running   m5.xlarge   us-east-2   us-east-2b   3d11h
Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

$ oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>
Edit the JSON file and make the following changes to the new MachineSet definition:

- Replace worker with gpu. This will be the name of the new machine set.
- Change the instance type of the new MachineSet definition to g4dn, which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing.

$ jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json

"g4dn.xlarge"

The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json.

Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json (a jq sketch that applies all four edits in one step follows the diff output below):

- .metadata.name to a name containing gpu.
- .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
- .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
- .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge.
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

$ oc -n openshift-machine-api get machineset/preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -
Example output
10c10
<   "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a",
---
>   "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a",
21c21
<   "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
---
>   "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
31c31
<   "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
---
>   "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
60c60
<   "instanceType": "g4dn.xlarge",
---
>   "instanceType": "m5.xlarge",
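If you prefer to apply the four field updates non-interactively, the following jq sketch makes the same edits in one step. It is a hedged example that assumes jq is installed and that the file names match those used above; adjust the machine set name for your cluster.

$ jq '.metadata.name = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
      | .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
      | .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
      | .spec.template.spec.providerSpec.value.instanceType = "g4dn.xlarge"' \
      <output_file.json> > preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json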
Create the GPU-enabled compute machine set from the definition by running the following command:
$ oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json
Example output
machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created
Verification
View the machine set you created by running the following command:
$ oc -n openshift-machine-api get machinesets | grep gpu
The MachineSet replica count is set to 1, so a new Machine object is created automatically.

Example output

preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a   1   1   1   1   4m21s
View the Machine object that the machine set created by running the following command:

$ oc -n openshift-machine-api get machines | grep gpu
Example output
preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s
Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.
2.2.9. Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
Procedure
- Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
- After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
- Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME                                      READY   STATUS    RESTARTS     AGE
nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d
- Browse to the installed Operator in the console and select Create Node Feature Discovery.
- Select Create to build a NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
Verification
After a successful build, verify that a NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME                                      READY   STATUS    RESTARTS        AGE
nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)      12d
nfd-master-769656c4cb-w9vrv               1/1     Running   0               12d
nfd-worker-qjxb2                          1/1     Running   3 (3d14h ago)   12d
nfd-worker-xtz9b                          1/1     Running   5 (3d14h ago)   12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Example output
Roles: worker

feature.node.kubernetes.io/pci-1013.present=true
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-1d0f.present=true
10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
2.3. Creating a compute machine set on Azure
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.3.1. Sample YAML for a compute machine set custom resource on Azure
This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role>
  name: <infrastructure_id>-<role>-<region> 3
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name>
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image: 4
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> 6
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          tags:
          - name: <custom_tag_name> 7
            value: <custom_tag_value>
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet
          zone: "1" 8
- 1
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
You can obtain the subnet by running the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1

You can obtain the vnet by running the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
    get machineset/<infrastructure_id>-worker-centralus1
- 2
- Specify the node label to add.
- 3
- Specify the infrastructure ID, node label, and region.
- 4
- Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image".
- 5
- Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
- 6
- Specify the region to place machines on.
- 7
- Optional: Specify custom tags in your machine set. Provide the tag name in the <custom_tag_name> field and the corresponding tag value in the <custom_tag_value> field.
- 8
- Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
2.3.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
    -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: 3
        ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.

Note

For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.

Create a MachineSet CR by running the following command:

$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.3.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: machine-set-name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.

Note

You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
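As with the AWS example, a minimal ClusterAutoscaler CR that consumes this label might look like the following sketch. The min and max values are illustrative placeholders; set limits that match your own capacity planning.

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the cluster-api/accelerator label value
        min: 0
        max: 4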
Additional resources
2.3.4. Using the Azure Marketplace offering
You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:
- While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
- The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances.
Prerequisites
- You have installed the Azure CLI client (az).
- Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
Procedure
Display all of the available OpenShift Container Platform images by running one of the following commands:
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Example output
Offer          Publisher   Sku                 Urn                                                         Version
-------------  ----------  ------------------  ----------------------------------------------------------  -----------------
rh-ocp-worker  RedHat      rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700        413.92.2023101700
rh-ocp-worker  RedHat      rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700   413.92.2023101700
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Example output
Offer          Publisher       Sku                 Urn                                                                 Version
-------------  --------------  ------------------  ------------------------------------------------------------------  -----------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700        413.92.2023101700
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700   413.92.2023101700
Note

Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process.
Inspect the image for your offer by running one of the following commands:
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Review the terms of the offer by running one of the following commands:
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Accept the terms of the offering by running one of the following commands:
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
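To confirm that the terms were accepted, you can re-run the terms show command and check the accepted field. This is a hedged example for the North America publisher:

$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> --query accepted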
- Record the image details of your offer, specifically the values for publisher, offer, sku, and version.

Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:

Sample providerSpec image values for Azure Marketplace machines

providerSpec:
  value:
    image:
      offer: rh-ocp-worker
      publisher: redhat
      resourceID: ""
      sku: rh-ocp-worker
      type: MarketplaceWithPlan
      version: 413.92.2023101700
2.3.5. Enabling Azure boot diagnostics
You can enable boot diagnostics on Azure machines that your machine set creates.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

For an Azure Managed storage account:

providerSpec:
  diagnostics:
    boot:
      storageAccountType: AzureManaged 1
- 1
- Specifies an Azure Managed storage account.
For an Azure Unmanaged storage account:

providerSpec:
  diagnostics:
    boot:
      storageAccountType: CustomerManaged 1
      customerManaged:
        storageAccountURI: https://<storage-account>.blob.core.windows.net 2

Note

Only the Azure Blob Storage data service is supported.
Verification
- On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
2.3.6. Machine sets that deploy machines as Spot VMs
You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning.
Interruptions can occur when using Spot VMs for the following reasons:
- The instance price exceeds your maximum price
- The supply of Spot VMs decreases
- Azure needs capacity back
When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM.
2.3.6.1. Creating Spot VMs by using compute machine sets
You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file.
Procedure
Add the following line under the providerSpec field:

providerSpec:
  value:
    spotVMOptions: {}
You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example, you can set maxPrice: '0.98765'. If maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price.

Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice. However, an instance can still be evicted due to capacity restrictions.
It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs.
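For illustration only, a machine set that sets an explicit hourly cap, against the recommendation above, would include a stanza like the following. The price is a placeholder:

providerSpec:
  value:
    spotVMOptions:
      maxPrice: '0.98765'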
2.3.7. Machine sets that deploy machines on Ephemeral OS disks
You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging.
Additional resources
- For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs.
2.3.7.1. Creating machines on Ephemeral OS disks by using compute machine sets
You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Edit the custom resource (CR) by running the following command:
$ oc edit machineset <machine-set-name>
where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks.

Add the following to the providerSpec field:

providerSpec:
  value:
    ...
    osDisk:
      ...
      diskSettings: 1
        ephemeralStorageLocation: Local 2
      cachingType: ReadOnly 3
      managedDisk:
        storageAccountType: Standard_LRS 4
    ...
Important

The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting.

Create a compute machine set using the updated configuration:
$ oc create -f <machine-set-config>.yaml
Verification
- On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement.
2.3.8. Machine sets that deploy machines with ultra disks as data disks
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.
You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods.
Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs.
Additional resources
2.3.8.1. Creating machines with ultra disks by using machine sets
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster.
Procedure
Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command:

$ oc -n openshift-machine-api \
    get secret <role>-user-data \ 1
    --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2
In a text editor, open the userData.txt file and locate the final } character in the file.

- On the immediately preceding line, add a ,.
- Create a new line after the , and add the following configuration details:

"storage": {
  "disks": [ 1
    {
      "device": "/dev/disk/azure/scsi1/lun0", 2
      "partitions": [ 3
        {
          "label": "lun0p1", 4
          "sizeMiB": 1024, 5
          "startMiB": 0
        }
      ]
    }
  ],
  "filesystems": [ 6
    {
      "device": "/dev/disk/by-partlabel/lun0p1",
      "format": "xfs",
      "path": "/var/lib/lun0p1"
    }
  ]
},
"systemd": {
  "units": [ 7
    {
      "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8
      "enabled": true,
      "name": "var-lib-lun0p1.mount"
    }
  ]
}
- 1
- The configuration details for the disk that you want to attach to a node as an ultra disk.
- 2
- Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set.
- 3
- The configuration details for a new partition on the disk.
- 4
- Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0.
- 5
- Specify the total size in MiB of the partition.
- 6
- Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition.
- 7
- Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each.
- 8
- For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device.
Extract the disabling template value to a file called disableTemplating.txt by running the following command:

$ oc -n openshift-machine-api get secret <role>-user-data \ 1
    --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt

- 1
- Replace <role> with worker.
Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command:

$ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1
    --from-file=userData=userData.txt \
    --from-file=disableTemplating=disableTemplating.txt

- 1
- For <role>-user-data-x5, specify the name of the secret. Replace <role> with worker.
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:

$ oc edit machineset <machine-set-name>

where <machine-set-name> is the machine set that you want to provision machines with ultra disks.

Add the following lines in the positions indicated:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd 1
      providerSpec:
        value:
          ultraSSDCapability: Enabled 2
          dataDisks: 3
          - nameSuffix: ultrassd
            lun: 0
            diskSizeGB: 4
            deletionPolicy: Delete
            cachingType: None
            managedDisk:
              storageAccountType: UltraSSD_LRS
          userDataSecret:
            name: <role>-user-data-x5 4
Create a machine set using the updated configuration by running the following command:
$ oc create -f <machine-set-name>.yaml
Verification
Validate that the machines are created by running the following command:
$ oc get machines
The machines should be in the Running state.

For a machine that is running and has a node attached, validate the partition by running the following command:
$ oc debug node/<node-name> -- chroot /host lsblk
In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
Next steps
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
  - name: ssd-benchmark1
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - name: lun0p1
      mountPath: "/tmp"
  volumes:
  - name: lun0p1
    hostPath:
      path: /var/lib/lun0p1
      type: DirectoryOrCreate
  nodeSelector:
    disk: ultrassd
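To run the workload, save the manifest and create the pod; the file name here is a placeholder:

$ oc create -f <pod_file_name>.yaml

You can then confirm that the pod was scheduled on a node with the ultra disk by running oc get pod ssd-benchmark1 -o wide.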
2.3.8.2. Troubleshooting resources for machine sets that enable ultra disks
Use the information in this section to understand and recover from issues you might encounter.
2.3.8.2.1. Incorrect ultra disk configuration
If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.
For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
- To resolve this issue, verify that your machine set configuration is correct.
2.3.8.2.2. Unsupported disk parameters
If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:
failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."
- To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.
2.3.8.2.3. Unable to delete disks
If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired.
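One way to locate orphaned disks, assuming that the Azure CLI is installed and you are logged in to the correct subscription, is to list unattached managed disks. This query is a hedged example:

$ az disk list --query "[?diskState=='Unattached'].name" --output tsv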
2.3.9. Enabling customer-managed encryption keys for a machine set
You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.
An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. Otherwise, you must grant an additional reader role on the disk encryption set.
Prerequisites
Procedure
Configure the disk encryption set under the
providerSpec
field in your machine set YAML file. For example:providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS
Additional resources
2.3.10. Configuring trusted launch for Azure virtual machines by using machine sets
Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform 4.14 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.
Some feature combinations result in an invalid configuration.
Secure Boot[1] | vTPM[2] | Valid configuration |
---|---|---|
Enabled | Enabled | Yes |
Enabled | Disabled | Yes |
Enabled | Omitted | Yes |
Disabled | Enabled | Yes |
Omitted | Enabled | Yes |
Disabled | Disabled | No |
Omitted | Disabled | No |
Omitted | Omitted | No |
1. Using the secureBoot field.
2. Using the virtualizedTrustedPlatformModule field.
For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines.
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the
providerSpec
field to provide a valid configuration:Sample valid configuration with UEFI Secure Boot and vTPM enabled
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ...
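- 1
- Enables the use of trusted launch features. This value is required for all valid configurations.
- 2
- Specifies which UEFI security features to use.
- 3
- Enables UEFI Secure Boot.
- 4
- Enables the use of the vTPM.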
Verification
- On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured.
2.3.11. Configuring Azure confidential virtual machines by using machine sets
Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform 4.14 supports Azure confidential virtual machines (VMs).
Confidential VMs are currently not supported on 64-bit ARM architectures.
By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.
For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines.
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the
providerSpec
field:Sample configuration
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ...
- 1
- Specifies security profile settings for the managed disk when using a confidential VM.
- 2
- Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM.
- 3
- Specifies security profile settings for the confidential VM.
- 4
- Enables the use of confidential VMs. This value is required for all valid configurations.
- 5
- Specifies which UEFI security features to use. This section is required for all valid configurations.
- 6
- Disables UEFI Secure Boot.
- 7
- Enables the use of a vTPM.
- 8
- Specifies an instance type that supports confidential VMs.
Verification
- On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.
2.3.12. Accelerated Networking for Microsoft Azure VMs
Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation.
2.3.12.1. Limitations
Consider the following limitations when deciding whether to use Accelerated Networking:
- Accelerated Networking is only supported on clusters where the Machine API is operational.
Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of
vmSize
in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.
- When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.
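For example, you can replace the machines in a compute machine set by scaling it down and back up, where <machine_set_name> and the final replica count are placeholders for your environment:
$ oc scale --replicas=0 machineset <machine_set_name> -n openshift-machine-api
$ oc scale --replicas=2 machineset <machine_set_name> -n openshift-machine-api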
2.3.13. Configuring Capacity Reservation by using machine sets
OpenShift Container Platform version 4.14.35 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters.
You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds.
For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation.
You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the previous machine set deployed.
Prerequisites
-
You have access to the cluster with
cluster-admin
privileges. -
You installed the OpenShift CLI (
oc
).
-
You created a Capacity Reservation group.
For more information, see the Microsoft Azure documentation Create a Capacity Reservation.
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the
providerSpec
field:Sample configuration
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ...
- 1
- Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on.
Verification
To verify machine deployment, list the machines that the machine set created by running the following command:
$ oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
where
<machine_set_name>
is the name of the compute machine set. In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation.
2.3.14. Adding a GPU node to an existing OpenShift Container Platform cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider.
The following table lists the validated instance types:
vmSize | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture |
---|---|---|---|
| V100 | 4 | x86 |
Standard_NC4as_T4_v3 | T4 | 1 | x86 |
| A100 | 8 | x86 |
By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above.
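Before requesting a quota increase, you can review the current usage and limits for your region with the Azure CLI. The following command is a sketch; the region value is a placeholder:
$ az vm list-usage --location <region> -o table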
Procedure
View the machines and machine sets that exist in the
openshift-machine-api
namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m
Make a copy of one of the existing compute
MachineSet
definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.$ oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml
View the content of the machineset:
$ cat machineset-azure.yaml
Example
machineset-azure.yaml
fileapiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "0" machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T14:08:19Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: "23601" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1
Make a copy of the
machineset-azure.yaml
file by running the following command:$ cp machineset-azure.yaml machineset-azure-gpu.yaml
Update the following fields in
machineset-azure-gpu.yaml
:-
Change
.metadata.name
to a name containinggpu
. -
Change
.spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"]
to match the new .metadata.name. -
Change
.spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"]
to match the new.metadata.name
. Change
.spec.template.spec.providerSpec.value.vmSize
toStandard_NC4as_T4_v3
.Example
machineset-azure-gpu.yaml
fileapiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "1" machine.openshift.io/memoryMb: "28672" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T20:27:12Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: "166285" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1
-
Change
To verify your changes, perform a
diff
of the original compute definition and the new GPU-enabled node definition by running the following command:$ diff machineset-azure.yaml machineset-azure-gpu.yaml
Example output
14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3
Create the GPU-enabled compute machine set from the definition file by running the following command:
$ oc create -f machineset-azure-gpu.yaml
Example output
machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created
View the machines and machine sets that exist in the
openshift-machine-api
namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h
View the machines that exist in the
openshift-machine-api
namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.$ oc get machines -n openshift-machine-api
Example output
NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role.
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.27.3 myclustername-master-1 Ready control-plane,master 6h41m v1.27.3 myclustername-master-2 Ready control-plane,master 6h39m v1.27.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.27.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.27.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.27.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.27.3
Verification
View the machine set you created by running the following command:
$ oc get machineset -n openshift-machine-api | grep gpu
The MachineSet replica count is set to
1
so a newMachine
object is created automatically.
Example output
myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m
View the
Machine
object that the machine set created by running the following command:$ oc -n openshift-machine-api get machines | grep gpu
Example output
myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m
There is no need to specify a namespace for the node. The node definition is cluster scoped.
2.3.15. Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
Procedure
- Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
-
After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the
nfd-master
andnfd-worker
pods, onenfd-worker
pod for each compute node, in theopenshift-nfd
namespace. Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d
- Browse to the installed Operator in the console and select Create Node Feature Discovery.
-
Select Create to build an NFD custom resource. This creates NFD pods in the
openshift-nfd
namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
Verification
After a successful build, verify that an NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID
10de
. View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Example output
Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true
10de
appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
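For example, to list only the nodes that NFD labeled with the NVIDIA PCI vendor ID, you can select on that feature label:
$ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true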
Additional resources
2.3.15.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster
You can enable Accelerated Networking on Azure by adding acceleratedNetworking
to your machine set YAML file.
Prerequisites
- Have an existing Microsoft Azure cluster where the Machine API is operational.
Procedure
Add the following to the
providerSpec
field:providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2
- 1
- This line enables Accelerated Networking.
- 2
- Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.
Next steps
- To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.
Verification
-
On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the
Accelerated networking
field is set toEnabled
.
Additional resources
2.4. Creating a compute machine set on Azure Stack Hub
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.4.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub
This sample YAML defines a compute machine set that runs in the 1
Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21
- 1 5 7 13 15 16 17 20
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
You can obtain the subnet by running the following command:
$ oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1
You can obtain the vnet by running the following command:
$ oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1
- 2 3 8 9 11 18 19
- Specify the node label to add.
- 4 6 10
- Specify the infrastructure ID, node label, and region.
- 14
- Specify the region to place machines on.
- 21
- Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
- 12
- Specify the availability set for the cluster.
2.4.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user withcluster-admin
permission. - Create an availability set in which to deploy Azure Stack Hub compute machines.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
. Ensure that you set the
<availabilitySet>
,<clusterID>
, and<role>
parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note:
For clusters that have user-provisioned infrastructure, a compute machine set can only create
worker
andinfra
type machines. - 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the
DESIRED
andCURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.4.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a
cluster-api/accelerator
label:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters,
-
,_
, or.
and starts and ends with an alphanumeric character. For example, you might usenvidia-t4
to represent Nvidia T4 GPUs, ornvidia-a10g
for A10G GPUs.
Note: You must specify the value of this label for the
spec.resourceLimits.gpus.type
parameter in yourClusterAutoscaler
CR. For more information, see "Cluster autoscaler resource definition".
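The following is a minimal sketch of the corresponding ClusterAutoscaler resource limits, assuming the nvidia-t4 label value from the previous example; the minimum and maximum counts are illustrative:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
    - type: nvidia-t4 # must match the cluster-api/accelerator label value
      min: 0
      max: 4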
Additional resources
2.4.4. Enabling Azure boot diagnostics
You can enable boot diagnostics on Azure machines that your machine set creates.
Prerequisites
- Have an existing Microsoft Azure Stack Hub cluster.
Procedure
Add the
diagnostics
configuration that is applicable to your storage type to theproviderSpec
field in your machine set YAML file:For an Azure Managed storage account:
providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1
- 1
- Specifies an Azure Managed storage account.
For an Azure Unmanaged storage account:
providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2
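- 1
- Specifies an Azure Unmanaged storage account.
- 2
- Specifies the URI of the storage account where the boot diagnostics data is stored.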
Note: Only the Azure Blob Storage data service is supported.
Verification
- On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
2.4.5. Enabling customer-managed encryption keys for a machine set
You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.
An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. Otherwise, you must grant an additional reader role on the disk encryption set.
Prerequisites
Procedure
Configure the disk encryption set under the
providerSpec
field in your machine set YAML file. For example:providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS
Additional resources
2.5. Creating a compute machine set on GCP
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.5.1. Sample YAML for a compute machine set custom resource on GCP
This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
, where <role>
is the node label to add.
Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
- Infrastructure ID
The
<infrastructure_id>
string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- Image path
The
<path_to_image>
string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:$ oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a
Sample GCP MachineSet
values
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a
- 1
- For
<infrastructure_id>
, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. - 2
- For
<node>
, specify the node label to add. - 3
- Specify the path to the image that is used in current compute machine sets.
To use a GCP Marketplace image, specify the offer to use:
-
OpenShift Container Platform:
https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736
-
OpenShift Platform Plus:
https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736
-
OpenShift Kubernetes Engine:
https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736
-
OpenShift Container Platform:
- 4
- Optional: Specify custom metadata in the form of a
key:value
pair. For example use cases, see the GCP documentation for setting custom metadata. - 5
- For
<project_name>
, specify the name of the GCP project that you use for your cluster. - 6
- Specifies a single service account. Multiple service accounts are not supported.
2.5.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the OpenShift CLI (
oc
). -
Log in to
oc
as a user withcluster-admin
permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named
<file_name>.yaml
. Ensure that you set the
<clusterID>
and<role>
parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note:
For clusters that have user-provisioned infrastructure, a compute machine set can only create
worker
andinfra
type machines. - 3
- The values in the
<providerSpec>
section of the compute machine set CR are platform-specific. For more information about<providerSpec>
parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a
MachineSet
CR by running the following command:$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the
DESIRED
andCURRENT
values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.5.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a
cluster-api/accelerator
label:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters,
-
,_
, or.
and starts and ends with an alphanumeric character. For example, you might usenvidia-t4
to represent Nvidia T4 GPUs, ornvidia-a10g
for A10G GPUs.
Note: You must specify the value of this label for the
spec.resourceLimits.gpus.type
parameter in yourClusterAutoscaler
CR. For more information, see "Cluster autoscaler resource definition".
Additional resources
2.5.4. Configuring persistent disk types by using machine sets
You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file.
For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks.
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following line under the
providerSpec
field:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1
- 1
- Specify the persistent disk type. Valid values are
pd-ssd
,pd-standard
, andpd-balanced
. The default value ispd-standard
.
Verification
-
Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the
Type
field matches the configured disk type.
2.5.5. Configuring Confidential VM by using machine sets
By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys.
For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM.
Confidential VMs are currently not supported on 64-bit ARM architectures.
OpenShift Container Platform 4.14 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP).
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the
providerSpec
field:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ...
- 1
- Specify whether Confidential VM is enabled. Valid values are
Disabled
orEnabled
. - 2
- Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to
Terminate
, which stops the VM. Confidential VM does not support live VM migration. - 3
- Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types.
Verification
- On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured.
2.5.6. Machine sets that deploy machines as preemptible VM instances
You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED
state by Compute Engine.
Interruptions can occur when using preemptible VM instances for the following reasons:
- There is a system or maintenance event
- The supply of preemptible VM instances decreases
- The instance reaches the end of the allotted 24-hour period for preemptible VM instances
When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas
quantity, the compute machine set creates a machine that requests a preemptible VM instance.
2.5.6.1. Creating preemptible VM instances by using compute machine sets
You can launch a preemptible VM instance on GCP by adding preemptible
to your compute machine set YAML file.
Procedure
Add the following line under the
providerSpec
field:providerSpec: value: preemptible: true
If
preemptible
is set totrue
, the machine is labeled as an
after the instance is launched.
2.5.7. Configuring Shielded VM options by using machine sets
By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys.
For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM.
Procedure
- In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the
providerSpec
field:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ...
- 1
- In this section, specify any Shielded VM options that you want.
- 2
- Specify whether integrity monitoring is enabled. Valid values are
Disabled
orEnabled
. Note: When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM).
- 3
- Specify whether UEFI Secure Boot is enabled. Valid values are
Disabled
orEnabled
. - 4
- Specify whether vTPM is enabled. Valid values are
Disabled
orEnabled
.
Verification
- Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured.
Additional resources
2.5.8. Enabling customer-managed encryption keys for a machine set
Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.
You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.
If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead, and you must grant it permission to access the keys. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com
pattern.
Procedure
To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location:
$ gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter
Configure the encryption key under the
providerSpec
field in your machine set YAML file. For example:apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5
- 1
- The name of the customer-managed encryption key that is used for the disk encryption.
- 2
- The name of the KMS key ring that the KMS key belongs to.
- 3
- The GCP location in which the KMS key ring exists.
- 4
- Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set
projectID
value is used.
- Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.
When a new machine is created by using the updated
providerSpec
object configuration, the disk encryption key is encrypted with the KMS key.
2.5.9. Enabling GPU support for a compute machine set
Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series.
Model name | GPU type | Machine types [1] |
---|---|---|
NVIDIA A100 | nvidia-tesla-a100 | A2 machine series |
NVIDIA K80 | nvidia-tesla-k80 | N1 machine series |
NVIDIA P100 | nvidia-tesla-p100 | N1 machine series |
NVIDIA P4 | nvidia-tesla-p4 | N1 machine series |
NVIDIA T4 | nvidia-tesla-t4 | N1 machine series |
NVIDIA V100 | nvidia-tesla-v100 | N1 machine series |
- For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series, A2 machine series, and GPU regions and zones availability.
You can define which supported GPU to use for an instance by using the Machine API.
You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators.
GPUs for graphics workloads are not supported.
Procedure
- In a text editor, open the YAML file for an existing compute machine set or create a new one.
Specify a GPU configuration under the
providerSpec
field in your compute machine set YAML file. See the following examples of valid configurations:Example configuration for the A2 machine series:
providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3
Example configuration for the N1 machine series:
providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5
- 1
- Specify the number of GPUs to attach to the machine.
- 2
- Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible.
- 3
- Specify the machine type. Ensure that the machine type and GPU type are compatible.
- 4
- When using GPU support, you must set
onHostMaintenance
toTerminate
. - 5
- Specify the restart policy for machines deployed by the compute machine set. Allowed values are
Always
orNever
.
2.5.10. Adding a GPU node to an existing OpenShift Container Platform cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider.
The following table lists the validated instance types:
Instance type | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture |
---|---|---|---|
a2-highgpu-1g | A100 | 1 | x86 |
| T4 | 1 | x86 |
Procedure
-
Make a copy of an existing
MachineSet
. -
In the new copy, change the machine set
name
inmetadata.name
and in both instances ofmachine.openshift.io/cluster-api-machineset
. Change the instance type by adding the following two lines to the newly copied
MachineSet
:machineType: a2-highgpu-1g onHostMaintenance: Terminate
Example
a2-highgpu-1g.json
file{ "apiVersion": "machine.openshift.io/v1beta1", "kind": "MachineSet", "metadata": { "annotations": { "machine.openshift.io/GPU": "0", "machine.openshift.io/memoryMb": "16384", "machine.openshift.io/vCPU": "4" }, "creationTimestamp": "2023-01-13T17:11:02Z", "generation": 1, "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p" }, "name": "myclustername-2pt9p-worker-gpu-a", "namespace": "openshift-machine-api", "resourceVersion": "20185", "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd" }, "spec": { "replicas": 1, "selector": { "matchLabels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "template": { "metadata": { "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machine-role": "worker", "machine.openshift.io/cluster-api-machine-type": "worker", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "spec": { "lifecycleHooks": {}, "metadata": {}, "providerSpec": { "value": { "apiVersion": "machine.openshift.io/v1beta1", "canIPForward": false, "credentialsSecret": { "name": "gcp-cloud-credentials" }, "deletionProtection": false, "disks": [ { "autoDelete": true, "boot": true, "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64", "labels": null, "sizeGb": 128, "type": "pd-ssd" } ], "kind": "GCPMachineProviderSpec", "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", "metadata": { "creationTimestamp": null }, "networkInterfaces": [ { "network": "myclustername-2pt9p-network", "subnetwork": "myclustername-2pt9p-worker-subnet" } ], "preemptible": true, "projectID": "myteam", "region": "us-central1", "serviceAccounts": [ { "email": "myclustername-2pt9p-w@myteam.iam.gserviceaccount.com", "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] } ], "tags": [ "myclustername-2pt9p-worker" ], "userDataSecret": { "name": "worker-user-data" }, "zone": "us-central1-a" } } } } }, "status": { "availableReplicas": 1, "fullyLabeledReplicas": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1 } }
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role.
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.27.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.27.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.27.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.27.3
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones.
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h
View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, although you can scale a compute machine set to add a node in a particular region and zone.
$ oc get machines -n openshift-machine-api | grep worker
Example output
myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h
Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
$ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>
Edit the JSON file to make the following changes to the new MachineSet definition:
- Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
- Change the machineType of the new MachineSet definition to a2-highgpu-1g, which includes an NVIDIA A100 GPU.
$ jq .spec.template.spec.providerSpec.value.machineType ocp_4.14_machineset-a2-highgpu-1g.json
Example output
"a2-highgpu-1g"
The <output_file.json> file is saved as ocp_4.14_machineset-a2-highgpu-1g.json.
Update the following fields in ocp_4.14_machineset-a2-highgpu-1g.json:
- Change .metadata.name to a name containing gpu.
- Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
- Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
- Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g.
- Add the following line under machineType: "onHostMaintenance": "Terminate". For example:
"machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate",
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:
$ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.14_machineset-a2-highgpu-1g.json -
Example output
15c15 < "name": "myclustername-2pt9p-worker-gpu-a", --- > "name": "myclustername-2pt9p-worker-a", 25c25 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 34c34 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 59,60c59 < "machineType": "a2-highgpu-1g", < "onHostMaintenance": "Terminate", --- > "machineType": "n2-standard-4",
Create the GPU-enabled compute machine set from the definition file by running the following command:
$ oc create -f ocp_4.14_machineset-a2-highgpu-1g.json
Example output
machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created
Verification
View the machine set you created by running the following command:
$ oc -n openshift-machine-api get machinesets | grep gpu
The MachineSet replica count is set to 1, so a new Machine object is created automatically.
Example output
myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m
View the Machine object that the machine set created by running the following command:
$ oc -n openshift-machine-api get machines | grep gpu
Example output
myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m
Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.
2.5.11. Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
Procedure
- Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console.
- After installing the NFD Operator from OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
- Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d
- Browse to the installed Operator in the console and select Create Node Feature Discovery.
- Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
Verification
After a successful build, verify that an NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Example output
Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true
The PCI ID 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
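To list every node that NFD labeled with the NVIDIA vendor ID, you can filter on that label directly; for example:
$ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true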
2.6. Creating a compute machine set on IBM Cloud
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud®. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.6.1. Sample YAML for a compute machine set custom resource on IBM Cloud
This sample YAML defines a compute machine set that runs in a specified IBM Cloud® zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18
- 1 5 7
- The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 16
- The node label to add.
- 4 6 10
- The infrastructure ID, node label, and region.
- 11
- The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation.
- 12
- The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify.
- 13
- Specify the IBM Cloud® instance profile.
- 14
- Specify the region to place machines on.
- 15
- The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID.
- 17
- The VPC name.
- 18
- Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
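If you are not sure which profile or VPC values to use, you can read them from a compute machine set that the installer created. The following commands are a sketch, with <machineset_name> as a placeholder for one of your existing worker machine sets:
$ oc get machineset <machineset_name> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.profile}{"\n"}'
$ oc get machineset <machineset_name> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.vpc}{"\n"}'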
2.6.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
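Instead of rerunning the command manually, you can watch the machine set converge by adding the -w flag:
$ oc get machineset -n openshift-machine-api -w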
2.6.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.
Note
You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
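For reference, the label value ties into the cluster autoscaler configuration as shown in the following sketch of a ClusterAutoscaler CR; the min and max values here are illustrative:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
    - type: nvidia-t4  # must match the cluster-api/accelerator label value
      min: 0           # illustrative minimum number of GPUs in the cluster
      max: 4           # illustrative maximum number of GPUs in the cluster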
Additional resources
2.7. Creating a compute machine set on IBM Power Virtual Server
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Power® Virtual Server. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.7.1. Sample YAML for a compute machine set custom resource on IBM Power Virtual Server
This sample YAML file defines a compute machine set that runs in a specified IBM Power® Virtual Server zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_Private$ type: RegEx processorType: Shared processors: "0.5" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data
- 1 5 7
- The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9
- The node label to add.
- 4 6 10
- The infrastructure ID, node label, and region.
- 11
- The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation.
- 12
- The service instance ID within your region to place machines on.
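If you are not sure which service instance ID to use, you can read it from a compute machine set that the installer created; a sketch, with <machineset_name> as a placeholder for one of your existing worker machine sets:
$ oc get machineset <machineset_name> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.serviceInstance.id}{"\n"}'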
2.7.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.7.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.
Note
You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
Additional resources
2.8. Creating a compute machine set on Nutanix
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.8.1. Sample YAML for a compute machine set custom resource on Nutanix
This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI (oc).
- Infrastructure ID
The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14
- 1
- For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
- 2
- Specify the node label to add.
- 3
- Specify the infrastructure ID, node label, and zone.
- 4
- Annotations for the cluster autoscaler.
- 5
- Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.
Note
You must use the Legacy boot type in OpenShift Container Platform 4.14.
- 6
- Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central; see the sketch after this list. For more information about categories, see Category management.
- 7
- Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.
- 8
- Specify the image to use. Use an image from an existing default compute machine set for the cluster.
- 9
- Specify the amount of memory for the cluster in Gi.
- 10
- Specify the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza.
- 11
- Specify the size of the system disk in Gi.
- 12
- Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installation program populates in the default compute machine set.
- 13
- Specify the number of vCPU sockets.
- 14
- Specify the number of vCPUs per socket.
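For example, a categories stanza that applies a hypothetical Environment/Production key-value pair might look like the following sketch; both the key and the value must already exist in Prism Central:
categories:
- key: Environment   # hypothetical category key; must exist in Prism Central
  value: Production  # hypothetical category value; must exist in Prism Central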
2.8.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.8.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.
Note
You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
Additional resources
2.9. Creating a compute machine set on OpenStack
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.9.1. Sample YAML for a compute machine set custom resource on RHOSP
This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>
- 1 5 7 13 15 16 17 18
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 19
- Specify the node label to add.
- 4 6 10
- Specify the infrastructure ID and node label.
- 11
- To set a server group policy for the MachineSet, enter the value that is returned from creating a server group, as shown in the example after this list. For most deployments, anti-affinity or soft-anti-affinity policies are recommended.
- 12
- Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value.
- 14
- Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.
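For example, the following sketch creates a server group with the soft-anti-affinity policy by using the OpenStack CLI; the group name is illustrative. The id field in the output is the value to use for serverGroupID:
$ openstack server group create --policy soft-anti-affinity <infrastructure_id>-worker-group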
2.9.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP
If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology.
This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.kubernetes.io/<node_role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <node_role> is the node label to add.
The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list.
Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP".
An example compute machine set that uses SR-IOV networks
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>
- 1 5
- Enter a network UUID for each port.
- 2 6
- Enter a subnet UUID for each port.
- 3 7
- The value of the vnicType parameter must be direct for each port.
- 4 8
- The value of the portSecurity parameter must be false for each port.
You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it.
After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter:
$ oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true"
Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.
You can enable trunking for each port.
Optionally, you can add tags to ports as part of their tags lists.
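To confirm the <machine_name>-<nameSuffix> naming for the ports of a deployed machine, you can list them with the OpenStack CLI; a sketch:
$ openstack port list -c Name -f value | grep <machine_name>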
Additional resources
2.9.3. Sample YAML for SR-IOV deployments where port security is disabled
To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces.
Ports that you define for machines subnets require:
- Allowed address pairs for the API and ingress virtual IP ports
- The compute security group
- Attachment to the machines network and subnet
Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP".
An example compute machine set that uses SR-IOV networks and has port security disabled
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data
Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.
You can enable trunking for each port.
Optionally, you can add tags to ports as part of their tags lists.
If your cluster uses Kuryr and the RHOSP SR-IOV network has port security disabled, the primary port for compute machines must have:
- The value of the spec.template.spec.providerSpec.value.networks.portSecurityEnabled parameter set to false.
- For each subnet, the value of the spec.template.spec.providerSpec.value.networks.subnets.portSecurityEnabled parameter set to false.
- The value of spec.template.spec.providerSpec.value.securityGroups set to empty: [].
An example section of a compute machine set for a cluster on Kuryr that uses SR-IOV and has port security disabled
... networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: [] ...
In that case, you can apply the compute security group to the primary VM interface after the VM is created. For example, from a command line:
$ openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>
After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter:
$ oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true"
2.9.4. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.
Note
For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.9.5. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.
Note
You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
Additional resources
2.10. Creating a compute machine set on vSphere
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
2.10.1. Sample YAML for a compute machine set custom resource on vSphere
This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16
- 1 3 5
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8
- Specify the infrastructure ID and node label.
- 6 7 9
- Specify the node label to add.
- 10
- Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster.
- 11
- Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
- 12
- Specify the vCenter Datacenter to deploy the compute machine set on.
- 13
- Specify the vCenter Datastore to deploy the compute machine set on.
- 14
- Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
- 15
- Specify the vSphere resource pool for your VMs.
- 16
- Specify the vCenter server IP or fully qualified domain name.
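If you are not sure which workspace values to use, you can read them from a compute machine set that the installer created; a sketch, with <machineset_name> as a placeholder for one of your existing worker machine sets:
$ oc get machineset <machineset_name> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.workspace}{"\n"}'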
2.10.2. Minimum required vCenter privileges for compute machine set management
To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.
If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster.
Example 2.1. Minimum vCenter roles and privileges required for compute machine set management
vSphere object for role | When required | Required privileges |
---|---|---|
vSphere vCenter | Always | |
vSphere vCenter Cluster | Always | |
vSphere Datastore | Always | |
vSphere Port Group | Always | |
Virtual Machine Folder | Always | |
vSphere vCenter Datacenter | If the installation program creates the virtual machine folder | |
The following table details the permissions and propagation settings that are required for compute machine set management.
Example 2.2. Required permissions and propagation settings
vSphere object | Folder type | Propagate to children | Permissions required |
---|---|---|---|
vSphere vCenter | Always | Not required | Listed required privileges |
vSphere vCenter Datacenter | Existing folder | Not required | |
 | Installation program creates the folder | Required | Listed required privileges |
vSphere vCenter Cluster | Always | Required | Listed required privileges |
vSphere vCenter Datastore | Always | Not required | Listed required privileges |
vSphere Switch | Always | Not required | |
vSphere Port Group | Always | Not required | Listed required privileges |
vSphere vCenter Virtual Machine Folder | Existing folder | Required | Listed required privileges |
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
2.10.3. Requirements for clusters with user-provisioned infrastructure to use compute machine sets
To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API.
Obtaining the infrastructure ID
To create compute machine sets, you must be able to supply the infrastructure ID for your cluster.
Procedure
To obtain the infrastructure ID for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'
Satisfying vSphere credentials requirements
To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace.
Procedure
To determine whether the required credentials exist, run the following command:
$ oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Sample output
<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>
where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server, and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use.
If the secret does not exist, create it by running the following command:
$ oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>
Satisfying Ignition configuration requirements
Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator.
By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process.
Procedure
To determine whether the required secret exists, run the following command:
$ oc get secret \
  -n openshift-machine-api worker-user-data \
  -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Sample output
disableTemplating: false
userData: 1
  {
    "ignition": {
      ...
    },
    ...
  }
- 1
- The full output is omitted here, but should have this format.
If the secret does not exist, create it by running the following command:
$ oc create secret generic worker-user-data \
  -n openshift-machine-api \
  --from-file=<installation_directory>/worker.ign
where <installation_directory> is the directory that was used to store your installation assets during cluster installation.
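Before you create the secret, you can optionally sanity-check the Ignition file. The following is a hedged example that assumes the jq utility is installed; it prints the Ignition specification version if the file contains valid JSON:

$ jq '.ignition.version' <installation_directory>/worker.ign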
2.10.4. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program. As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker
and infra
type machines.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
- Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the specified datastore.
- If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.

Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
  -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: 3
      ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.

  Note

  For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values:
Example vSphere providerSpec values

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
...
template:
  ...
  spec:
    providerSpec:
      value:
        apiVersion: machine.openshift.io/v1beta1
        credentialsSecret:
          name: vsphere-cloud-credentials 1
        diskGiB: 120
        kind: VSphereMachineProviderSpec
        memoryMiB: 16384
        network:
          devices:
            - networkName: "<vm_network_name>"
        numCPUs: 4
        numCoresPerSocket: 4
        snapshot: ""
        template: <vm_template_name> 2
        userDataSecret:
          name: worker-user-data 3
        workspace:
          datacenter: <vcenter_datacenter_name>
          datastore: <vcenter_datastore_name>
          folder: <vcenter_vm_folder_path>
          resourcepool: <vsphere_resource_pool>
          server: <vcenter_server_address> 4
- 1
- The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials.
- 2
- The name of the RHCOS VM template for your cluster that was created during installation.
- 3
- The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials.
- 4
- The IP address or fully qualified domain name (FQDN) of the vCenter server.
Create a MachineSet CR by running the following command:

$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
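If you later need a different number of machines, you can scale the compute machine set. For example, the following command, with your machine set name substituted, sets the replica count to 2:

$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=2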
2.10.5. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: machine-set-name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or ., and that starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.

  Note

  You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
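The following is a minimal sketch of the matching ClusterAutoscaler stanza, assuming the nvidia-t4 label value from the previous example; the min and max limits shown are illustrative:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the cluster-api/accelerator label value
        min: 0
        max: 4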
Additional resources

- Cluster autoscaler resource definition
2.11. Creating a compute machine set on bare metal
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none
cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
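For a cluster on bare metal, the output is typically the following value; the exact string reflects the platform type that was set during installation:

BareMetal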
2.11.1. Sample YAML for a compute machine set custom resource on bare metal
This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: ""
.
In this sample, <infrastructure_id>
is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          apiVersion: baremetal.cluster.k8s.io/v1alpha1
          hostSelector: {}
          image:
            checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10
            url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11
          kind: BareMetalMachineProviderSpec
          metadata:
            creationTimestamp: null
          userData:
            name: worker-user-data
- 1 3 5
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8
- Specify the infrastructure ID and node label.
- 6 7 9
- Specify the node label to add.
- 10
- Edit the checksum URL to use the API VIP address, as shown in the example after this list.
- 11
- Edit the url value to use the API VIP address.
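For example, if the API VIP for your cluster is 192.168.111.5 (a placeholder address for illustration only), the edited image values might look like this:

image:
  checksum: http://192.168.111.5:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum>
  url: http://192.168.111.5:6181/images/rhcos-<version>.<architecture>.qcow2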
2.11.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.

Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
  -n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1
  name: <infrastructure_id>-<role> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: 3
      ...
- 1
- The cluster infrastructure ID.
- 2
- A default node label.

  Note

  For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
- 3
- The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:

$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
2.11.3. Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Prerequisites
- Your cluster uses a cluster autoscaler.
Procedure
On the machine set that creates the machines that you want the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: machine-set-name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-t4 1
- 1
- Specify a label of your choice that consists of alphanumeric characters, -, _, or ., and that starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.

  Note

  You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
Additional resources

- Cluster autoscaler resource definition