Chapter 11. Managing machines with the Cluster API


11.1. About the Cluster API

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Cluster API is an upstream project that is integrated into OpenShift Container Platform as a Technology Preview for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Red Hat OpenStack Platform (RHOSP), VMware vSphere, and bare metal.

11.1.1. Cluster API overview

You can use the Cluster API to create and manage compute machine sets and compute machines in your OpenShift Container Platform cluster. This capability is available in addition to, or as an alternative to, managing machines with the Machine API.

For OpenShift Container Platform 4.19 clusters, you can use the Cluster API to perform node host provisioning management actions after the cluster installation finishes. This system enables an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

With the Cluster API Technology Preview, you can create compute machines and compute machine sets on OpenShift Container Platform clusters for supported providers. You can also explore the features that are enabled by this implementation that might not be available with the Machine API.

11.1.1.1. Cluster API benefits

By using the Cluster API, OpenShift Container Platform users and developers gain the following advantages:

  • The option to use upstream community Cluster API infrastructure providers that might not be supported by the Machine API.
  • The opportunity to collaborate with third parties who maintain machine controllers for infrastructure providers.
  • The ability to use the same set of Kubernetes tools for infrastructure management in OpenShift Container Platform.
  • The ability to create compute machine sets by using the Cluster API that support features that are not available with the Machine API.

11.1.1.2. Cluster API limitations

Using the Cluster API to manage machines is a Technology Preview feature and has the following limitations:

  • To use this feature, you must enable the TechPreviewNoUpgrade feature set.

    Important

    Enabling this feature set cannot be undone and prevents minor version updates.

  • Only Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Red Hat OpenStack Platform (RHOSP), VMware vSphere, and bare-metal clusters can use the Cluster API.
  • You must manually create some of the primary resources that the Cluster API requires. For more information, see "Getting started with the Cluster API".
  • You cannot use the Cluster API to manage control plane machines.
  • Migration of existing compute machine sets created by the Machine API to Cluster API compute machine sets is not supported.
  • Full feature parity with the Machine API is not available.
  • For clusters that use the Cluster API, OpenShift CLI (oc) commands prioritize Cluster API objects over Machine API objects. This behavior affects any oc command that acts on an object that is represented in both the Cluster API and the Machine API.

    For more information and a workaround for this issue, see "Referencing the intended objects when using the CLI" in the troubleshooting content.
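    For example, both API groups define a machines resource. Until you apply the workaround, the fully qualified resource names pin each query to a single API group (these commands require a live cluster):

    ```shell
    # Ambiguous: with the Cluster API enabled, "oc get machine" prioritizes
    # Cluster API objects. The fully qualified names are unambiguous:
    $ oc get machines.machine.openshift.io -n openshift-machine-api   # Machine API
    $ oc get machines.cluster.x-k8s.io -n openshift-cluster-api       # Cluster API
    ```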

11.1.2. Cluster API architecture

The OpenShift Container Platform integration of the upstream Cluster API is implemented and managed by the Cluster CAPI Operator. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, in contrast to the Machine API, which uses the openshift-machine-api namespace.

11.1.2.1. The Cluster CAPI Operator

The Cluster CAPI Operator is an OpenShift Container Platform Operator that maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster.

If a cluster is configured correctly to allow the use of the Cluster API, the Cluster CAPI Operator installs the Cluster API components on the cluster.

For more information, see the "Cluster CAPI Operator" entry in the Cluster Operators reference content.

11.1.2.2. Cluster API primary resources

The Cluster API consists of the following primary resources. For the Technology Preview of this feature, you must create some of these resources manually in the openshift-cluster-api namespace.

Cluster
A fundamental unit that represents a cluster that is managed by the Cluster API.
Infrastructure cluster
A provider-specific resource that defines properties that all of the compute machine sets in the cluster share, such as the region and subnets.
Machine template
A provider-specific template that defines the properties of the machines that a compute machine set creates.
Machine set

A group of machines.

Compute machine sets are to machines as replica sets are to pods. To add machines or scale them down, change the replicas field on the compute machine set custom resource to meet your compute needs.

With the Cluster API, a compute machine set references a Cluster object and a provider-specific machine template.

Machine

A fundamental unit that describes the host for a node.

The Cluster API creates machines based on the configuration in the machine template.
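These primary resources reference each other. The following sketch shows how a Cluster object points at a provider-specific infrastructure cluster. AWS is used for illustration only; all names are placeholders, and the API versions and fields may differ for your provider and release:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <cluster_name>
  namespace: openshift-cluster-api
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster          # provider-specific infrastructure cluster kind
    name: <cluster_name>
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: <cluster_name>
  namespace: openshift-cluster-api
spec:
  region: us-east-2           # example: a property that all machine sets share
```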

11.2. Getting started with the Cluster API

The Machine API and Cluster API are distinct API groups that have similar resources. You can use these API groups to automate the management of infrastructure resources on your OpenShift Container Platform cluster.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

When you install a standard OpenShift Container Platform cluster that has three control plane nodes and three compute nodes and uses the default configuration options, the installation program provisions the following infrastructure resources in the openshift-machine-api namespace:

  • One control plane machine set that manages three control plane machines.
  • One or more compute machine sets that manage three compute machines.
  • One machine health check that manages spot instances.

When you install a cluster that supports managing infrastructure resources with the Cluster API, the installation program provisions the following resources in the openshift-cluster-api namespace:

  • One cluster resource.
  • One provider-specific infrastructure cluster resource.

On clusters that support migrating Machine API resources to Cluster API resources, a two-way synchronization controller creates these primary resources automatically. For more information, see Migrating Machine API resources to Cluster API resources.

11.2.1. Creating the Cluster API primary resources

For clusters that do not support migrating Machine API resources to Cluster API resources, you must manually create the following Cluster API resources in the openshift-cluster-api namespace:

  • One or more machine templates that correspond to compute machine sets.
  • One or more compute machine sets that manage three compute machines.

11.2.1.1. Creating a Cluster API machine template

You can create a provider-specific machine template resource by creating a YAML manifest file and applying it with the OpenShift CLI (oc).

Prerequisites

  • You have deployed an OpenShift Container Platform cluster.
  • You have enabled the use of the Cluster API.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file similar to the following. This procedure uses <machine_template_resource_file>.yaml as an example file name.

    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: <machine_template_kind> 1
    metadata:
      name: <template_name> 2
      namespace: openshift-cluster-api
    spec:
      template:
        spec: 3

    1
    Specify the machine template kind. This value must match the value for your platform. The following values are valid:

    Cluster infrastructure provider       Value
    Amazon Web Services (AWS)             AWSMachineTemplate
    Google Cloud Platform (GCP)           GCPMachineTemplate
    Microsoft Azure                       AzureMachineTemplate
    Red Hat OpenStack Platform (RHOSP)    OpenStackMachineTemplate
    VMware vSphere                        VSphereMachineTemplate
    Bare metal                            Metal3MachineTemplate

    2
    Specify a name for the machine template.
    3
    Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API machine template YAML for your provider.
  2. Create the machine template CR by running the following command:

    $ oc create -f <machine_template_resource_file>.yaml

Verification

  • Confirm that the machine template CR is created by running the following command:

    $ oc get <machine_template_kind> -n openshift-cluster-api

    where <machine_template_kind> is the value that corresponds to your platform.

    Example output

    NAME              AGE
    <template_name>   77m
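As an illustration, a filled-in AWS machine template might look like the following. This is a sketch only: the field names follow the upstream Cluster API AWS provider, and all values are placeholders. Consult the sample Cluster API machine template YAML for your provider for the authoritative fields.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: <template_name>
  namespace: openshift-cluster-api
spec:
  template:
    spec:
      instanceType: m5.large          # example instance size
      ami:
        id: <ami_id>                  # placeholder: RHCOS AMI for your region
      iamInstanceProfile: <profile>   # placeholder
      subnet:
        id: <subnet_id>               # placeholder
```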

11.2.1.2. Creating a Cluster API compute machine set

You can create compute machine sets that use the Cluster API to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster.
  • You have enabled the use of the Cluster API.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have created the machine template resource.

Procedure

  1. Create a YAML file similar to the following. This procedure uses <machine_set_resource_file>.yaml as an example file name.

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineSet
    metadata:
      name: <machine_set_name> 1
      namespace: openshift-cluster-api
    spec:
      clusterName: <cluster_name> 2
      replicas: 1
      selector:
        matchLabels:
          test: example
      template:
        metadata:
          labels:
            test: example
        spec: 3
    # ...

    1
    Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
    2
    Specify the name of the cluster. Obtain the value of the cluster ID by running the following command:

    $ oc get infrastructure cluster \
       -o jsonpath='{.status.infrastructureName}'

    3
    Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API compute machine set YAML for your provider.
  2. Create the compute machine set CR by running the following command:

    $ oc create -f <machine_set_resource_file>.yaml
  3. Confirm that the compute machine set CR is created by running the following command:

    $ oc get machineset.cluster.x-k8s.io -n openshift-cluster-api

    Example output

    NAME                 CLUSTER          REPLICAS   READY   AVAILABLE   AGE   VERSION
    <machine_set_name>   <cluster_name>   1          1       1           17m

    When the new compute machine set is available, the REPLICAS and AVAILABLE values match. If the compute machine set is not available, wait a few minutes and run the command again.

Verification

  • To verify that the compute machine set is creating machines according to your required configuration, review the lists of machines and nodes in the cluster by running the following commands:

    • View the list of Cluster API machines:

      $ oc get machine.cluster.x-k8s.io -n openshift-cluster-api

      Example output

      NAME                             CLUSTER          NODENAME                                 PROVIDERID      PHASE     AGE     VERSION
      <machine_set_name>-<string_id>   <cluster_name>   <ip_address>.<region>.compute.internal   <provider_id>   Running   8m23s

    • View the list of nodes:

      $ oc get node

      Example output

      NAME                                       STATUS   ROLES    AGE     VERSION
      <ip_address_1>.<region>.compute.internal   Ready    worker   5h14m   v1.28.5
      <ip_address_2>.<region>.compute.internal   Ready    master   5h19m   v1.28.5
      <ip_address_3>.<region>.compute.internal   Ready    worker   7m      v1.28.5
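The provider-specific spec stanza (callout 3 in the machine set example in this procedure) typically links the machine set to its machine template and bootstrap data. The following fragment is a sketch for AWS with placeholder names; the worker-user-data secret name is an assumption based on common OpenShift Cluster API samples:

```yaml
spec:
  template:
    spec:
      bootstrap:
        dataSecretName: worker-user-data     # assumed bootstrap secret name
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate             # must match your template kind
        name: <template_name>                # must match your template name
```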

11.2.2. Migrating Machine API resources to Cluster API resources

On clusters that support migrating Machine API resources to Cluster API resources, a two-way synchronization controller creates the following Cluster API resources in the openshift-cluster-api namespace:

  • One or more machine templates that correspond to compute machine sets.
  • One or more compute machine sets that manage three compute machines.
  • One or more Cluster API compute machines that correspond to each Machine API compute machine.
Note

The two-way synchronization controller only operates on clusters that have the MachineAPIMigration feature gate enabled in the TechPreviewNoUpgrade feature set.

These Cluster API resources correspond to the resources that the installation program provisions in the openshift-machine-api namespace for a cluster that uses the default configuration options. The Cluster API resources have the same names as their Machine API counterparts and appear in the output of commands, such as oc get, that list resources. The synchronization controller creates the Cluster API resources in an unprovisioned (Paused) state to prevent unintended reconciliation.

For supported configurations, you can migrate a Machine API resource to the equivalent Cluster API resource by changing which API is authoritative for that resource. When you migrate a Machine API resource to the Cluster API, you transfer management of the resource to the Cluster API.

By migrating a Machine API resource to use the Cluster API, you can verify that everything works as expected before deciding to use the Cluster API in production clusters. After migrating a Machine API resource to an equivalent Cluster API resource, you can examine the new resource to verify that the features and configuration match the original Machine API resource.

When you change the authoritative API for a compute machine set, any existing compute machines that the compute machine set manages retain their original authoritative API. As a result, a compute machine set that manages machines that use different authoritative APIs is a valid and expected occurrence in clusters that support migrating between these API types.

When you change the authoritative API for a compute machine, the instance on the underlying infrastructure that backs the machine is not recreated or reprovisioned. In-place changes, such as modifying labels, tags, taints, or annotations, are the only changes that the API group can make to the underlying instance that backs the machine.

Note

You can only migrate some resources on supported infrastructure types.

Table 11.1. Supported resource conversions
Infrastructure                   Compute machine      Compute machine set   Machine health check   Control plane machine set   Cluster autoscaler

AWS                              Technology Preview   Technology Preview    Not Available          Not Available               Not Available

All other infrastructure types   Not Available        Not Available         Not Available          Not Available               Not Available

11.2.2.1. Migrating a Machine API resource to use the Cluster API

You can migrate individual Machine API objects to equivalent Cluster API objects.

Important

Migrating a Machine API resource to use the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster on a supported infrastructure type.
  • You have enabled the use of the Cluster API.
  • You have enabled the MachineAPIMigration feature gate in the TechPreviewNoUpgrade feature set.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Identify the Machine API resource that you want to migrate to a Cluster API resource by running the following command:

    $ oc get <resource_kind> -n openshift-machine-api

    where <resource_kind> is one of the following values:

    machine.machine.openshift.io
    The fully qualified name of the resource kind for a compute or control plane machine.
    machineset.machine.openshift.io
    The fully qualified name of the resource kind for a compute machine set.
  2. Edit the resource specification by running the following command:

    $ oc edit <resource_kind>/<resource_name> -n openshift-machine-api

    where:

    <resource_kind>
    Specifies a compute machine with machine.machine.openshift.io or compute machine set with machineset.machine.openshift.io.
    <resource_name>
    Specifies the name of the Machine API resource that you want to migrate to a Cluster API resource.
  3. In the resource specification, update the value of the spec.authoritativeAPI field:

    apiVersion: machine.openshift.io/v1beta1
    kind: <resource_kind> 1
    metadata:
      name: <resource_name> 2
      [...]
    spec:
      authoritativeAPI: ClusterAPI 3
      [...]
    status:
      authoritativeAPI: MachineAPI 4
      [...]

    1
    The value varies depending on the resource kind. For example, the resource kind for a compute machine set is MachineSet and the resource kind for a compute machine is Machine.
    2
    The name of the resource that you want to migrate.
    3
    Specify the authoritative API that you want this resource to use. For example, to start migrating a Machine API resource to the Cluster API, specify ClusterAPI.
    4
    The value for the current authoritative API. This value indicates which API currently manages this resource. Do not change the value in this part of the specification.
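Instead of opening an editor, you can make the same change non-interactively. This is a sketch: <resource_kind> and <resource_name> are placeholders, and the command requires a live cluster.

```shell
# Set the authoritative API to ClusterAPI with a merge patch:
$ oc patch <resource_kind>/<resource_name> \
  -n openshift-machine-api \
  --type merge \
  -p '{"spec":{"authoritativeAPI":"ClusterAPI"}}'
```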

Verification

  • Check the status of the conversion by running the following command:

    $ oc -n openshift-machine-api get <resource_kind>/<resource_name> -o json | jq .status.authoritativeAPI

    where:

    <resource_kind>
    Specifies a compute machine with machine.machine.openshift.io or compute machine set with machineset.machine.openshift.io.
    <resource_name>
    Specifies the name of the Machine API resource that you want to migrate to a Cluster API resource.
    • While the conversion progresses, this command returns a value of Migrating. If this value persists for a long time, check the logs for the cluster-capi-operator deployment in the openshift-cluster-api namespace for more information and to identify potential issues.
    • When the conversion is complete, this command returns a value of ClusterAPI.

11.2.2.2. Deploying Cluster API compute machines by using a Machine API compute machine set

You can configure a Machine API compute machine set to deploy Cluster API compute machines. With this process, you can test the Cluster API compute machine creation workflow without creating and scaling a Cluster API compute machine set.

A Machine API compute machine set with this configuration creates nonauthoritative Machine API compute machines that use the Cluster API as authoritative. The two-way synchronization controller then creates corresponding authoritative Cluster API machines that provision on the underlying infrastructure.

Important

Deploying Cluster API compute machines by using a Machine API compute machine set is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster on a supported infrastructure type.
  • You have enabled the use of the Cluster API.
  • You have enabled the MachineAPIMigration feature gate in the TechPreviewNoUpgrade feature set.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the Machine API compute machine sets in your cluster by running the following command:

    $ oc get machineset.machine.openshift.io -n openshift-machine-api
  2. Edit the resource specification by running the following command:

    $ oc edit machineset.machine.openshift.io <machine_set_name> \
      -n openshift-machine-api

    where <machine_set_name> is the name of the Machine API compute machine set that you want to configure to deploy Cluster API compute machines.

  3. In the resource specification, update the value of the spec.template.spec.authoritativeAPI field:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      [...]
      name: <machine_set_name>
      [...]
    spec:
      authoritativeAPI: MachineAPI 1
      [...]
      template:
        [...]
        spec:
          authoritativeAPI: ClusterAPI 2
    status:
      authoritativeAPI: MachineAPI 3
      [...]

    1
    The unconverted value for the Machine API compute machine set. Do not change the value in this part of the specification.
    2
    Specify ClusterAPI to configure the compute machine set to deploy Cluster API compute machines.
    3
    The current value for the Machine API compute machine set. Do not change the value in this part of the specification.

Verification

  1. List the machines that are managed by the updated compute machine set by running the following command:

    $ oc get machines.machine.openshift.io \
      -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
  2. To verify that a machine created by the updated machine set has the correct configuration, examine the status.authoritativeAPI field in the CR for one of the new machines by running the following command:

    $ oc describe machines.machine.openshift.io <machine_name> \
      -n openshift-machine-api

    For a Cluster API compute machine, the value of the field is ClusterAPI.

11.3. Managing machines with the Cluster API

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.3.1. Modifying a Cluster API machine template

You can update the machine template resource for your cluster by modifying the YAML manifest file and applying it with the OpenShift CLI (oc).

Prerequisites

  • You have deployed an OpenShift Container Platform cluster that uses the Cluster API.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the machine template resource for your cluster by running the following command:

    $ oc get <machine_template_kind> 1

    1
    Specify the value that corresponds to your platform. The following values are valid:

    Cluster infrastructure provider       Value
    Amazon Web Services                   AWSMachineTemplate
    Google Cloud Platform                 GCPMachineTemplate
    Microsoft Azure                       AzureMachineTemplate
    RHOSP                                 OpenStackMachineTemplate
    VMware vSphere                        VSphereMachineTemplate
    Bare metal                            Metal3MachineTemplate

    Example output

    NAME              AGE
    <template_name>   77m

  2. Write the machine template resource for your cluster to a file that you can edit by running the following command:

    $ oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml

    where <template_name> is the name of the machine template resource for your cluster.

  3. Make a copy of the <template_name>.yaml file with a different name. This procedure uses <modified_template_name>.yaml as an example file name.
  4. Use a text editor to make changes to the <modified_template_name>.yaml file that defines the updated machine template resource for your cluster. When editing the machine template resource, observe the following:

    • The parameters in the spec stanza are provider specific. For more information, see the sample Cluster API machine template YAML for your provider.
    • You must use a value for the metadata.name parameter that differs from any existing values.

      Important

      For any Cluster API compute machine sets that reference this template, you must update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource.

  5. Apply the machine template CR by running the following command:

    $ oc apply -f <modified_template_name>.yaml 1

    1
    Use the edited YAML file with a new name.

Next steps

  • For any Cluster API compute machine sets that reference this template, update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. For more information, see "Modifying a compute machine set by using the CLI."
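The updated reference in each machine set looks like the following fragment, where the names are placeholders:

```yaml
# In each Cluster API compute machine set that referenced the old template:
spec:
  template:
    spec:
      infrastructureRef:
        name: <modified_template_name>   # must match metadata.name of the new template
```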

11.3.2. Modifying a compute machine set by using the CLI

You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI.

By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines.

Note

Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources.

You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas.

If you need to scale a compute machine set without making other changes, you do not need to delete the machines.

Note

By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods.

The output examples in this procedure use the values for an AWS cluster.

Prerequisites

  • Your OpenShift Container Platform cluster uses the Cluster API.
  • You are logged in to the cluster as an administrator by using the OpenShift CLI (oc).

Procedure

  1. List the compute machine sets in your cluster by running the following command:

    $ oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api

    Example output

    NAME                          CLUSTER             REPLICAS   READY   AVAILABLE   AGE   VERSION
    <compute_machine_set_name_1>  <cluster_name>      1          1       1           26m
    <compute_machine_set_name_2>  <cluster_name>      1          1       1           26m

  2. Edit a compute machine set by running the following command:

    $ oc edit machinesets.cluster.x-k8s.io <machine_set_name> \
      -n openshift-cluster-api
  3. Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes.

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineSet
    metadata:
      name: <machine_set_name>
      namespace: openshift-cluster-api
    spec:
      replicas: 2 1
    # ...

    1
    The examples in this procedure show a compute machine set that has a replicas value of 2.
  4. Update the compute machine set CR with the configuration options that you want and save your changes.
  5. List the machines that are managed by the updated compute machine set by running the following command:

    $ oc get machines.cluster.x-k8s.io \
      -n openshift-cluster-api \
      -l cluster.x-k8s.io/set-name=<machine_set_name>

    Example output for an AWS cluster

    NAME                        CLUSTER          NODENAME                                    PROVIDERID                              PHASE           AGE     VERSION
    <machine_name_original_1>   <cluster_name>   <original_1_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running         4h
    <machine_name_original_2>   <cluster_name>   <original_2_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running         4h

  6. For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command:

    $ oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> \
      -n openshift-cluster-api \
      cluster.x-k8s.io/delete-machine="true"
  7. To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command:

    $ oc scale --replicas=4 \ 1
      machinesets.cluster.x-k8s.io <machine_set_name> \
      -n openshift-cluster-api

    1
    The original example value of 2 is doubled to 4.
  8. List the machines that are managed by the updated compute machine set by running the following command:

    $ oc get machines.cluster.x-k8s.io \
      -n openshift-cluster-api \
      -l cluster.x-k8s.io/set-name=<machine_set_name>

    Example output for an AWS cluster

    NAME                        CLUSTER          NODENAME                                    PROVIDERID                              PHASE           AGE     VERSION
    <machine_name_original_1>   <cluster_name>   <original_1_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running         4h
    <machine_name_original_2>   <cluster_name>   <original_2_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running         4h
    <machine_name_updated_1>    <cluster_name>   <updated_1_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Provisioned     55s
    <machine_name_updated_2>    <cluster_name>   <updated_2_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Provisioning    55s

    When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas.

  9. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command:

    $ oc scale machinesets.cluster.x-k8s.io <machine_set_name> \
      -n openshift-cluster-api \
      --replicas=2 # 1
    1
    The machine set returns to the original example value of 2.

Verification

  • To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command:

    $ oc describe machines.cluster.x-k8s.io <machine_name_updated_1> \
      -n openshift-cluster-api
  • To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command:

    $ oc get machines.cluster.x-k8s.io \
      -n openshift-cluster-api \
      -l cluster.x-k8s.io/set-name=<machine_set_name>

    Example output while deletion is in progress for an AWS cluster

    NAME                        CLUSTER          NODENAME                                    PROVIDERID                              PHASE      AGE     VERSION
    <machine_name_original_1>   <cluster_name>   <original_1_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m
    <machine_name_original_2>   <cluster_name>   <original_2_ip>.<region>.compute.internal   aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m
    <machine_name_updated_1>    <cluster_name>   <updated_1_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m
    <machine_name_updated_2>    <cluster_name>   <updated_2_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m

    Example output when deletion is complete for an AWS cluster

    NAME                        CLUSTER          NODENAME                                    PROVIDERID                              PHASE      AGE     VERSION
    <machine_name_updated_1>    <cluster_name>   <updated_1_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m
    <machine_name_updated_2>    <cluster_name>   <updated_2_ip>.<region>.compute.internal    aws:///us-east-2a/i-04e7b2cbd61fd2075   Running    18m
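    While deletion is in progress, you can watch the machine list update in real time instead of listing it repeatedly. This optional sketch uses the same label selector as the preceding commands:

    ```shell
    $ oc get machines.cluster.x-k8s.io \
      -n openshift-cluster-api \
      -l cluster.x-k8s.io/set-name=<machine_set_name> \
      --watch
    ```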

11.4. Configuration options for Cluster API machines

11.4.1. Cluster API configuration options for Amazon Web Services

You can change the configuration of your Amazon Web Services (AWS) Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.1.1. Sample YAML for configuring Amazon Web Services clusters

The following example YAML files show configurations for an Amazon Web Services cluster.

11.4.1.1.1. Sample YAML for a Cluster API machine template resource on Amazon Web Services

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      iamInstanceProfile: # ...
      instanceType: m5.large
      ignition:
        storageType: UnencryptedUserData
        version: "3.4"
      ami:
        id: # ...
      subnet:
        filters:
        - name: tag:Name
          values:
          - # ...
      additionalSecurityGroups:
      - filters:
        - name: tag:Name
          values:
          - # ...
1
Specify the machine template kind. This value must match the value for your platform.
2
Specify a name for the machine template.
3
Specify the details for your environment. The values here are examples.

11.4.1.1.2. Sample YAML for a Cluster API compute machine set resource on Amazon Web Services

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name> # 2
spec:
  clusterName: <cluster_name> # 3
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/<role>: ""
    spec:
      bootstrap:
         dataSecretName: worker-user-data
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate # 4
        name: <template_name> # 5
1
Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
2 3
Specify the cluster ID as the name of the cluster.
4
Specify the machine template kind. This value must match the value for your platform.
5
Specify the machine template name.

11.4.1.2. Enabling Amazon Web Services features with the Cluster API

You can enable the following features by updating values in the Cluster API custom resource manifests.

11.4.1.2.1. Elastic Fabric Adapter instances and placement group options

You can deploy compute machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group.

EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. The following example uses an EFA and placement group together to demonstrate a configuration that can improve network performance for machines within the specified placement group.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file. Then, configure a machine set YAML file to reference the machine template when it deploys machines.

Sample EFA instance and placement group configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      instanceType: <supported_instance_type> # 1
      networkInterfaceType: efa # 2
      placementGroupName: <placement_group> # 3
      placementGroupPartition: <placement_group_partition_number> # 4
# ...

1
Specifies an instance type that supports EFAs.
2
Specifies the efa network interface type.
3
Specifies the name of the existing AWS placement group to deploy machines in.
4
Optional: Specifies the partition number of the existing AWS placement group where you want your machines deployed.

Note

Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case.

11.4.1.2.2. Amazon EC2 Instance Metadata Service configuration options

You can restrict the version of the Amazon EC2 Instance Metadata Service (IMDS) that machines on Amazon Web Services (AWS) clusters use. Machines can require the use of IMDSv2 (see the AWS documentation), or allow the use of IMDSv1 in addition to IMDSv2.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file. Then, configure a machine set YAML file to reference the machine template when it deploys machines.

Important

Before creating machines that require IMDSv2, ensure that any workloads that interact with the IMDS support IMDSv2.

Sample IMDS configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      instanceMetadataOptions:
        httpEndpoint: enabled
        httpPutResponseHopLimit: 1 # 1
        httpTokens: optional # 2
        instanceMetadataTags: disabled
# ...

1
Specifies the number of network hops allowed for IMDSv2 calls. If no value is specified, this parameter is set to 1 by default.
2
Specifies whether to require the use of IMDSv2. If no value is specified, this parameter is set to optional by default. The following values are valid:
optional
Allow the use of both IMDSv1 and IMDSv2.
required
Require IMDSv2.
Note

The Machine API does not support the httpEndpoint, httpPutResponseHopLimit, and instanceMetadataTags fields. If you migrate a Cluster API machine template that uses this feature to a Machine API compute machine set, any Machine API machines that it creates will not have these fields and the underlying instances will not use these settings. Any existing machines that the migrated machine set manages will retain these fields and the underlying instances will continue to use these settings.

Requiring the use of IMDSv2 might cause timeouts. For more information, including mitigation strategies, see Instance metadata access considerations (AWS documentation).

11.4.1.2.3. Dedicated Instance configuration options

You can deploy machines that are backed by Dedicated Instances on Amazon Web Services (AWS) clusters.

Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.

OpenShift Container Platform supports instances with public or dedicated tenancy.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file. Then, configure a machine set YAML file to reference the machine template when it deploys machines.

Sample Dedicated Instances configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      tenancy: dedicated # 1
# ...

1
Specifies using instances with dedicated tenancy that run on single-tenant hardware. If you do not specify this value, instances with public tenancy that run on shared hardware are used by default.

11.4.1.2.4. Non-guaranteed Spot Instances and hourly cost limits

You can deploy machines as non-guaranteed Spot Instances on Amazon Web Services (AWS). Spot Instances use spare AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file. Then, configure a machine set YAML file to reference the machine template when it deploys machines.

Important

AWS EC2 can reclaim the capacity for a Spot Instance at any time.

Sample Spot Instance configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      spotMarketOptions: # 1
        maxPrice: <price_per_hour> # 2
# ...

1
Specifies the use of Spot Instances.
2
Optional: Specifies an hourly cost limit in US dollars for the Spot Instance. For example, setting the <price_per_hour> value to 2.50 limits the cost of the Spot Instance to USD 2.50 per hour. If you do not set this value, the maximum price defaults to the On-Demand Instance price.
Warning

Setting a specific maxPrice: <price_per_hour> value might increase the frequency of interruptions compared to using the default On-Demand Instance price. It is strongly recommended to use the default On-Demand Instance price and not to set a maximum price for Spot Instances.

Interruptions can occur when using Spot Instances for the following reasons:

  • The instance price exceeds your maximum price
  • The demand for Spot Instances increases
  • The supply of Spot Instances decreases

AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning.

When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance.

11.4.1.2.5. Capacity Reservation configuration options

OpenShift Container Platform version 4.19 and later supports Capacity Reservations on Amazon Web Services clusters, including On-Demand Capacity Reservations and Capacity Blocks for ML.

You can deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the instance type, region, and number of instances that you want to reserve. If your Capacity Reservation can accommodate the capacity request, the deployment succeeds.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file. Then, configure a machine set YAML file to reference the machine template when it deploys machines.

Sample Capacity Reservation configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      capacityReservationId: <capacity_reservation> # 1
      marketType: <market_type> # 2
# ...

1
Specify the ID of the Capacity Block for ML or On-Demand Capacity Reservation that you want to deploy machines on.
2
Specify the market type to use. The following values are valid:
CapacityBlock
Use this market type with Capacity Blocks for ML.
OnDemand
Use this market type with On-Demand Capacity Reservations.
Spot
Use this market type with Spot Instances. This option is not compatible with Capacity Reservations.

For more information, including limitations and suggested use cases for this offering, see On-Demand Capacity Reservations and Capacity Blocks for ML in the AWS documentation.

11.4.1.2.6. GPU-enabled machine options

You can deploy GPU-enabled compute machines on Amazon Web Services (AWS). The following sample configuration uses an AWS G4dn instance type, which includes an NVIDIA Tesla T4 Tensor Core GPU, as an example.

For more information about supported instance types, see the NVIDIA documentation.

To deploy compute machines with your configuration, configure the appropriate values in a machine template YAML file and a machine set YAML file that references the machine template when it deploys machines.

Sample GPU-enabled machine template configuration

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
# ...
spec:
  template:
    spec:
      instanceType: g4dn.xlarge # 1
# ...

1
Specifies a G4dn instance type.

Sample GPU-enabled machine set configuration

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster_name>-gpu-<region> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name>
spec:
  clusterName: <cluster_name>
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <cluster_name>-gpu-<region> # 2
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <cluster_name>-gpu-<region> # 3
        node-role.kubernetes.io/<role>: ""
# ...

1
Specifies a name that includes the gpu role. The name includes the cluster ID as a prefix and the region as a suffix.
2
Specifies a selector label that matches the machine set name.
3
Specifies a template label that matches the machine set name.

11.4.2. Cluster API configuration options for Google Cloud Platform

You can change the configuration of your Google Cloud Platform (GCP) Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.2.1. Sample YAML for configuring Google Cloud Platform clusters

The following example YAML files show configurations for a Google Cloud Platform cluster.

11.4.2.1.1. Sample YAML for a Cluster API machine template resource on Google Cloud Platform

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      rootDeviceType: pd-ssd
      rootDeviceSize: 128
      instanceType: n1-standard-4
      image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64
      subnet: <cluster_name>-worker-subnet
      serviceAccounts:
        email: <service_account_email_address>
        scopes:
          - https://www.googleapis.com/auth/cloud-platform
      additionalLabels:
        kubernetes-io-cluster-<cluster_name>: owned
      additionalNetworkTags:
        - <cluster_name>-worker
      ipForwarding: Disabled
1
Specify the machine template kind. This value must match the value for your platform.
2
Specify a name for the machine template.
3
Specify the details for your environment. The values here are examples.

11.4.2.1.2. Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name> # 2
spec:
  clusterName: <cluster_name> # 3
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/<role>: ""
    spec:
      bootstrap:
         dataSecretName: worker-user-data
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachineTemplate # 4
        name: <template_name> # 5
      failureDomain: <failure_domain> # 6
1
Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
2 3
Specify the cluster ID as the name of the cluster.
4
Specify the machine template kind. This value must match the value for your platform.
5
Specify the machine template name.
6
Specify the failure domain within the GCP region.

11.4.3. Cluster API configuration options for Microsoft Azure

You can change the configuration of your Microsoft Azure Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.3.1. Sample YAML for configuring Microsoft Azure clusters

The following example YAML files show configurations for an Azure cluster.

11.4.3.1.1. Sample YAML for a Cluster API machine template resource on Microsoft Azure

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      disableExtensionOperations: true
      identity: UserAssigned
      image:
        id: /subscriptions/<subscription_id>/resourceGroups/<cluster_name>-rg/providers/Microsoft.Compute/galleries/gallery_<compliant_cluster_name>/images/<cluster_name>-gen2/versions/latest # 4
      networkInterfaces:
        - acceleratedNetworking: true
          privateIPConfigs: 1
          subnetName: <cluster_name>-worker-subnet
      osDisk:
        diskSizeGB: 128
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      sshPublicKey: <ssh_key_value>
      userAssignedIdentities:
        - providerID: 'azure:///subscriptions/<subscription_id>/resourcegroups/<cluster_name>-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<cluster_name>-identity'
      vmSize: Standard_D4s_v3
1
Specify the machine template kind. This value must match the value for your platform.
2
Specify a name for the machine template.
3
Specify the details for your environment. The values here are examples.
4
Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
Note

Default OpenShift Container Platform cluster names contain hyphens (-), which are not compatible with Azure gallery name requirements. The value of <compliant_cluster_name> in this configuration must use underscores (_) instead of hyphens to comply with these requirements. Other instances of <cluster_name> do not change.

For example, a cluster name of jdoe-test-2m2np transforms to jdoe_test_2m2np. The full string for gallery_<compliant_cluster_name> in this example is gallery_jdoe_test_2m2np, not gallery_jdoe-test-2m2np. The complete value of spec.template.spec.image.id for this example value is /subscriptions/<subscription_id>/resourceGroups/jdoe-test-2m2np-rg/providers/Microsoft.Compute/galleries/gallery_jdoe_test_2m2np/images/jdoe-test-2m2np-gen2/versions/latest.
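The name transformation in this note can be reproduced with shell parameter expansion. The following sketch is illustrative only and is not part of any procedure; it uses the example cluster name from the preceding paragraph and requires bash for the ${var//-/_} expansion:

```shell
# Illustrative only: derive the Azure-compliant gallery name from the example cluster name.
cluster_name="jdoe-test-2m2np"

# Replace every hyphen with an underscore to meet Azure gallery name requirements.
compliant_cluster_name="${cluster_name//-/_}"

echo "gallery_${compliant_cluster_name}"
# gallery_jdoe_test_2m2np

# Other instances of the cluster name keep their hyphens, so the full image ID becomes:
echo "/subscriptions/<subscription_id>/resourceGroups/${cluster_name}-rg/providers/Microsoft.Compute/galleries/gallery_${compliant_cluster_name}/images/${cluster_name}-gen2/versions/latest"
```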

11.4.3.1.2. Sample YAML for a Cluster API compute machine set resource on Microsoft Azure

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name> # 2
spec:
  clusterName: <cluster_name>
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/<role>: ""
    spec:
      bootstrap:
         dataSecretName: worker-user-data
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AzureMachineTemplate # 3
        name: <template_name> # 4
1
Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
2
Specify the cluster ID as the name of the cluster.
3
Specify the machine template kind. This value must match the value for your platform.
4
Specify the machine template name.

11.4.4. Cluster API configuration options for Red Hat OpenStack Platform

You can change the configuration of your Red Hat OpenStack Platform (RHOSP) Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.4.1. Sample YAML for configuring RHOSP clusters

The following example YAML files show configurations for a RHOSP cluster.

11.4.4.1.1. Sample YAML for a Cluster API machine template resource on RHOSP

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      flavor: <openstack_node_machine_flavor> # 4
      image:
        filter:
          name: <openstack_image> # 5
1
Specify the machine template kind. This value must match the value for your platform.
2
Specify a name for the machine template.
3
Specify the details for your environment. The values here are examples.
4
Specify the RHOSP flavor to use. For more information, see Creating flavors for launching instances.
5
Specify the image to use.

11.4.4.1.2. Sample YAML for a Cluster API compute machine set resource on RHOSP

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
spec:
  clusterName: <cluster_name> # 2
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/<role>: ""
    spec:
      bootstrap:
         dataSecretName: worker-user-data # 3
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: OpenStackMachineTemplate # 4
        name: <template_name> # 5
      failureDomain: <nova_availability_zone> # 6
1
Specify a name for the compute machine set.
2
Specify the cluster ID as the name of the cluster.
3
For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace.
4
Specify the machine template kind. This value must match the value for your platform.
5
Specify the machine template name.
6
Optional: Specify the name of the Nova availability zone for the machine set to create machines in. If you do not specify a value, machines are not restricted to a specific availability zone.

11.4.5. Cluster API configuration options for VMware vSphere

You can change the configuration of your VMware vSphere Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.5.1. Sample YAML for configuring VMware vSphere clusters

The following example YAML files show configurations for a VMware vSphere cluster.

11.4.5.1.1. Sample YAML for a Cluster API machine template resource on VMware vSphere

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      template: <vm_template_name> # 4
      server: <vcenter_server_ip> # 5
      diskGiB: 128
      cloneMode: linkedClone # 6
      datacenter: <vcenter_data_center_name> # 7
      datastore: <vcenter_datastore_name> # 8
      folder: <vcenter_vm_folder_path> # 9
      resourcePool: <vsphere_resource_pool> # 10
      numCPUs: 4
      memoryMiB: 16384
      network:
        devices:
        - dhcp4: true
          networkName: "<vm_network_name>" # 11
1. Specify the machine template kind. This value must match the value for your platform.
2. Specify a name for the machine template.
3. Specify the details for your environment. The values here are examples.
4. Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
5. Specify the vCenter server IP address or fully qualified domain name.
6. Specify the type of VM clone to use. The valid values are fullClone and linkedClone. When you use the linkedClone type, the disk size matches the clone source instead of using the diskGiB value. For more information, see the vSphere documentation about VM clone types.
7. Specify the vCenter data center to deploy the compute machine set on.
8. Specify the vCenter datastore to deploy the compute machine set on.
9. Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
10. Specify the vSphere resource pool for your VMs.
11. Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster.

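The interaction between the cloneMode and diskGiB parameters described in callout 6 can be modeled as a small decision function. The following sketch only illustrates the documented behavior; the function name and values are hypothetical, not product code:

```python
def effective_disk_gib(clone_mode: str, disk_gib: int, source_disk_gib: int) -> int:
    """Model the documented behavior: a linked clone inherits the source disk size."""
    if clone_mode == "linkedClone":
        return source_disk_gib  # diskGiB is ignored for linked clones
    return disk_gib             # fullClone uses the requested size

print(effective_disk_gib("linkedClone", 128, 120))  # 120
print(effective_disk_gib("fullClone", 128, 120))    # 128
```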
11.4.5.1.2. Sample YAML for a Cluster API compute machine set resource on VMware vSphere

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name> # 2
spec:
  clusterName: <cluster_name> # 3
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/<role>: ""
    spec:
      bootstrap:
        dataSecretName: worker-user-data
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate # 4
        name: <template_name> # 5
      failureDomain: # 6
        - name: <failure_domain_name>
          region: <region_a>
          zone: <zone_a>
          server: <vcenter_server_name>
          topology:
            datacenter: <region_a_data_center>
            computeCluster: "</region_a_data_center/host/zone_a_cluster>"
            resourcePool: "</region_a_data_center/host/zone_a_cluster/Resources/resource_pool>"
            datastore: "</region_a_data_center/datastore/datastore_a>"
            networks:
            - port-group
1. Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
2, 3. Specify the cluster ID as the name of the cluster.
4. Specify the machine template kind. This value must match the value for your platform.
5. Specify the machine template name.
6. Specify the failure domain configuration details.

Note

Using multiple regions and zones on a vSphere cluster that uses the Cluster API is not a validated configuration.
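In the compute machine set sample, the labels in spec.selector.matchLabels must also appear in spec.template.metadata.labels so that the machine set can adopt the machines it creates. A minimal sanity check for this requirement can be sketched in Python; the trimmed manifest dict below is a hypothetical example, not a complete resource:

```python
def selector_matches_template(machine_set: dict) -> bool:
    """Return True if every selector label also appears on the machine template."""
    selector = machine_set["spec"]["selector"]["matchLabels"]
    template_labels = machine_set["spec"]["template"]["metadata"]["labels"]
    return all(template_labels.get(key) == value for key, value in selector.items())

# Hypothetical, trimmed machine set manifest for illustration.
machine_set = {
    "spec": {
        "selector": {"matchLabels": {
            "cluster.x-k8s.io/cluster-name": "mycluster",
            "cluster.x-k8s.io/set-name": "mycluster-worker-a",
        }},
        "template": {"metadata": {"labels": {
            "cluster.x-k8s.io/cluster-name": "mycluster",
            "cluster.x-k8s.io/set-name": "mycluster-worker-a",
            "node-role.kubernetes.io/worker": "",
        }}},
    }
}

print(selector_matches_template(machine_set))  # True
```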

11.4.6. Cluster API configuration options for bare metal

You can change the configuration of your bare metal Cluster API machines by updating values in the Cluster API custom resource manifests.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.4.6.1. Sample YAML for configuring bare metal clusters

The following example YAML files show configurations for a bare metal cluster.

11.4.6.1.1. Sample YAML for a Cluster API machine template resource on bare metal

The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3MachineTemplate # 1
metadata:
  name: <template_name> # 2
  namespace: openshift-cluster-api
spec:
  template:
    spec: # 3
      customDeploy: install_coreos
      userData:
        name: worker-user-data-managed # 4

1. Specify the machine template kind. This value must match the value for your platform.
2. Specify a name for the machine template.
3. Specify the details for your environment. The values here are examples.
4. The userData parameter refers to the Ignition configuration that the Machine API Operator generates during installation. The secret is created in the openshift-machine-api namespace, so you must copy it to the openshift-cluster-api namespace to ensure that the cluster can access it. To copy the secret, run the following command:

$ oc get secret worker-user-data-managed \
  -n openshift-machine-api -o yaml | \
  sed 's/namespace: .*/namespace: openshift-cluster-api/' | oc apply -f -

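The sed substitution in the preceding command rewrites only the namespace field of the exported secret manifest before reapplying it. If you prefer to script that transformation, the same line-oriented rewrite can be sketched in Python; this is an illustration of the text substitution, not part of the product:

```python
import re

def retarget_namespace(manifest: str, namespace: str) -> str:
    """Rewrite every 'namespace:' line of a YAML manifest, like the sed command."""
    return re.sub(r"namespace: .*", f"namespace: {namespace}", manifest)

# Hypothetical, trimmed secret manifest for illustration.
secret = """apiVersion: v1
kind: Secret
metadata:
  name: worker-user-data-managed
  namespace: openshift-machine-api
"""

print(retarget_namespace(secret, "openshift-cluster-api"))
```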
11.4.6.1.2. Sample YAML for a Cluster API compute machine set resource on bare metal

The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the cluster resource and machine template when creating machines.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name> # 1
  namespace: openshift-cluster-api
  labels:
    cluster.x-k8s.io/cluster-name: <cluster_name> # 2
spec:
  clusterName: <cluster_name>
  replicas: 1
  selector:
    matchLabels:
      test: example
      cluster.x-k8s.io/cluster-name: <cluster_name>
      cluster.x-k8s.io/set-name: <machine_set_name>
  template:
    metadata:
      labels:
        test: example
        cluster.x-k8s.io/cluster-name: <cluster_name>
        cluster.x-k8s.io/set-name: <machine_set_name>
        node-role.kubernetes.io/worker: ""
    spec:
      bootstrap:
        dataSecretName: worker-user-data-managed
      clusterName: <cluster_name>
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: Metal3MachineTemplate # 3
        name: <template_name> # 4
1. Specify a name for the compute machine set. The cluster ID, machine role, and region form a typical pattern for this value in the following format: <cluster_name>-<role>-<region>.
2. Specify the cluster ID as the name of the cluster.
3. Specify the machine template kind. This value must match the value for your platform.
4. Specify the machine template name.

11.5. Troubleshooting clusters that use the Cluster API

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Use the information in this section to understand and recover from issues you might encounter. Generally, troubleshooting steps for problems with the Cluster API are similar to those steps for problems with the Machine API.

The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, whereas the Machine API uses the openshift-machine-api namespace. When using oc commands that reference a namespace, be sure to reference the correct one.

11.5.1. Referencing the intended objects when using the CLI

For clusters that use the Cluster API, OpenShift CLI (oc) commands prioritize Cluster API objects over Machine API objects.

This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. This explanation uses the oc delete machine command, which deletes a machine, as an example.

Cause

When you run an oc command, oc communicates with the Kube API server to determine which objects to act upon. If a resource name such as machine matches custom resource definitions (CRDs) in more than one API group, the Kube API server resolves the name to the first matching CRD in alphabetical order.

CRDs for Cluster API objects are in the cluster.x-k8s.io group, while CRDs for Machine API objects are in the machine.openshift.io group. Because the letter c precedes the letter m alphabetically, the Kube API server matches on the Cluster API object CRD. As a result, the oc command acts upon Cluster API objects.
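This precedence comes down to a plain string comparison of the API group names, which a one-line sort makes concrete:

```python
# The two API groups that define a "machine" resource kind in the cluster.
groups = ["machine.openshift.io", "cluster.x-k8s.io"]

# Alphabetical order decides which CRD a bare "machine" name resolves to:
# "c" sorts before "m", so the Cluster API group wins.
winner = sorted(groups)[0]
print(winner)  # cluster.x-k8s.io
```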

Consequence

Due to this behavior, the following unintended outcomes can occur on a cluster that uses the Cluster API:

  • For namespaces that contain both types of objects, commands such as oc get machine return only Cluster API objects.
  • For namespaces that contain only Machine API objects, commands such as oc get machine return no results.
Workaround

You can ensure that oc commands act on the type of objects you intend by using the corresponding fully qualified name.

Prerequisites

  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  • To delete a Machine API machine, use the fully qualified name machine.machine.openshift.io when running the oc delete machine command:

    $ oc delete machine.machine.openshift.io <machine_name>
  • To delete a Cluster API machine, use the fully qualified name machine.cluster.x-k8s.io when running the oc delete machine command:

    $ oc delete machine.cluster.x-k8s.io <machine_name>

11.6. Disabling the Cluster API

To stop using the Cluster API to automate the management of infrastructure resources on your OpenShift Container Platform cluster, convert any Cluster API resources on your cluster to equivalent Machine API resources.

Important

Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

11.6.1. Migrating Cluster API resources to Machine API resources

On clusters that support migrating between Machine API and Cluster API resources, the two-way synchronization controller supports converting a Cluster API resource to a Machine API resource.

Note

The two-way synchronization controller only operates on clusters with the MachineAPIMigration feature gate in the TechPreviewNoUpgrade feature set enabled.

You can migrate resources that you originally migrated from the Machine API to the Cluster API, or resources that you created as Cluster API resources initially. Migrating an original Machine API resource to a Cluster API resource and then migrating it back provides an opportunity to verify that the migration process works as expected.

Note

You can only migrate some resources on supported infrastructure types.

Table 11.2. Supported resource conversions
Infrastructure                 | Compute machine    | Compute machine set | Machine health check | Control plane machine set | Cluster autoscaler
AWS                            | Technology Preview | Technology Preview  | Not Available        | Not Available             | Not Available
All other infrastructure types | Not Available      | Not Available       | Not Available        | Not Available             | Not Available
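If you automate checks around these clusters, the support matrix in Table 11.2 can be encoded as a simple lookup. The helper below is hypothetical, and the Technology Preview caveat applies to every supported entry:

```python
# Conversion support per infrastructure type, per Table 11.2.
# Only AWS compute machines and compute machine sets are convertible,
# and only as a Technology Preview.
SUPPORTED_CONVERSIONS = {
    "AWS": {"Compute machine", "Compute machine set"},
}

def conversion_supported(infrastructure: str, resource: str) -> bool:
    """Return True if migrating this resource type is available (Technology Preview)."""
    return resource in SUPPORTED_CONVERSIONS.get(infrastructure, set())

print(conversion_supported("AWS", "Compute machine"))  # True
print(conversion_supported("GCP", "Compute machine"))  # False
```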

11.6.1.1. Migrating a Cluster API resource to use the Machine API

You can migrate individual Cluster API objects to equivalent Machine API objects.

Important

Migrating a Cluster API resource to use the Machine API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have deployed an OpenShift Container Platform cluster on a supported infrastructure type.
  • You have enabled the MachineAPIMigration feature gate in the TechPreviewNoUpgrade feature set.
  • You have access to the cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Identify the Cluster API resource that you want to migrate to a Machine API resource by running the following command:

    $ oc get <resource_kind> -n openshift-cluster-api

    where <resource_kind> is one of the following values:

    machine.cluster.x-k8s.io
    The fully qualified name of the resource kind for a compute or control plane machine.
    machineset.cluster.x-k8s.io
    The fully qualified name of the resource kind for a compute machine set.
  2. Edit the resource specification by running the following command:

    $ oc edit <resource_kind>/<resource_name> -n openshift-machine-api

    where:

    <resource_kind>
    Specifies a compute machine with machine.machine.openshift.io or compute machine set with machineset.machine.openshift.io.
    <resource_name>
    Specifies the name of the Machine API resource that corresponds to the Cluster API resource that you want to migrate to the Machine API.
  3. In the resource specification, update the value of the spec.authoritativeAPI field:

    apiVersion: machine.openshift.io/v1beta1
    kind: <resource_kind> # 1
    metadata:
      name: <resource_name> # 2
      [...]
    spec:
      authoritativeAPI: MachineAPI # 3
      [...]
    status:
      authoritativeAPI: ClusterAPI # 4
      [...]

    1. The value varies depending on the resource kind. For example, the resource kind for a compute machine set is MachineSet and the resource kind for a compute machine is Machine.
    2. The name of the resource that you want to migrate.
    3. Specify the authoritative API that you want this resource to use. For example, to start migrating a Cluster API resource to the Machine API, specify MachineAPI.
    4. The value for the current authoritative API. This value indicates which API currently manages this resource. Do not change the value in this part of the specification.

Verification

  • Check the status of the conversion by running the following command:

    $ oc -n openshift-machine-api get <resource_kind>/<resource_name> -o json | jq .status.authoritativeAPI

    where:

    <resource_kind>
    Specifies a compute machine with machine.machine.openshift.io or compute machine set with machineset.machine.openshift.io.
    <resource_name>
    Specifies the name of the Machine API resource that corresponds to the Cluster API resource that you want to migrate to the Machine API.
    • While the conversion progresses, this command returns a value of Migrating. If this value persists for a long time, check the logs for the cluster-capi-operator deployment in the openshift-cluster-api namespace for more information and to identify potential issues.
    • When the conversion is complete, this command returns a value of MachineAPI.
    Important

    Do not delete any nonauthoritative resource that does not use the current authoritative API unless you want to delete the corresponding resource that does use the current authoritative API.

    When you delete a nonauthoritative resource that does not use the current authoritative API, the synchronization controller deletes the corresponding resource that does use the current authoritative API.
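The status transitions described in the verification steps (ClusterAPI, then Migrating, then MachineAPI) can be polled from a script. The sketch below stubs out the status fetch; in a real script you would shell out to the oc command shown above, and the function and variable names here are hypothetical:

```python
def wait_for_migration(fetch_status, target="MachineAPI", max_polls=10):
    """Poll status.authoritativeAPI until it reports the target value."""
    for _ in range(max_polls):
        if fetch_status() == target:
            return True
    return False

# Stubbed status sequence standing in for repeated `oc get ... -o json` calls.
statuses = iter(["ClusterAPI", "Migrating", "Migrating", "MachineAPI"])
print(wait_for_migration(lambda: next(statuses)))  # True
```

If the value never leaves Migrating, the loop gives up after max_polls attempts, which is the point at which the documentation suggests checking the cluster-capi-operator logs.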
