Chapter 5. Installing with the Assisted Installer API


After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster by using the Assisted Installer API. To use the API, you must perform the following procedures:

  • Set up the API authentication.
  • Configure the pull secret.
  • Register a new cluster definition.
  • Create an infrastructure environment for the cluster.

Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API, but you can review all of the endpoints in the API viewer or the swagger.yaml file.

5.1. Generating the offline token

Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.

Procedure

  1. In the menu, click Downloads.
  2. In the Tokens section under OpenShift Cluster Manager API Token, click View API Token.
  3. Click Load Token.

    Important

    Disable pop-up blockers.

  4. In the Your API token section, copy the offline token.
  5. In your terminal, set the offline token to the OFFLINE_TOKEN variable:

    $ export OFFLINE_TOKEN=<copied_token>
    Tip

    To make the offline token permanent, add it to your profile.
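
    For example, a minimal sketch assuming a bash login shell; the token value stays the placeholder you copied:

    $ echo "export OFFLINE_TOKEN=<copied_token>" >> ~/.bash_profile
    $ source ~/.bash_profile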

  6. (Optional) Confirm the OFFLINE_TOKEN variable definition.

    $ echo ${OFFLINE_TOKEN}

5.2. Authenticating with the REST API

API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.

Note

The API token expires after 15 minutes.

Prerequisites

  • You have generated the OFFLINE_TOKEN variable.

Procedure

  1. In your terminal, set the API_TOKEN variable by using the OFFLINE_TOKEN variable to validate the user:

    $ export API_TOKEN=$( \
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )
  2. Confirm the API_TOKEN variable definition:

    $ echo ${API_TOKEN}
  3. Create a script in your path for one of the token generating methods. For example:

    $ vim ~/.local/bin/refresh-token
    export API_TOKEN=$( \
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )

    Then, save the file.

  4. Change the file mode to make it executable:

    $ chmod +x ~/.local/bin/refresh-token
  5. Refresh the API token:

    $ source refresh-token
  6. Verify that you can access the API by running the following command:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer ${API_TOKEN}" | jq

    Example output

    {
      "release_tag": "v2.11.3",
      "versions": {
        "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211",
        "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266",
        "assisted-installer-service": "quay.io/app-sre/assisted-service:78d113a",
        "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195"
      }
    }

5.3. Configuring the pull secret

Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:

Before

{"auths":{"cloud.openshift.com": ...

After

{\"auths\":{\"cloud.openshift.com\": ...
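
To preview the escaping on your own pull secret, run the downloaded file through jq's tojson filter, which is the same transformation the API examples below rely on (a sketch, assuming the file is saved at ~/Downloads/pull-secret.txt):

$ jq 'tojson' ~/Downloads/pull-secret.txt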

Procedure

  1. In the menu, click OpenShift.
  2. In the submenu, click Downloads.
  3. In the Tokens section under Pull secret, click Download.
  4. To use the pull secret from a shell variable, execute the following command:

    $ export PULL_SECRET=$(cat ~/Downloads/pull-secret.txt | jq -R .)
  5. To slurp the pull secret file directly in an API call, use jq to read the file into the pull_secret variable and pipe the value to tojson, which formats it as escaped JSON. For example:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1
        {
            "name": "testcluster",
            "control_plane_count": "3",
            "openshift_version": "4.11",
            "pull_secret": $pull_secret[0] | tojson, 2
            "base_dns_domain": "example.com"
        }
    ')"
    1
    Slurp the pull secret file.
    2
    Format the pull secret to escaped JSON format.
  6. Confirm the PULL_SECRET variable definition:

    $ echo ${PULL_SECRET}

5.4. Generating the SSH public key

During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.

If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now.

For more information, see Generating a key pair for cluster node SSH access.
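
If you need to create the key pair first, the following is a minimal sketch using ssh-keygen. The target path matches the procedure below; the empty passphrase is an assumption, so adjust it to your security policy:

$ ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa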

Prerequisites

  • You have generated the OFFLINE_TOKEN and API_TOKEN variables.

Procedure

  1. From the root user in your terminal, get the SSH public key:

    $ cat /root/.ssh/id_rsa.pub
  2. Set the SSH public key to the CLUSTER_SSHKEY variable:

    $ CLUSTER_SSHKEY=<downloaded_ssh_key>
  3. Confirm the CLUSTER_SSHKEY variable definition:

    $ echo ${CLUSTER_SSHKEY}
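
Alternatively, you can combine steps 1 and 2 by reading the key file directly into the variable (a sketch, assuming the key is at /root/.ssh/id_rsa.pub):

$ export CLUSTER_SSHKEY="$(cat /root/.ssh/id_rsa.pub)"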

5.5. Registering a new cluster

To register a new cluster definition with the API, use the /v2/clusters endpoint.

The following parameters are mandatory:

  • name
  • openshift_version
  • pull_secret
  • cpu_architecture

See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have assigned the pull secret to the $PULL_SECRET variable.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Register a new cluster by using one of the following methods:

    • Reference the pull secret file in the request:

      $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
        -H "Authorization: Bearer ${API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
            "name": "testcluster",
            "openshift_version": "4.16", 1
            "control_plane_count": "<number>", 2
            "cpu_architecture" : "<architecture_name>", 3
            "base_dns_domain": "example.com",
            "pull_secret": $pull_secret[0] | tojson
        }
        ')" | jq '.id'
    • Write the configuration to a JSON file and then reference it in the request:

      $ cat << EOF > cluster.json
      {
        "name": "testcluster",
        "openshift_version": "4.16", 1
        "control_plane_count": "<number>", 2
        "base_dns_domain": "example.com",
        "network_type": "examplenetwork",
        "cluster_network_cidr": "11.111.1.0/14",
        "cluster_network_host_prefix": 11,
        "service_network_cidr": "111.11.1.0/16",
        "api_vips": [{"ip": ""}],
        "ingress_vips": [{"ip": ""}],
        "vip_dhcp_allocation": false,
        "additional_ntp_source": "clock.redhat.com,clock2.redhat.com",
        "ssh_public_key": "$CLUSTER_SSHKEY",
        "pull_secret": $PULL_SECRET
      }
      EOF
      $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
        -d @./cluster.json \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $API_TOKEN" \
        | jq '.id'
      1
      Pay attention to the following:
      • To install the latest OpenShift version, use the x.y format, such as 4.16 for version 4.16.10. To install a specific OpenShift version, use the x.y.z format, such as 4.16.3 for version 4.16.3.
      • To install a multi-architecture compute cluster, add the -multi extension, such as 4.16-multi for the latest version or 4.16.3-multi for a specific version.
      • If you are booting from an iSCSI drive, enter OpenShift Container Platform version 4.15 or later.
      2
      Optionally set the number of control plane nodes to 1 for a single-node OpenShift cluster, to 2 or more for a Two-Node OpenShift with Arbiter cluster, or to 3, 4, or 5 for a multi-node OpenShift Container Platform cluster. If this setting is omitted, the Assisted Installer sets 3 as the default.
      Note
      • The control_plane_count field replaces the high_availability_mode field, which is deprecated. For details, see API deprecation notice.
      • Currently, single-node OpenShift is not supported on IBM Z® and IBM Power® platforms.
      • The Assisted Installer supports 4 or 5 control plane nodes from OpenShift Container Platform 4.18 and later, on a bare metal or user-managed networking platform with an x86_64 CPU architecture. For details, see About specifying the number of control plane nodes.
      • The Assisted Installer supports 2 control plane nodes from OpenShift Container Platform 4.19 and later, for a Two-Node OpenShift with Arbiter cluster topology. If the number of control plane nodes for a cluster is 2, then it must have at least one additional arbiter host. For details, see Two-Node OpenShift with Arbiter resource requirements.
      3
      Valid values are x86_64, arm64, ppc64le, s390x, or multi. Specify multi for a multi-architecture compute cluster.
  3. Assign the returned cluster_id to the CLUSTER_ID variable and export it:

    $ export CLUSTER_ID=<cluster_id>
    Note

    If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.
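
    If you captured the full registration response in a file instead of piping it to jq '.id', you can extract the identifier programmatically. A sketch, assuming a hypothetical cluster-response.json file:

    $ export CLUSTER_ID=$(jq -r '.id' cluster-response.json)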

  4. Check the status of the new cluster:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq

Once you register a new cluster definition, create the infrastructure environment for the cluster.

Note

You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.

5.5.1. Installing Operators

You can customize your deployment by adding Operators to the cluster during installation. You can install one or more Operators individually or add a group of Operators that form a bundle. If you require advanced options, add the Operators after you have installed the cluster.

This step is optional.

5.5.1.1. Installing standalone Operators

Before selecting Operators for installation, you can verify which Operators are available in the Assisted Installer. You can also check whether an Operator is supported for a specific OCP version, CPU architecture, or platform.

You set the required Operator definitions by using the POST method for the assisted-service/v2/clusters/{cluster_id} endpoint and by setting the olm_operators parameter.

The Assisted Installer allows you to install the following standalone Operators. For additional Operators that you can select as part of a bundle, see Installing bundle Operators.

  • OpenShift Virtualization Operator (cnv)

    Note
    • Currently, OpenShift Virtualization is not supported on IBM Z® and IBM Power®.
    • The OpenShift Virtualization Operator requires backend storage and might automatically activate a storage Operator in the background, according to the following criteria:

      • None - If the CPU architecture is ARM64, no storage Operator is activated.
      • LVM Storage - For single-node OpenShift clusters on any other CPU architecture deploying OpenShift Container Platform 4.12 or higher.
      • Local Storage Operator (LSO) - For all other deployments.
  • Migration Toolkit for Virtualization Operator (mtv)

    Note

    Specifying the Migration Toolkit for Virtualization (MTV) Operator automatically activates the OpenShift Virtualization Operator. For a Single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator.

  • Multicluster engine Operator (mce)

    Note

    Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:

    • Multi-node cluster: No storage is configured. You must configure storage after the installation.
    • Single-node OpenShift: LVM Storage is installed.
  • OpenShift Data Foundation Operator (odf)
  • Logical Volume Manager Storage Operator (lvm)
  • OpenShift AI Operator (openshift-ai)
  • OpenShift sandboxed containers Operator (osc)

    Important

    The integration of the OpenShift sandboxed containers Operator into the Assisted Installer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

  • Kubernetes NMState Operator (nmstate)

    Note

    Currently, you cannot install the Kubernetes NMState Operator on the Nutanix or Oracle Cloud Infrastructure (OCI) third-party platforms.

  • AMD GPU Operator (amd-gpu)

    Note

    Installing the AMD GPU Operator automatically activates the Kernel Module Management Operator.

  • Kernel Module Management Operator (kmm)
  • Node Feature Discovery Operator (node-feature-discovery)
  • Self Node Remediation (self-node-remediation)
  • NVIDIA GPU Operator (nvidia-gpu)

    Note

    Installing the NVIDIA GPU Operator automatically activates the Node Feature Discovery Operator.

Important

The integration of the OpenShift AI, AMD GPU, Kernel Module Management, Node Feature Discovery, Self Node Remediation, and NVIDIA GPU Operators into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Procedure

  1. Optional: Check which Operators are available in the Assisted Installer by running the following command:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/supported-operators" -H "Authorization: Bearer ${API_TOKEN}" | jq .
  2. Check whether an Operator is supported for a specified OCP version, CPU architecture, or platform by running the following command:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/support-levels/features?openshift_version=4.13&cpu_architecture=x86_64&platform_type=baremetal" -H "Authorization: Bearer ${API_TOKEN}" | jq .features.SNO 
    1
     
    2
    Copy to Clipboard Toggle word wrap
    1
    Replace the attributes as follows:
    • For openshift_version, specify the OpenShift Container Platform version number. This attribute is mandatory.
    • For cpu_architecture, specify x86_64, aarch64, arm64,ppc64le,s390x, or multi. This attribute is optional.
    • For platform_type, specify baremetal, none, nutanix, vsphere, or external. This attribute is optional.
    2
    Specify the Operator in upper case, for example, .NODE-FEATURE-DISCOVERY for Node Feature Discovery, .OPENSHIFT-AI for OpenShift AI, .OSC for OpenShift sandboxed containers, .SELF-NODE-REMEDIATION for Self Node Remediation, or .MTV for Migration Toolkit for Virtualization.

    Example output

    "supported"

    Where possible statuses are "supported", "dev-preview", "tech-preview", and "unavailable".
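
    Because jq requires quotes around object keys that contain hyphens, wrap such Operator names in double quotes inside the filter. For example, to query the Node Feature Discovery feature against the same endpoint:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/support-levels/features?openshift_version=4.13&cpu_architecture=x86_64&platform_type=baremetal" -H "Authorization: Bearer ${API_TOKEN}" | jq '.features."NODE-FEATURE-DISCOVERY"'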

  3. Get the full list of supported Operators and additional features for a specified OCP version, CPU architecture, or platform by running the following command:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/support-levels/features?openshift_version=4.13&cpu_architecture=x86_64&platform_type=baremetal" -H "Authorization: Bearer ${API_TOKEN}" | jq
  4. Specify the Operators to install by running the following command:

    $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
       --slurpfile pull_secret ~/Downloads/pull-secret.txt '
    {
       "name": "testcluster",
       "openshift_version": "4.15",
       "cpu_architecture" : "x86_64",
       "base_dns_domain": "example.com",
       "olm_operators": [
          { "name": "mce" }, 1
          { "name": "odf" },
          { "name": "amd-gpu" }
       ],
       "pull_secret": $pull_secret[0] | tojson
    }
    ')" | jq '.id'
    1
    List the Operators that you want to install. Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, openshift-ai for OpenShift AI, osc for OpenShift sandboxed containers, nmstate for Kubernetes NMState, amd-gpu for AMD GPU, kmm for Kernel Module Management, node-feature-discovery for Node Feature Discovery, nvidia-gpu for NVIDIA GPU, and self-node-remediation for Self Node Remediation. Installing an Operator automatically activates any dependent Operators.

5.5.1.2. Installing bundle Operators

Although you cannot install an Operator bundle directly through the API, you can verify which Operators are included in a bundle and specify each Operator individually.

The Assisted Installer currently supports the following Operator bundles:

  • Virtualization Operator bundle - Contains the following Operators:

    • Kube Descheduler Operator (kube-descheduler)
    • Node Maintenance Operator (node-maintenance)
    • Migration Toolkit for Virtualization Operator (mtv)
    • Kubernetes NMState Operator (nmstate)
    • Fence Agents Remediation Operator (fence-agents-remediation)
    • OpenShift Virtualization Operator (cnv)
    • Node Health Check Operator (node-healthcheck)
    • Local Storage Operator (LSO) Operator (lso)
    • Cluster Observability Operator (cluster-observability)
    • MetalLB Operator (metallb)
    • NUMA Resources Operator (numaresources)
    • OpenShift API for Data Protection Operator (oadp)
  • OpenShift AI Operator bundle - Contains the following Operators:

    • Kubernetes Authorino Operator (authorino)
    • OpenShift Data Foundation Operator (odf)
    • OpenShift AI Operator (openshift-ai)
    • AMD GPU Operator (amd-gpu)
    • Node Feature Discovery Operator (node-feature-discovery)
    • NVIDIA GPU Operator (nvidia-gpu)
    • OpenShift Pipelines Operator (pipelines)
    • OpenShift Service Mesh Operator (servicemesh)
    • OpenShift Serverless Operator (serverless)
    • Kernel Module Management Operator (kmm)
Important

The introduction of the Virtualization and OpenShift AI Operator bundles into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Procedure

  1. Optional: Check which Operator bundles are available in the Assisted Installer by running the following command:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/operators/bundles" -H "Authorization: Bearer ${API_TOKEN}" | jq .
  2. Optional: Check which Operators are associated with a specific bundle by running the following command:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/operators/bundles/virtualization" -H "Authorization: Bearer ${API_TOKEN}" | jq . 
    1
    Copy to Clipboard Toggle word wrap
    1
    Specify virtualization for the Virtualization Operator bundle or openshift-ai for the OpenShift AI Operator bundle. The example specifies the Virtualization Operator bundle.

    Example output

    {
      "description": "Run virtual machines alongside containers on one platform.",
      "id": "virtualization",
      "operators": [
        "kube-descheduler",
        "node-maintenance",
        "mtv",
        "nmstate",
        "fence-agents-remediation",
        "cnv",
        "node-healthcheck",
        "cluster-observability",
        "metallb",
        "numaresources",
        "oadp"
      ],
      "title": "Virtualization"
    }

  3. Install the Operators associated with the bundle by running the following command:

    $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
       --slurpfile pull_secret ~/Downloads/pull-secret.txt '
    {
       "name": "testcluster",
       "openshift_version": "4.15",
       "cpu_architecture" : "x86_64",
       "base_dns_domain": "example.com",
       "olm_operators": [ 1
          { "name": "node-healthcheck" },
          { "name": "fence-agents-remediation" },
          { "name": "kube-descheduler" },
          { "name": "mtv" },
          { "name": "nmstate" },
          { "name": "node-maintenance" },
          { "name": "cnv" }, 2
          { "name": "cluster-observability" },
          { "name": "metallb" },
          { "name": "numaresources" },
          { "name": "oadp" }
       ],
       "pull_secret": $pull_secret[0] | tojson
    }
    ')" | jq '.id'
    1
    Specify the Operators in the Operator bundle you are installing. The example lists the Operators for the Virtualization bundle.
    2
    Note the following:
    • In the Virtualization bundle, specifying cnv automatically installs lso in the background.
    • In the OpenShift AI Operator bundle:

      • Specifying nvidia-gpu automatically installs node-feature-discovery.
      • Specifying amd-gpu automatically installs kmm.

5.5.2. Scheduling workloads to run on control plane nodes

Use the schedulable_masters attribute to enable workloads to run on control plane nodes.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have created a $PULL_SECRET variable.
  • You are installing OpenShift Container Platform 4.14 or later.

Procedure

  1. Follow the instructions for installing a cluster by using the Assisted Installer API.
  2. When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
      "schedulable_masters": true 1
    }
    ' | jq
    1
    Enables the scheduling of workloads on the control plane nodes.

5.5.3. Configuring the network management type

The Assisted Installer lets you install the following network management types:

  • Cluster-managed networking
  • User-managed networking
  • Cluster-managed networking with a user-managed load balancer

You define the network management type by adding the user_managed_networking and load_balancer attributes to the cluster definition, as in the example below:

$ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
   -H "Authorization: Bearer ${API_TOKEN}" \
   -H "Content-Type: application/json" \
   -d "$(jq --null-input \
      --slurpfile pull_secret ~/Downloads/pull-secret.txt '
   {
      "name": "testcluster",
      "openshift_version": "4.18",
      "cpu_architecture" : "x86_64",
      "base_dns_domain": "example.com",
      "user_managed_networking": false,
      "load_balancer": {
         "type": "cluster-managed"
      },
      "pull_secret": $pull_secret[0] | tojson
   }
   ')" | jq '.id'

Where:

  • user_managed_networking is either true or false.
  • load_balancer can have the type user-managed or cluster-managed.

You can review the user_managed_networking and load_balancer valid values in the swagger.yaml file.

This step is optional. If you do not define a network management type, the Assisted Installer applies cluster-managed networking by default to all highly available clusters. For single-node OpenShift, the Assisted Installer applies user-managed networking by default.

5.5.3.1. Installing cluster-managed networking

Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology. This configuration includes an integrated load balancer and virtual routing for managing the API and Ingress VIP addresses. For details, see Network management types.

Prerequisites

  • You are installing an OpenShift Container Platform cluster of three or more control plane nodes.

    Note

    Currently, cluster-managed networking is not supported on IBM Z® and IBM Power®.

Procedure

  • To define cluster-managed networking, add the following attributes and values to your cluster definition:

    "user_managed_networking": false,
    "load_balancer": {
            "type": "cluster-managed"
    }

    The load_balancer attribute is optional. If omitted for this configuration, the type is automatically set to user-managed for single-node OpenShift or to cluster-managed for all other implementations.

5.5.3.2. Installing user-managed networking

Selecting user-managed networking deploys OpenShift Container Platform with a non-standard network topology. Select user-managed networking if you want to deploy a cluster with an external load balancer and DNS, or if you intend to deploy the cluster nodes across many distinct subnets.

For details, see Network management types.

The Assisted Installer lets you deploy more than one external load balancer for user-managed networking.

Note

Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.

Procedure

  • To define user-managed networking, add the following attributes to your cluster definition:

    "user_managed_networking": true,
    Note

    The load_balancer attribute is not required when user-managed networking is set to true, because you will be provisioning your own load balancer.

Network Validations

When you enable user-managed networking, the following network validations change:

  • The L3 connectivity check (ICMP) replaces the L2 check (ARP).
  • The maximum transmission unit (MTU) validation verifies the MTU value for all interfaces and not only for the machine network.

5.5.3.3. Installing cluster-managed networking with a user-managed load balancer

Cluster-managed networking with a user-managed load balancer is a hybrid network management type designed for scenarios that require automated cluster networking with external control over load balancing. This approach enables users to provide one or more external load balancers (for example, an API load balancer and an Ingress load balancer), while retaining the bare-metal features installed in cluster-managed networking.

For details, see Network management types.

Important

Cluster-managed networking with a user-managed load balancer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Use the Assisted Installer API to deploy cluster-managed networking with a user-managed load balancer on a bare-metal or vSphere platform.

Prerequisites

  • You are installing OpenShift Container Platform version 4.16 or higher.
  • You are installing on a bare-metal or vSphere platform.
  • You are using IPv4 single-stack networking.
  • You are installing an OpenShift Container Platform cluster of three or more control plane nodes.
  • For a vSphere platform installation, you meet the additional requirements specified in vSphere installation requirements.

Procedure

  1. Configure the load balancer to be accessible from all hosts and have access to the following services:

    • OpenShift Machine Config Operator (MCO) - On control plane nodes.
    • OpenShift API - On control plane nodes.
    • Ingress Controller - On compute (worker) nodes.

    For details, see Configuring a user-managed load balancer (steps 1 and 2).

  2. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must add records to your DNS server for the cluster API and applications over the load balancer:

    • Configure the DNS record to make your primary API accessible:

      <load_balancer_ip_address> <record_name> api.<cluster_name>.<base_domain>
    • Configure the DNS record to route external traffic to your applications via an ingress controller:

      <load_balancer_ip_address> <record_name> apps.<cluster_name>.<base_domain>
    • For vSphere only, configure the DNS record to support internal API access within your network:

      <load_balancer_ip_address> <record_name> api-int.<cluster_name>.<base_domain>

    For details, see Configuring a user-managed load balancer (step 3).

  3. Add the following configurations to the Assisted Installer API cluster definitions:

    1. Set the user_managed_networking and load_balancer fields to the following values:

      "user_managed_networking": false,
      "load_balancer": {
              "type": "user-managed"
      }

      For details, see Changing the network management type.

    2. Specify the Ingress and API VIPs. These should correspond to the load balancer IP address:

          "ingress_vips": [
              {
                  "cluster_id": "<cluster-id>",
                  "ip": "<load-balancer-ip>"
              }
          ],
          "api_vips": [
              {
                  "cluster_id": "<cluster-id>",
                  "ip": "<load-balancer-ip>"
              }
          ]
    3. Specify a list of machine networks to ensure the following:

      • Each node has at least one network interface card (NIC) with an IP address in at least one machine network.
      • The load balancer IP, which is also the API VIP and Ingress VIP, is included in at least one of the machine networks.

        Example

         "machine_networks": [
            {
                "cidr": "<hosts-cidr-1>",
                "cluster_id": "<cluster-id>"
            },
            {
                "cidr": "<hosts-cidr-2>",
                "cluster_id": "<cluster-id>"
            },
            {
                "cidr": "<load-balancer-cidr>",
                "cluster_id": "<cluster-id>"
            }
        ]

      For more details, see Machine network.

Network Validations

When you enable this network management type, the following network validations change:

  • The L3 connectivity check (ICMP) replaces the L2 check (ARP).
  • The maximum transmission unit (MTU) validation verifies the MTU value for all interfaces and not only for the machine network.

5.6. Modifying a cluster

To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.

You can add or remove Operators from a cluster resource that has already been registered.

Note

To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.

Prerequisites

  • You have created a new cluster resource.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the cluster. For example, change the SSH key:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname"
    }
    ' | jq
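
    If you stored the key in the CLUSTER_SSHKEY variable earlier, you can substitute the variable instead of pasting the key literally. A sketch; note the double quotes around the request body so that the shell expands the variable:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"ssh_public_key\": \"${CLUSTER_SSHKEY}\"}" | jq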

5.6.1. Modifying Operators by using the API

You can add or remove Operators from a cluster resource that has already been registered as part of a previous installation. This is only possible before you start the OpenShift Container Platform installation.

You modify the required Operator definition by using the PATCH method for the assisted-service/v2/clusters/{cluster_id} endpoint and by setting the olm_operators parameter.

Prerequisites

  • You have refreshed the API token.
  • You have exported the CLUSTER_ID as an environment variable.

Procedure

  • Run the following command to modify the Operators:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "olm_operators": [{"name": "mce"}, {"name": "cnv"}] 1 2
    }
    ' | jq '.id'
    1
    Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, openshift-ai for OpenShift AI, osc for OpenShift sandboxed containers, nmstate for Kubernetes NMState, amd-gpu for AMD GPU, kmm for Kernel Module Management, node-feature-discovery for Node Feature Discovery, nvidia-gpu for NVIDIA GPU, self-node-remediation for Self Node Remediation, pipelines for OpenShift Pipelines, servicemesh for OpenShift Service Mesh, node-healthcheck for Node Health Check, lso for Local Storage Operator, fence-agents-remediation for Fence Agents Remediation, kube-descheduler for Kube Descheduler, serverless for OpenShift Serverless, authorino for Authorino, cluster-observability for Cluster Observability Operator, metallb for MetalLB, numaresources for NUMA Resources, and oadp for OpenShift API for Data Protection.
    2
    To modify the Operators, add a new complete list of Operators that you want to install, and not just the differences. To remove all Operators, specify an empty array: "olm_operators": [].

    Example output

    {
      <various cluster properties>,
      "monitored_operators": [
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "console",
          "operator_type": "builtin",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "cvo",
          "operator_type": "builtin",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "mce",
          "namespace": "multicluster-engine",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "multicluster-engine",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "cnv",
          "namespace": "openshift-cnv",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "hco-operatorhub",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "lvm",
          "namespace": "openshift-local-storage",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "local-storage-operator",
          "timeout_seconds": 4200
        }
      ],
      <more cluster properties>
    }

    Note

    The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:

    • "operator_type": "builtin": Operators of this type are an integral part of OpenShift Container Platform.
    • "operator_type": "olm": Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization.


5.7. Registering a new infrastructure environment

Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:

  • name
  • pull_secret
  • cpu_architecture

See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have registered a new cluster definition and exported the cluster_id.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type. You can specify either full-iso or minimal-iso. The default value is minimal-iso.

    1. Optional: You can register a new infrastructure environment by slurping the pull secret file in the request:

      $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$(jq --null-input \
        --slurpfile pull_secret ~/Downloads/pull-secret.txt \
        --arg cluster_id ${CLUSTER_ID} '
          {
            "name": "testcluster-infra-env",
            "image_type":"full-iso",
            "cluster_id": $cluster_id,
            "cpu_architecture" : "<architecture_name>", 
      1
      
            "pull_secret": $pull_secret[0] | tojson
          }
      ')" | jq '.id'
      Copy to Clipboard Toggle word wrap
      Note
      1
      Valid values are x86_64, arm64, ppc64le, s390x, and multi.
    2. Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request:

      $ cat << EOF > infra-envs.json
      {
       "name": "testcluster",
       "pull_secret": $PULL_SECRET,
       "proxy": {
          "http_proxy": "",
          "https_proxy": "",
          "no_proxy": ""
        },
        "ssh_authorized_key": "$CLUSTER_SSHKEY",
        "image_type": "full-iso",
        "cluster_id": "${CLUSTER_ID}",
        "openshift_version": "4.11"
      }
      EOF
      $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" \
       -d @./infra-envs.json \
       -H "Content-Type: application/json" \
       -H "Authorization: Bearer $API_TOKEN" \
       | jq '.id'
  3. Assign the returned id to the INFRA_ENV_ID variable and export it:

    $ export INFRA_ENV_ID=<id>
Note

Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id, you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.
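
If you lose the identifier, you can look it up again by listing the infrastructure environments. A sketch, assuming the v2/infra-envs endpoint accepts a cluster_id query parameter:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs?cluster_id=${CLUSTER_ID}" \
-H "Authorization: Bearer ${API_TOKEN}" | jq -r '.[].id'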

5.8. Modifying an infrastructure environment

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.

See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Prerequisites

  • You have created a new infrastructure environment.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the infrastructure environment:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
      --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
          "image_type":"minimal-iso",
          "pull_secret": $pull_secret[0] | tojson
        }
    ')" | jq

5.8.1. Adding kernel arguments

Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.

The RHCOS installer kargs modify command supports the append, delete, and replace options.

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the kernel arguments:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
      --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
          "kernel_arguments": [{ "operation": "append", "value": "<karg>=<value>" }], 
    1
    
          "image_type":"minimal-iso",
          "pull_secret": $pull_secret[0] | tojson
        }
    ')" | jq
    Copy to Clipboard Toggle word wrap
    1
    Replace <karg> with the the kernel argument and <value> with the kernal argument value. For example: rd.net.timeout.carrier=60. You can specify multiple kernel arguments by adding a JSON object for each kernel argument.
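
    For example, a sketch of a kernel_arguments array that appends two arguments, where both values are illustrative only:

    "kernel_arguments": [
      { "operation": "append", "value": "rd.net.timeout.carrier=60" },
      { "operation": "append", "value": "audit=0" }
    ],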

5.9. Applying a static network configuration

You can apply a static network configuration by using the Assisted Installer API. This step is optional.

Important

A static IP configuration is not supported in the following scenarios:

  • OpenShift Container Platform installations on Oracle Cloud Infrastructure.
  • OpenShift Container Platform installations on iSCSI boot volumes.

Prerequisites

  • You have created an infrastructure environment using the API or have created a cluster using the web console.
  • You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
  • You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.
  • You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml, as in the sketch after these prerequisites.
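
As a point of reference, the following is a minimal hypothetical server-a.yaml in NMState format that assigns a static IPv4 address to one interface. Every address below is a placeholder:

interfaces:
  - name: eth0
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.0.2.10
          prefix-length: 24
dns-resolver:
  config:
    server:
      - 192.0.2.1
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.0.2.1
      next-hop-interface: eth0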

Procedure

  1. Create a temporary file /tmp/request-body.txt with the API request:

    $ jq -n --arg NMSTATE_YAML1 "$(cat server-a.yaml)" --arg NMSTATE_YAML2 "$(cat server-b.yaml)" \
    '{
      "static_network_config": [
        {
          "network_yaml": $NMSTATE_YAML1,
          "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}]
        },
        {
          "network_yaml": $NMSTATE_YAML2,
          "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}]
         }
      ]
    }' >> /tmp/request-body.txt
  2. Refresh the API token:

    $ source refresh-token
  3. Send the request to the Assisted Service API endpoint:

    $ curl -H "Content-Type: application/json" \
    -X PATCH -d @/tmp/request-body.txt \
    -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID

5.10. Adding hosts

After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:

  • Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM.
  • Minimal ISO image: Use the minimal ISO image when the virtual media connection has limited bandwidth. This is the default setting. The image includes only what the agent requires to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

    This option is mandatory in the following scenarios:

    • If you are installing OpenShift Container Platform on Oracle Cloud Infrastructure.
    • If you are installing OpenShift Container Platform on iSCSI boot volumes.
Note

Currently, ISO images are supported on IBM Z® (s390x) with KVM, iPXE with z/VM, and LPAR (both static and DPM). For details, see Booting hosts using iPXE.

You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image.

Prerequisites

  • You have created a cluster.
  • You have created an infrastructure environment.
  • You have completed the configuration.
  • If the cluster hosts require the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, required domains or IP addresses, and port for the HTTP and HTTPS URLs of the proxy server. If the cluster hosts are behind a firewall, allow the nodes to access the required domains or IP addresses through the firewall. See Configuring your firewall for OpenShift Container Platform for more information.

    Note

    The proxy username and password must be URL-encoded. For one way to encode them, see the sketch after these prerequisites.

  • You have selected an image type or will use the default minimal-iso.
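
Regarding the URL-encoding note above: jq's @uri filter offers a quick way to encode proxy credentials (a sketch; the password value is a placeholder):

$ jq -rn --arg v 'p@ssw0rd!' '$v|@uri'
p%40ssw0rd%21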

Procedure

  1. Configure the discovery image if needed. For details, see Configuring the discovery image.
  2. Refresh the API token:

    $ source refresh-token
  3. Get the download URL:

    $ curl -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url

    Example output

    {
      "expires_at": "2024-02-07T20:20:23.000Z",
      "url": "https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso"
    }

  4. Download the discovery image:

    $ wget -O discovery.iso <url>

    Replace <url> with the download URL from the previous step.
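
    You can also combine steps 3 and 4 into one command by extracting the URL with jq. A sketch; the outer quotes matter because the presigned URL contains characters the shell would otherwise interpret:

    $ wget -O discovery.iso "$(curl -s -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url \
    | jq -r '.url')"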

  5. Boot the host(s) with the discovery image.
  6. Assign a role to host(s).

5.10.1. Selecting a role

You can select a role for the host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of the following roles:

  • master - Assigns the control plane role to the host, allowing the host to manage and coordinate the cluster.
  • arbiter - Assigns the arbiter role to the host, providing a cost-effective solution for components that require a quorum.
  • worker - Assigns the compute role to the host, enabling the host to run application workloads.
  • auto-assign - Automatically determines whether the host is a master, worker, or arbiter. This is the default setting.

Use this procedure to assign a role to the host. If the host_role setting is omitted, the host defaults to auto-assign.

Prerequisites

  • You have added hosts to the cluster.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Get the host IDs:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'

    Example output

    [
      "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5"
    ]

  3. Add the host_role setting:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
        {
          "host_role":"worker"
        }
    ' | jq

    Replace <host_id> with the ID of the host from the previous step.
5.11. Modifying hosts

After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.

You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.

A host might be one of the following roles:

  • master - Assigns the control plane role to the host, allowing the host to manage and coordinate the cluster.
  • arbiter - Assigns the arbiter role to the host, providing a cost-effective solution for components that require a quorum.
  • worker - Assigns the compute role to the host, enabling the host to run application workloads.
  • auto-assign - Automatically determines whether the host is a master, worker, or arbiter node.

Use the following procedure to set the host’s role. If the host_role setting is omitted, the host defaults to auto-assign.

Prerequisites

  • You have added hosts to the cluster.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Get the host IDs:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
  3. Modify the host settings by using the example below:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
        {
          "host_role":"worker",
          "host_name" : "worker-1"
        }
    ' | jq
    1
    Replace <host_id> with the ID of the host.

5.11.1. Modifying storage disk configuration

Each host retrieved during host discovery can have multiple storage disks. You can optionally change the default configurations for each disk.

Important
  • Starting from OpenShift Container Platform 4.14, you can configure nodes with Intel® Virtual RAID on CPU (VROC) to manage NVMe RAIDs. For details, see Configuring an Intel® Virtual RAID on CPU (VROC) data volume.
  • Starting from OpenShift Container Platform 4.15, you can install a cluster on a single or multipath iSCSI boot device using the Assisted Installer.

Prerequisites

  • Configure the cluster and discover the hosts. For details, see Additional resources.

5.11.1.1. Viewing the storage disks

You can view the hosts in your cluster, and the disks on each host. You can then perform actions on a specific disk.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Get the host IDs for the cluster:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
    Copy to Clipboard Toggle word wrap

    Example output

    "1022623e-7689-8b2d-7fbd-e6f4d5bb28e5"

    Note

    This is the ID of a single host. Multiple host IDs are separated by commas.

  3. Get the disks for a specific host:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.inventory | fromjson | .disks'

    Replace <host_id> with the ID of the relevant host.

    Example output

      [
      {
        "by_id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506",
        "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0",
        "drive_type": "HDD",
        "has_uuid": true,
        "hctl": "1:2:0:0",
        "id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506",
        "installation_eligibility": {
          "eligible": true,
          "not_eligible_reasons": null
        },
        "model": "PERC_H710P",
        "name": "sda",
        "path": "/dev/sda",
        "serial": "0006a560141adc3a2d00fb8af960f681",
        "size_bytes": 6595056500736,
        "vendor": "DELL",
        "wwn": "0x6c81f660f98afb002d3adc1a1460a506"
      }
    ]

    Note

    This is the output for one disk. It includes the id and installation_eligibility properties that the following procedures use.
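
    To narrow the output to disks that can serve as installation targets, you can extend the jq filter to keep only eligible disks; a minimal sketch, assuming jq is installed:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
      -H "Authorization: Bearer ${API_TOKEN}" \
      | jq '.inventory | fromjson | .disks[] | select(.installation_eligibility.eligible) | {id, name, size_bytes}'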

5.11.1.2. Changing the installation disk

The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.

You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.

Note

Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the installation disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see Modifying the DM Multipath configuration file.

Procedure

  1. Get the host and storage disk IDs. For details, see Viewing the storage disks.
  2. Optional: Identify the current installation disk:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.installation_disk_id'

    Replace <host_id> with the ID of the relevant host.
  3. Assign a new installation disk:

    Note

    Multipath devices are automatically discovered and listed in the host’s inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with "drive_type" set to "Multipath", rather than to "FC" which indicates a single path.

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -d '
    {
      "disks_selected_config": [
        {
          "id": "<disk_id>",
          "role": "install"
        }
      ]
    }'

    Replace <host_id> with the ID of the host and <disk_id> with the ID of the new installation disk.
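
    To find the multipath devices mentioned in the note above, you can filter the host inventory by drive_type; a minimal sketch, assuming jq is installed:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
      -H "Authorization: Bearer ${API_TOKEN}" \
      | jq '.inventory | fromjson | .disks[] | select(.drive_type == "Multipath") | .id'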

5.11.1.3. Disabling disk formatting

The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.

You can choose to disable the formatting of a specific disk. Disable formatting with caution, as bootable disks can interfere with the installation process, specifically the boot order.

You cannot disable formatting for the installation disk.

Procedure

  1. Get the host and storage disk IDs. For details, see Viewing the storage disks.
  2. Run the following command:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -d '
    {
     "disks_skip_formatting": [
       {
         "disk_id": "<disk_id>",
         "skip_formatting": true
       }
     ]
    }'

    Replace <host_id> with the ID of the host and <disk_id> with the ID of the disk. If there is more than one disk, separate the IDs with a comma. To re-enable formatting, change the skip_formatting value to false.
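
    Because only bootable disks are marked for formatting, you can list them in advance to decide which ones to skip; a minimal sketch, assuming jq is installed and that the host inventory reports a bootable flag for each disk:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
      -H "Authorization: Bearer ${API_TOKEN}" \
      | jq '.inventory | fromjson | .disks[] | select(.bootable) | {id, name}'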

5.12. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.

You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.

You can only upload one base64-encoded JSON manifest at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.
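
For example, to combine several single-manifest YAML files into one multi-document file before uploading, you can insert the --- document separator between them; a minimal sketch, assuming a bash shell (manifest1.yaml and manifest2.yaml are illustrative names):

    $ awk 'FNR==1 && NR>1 {print "---"} {print}' manifest1.yaml manifest2.yaml > combined.yaml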

For a file containing a single custom manifest, accepted file extensions include .yaml, .yml, or .json.

Single custom manifest example

{
    "apiVersion": "machineconfiguration.openshift.io/v1",
    "kind": "MachineConfig",
    "metadata": {
        "labels": {
            "machineconfiguration.openshift.io/role": "primary"
        },
        "name": "10_primary_storage_config"
    },
    "spec": {
        "config": {
            "ignition": {
                "version": "3.2.0"
            },
            "storage": {
                "disks": [
                    {
                        "device": "</dev/xxyN>",
                        "partitions": [
                            {
                                "label": "recovery",
                                "startMiB": 32768,
                                "sizeMiB": 16384
                            }
                        ]
                    }
                ],
                "filesystems": [
                    {
                        "device": "/dev/disk/by-partlabel/recovery",
                        "label": "recovery",
                        "format": "xfs"
                    }
                ]
            }
        }
    }
}

For a file containing multiple custom manifests, accepted file types include .yaml or .yml.

Multiple custom manifest example

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-openshift-machineconfig-worker-kargs
spec:
  kernelArguments:
    - loglevel=5

Note
  • When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
  • For more information about custom manifests, see Additional Resources.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have registered a new cluster definition and exported the cluster_id to the $CLUSTER_ID BASH variable.

Procedure

  1. Create a custom manifest file.
  2. Save the custom manifest file using the appropriate extension for the file format.
  3. Refresh the API token:

    $ source refresh-token
  4. Add the custom manifest to the cluster by executing the following command:

    $ curl -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
        -H "Authorization: Bearer $API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{
                "file_name":"manifest.json",
                "folder":"manifests",
                "content":"'"$(base64 -w 0 ~/manifest.json)"'"
        }' | jq

    Replace manifest.json with the name of your manifest file. The second instance, ~/manifest.json in the base64 command, is the path to the file; ensure the path is correct.

    Example output

    {
      "file_name": "manifest.json",
      "folder": "manifests"
    }

    Note

    The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.

  5. Verify that the Assisted Installer added the manifest:

    $ curl -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json" -H "Authorization: Bearer $API_TOKEN"

    Replace manifest.json with the name of your manifest file.
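
    The same POST call works for a base64-encoded multi-document YAML file; a minimal sketch, assuming an illustrative file named kargs.yaml uploaded to the openshift folder:

    $ curl -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
        -H "Authorization: Bearer $API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{
                "file_name":"kargs.yaml",
                "folder":"openshift",
                "content":"'"$(base64 -w 0 ~/kargs.yaml)"'"
        }' | jq

    To review everything uploaded so far, you can list all custom manifests for the cluster:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
      -H "Authorization: Bearer $API_TOKEN" | jq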

5.13. Preinstallation validations

The Assisted Installer verifies that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure that the cluster and each host pass the preinstallation validations.
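
To inspect validations that are not yet passing, you can query the cluster and filter its validation results; a minimal sketch, assuming jq is installed and that the cluster object returns validations_info as a JSON-encoded string:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.validations_info | fromjson | map_values(map(select(.status != "success")))'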

5.14. Installing the cluster

Once the cluster hosts pass validation, you can install the cluster.

Prerequisites

  • You have created a cluster and infrastructure environment.
  • You have added hosts to the infrastructure environment.
  • The hosts have passed validation.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Install the cluster:

    $ curl -H "Authorization: Bearer $API_TOKEN" \
    -X POST \
    https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/actions/install | jq
  3. Complete any postinstallation platform integration steps.
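
After you start the installation, you can track progress by polling the cluster status; a minimal sketch, assuming jq is installed:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer $API_TOKEN" | jq '{status, status_info}'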