
Chapter 4. Installing with the Assisted Installer API


After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster by using the Assisted Installer API. To use the API, you must perform the following procedures:

  • Set up the API authentication.
  • Configure the pull secret.
  • Register a new cluster definition.
  • Create an infrastructure environment for the cluster.

Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API, but you can review all of the endpoints in the API viewer or the swagger.yaml file.

4.1. Generating the offline token

Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.

Prerequisites

  • You have installed jq.

Procedure

  1. In the menu, click Downloads.
  2. In the Tokens section under OpenShift Cluster Manager API Token, click View API Token.
  3. Click Load Token.

    Important

    Disable pop-up blockers.

  4. In the Your API token section, copy the offline token.
  5. In your terminal, set the offline token to the OFFLINE_TOKEN variable:

    $ export OFFLINE_TOKEN=<copied_token>
    Tip

    To make the offline token permanent, add it to your profile.
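
    For example, assuming a Bash shell, you can append the export to your profile (a sketch; adapt the file name to your shell):

    $ echo "export OFFLINE_TOKEN=<copied_token>" >> ~/.bash_profile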

  6. (Optional) Confirm the OFFLINE_TOKEN variable definition.

    $ echo ${OFFLINE_TOKEN}

4.2. Authenticating with the REST API

API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.

Note

The API token expires after 15 minutes.

Prerequisites

  • You have generated the OFFLINE_TOKEN variable.

Procedure

  1. On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user.

    $ export API_TOKEN=$( \
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )
  2. Confirm the API_TOKEN variable definition:

    $ echo ${API_TOKEN}
  3. Create a script in your path for one of the token-generating methods. For example:

    $ vim ~/.local/bin/refresh-token
    export API_TOKEN=$( \
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )

    Then, save the file.

  4. Change the file mode to make it executable:

    $ chmod +x ~/.local/bin/refresh-token
  5. Refresh the API token:

    $ source refresh-token
  6. Verify that you can access the API by running the following command:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer ${API_TOKEN}" | jq

    Example output

    {
      "release_tag": "v2.11.3",
      "versions": {
        "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211",
        "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266",
        "assisted-installer-service": "quay.io/app-sre/assisted-service:78d113a",
        "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195"
      }
    }

4.3. Configuring the pull secret

Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:

Before

{"auths":{"cloud.openshift.com": ...

After

{\"auths\":{\"cloud.openshift.com\": ...

Procedure

  1. In the menu, click OpenShift.
  2. In the submenu, click Downloads.
  3. In the Tokens section under Pull secret, click Download.
  4. To use the pull secret from a shell variable, execute the following command:

    $ export PULL_SECRET=$(cat ~/Downloads/pull-secret.txt | jq -R .)
  5. To slurp the pull secret file using jq, reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1
        {
            "name": "testcluster",
            "high_availability_mode": "None",
            "openshift_version": "4.11",
            "pull_secret": $pull_secret[0] | tojson, 2
            "base_dns_domain": "example.com"
        }
    ')"
    1
    Slurp the pull secret file.
    2
    Format the pull secret to escaped JSON format.
  6. Confirm the PULL_SECRET variable definition:

    $ echo ${PULL_SECRET}

4.4. Generating the SSH public key

During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.

If you do not have an existing SSH key pair on your local machine to use for authentication, create one now.
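
For example, the following command creates a passphrase-less RSA key pair at the path used later in this procedure (a sketch; adjust the key type, passphrase, and path as needed):

$ ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa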

Prerequisites

  • Generate the OFFLINE_TOKEN and API_TOKEN variables.

Procedure

  1. From the root user in your terminal, get the SSH public key:

    $ cat /root/.ssh/id_rsa.pub
  2. Set the SSH public key to the CLUSTER_SSHKEY variable:

    $ CLUSTER_SSHKEY=<downloaded_ssh_key>
  3. Confirm the CLUSTER_SSHKEY variable definition:

    $ echo ${CLUSTER_SSHKEY}
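
    Alternatively, you can read the public key file directly into the variable, which avoids copy-and-paste errors (assuming the key is at the path shown above):

    $ CLUSTER_SSHKEY=$(cat /root/.ssh/id_rsa.pub)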

4.5. Registering a new cluster

To register a new cluster definition with the API, use the /v2/clusters endpoint.

The following parameters are mandatory:

  • name
  • openshift_version
  • pull_secret
  • cpu_architecture

See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have assigned the pull secret to the $PULL_SECRET variable.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Register a new cluster by using one of the following methods:

    • Register the cluster by referencing the pull secret file in the request:

      $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
        -H "Authorization: Bearer ${API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
            "name": "testcluster",
            "openshift_version": "4.16", 1
            "high_availability_mode": "<mode>", 2
            "cpu_architecture" : "<architecture_name>", 3
            "base_dns_domain": "example.com",
            "pull_secret": $pull_secret[0] | tojson
        }
        ')" | jq '.id'
    • Register the cluster by doing the following:

      1. Writing the configuration to a JSON file:

        $ cat << EOF > cluster.json
        {
          "name": "testcluster",
          "openshift_version": "4.16", 1
          "high_availability_mode": "<mode>", 2
          "base_dns_domain": "example.com",
          "network_type": "examplenetwork",
          "cluster_network_cidr":"11.111.1.0/14"
          "cluster_network_host_prefix": 11,
          "service_network_cidr": "111.11.1.0/16",
          "api_vips":[{"ip": ""}],
          "ingress_vips": [{"ip": ""}],
          "vip_dhcp_allocation": false,
          "additional_ntp_source": "clock.redhat.com,clock2.redhat.com",
          "ssh_public_key": "$CLUSTER_SSHKEY",
          "pull_secret": $PULL_SECRET
        }
        EOF
      2. Referencing it in the request:

        $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
          -d @./cluster.json \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer $API_TOKEN" \
          | jq '.id'
        1
        To install the latest OpenShift version, use the x.y format, such as 4.16 for version 4.16.10. To install a specific OpenShift version, use the x.y.z format, such as 4.16.3 for version 4.16.3. To install a mixed architecture cluster, add the -multi extension, such as 4.16-multi for the latest version or 4.16.3-multi for a specific version.
        2
        Set the value to Full for a high-availability multi-node cluster or None for a single-node OpenShift cluster.
        3
        Valid values are x86_64, arm64, ppc64le, s390x, or multi. Specify multi for a mixed-architecture cluster.
  3. Assign the returned cluster_id to the CLUSTER_ID variable and export it:

    $ export CLUSTER_ID=<cluster_id>
    Note

    If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.
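
    If you lose the value, you can recover it by listing your registered clusters and filtering by name (a sketch that assumes the cluster name used in this example):

    $ export CLUSTER_ID=$(curl -s "https://api.openshift.com/api/assisted-install/v2/clusters" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      | jq -r '.[] | select(.name == "testcluster") | .id')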

  4. Check the status of the new cluster:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq

Once you register a new cluster definition, create the infrastructure environment for the cluster.

Note

You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.

4.5.1. Installing Operators

You can install the following Operators when you register a new cluster:

  • OpenShift Virtualization Operator

    Note

    Currently, OpenShift Virtualization is not supported on IBM Z® and IBM Power®.

  • Multicluster engine Operator
  • OpenShift Data Foundation Operator
  • LVM Storage Operator

If you require advanced options, install the Operators after you have installed the cluster.

Procedure

  • Run the following command:

    $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
       --slurpfile pull_secret ~/Downloads/pull-secret.txt '
    {
       "name": "testcluster",
       "openshift_version": "4.15",
       "cpu_architecture" : "x86_64",
       "base_dns_domain": "example.com",
      "olm_operators": [
      { "name": "mce" } 1
      ,
      { "name": "odf" } 2
        ]
       "pull_secret": $pull_secret[0] | tojson
    }
    ')" | jq '.id'
    1
    Specify cnv for OpenShift Virtualization, mce for multicluster engine, odf for OpenShift Data Foundation, or lvm for LVM Storage.
    2
    This example installs multicluster engine and OpenShift Data Foundation on a multi-node cluster. Specify mce and lvm for a single-node OpenShift cluster.

4.6. Modifying a cluster

To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.
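
For example, a PATCH request of the following form changes the network type and enables user-managed networking. This is a sketch; the field names follow the v2-cluster-update-params model, so confirm them in the API viewer for your version:

$ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
-X PATCH \
-H "Authorization: Bearer ${API_TOKEN}" \
-H "Content-Type: application/json" \
-d '
{
    "network_type": "OVNKubernetes",
    "user_managed_networking": true
}
' | jq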

You can add or remove Operators from a cluster resource that has already been registered.

Note

To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.

Prerequisites

  • You have created a new cluster resource.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the cluster. For example, change the SSH key:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname"
    }
    ' | jq

4.6.1. Modifying Operators

You can add or remove Operators from a cluster resource that you have already registered. This is only possible before you start the OpenShift Container Platform installation.

You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint.

Prerequisites

  • You have refreshed the API token.
  • You have exported the CLUSTER_ID as an environment variable.

Procedure

  • Run the following command to modify the Operators:

    $ curl https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
    {
        "olm_operators": [{"name": "mce"}, {"name": "cnv"}], 1
    }
    ' | jq '.id'
    1
    Specify cnv for OpenShift Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, or lvm for Logical Volume Manager Storage. To remove a previously installed Operator, exclude it from the list of values. To remove all previously installed Operators, specify an empty array: "olm_operators": [].

    Sample output

    {
      <various cluster properties>,
      "monitored_operators": [
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "console",
          "operator_type": "builtin",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "cvo",
          "operator_type": "builtin",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "mce",
          "namespace": "multicluster-engine",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "multicluster-engine",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "cnv",
          "namespace": "openshift-cnv",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "hco-operatorhub",
          "timeout_seconds": 3600
        },
        {
          "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a",
          "name": "lvm",
          "namespace": "openshift-local-storage",
          "operator_type": "olm",
          "status_updated_at": "0001-01-01T00:00:00.000Z",
          "subscription_name": "local-storage-operator",
          "timeout_seconds": 4200
        }
      ],
      <more cluster properties>
    }

    Note

    The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:

    • "operator_type": "builtin": Operators of this type are an integral part of OpenShift Container Platform.
    • "operator_type": "olm": Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization.

4.7. Registering a new infrastructure environment

Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:

  • name
  • pull_secret
  • cpu_architecture

See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have registered a new cluster definition and exported the cluster_id.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type. You can specify either full-iso or minimal-iso. The default value is minimal-iso.

    1. Optional: You can register a new infrastructure environment by slurping the pull secret file in the request:

      $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$(jq --null-input \
        --slurpfile pull_secret ~/Downloads/pull-secret.txt \
        --arg cluster_id ${CLUSTER_ID} '
          {
            "name": "testcluster-infra-env",
            "image_type":"full-iso",
            "cluster_id": $cluster_id,
            "cpu_architecture" : "<architecture_name>", 1
            "pull_secret": $pull_secret[0] | tojson
          }
      ')" | jq '.id'
      1
      Valid values are x86_64, arm64, ppc64le, s390x, and multi.
    2. Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request:

      $ cat << EOF > infra-envs.json
      {
       "name": "testcluster",
       "pull_secret": $PULL_SECRET,
       "proxy": {
          "http_proxy": "",
          "https_proxy": "",
          "no_proxy": ""
        },
        "ssh_authorized_key": "$CLUSTER_SSHKEY",
        "image_type": "full-iso",
        "cluster_id": "${CLUSTER_ID}",
        "openshift_version": "4.11"
      }
      EOF
      $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs"
       -d @./infra-envs.json
       -H "Content-Type: application/json"
       -H "Authorization: Bearer $API_TOKEN"
       | jq '.id'
  3. Assign the returned id to the INFRA_ENV_ID variable and export it:

    $ export INFRA_ENV_ID=<id>
Note

Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id, you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.
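
If you need to recover the ID later, you can list the infrastructure environments and filter by the associated cluster with jq (a sketch using client-side filtering):

$ export INFRA_ENV_ID=$(curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r --arg cid "${CLUSTER_ID}" '.[] | select(.cluster_id == $cid) | .id')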

4.8. Modifying an infrastructure environment

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.

See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Prerequisites

  • You have created a new infrastructure environment.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the infrastructure environment:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
      --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
          "image_type":"minimal-iso",
          "pull_secret": $pull_secret[0] | tojson
        }
    ')" | jq

4.8.1. Adding kernel arguments

Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.

The RHCOS installer kargs modify command supports the append, delete, and replace options.

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Modify the kernel arguments:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID} \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
      --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
          "kernel_arguments": [{ "operation": "append", "value": "<karg>=<value>" }], 1
          "image_type":"minimal-iso",
          "pull_secret": $pull_secret[0] | tojson
        }
    ')" | jq
    1
    Replace <karg> with the kernel argument and <value> with the kernel argument value. For example: rd.net.timeout.carrier=60. You can specify multiple kernel arguments by adding a JSON object for each kernel argument.

4.9. Adding hosts

After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:

  • Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM.
  • Minimal ISO image: Use the minimal ISO image when bandwidth over the virtual media connection is limited. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
Note

Currently, ISO images are supported on IBM Z® (s390x) with KVM, iPXE with z/VM, and LPAR (both static and DPM). For details, see Booting hosts using iPXE.

You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image.

Prerequisites

  • You have created a cluster.
  • You have created an infrastructure environment.
  • You have completed the configuration.
  • If the cluster hosts are behind a firewall that requires the use of a proxy, you have configured the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.

    Note

    The proxy username and password must be URL-encoded. See the encoding example after this list.

  • You have selected an image type or will use the default minimal-iso.
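
As noted in the prerequisites, the proxy username and password must be URL-encoded. One way to encode a value is with the jq @uri filter (a convenience sketch; any URL-encoding tool works):

$ jq -rn --arg value '<proxy_password>' '$value | @uri'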

Procedure

  1. Configure the discovery image if needed. For details, see Configuring the discovery image.
  2. Refresh the API token:

    $ source refresh-token
  3. Get the download URL:

    $ curl -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url

    Example output

    {
      "expires_at": "2024-02-07T20:20:23.000Z",
      "url": "https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso"
    }

  4. Download the discovery image:

    $ wget -O discovery.iso <url>

    Replace <url> with the download URL from the previous step.
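
    Alternatively, you can fetch the URL and download the image in one step (assuming jq is installed):

    $ wget -O discovery.iso "$(curl -s -H "Authorization: Bearer ${API_TOKEN}" \
      https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url \
      | jq -r '.url')"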

  5. Boot the host(s) with the discovery image.
  6. Assign a role to the host(s).

4.10. Modifying hosts

After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.

You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.

A host can have one of two roles:

  • master: A host with the master role will operate as a control plane host.
  • worker: A host with the worker role will operate as a worker host.

By default, the Assisted Installer sets a host to auto-assign, which means the installation program determines the host's role automatically. Use the following procedure to set the host's role:

Prerequisites

  • You have added hosts to the cluster.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Get the host IDs:

    $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
  3. Modify the host:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
        {
          "host_role":"worker"
          "host_name" : "worker-1"
        }
    ' | jq
    1
    Replace <host_id> with the ID of the host.

4.10.1. Modifying storage disk configuration

Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk.

Important

Starting from OpenShift Container Platform 4.16, you can install a cluster on a single iSCSI boot device using the Assisted Installer. Although OpenShift Container Platform also supports multipathing for iSCSI, this feature is currently not available for Assisted Installer deployments.

Prerequisites

  • Configure the cluster and discover the hosts. For details, see Additional resources.

Viewing the storage disks

You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Get the host IDs for the cluster:

    $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'

    Sample output

    $ "1022623e-7689-8b2d-7fbd-e6f4d5bb28e5"

    Note

    This is the ID of a single host. Multiple host IDs are separated by commas.

  3. Get the disks for a specific host:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.inventory | fromjson | .disks'
    1
    Replace <host_id> with the ID of the relevant host.

    Sample output

    [
      {
        "by_id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506",
        "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0",
        "drive_type": "HDD",
        "has_uuid": true,
        "hctl": "1:2:0:0",
        "id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506",
        "installation_eligibility": {
          "eligible": true,
          "not_eligible_reasons": null
        },
        "model": "PERC_H710P",
        "name": "sda",
        "path": "/dev/sda",
        "serial": "0006a560141adc3a2d00fb8af960f681",
        "size_bytes": 6595056500736,
        "vendor": "DELL",
        "wwn": "0x6c81f660f98afb002d3adc1a1460a506"
      }
    ]

    Note

    This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk.

Changing the installation disk

The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.

You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.
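
For example, to list only the IDs of disks that are eligible for installation on a host, you can extend the inventory query shown above (replace <host_id> with the ID of the relevant host):

$ curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq '.inventory | fromjson | .disks[] | select(.installation_eligibility.eligible == true) | .id'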

Note

Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see The DM Multipath Configuration File.

Procedure

  1. Get the host and storage disk IDs. For details, see Viewing the storage disks.
  2. Optional: Identify the current installation disk:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.installation_disk_id'
    1
    Replace <host_id> with the ID of the relevant host.
  3. Assign a new installation disk:

    Note

    Multipath devices are automatically discovered and listed in the host’s inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with "drive_type" set to "Multipath", rather than to "FC" which indicates a single path.

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -d '
    {
      "disks_selected_config": [
        {
          "id": "<disk_id>", 2
          "role": "install"
        }
      ]
    }
    ' | jq
    1
    Replace <host_id> with the ID of the host.
    2
    Replace <disk_id> with the ID of the new installation disk.

Disabling disk formatting

The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.

You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order.

You cannot disable formatting for the installation disk.

Procedure

  1. Get the host and storage disk IDs. For details, see Viewing the storage disks.
  2. Run the following command:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \ 1
    -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -d '
    {
     "disks_skip_formatting": [
       {
         "disk_id": "<disk_id>", 2
         "skip_formatting": true 3
       }
     ]
    }
    ' | jq
    1
    Replace <host_id> with the ID of the host.
    2
    Replace <disk_id> with the ID of the disk. If there is more than one disk, separate the IDs with a comma.
    3
    To re-enable formatting, change the value to false.

4.11. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.

You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.

You can only upload one base64-encoded JSON manifest at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

For a file containing a single custom manifest, accepted file extensions include .yaml, .yml, or .json.

Single custom manifest example

{
    "apiVersion": "machineconfiguration.openshift.io/v1",
    "kind": "MachineConfig",
    "metadata": {
        "labels": {
            "machineconfiguration.openshift.io/role": "primary"
        },
        "name": "10_primary_storage_config"
    },
    "spec": {
        "config": {
            "ignition": {
                "version": "3.2.0"
            },
            "storage": {
                "disks": [
                    {
                        "device": "</dev/xxyN>",
                        "partitions": [
                            {
                                "label": "recovery",
                                "startMiB": 32768,
                                "sizeMiB": 16384
                            }
                        ]
                    }
                ],
                "filesystems": [
                    {
                        "device": "/dev/disk/by-partlabel/recovery",
                        "label": "recovery",
                        "format": "xfs"
                    }
                ]
            }
        }
    }
}

For a file containing multiple custom manifests, accepted file types include .yaml or .yml.

Multiple custom manifest example

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-openshift-machineconfig-worker-kargs
spec:
  kernelArguments:
    - loglevel=5

Note
  • When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
  • For more information about custom manifests, see Additional Resources.

Prerequisites

  • You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
  • You have registered a new cluster definition and exported the cluster_id to the $CLUSTER_ID BASH variable.

Procedure

  1. Create a custom manifest file.
  2. Save the custom manifest file using the appropriate extension for the file format.
  3. Refresh the API token:

    $ source refresh-token
  4. Add the custom manifest to the cluster by executing the following command:

    $ curl -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
        -H "Authorization: Bearer $API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{
                "file_name":"manifest.json",
                "folder":"manifests",
                "content":"'"$(base64 -w 0 ~/manifest.json)"'"
        }' | jq

    Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct.

    Example output

    {
      "file_name": "manifest.json",
      "folder": "manifests"
    }

    Note

    The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.

  5. Verify that the Assisted Installer added the manifest:

    $ curl -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json" -H "Authorization: Bearer $API_TOKEN"

    Replace manifest.json with the name of your manifest file.

4.12. Preinstallation validations

The Assisted Installer checks that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure that the cluster and each host pass preinstallation validation.
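
For example, you can review the stored validation results for the cluster before starting the installation. This is a sketch; the validations_info field is returned as a JSON-encoded string, so it is decoded with fromjson:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
  -H "Authorization: Bearer $API_TOKEN" \
  | jq '.validations_info | fromjson'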

Additional resources

4.13. Installing the cluster

Once the cluster hosts pass validation, you can install the cluster.

Prerequisites

  • You have created a cluster and infrastructure environment.
  • You have added hosts to the infrastructure environment.
  • The hosts have passed validation.

Procedure

  1. Refresh the API token:

    $ source refresh-token
  2. Install the cluster:

    $ curl -H "Authorization: Bearer $API_TOKEN" \
    -X POST \
    https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/actions/install | jq
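
    Installation proceeds asynchronously. You can poll the cluster resource to check its status fields, for example:

    $ curl -s -H "Authorization: Bearer $API_TOKEN" \
      "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      | jq -r '.status, .status_info'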
  3. Complete any postinstallation platform integration steps.