Chapter 5. Installing with the Assisted Installer API
After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster by using the Assisted Installer API. To use the API, you must perform the following procedures:
- Set up the API authentication.
- Configure the pull secret.
- Register a new cluster definition.
- Create an infrastructure environment for the cluster.
Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API, but you can review all of the endpoints in the API viewer or the swagger.yaml file.
5.1. Generating the offline token
Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.
Prerequisites
- Install jq.
- Log in to the OpenShift Cluster Manager as a user with cluster creation privileges.
Procedure
- In the menu, click Downloads.
- In the Tokens section under OpenShift Cluster Manager API Token, click View API Token.
- Click Load Token.
  Important: Disable pop-up blockers.
- In the Your API token section, copy the offline token.
- In your terminal, set the offline token to the OFFLINE_TOKEN variable:
  $ export OFFLINE_TOKEN=<copied_token>
  Tip: To make the offline token permanent, add it to your profile.
- Optional: Confirm the OFFLINE_TOKEN variable definition:
  $ echo ${OFFLINE_TOKEN}
5.2. Authenticating with the REST API
API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.
The API token expires after 15 minutes.
Prerequisites
- You have generated the OFFLINE_TOKEN variable.
Procedure
- In your terminal, set the API_TOKEN variable by using the OFFLINE_TOKEN to validate the user. A hedged sketch follows.
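  The original command is not reproduced in this extract. A minimal sketch of one common approach, which assumes the Red Hat SSO token endpoint and the cloud-services client ID; adjust it to your environment:
  $ export API_TOKEN=$( \
      curl --silent --header "Accept: application/json" \
        --header "Content-Type: application/x-www-form-urlencoded" \
        --data-urlencode "grant_type=refresh_token" \
        --data-urlencode "client_id=cloud-services" \
        --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
        "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )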
- Confirm the API_TOKEN variable definition:
  $ echo ${API_TOKEN}
- Create a script in your path for one of the token generating methods. For example:
  $ vim ~/.local/bin/refresh-token
  Add the token-refresh command to the script (a hedged sketch follows), and then save the file.
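  The script body is not reproduced in this extract. A minimal sketch that wraps the same token request shown earlier and is intended to be sourced; the SSO endpoint and client ID are assumptions:
  # refresh-token: renew API_TOKEN from the offline token. Source this file rather than executing it.
  export API_TOKEN=$( \
      curl --silent --header "Accept: application/json" \
        --header "Content-Type: application/x-www-form-urlencoded" \
        --data-urlencode "grant_type=refresh_token" \
        --data-urlencode "client_id=cloud-services" \
        --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
        "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token" \
    )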
- Change the file mode to make it executable:
  $ chmod +x ~/.local/bin/refresh-token
- Refresh the API token:
  $ source refresh-token
- Verify that you can access the API by running the following command:
  $ curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer ${API_TOKEN}" | jq
  The command returns a JSON object that lists the Assisted Installer component versions.
5.3. Configuring the pull secret
Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:
Before
{"auths":{"cloud.openshift.com": ...
After
{\"auths\":{\"cloud.openshift.com\": ...
Procedure
- In the menu, click OpenShift.
- In the submenu, click Downloads.
- In the Tokens section under Pull secret, click Download.
- To use the pull secret from a shell variable, execute the following command:
  $ export PULL_SECRET=$(cat ~/Downloads/pull-secret.txt | jq -R .)
- To slurp the pull secret file using jq, reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. A hedged example follows.
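  The original example is not reproduced in this extract. A minimal sketch, assuming the pull secret file is at ~/Downloads/pull-secret.txt; jq reads the file with --slurpfile and converts it to an escaped JSON string with tojson:
  $ jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt \
      '{ "pull_secret": $pull_secret[0] | tojson }'
  You can embed this jq expression in the request bodies shown later in this chapter.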
- Confirm the PULL_SECRET variable definition:
  $ echo ${PULL_SECRET}
5.4. Generating the SSH public key
During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.
If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now.
For more information, see Generating a key pair for cluster node SSH access.
Prerequisites
- You have generated the OFFLINE_TOKEN and API_TOKEN variables.
Procedure
- From the root user in your terminal, get the SSH public key:
  $ cat /root/.ssh/id_rsa.pub
- Set the SSH public key to the CLUSTER_SSHKEY variable:
  $ CLUSTER_SSHKEY=<downloaded_ssh_key>
- Confirm the CLUSTER_SSHKEY variable definition:
  $ echo ${CLUSTER_SSHKEY}
5.5. Registering a new cluster
To register a new cluster definition with the API, use the /v2/clusters endpoint.
The following parameters are mandatory:
- name
- openshift_version
- pull_secret
- cpu_architecture
See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.
Prerequisites
- You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
- You have downloaded the pull secret.
- Optional: You have assigned the pull secret to the $PULL_SECRET variable.
Procedure
- Refresh the API token:
  $ source refresh-token
- Register a new cluster by using one of the following methods:
  - Reference the pull secret file in the request, as in the hedged sketch below:
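    The original request is not reproduced in this extract. A minimal sketch, assuming illustrative values (testcluster, example.com) and the pull secret at ~/Downloads/pull-secret.txt; adjust the fields to your environment:
    $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
        -H "Authorization: Bearer ${API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt '
        {
            "name": "testcluster",
            "openshift_version": "4.16",
            "control_plane_count": 3,
            "cpu_architecture": "x86_64",
            "base_dns_domain": "example.com",
            "pull_secret": $pull_secret[0] | tojson
        }')" | jq '.id'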
  - Write the configuration to a JSON file (see the hedged sketch after the command) and then reference it in the request:
    $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
      -d @./cluster.json \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $API_TOKEN" \
      | jq '.id'
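    The cluster.json content is not reproduced in this extract. A hypothetical example with the same illustrative values; the pull secret string is abbreviated:
    $ cat << EOF > cluster.json
    {
      "name": "testcluster",
      "openshift_version": "4.16",
      "control_plane_count": 3,
      "cpu_architecture": "x86_64",
      "base_dns_domain": "example.com",
      "pull_secret": "{\"auths\":{\"cloud.openshift.com\": ...}"
    }
    EOF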
  In the cluster definition, pay attention to the following fields:
  openshift_version:
  - To install the latest OpenShift version, use the x.y format, such as 4.16 for version 4.16.10. To install a specific OpenShift version, use the x.y.z format, such as 4.16.3 for version 4.16.3.
  - To install a multi-architecture compute cluster, add the -multi extension, such as 4.16-multi for the latest version or 4.16.3-multi for a specific version.
  - If you are booting from an iSCSI drive, enter OpenShift Container Platform version 4.15 or later.
  control_plane_count: Optionally set the number of control plane nodes to 1 for a single-node OpenShift cluster, to 2 or more for a Two-Node OpenShift with Arbiter cluster, or to 3, 4, or 5 for a multi-node OpenShift Container Platform cluster. If this setting is omitted, the Assisted Installer sets 3 as the default.
  Note:
  - The control_plane_count field replaces the high_availability_mode field, which is deprecated. For details, see API deprecation notice.
  - Currently, single-node OpenShift is not supported on IBM Z® and IBM Power® platforms.
  - The Assisted Installer supports 4 or 5 control plane nodes from OpenShift Container Platform 4.18 and later, on a bare metal or user-managed networking platform with an x86_64 CPU architecture. For details, see About specifying the number of control plane nodes.
  - The Assisted Installer supports 2 control plane nodes from OpenShift Container Platform 4.19 and later, for a Two-Node OpenShift with Arbiter cluster topology. If the number of control plane nodes for a cluster is 2, then the cluster must have at least one additional arbiter host. For details, see Two-Node OpenShift with Arbiter resource requirements.
  cpu_architecture: Valid values are x86_64, arm64, ppc64le, s390x, or multi. Specify multi for a multi-architecture compute cluster.
- Assign the returned cluster_id to the CLUSTER_ID variable and export it:
  $ export CLUSTER_ID=<cluster_id>
  Note: If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.
- Check the status of the new cluster:
  $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $API_TOKEN" \
    | jq
Once you register a new cluster definition, create the infrastructure environment for the cluster.
You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.
5.5.1. Installing Operators
You can customize your deployment by adding Operators to the cluster during installation. You can install one or more Operators individually or add a group of Operators that form a bundle. If you require advanced options, add the Operators after you have installed the cluster.
This step is optional.
5.5.1.1. Installing standalone Operators
Before selecting Operators for installation, you can verify which Operators are available in the Assisted Installer. You can also check whether an Operator is supported for a specific OCP version, CPU architecture, or platform.
You set the required Operator definitions by using the POST method for the assisted-service/v2/clusters/{cluster_id} endpoint and by setting the olm_operators parameter.
The Assisted Installer allows you to install the following standalone Operators. For additional Operators that you can select as part of a bundle, see Installing bundle Operators.
- OpenShift Virtualization Operator (cnv)
  Note:
  - Currently, OpenShift Virtualization is not supported on IBM Z® and IBM Power®.
The OpenShift Virtualization Operator requires backend storage and might automatically activate a storage Operator in the background, according to the following criteria:
- None - If the CPU architecture is ARM64, no storage Operator is activated.
- LVM Storage - For single-node OpenShift clusters on any other CPU architecture deploying OpenShift Container Platform 4.12 or higher.
- Local Storage Operator (LSO) - For all other deployments.
- Migration Toolkit for Virtualization Operator (mtv)
  Note: Specifying the Migration Toolkit for Virtualization (MTV) Operator automatically activates the OpenShift Virtualization Operator. For a single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator.
- Multicluster engine Operator (mce)
  Note: Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:
- Multi-node cluster: No storage is configured. You must configure storage after the installation.
- Single-node OpenShift: LVM Storage is installed.
- OpenShift Data Foundation Operator (odf)
- Logical Volume Manager Storage Operator (lvm)
- OpenShift AI Operator (openshift-ai)
- OpenShift sandboxed containers Operator (osc)
  Important: The integration of the OpenShift sandboxed containers Operator into the Assisted Installer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
- Kubernetes NMState Operator (nmstate)
  Note: Currently, you cannot install the Kubernetes NMState Operator on the Nutanix or Oracle Cloud Infrastructure (OCI) third-party platforms.
- AMD GPU Operator (amd-gpu)
  Note: Installing the AMD GPU Operator automatically activates the Kernel Module Management Operator.
- Kernel Module Management Operator (kmm)
- Node Feature Discovery Operator (node-feature-discovery)
- Self Node Remediation (self-node-remediation)
- NVIDIA GPU Operator (nvidia-gpu)
  Note: Installing the NVIDIA GPU Operator automatically activates the Node Feature Discovery Operator.
The integration of the OpenShift AI, AMD GPU, Kernel Module Management, Node Feature Discovery, Self Node Remediation, and NVIDIA GPU Operators into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- You have reviewed Customizing your installation using Operators for an overview of each Operator that you intend to install, together with its prerequisites and dependencies.
Procedure
- Optional: Check which Operators are available in the Assisted Installer by running the following command:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/supported-operators" -H "Authorization: Bearer ${API_TOKEN}" | jq .
- Check whether an Operator is supported for a specified OCP version, CPU architecture, or platform by running the following command:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/support-levels/features?openshift_version=4.13&cpu_architecture=x86_64&platform_type=baremetal" -H "Authorization: Bearer ${API_TOKEN}" | jq .features.SNO
  Replace the attributes as follows:
  - For openshift_version, specify the OpenShift Container Platform version number. This attribute is mandatory.
  - For cpu_architecture, specify x86_64, aarch64, arm64, ppc64le, s390x, or multi. This attribute is optional.
  - For platform_type, specify baremetal, none, nutanix, vsphere, or external. This attribute is optional.
  In the jq filter, specify the Operator in upper case, for example, .NODE-FEATURE-DISCOVERY for Node Feature Discovery, .OPENSHIFT-AI for OpenShift AI, .OSC for OpenShift sandboxed containers, .SELF-NODE-REMEDIATION for Self Node Remediation, or .MTV for Migration Toolkit for Virtualization.
  Example output
  "supported"
  Where possible statuses are "supported", "dev-preview", "tech-preview", and "unavailable".
- Get the full list of supported Operators and additional features for a specified OCP version, CPU architecture, or platform by running the following command:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/support-levels/features?openshift_version=4.13&cpu_architecture=x86_64&platform_type=baremetal" -H "Authorization: Bearer ${API_TOKEN}" | jq
- Specify the Operators to install by running the following command (a hedged sketch follows):
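  The original request is not reproduced in this extract. A minimal sketch, shown here as a PATCH against the already-registered cluster; you can equally include olm_operators in the body when registering the cluster. The two Operator names are examples only:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "olm_operators": [
          { "name": "mce" },
          { "name": "odf" }
        ]
      }' | jq '.id'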
  Where olm_operators lists the Operators that you want to install. Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, openshift-ai for OpenShift AI, osc for OpenShift sandboxed containers, nmstate for Kubernetes NMState, amd-gpu for AMD GPU, kmm for Kernel Module Management, node-feature-discovery for Node Feature Discovery, nvidia-gpu for NVIDIA GPU, and self-node-remediation for Self Node Remediation. Installing an Operator automatically activates any dependent Operators.
5.5.1.2. Installing bundle Operators
Although you cannot install an Operator bundle directly through the API, you can verify which Operators are included in a bundle and specify each Operator individually.
The Assisted Installer currently supports the following Operator bundles:
- Virtualization Operator bundle - Contains the following Operators:
  - Kube Descheduler Operator (kube-descheduler)
  - Node Maintenance Operator (node-maintenance)
  - Migration Toolkit for Virtualization Operator (mtv)
  - Kubernetes NMState Operator (nmstate)
  - Fence Agents Remediation Operator (fence-agents-remediation)
  - OpenShift Virtualization Operator (cnv)
  - Node Health Check Operator (node-healthcheck)
  - Local Storage Operator (LSO) (lso)
  - Cluster Observability Operator (cluster-observability)
  - MetalLB Operator (metallb)
  - NUMA Resources Operator (numaresources)
  - OpenShift API for Data Protection Operator (oadp)
- OpenShift AI Operator bundle - Contains the following Operators:
  - Kubernetes Authorino Operator (authorino)
  - OpenShift Data Foundation Operator (odf)
  - OpenShift AI Operator (openshift-ai)
  - AMD GPU Operator (amd-gpu)
  - Node Feature Discovery Operator (node-feature-discovery)
  - NVIDIA GPU Operator (nvidia-gpu)
  - OpenShift Pipelines Operator (pipelines)
  - OpenShift Service Mesh Operator (servicemesh)
  - OpenShift Serverless Operator (serverless)
  - Kernel Module Management Operator (kmm)
The introduction of the Virtualization and OpenShift AI Operator bundles into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Prerequisites
- You have reviewed Customizing your installation using Operator bundles for an overview of the Operator bundles, together with their prerequisites and associated Operators.
Procedure
- Optional: Check which Operator bundles are available in the Assisted Installer by running the following command:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/operators/bundles" -H "Authorization: Bearer ${API_TOKEN}" | jq .
- Optional: Check which Operators are associated with a specific bundle by running the following command:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/operators/bundles/virtualization" -H "Authorization: Bearer ${API_TOKEN}" | jq .
  In the URL, specify virtualization for the Virtualization Operator bundle or openshift-ai for the OpenShift AI Operator bundle. The example specifies the Virtualization Operator bundle.
  The response lists the Operators that are included in the specified bundle.
- Install the Operators associated with the bundle by running the following command (a hedged sketch follows):
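  The original request is not reproduced in this extract. A minimal sketch for the Virtualization bundle, assuming $CLUSTER_ID is exported; lso is omitted because specifying cnv installs it automatically, as noted below:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "olm_operators": [
          { "name": "cnv" },
          { "name": "mtv" },
          { "name": "nmstate" },
          { "name": "kube-descheduler" },
          { "name": "node-maintenance" },
          { "name": "fence-agents-remediation" },
          { "name": "node-healthcheck" },
          { "name": "cluster-observability" },
          { "name": "metallb" },
          { "name": "numaresources" },
          { "name": "oadp" }
        ]
      }' | jq '.id'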
  Specify the Operators in the Operator bundle you are installing. The example lists the Operators for the Virtualization bundle. Note the following:
  - In the Virtualization bundle, specifying cnv automatically installs lso in the background.
  - In the OpenShift AI Operator bundle, specifying nvidia-gpu automatically installs node-feature-discovery, and specifying amd-gpu automatically installs kmm.
5.5.2. Scheduling workloads to run on control plane nodes
Use the schedulable_masters attribute to enable workloads to run on control plane nodes.
Prerequisites
- You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
- You have created a $PULL_SECRET variable.
- You are installing OpenShift Container Platform 4.14 or later.
Procedure
- Follow the instructions for installing Assisted Installer using the Assisted Installer API.
- When you reach the step for registering a new cluster, set the schedulable_masters attribute to true, which enables the scheduling of workloads on the control plane nodes. A hedged sketch follows.
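  The original example is not reproduced in this extract. A minimal sketch showing only the relevant attribute inside the cluster registration body:
  "schedulable_masters": true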
5.5.3. Configuring the network management type
The Assisted Installer lets you install the following network management types: cluster-managed networking, user-managed networking, and cluster-managed networking with a user-managed load balancer.
You define the network management type by adding the user_managed_networking and load_balancer attributes to the cluster definition, as in the hedged example below.
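The original example is not reproduced in this extract. A minimal sketch of the two attributes inside the cluster definition body:
"user_managed_networking": false,
"load_balancer": {
  "type": "cluster-managed"
}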
Where:
- user_managed_networking is either true or false.
- load_balancer can have the type user-managed or cluster-managed.
You can review the user_managed_networking and load_balancer valid values in the swagger.yaml file.
This step is optional. If you do not define a network management type, the Assisted Installer applies cluster-managed networking by default to all highly available clusters. For single-node OpenShift, the Assisted Installer applies user-managed networking by default.
5.5.3.1. Installing cluster-managed networking
Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology. This configuration includes an integrated load balancer and virtual routing for managing the API and Ingress VIP addresses. For details, see Network management types.
Prerequisites
- You are installing an OpenShift Container Platform cluster of three or more control plane nodes.
  Note: Currently, cluster-managed networking is not supported on IBM Z® and IBM Power®.
Procedure
- To define cluster-managed networking, add the following attributes and values to your cluster definition:
  "user_managed_networking": false,
  "load_balancer": {
    "type": "cluster-managed"
  }
  The load_balancer attribute is optional. If omitted for this configuration, the type is automatically set to user-managed for single-node OpenShift or to cluster-managed for all other implementations.
5.5.3.2. Installing user-managed networking
Selecting user-managed networking deploys OpenShift Container Platform with a non-standard network topology. Select user-managed networking if you want to deploy a cluster with an external load balancer and DNS, or if you intend to deploy the cluster nodes across many distinct subnets.
For details, see Network management types.
The Assisted Installer lets you deploy more than one external load balancer for user-managed networking.
Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
Procedure
- To define user-managed networking, add the following attribute to your cluster definition:
  "user_managed_networking": true,
  Note: The load_balancer attribute is not required when user_managed_networking is set to true, because you will be provisioning your own load balancer.
Network Validations
When you enable user-managed networking, the following network validations change:
- The L3 connectivity check (ICMP) replaces the L2 check (ARP).
- The maximum transmission unit (MTU) validation verifies the MTU value for all interfaces and not only for the machine network.
5.5.3.3. Installing cluster-managed networking with a user-managed load balancer
Cluster-managed networking with a user-managed load balancer is a hybrid network management type designed for scenarios that require automated cluster networking with external control over load balancing. This approach enables users to provide one or more external load balancers (for example, an API load balancer and an Ingress load balancer), while retaining the bare-metal features installed in cluster-managed networking.
For details, see Network management types.
Cluster-managed networking with a user-managed load balancer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Use the Assisted Installer API to deploy cluster-managed networking with a user-managed load balancer on a bare-metal or vSphere platform.
Prerequisites
- You are installing OpenShift Container Platform version 4.16 or higher.
- You are installing on a bare-metal or vSphere platform.
- You are using IPv4 single-stack networking.
- You are installing an OpenShift Container Platform cluster of three or more control plane nodes.
- For a vSphere platform installation, you meet the additional requirements specified in vSphere installation requirements.
Procedure
- Configure the load balancer to be accessible from all hosts and have access to the following services:
- OpenShift Machine Config Operator (MCO) - On control plane nodes.
- OpenShift API - On control plane nodes.
- Ingress Controller - On compute (worker) nodes.
For details, see Configuring a user-managed load balancer (steps 1 and 2).
- Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update the records on your DNS server for the cluster API and applications over the load balancer:
  - Configure the DNS record to make your primary API accessible:
    <load_balancer_ip_address> <record_name> api.<cluster_name>.<base_domain>
  - Configure the DNS record to route external traffic to your applications via an Ingress Controller:
    <load_balancer_ip_address> <record_name> apps.<cluster_name>.<base_domain>
  - For vSphere only, configure the DNS record to support internal API access within your network:
    <load_balancer_ip_address> <record_name> api-int.<cluster_name>.<base_domain>
For details, see Configuring a user-managed load balancer (step 3).
- Add the following configurations to the Assisted Installer API cluster definition:
  - Set the user_managed_networking and load_balancer fields to the following values:
    "user_managed_networking": false,
    "load_balancer": {
      "type": "user-managed"
    }
    For details, see Changing the network management type.
  - Specify the Ingress and API VIPs. These should correspond to the load balancer IP address. A hedged sketch follows.
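    The original example is not reproduced in this extract. A minimal sketch, assuming the list-style api_vips and ingress_vips fields and an illustrative load balancer IP address:
    "api_vips": [ { "ip": "192.168.111.1" } ],
    "ingress_vips": [ { "ip": "192.168.111.1" } ]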
  - Specify a list of machine networks to ensure the following:
- Each node has at least one network interface card (NIC) with an IP address in at least one machine network.
    - The load balancer IP, which is also the API VIP and Ingress VIP, is included in at least one of the machine networks.
    A hedged example of the machine_networks field follows.
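    The original example is not reproduced in this extract. A minimal sketch with illustrative CIDR values that include the load balancer IP address:
    "machine_networks": [
      { "cidr": "192.168.111.0/24" },
      { "cidr": "192.168.112.0/24" }
    ]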
For more details, see Machine network.
Network Validations
When you enable this network management type, the following network validations change:
- The L3 connectivity check (ICMP) replaces the L2 check (ARP).
- The maximum transmission unit (MTU) validation verifies the MTU value for all interfaces and not only for the machine network.
5.6. Modifying a cluster
To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.
You can add or remove Operators from a cluster resource that has already been registered.
To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.
Prerequisites
- You have created a new cluster resource.
Procedure
- Refresh the API token:
  $ source refresh-token
- Modify the cluster. For example, change the SSH key (a hedged sketch follows):
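  The original request is not reproduced in this extract. A minimal sketch, assuming $CLUSTER_ID and $CLUSTER_SSHKEY are exported:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{
        \"ssh_public_key\": \"${CLUSTER_SSHKEY}\"
      }" | jq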
5.6.1. Modifying Operators by using the API
You can add or remove Operators from a cluster resource that has already been registered as part of a previous installation. This is only possible before you start the OpenShift Container Platform installation.
You modify the required Operator definition by using the PATCH method for the assisted-service/v2/clusters/{cluster_id} endpoint and by setting the olm_operators parameter.
Prerequisites
- You have refreshed the API token.
- You have exported the CLUSTER_ID as an environment variable.
Procedure
- Run the following command to modify the Operators (a hedged sketch follows):
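  The original request is not reproduced in this extract. A minimal sketch that replaces the Operator list on the registered cluster; the two Operator names are examples only:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "olm_operators": [
          { "name": "cnv" },
          { "name": "lvm" }
        ]
      }' | jq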
  Where:
  - For the Operator names, specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, openshift-ai for OpenShift AI, osc for OpenShift sandboxed containers, nmstate for Kubernetes NMState, amd-gpu for AMD GPU, kmm for Kernel Module Management, node-feature-discovery for Node Feature Discovery, nvidia-gpu for NVIDIA GPU, self-node-remediation for Self Node Remediation, pipelines for OpenShift Pipelines, servicemesh for OpenShift Service Mesh, node-healthcheck for Node Health Check, lso for Local Storage Operator, fence-agents-remediation for Fence Agents Remediation, kube-descheduler for Kube Descheduler, serverless for OpenShift Serverless, authorino for Authorino, cluster-observability for Cluster Observability Operator, metallb for MetalLB, numaresources for NUMA Resources, and oadp for OpenShift API for Data Protection.
  - To modify the Operators, provide the new complete list of Operators that you want to install, not just the differences. To remove all Operators, specify an empty array: "olm_operators": [].
  The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:
  - "operator_type": "builtin": Operators of this type are an integral part of OpenShift Container Platform.
  - "operator_type": "olm": Operators of this type are added manually by a user or automatically, as a dependency. For example, the LVM Storage Operator can be added automatically as a dependency of OpenShift Virtualization.
Additional resources
- See Customizing your installation using Operators and Operator Bundles for an overview of each Operator that you intend to install, together with its prerequisites and dependencies.
5.7. Registering a new infrastructure environment
Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:
- name
- pull_secret
- cpu_architecture
See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.
Prerequisites
- You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
- You have downloaded the pull secret.
- Optional: You have registered a new cluster definition and exported the cluster_id.
Procedure
- Refresh the API token:
  $ source refresh-token
- Register a new infrastructure environment. Provide a name, preferably something that includes the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type. You can specify either full-iso or minimal-iso. The default value is minimal-iso.
  - Optional: You can register a new infrastructure environment by slurping the pull secret file in the request, as in the hedged sketch below:
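    The original request is not reproduced in this extract. A minimal sketch, assuming an illustrative name, $CLUSTER_ID exported, and the pull secret at ~/Downloads/pull-secret.txt:
    $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" \
        -H "Authorization: Bearer ${API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "$(jq --null-input \
            --slurpfile pull_secret ~/Downloads/pull-secret.txt \
            --arg cluster_id ${CLUSTER_ID} '
        {
            "name": "testcluster-infra-env",
            "image_type": "full-iso",
            "cluster_id": $cluster_id,
            "cpu_architecture": "x86_64",
            "pull_secret": $pull_secret[0] | tojson
        }')" | jq '.id'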
    Note: For cpu_architecture, valid values are x86_64, arm64, ppc64le, s390x, and multi.
  - Optional: You can register a new infrastructure environment by writing the configuration to a JSON file (see the hedged sketch after the command) and then referencing it in the request:
    $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" -d @./infra-envs.json -H "Content-Type: application/json" -H "Authorization: Bearer $API_TOKEN" | jq '.id'
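    The infra-envs.json content is not reproduced in this extract. A hypothetical example with the same illustrative values; the pull secret string is abbreviated:
    $ cat << EOF > infra-envs.json
    {
      "name": "testcluster-infra-env",
      "image_type": "full-iso",
      "cluster_id": "$CLUSTER_ID",
      "cpu_architecture": "x86_64",
      "pull_secret": "{\"auths\":{\"cloud.openshift.com\": ...}"
    }
    EOF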
- Assign the returned id to the INFRA_ENV_ID variable and export it:
  $ export INFRA_ENV_ID=<id>
Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id, you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.
5.8. Modifying an infrastructure environment
You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.
See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.
Prerequisites
- You have created a new infrastructure environment.
Procedure
- Refresh the API token:
  $ source refresh-token
- Modify the infrastructure environment (a hedged sketch follows):
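  The original request is not reproduced in this extract. A minimal sketch that updates the SSH key used by the discovery ISO, assuming the ssh_authorized_key field and the exported variables:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{
        \"ssh_authorized_key\": \"${CLUSTER_SSHKEY}\"
      }" | jq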
5.8.1. Adding kernel arguments
Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.
The RHCOS installer kargs modify command supports the append, delete, and replace options.
You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.
Procedure
- Refresh the API token:
  $ source refresh-token
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Modify the kernel arguments:
  Replace <karg> with the kernel argument and <value> with the kernel argument value. For example: rd.net.timeout.carrier=60. You can specify multiple kernel arguments by adding a JSON object for each kernel argument.
5.9. Applying a static network configuration
You can apply a static network configuration by using the Assisted Installer API. This step is optional.
A static IP configuration is not supported in the following scenarios:
- OpenShift Container Platform installations on Oracle Cloud Infrastructure.
- OpenShift Container Platform installations on iSCSI boot volumes.
Prerequisites
- You have created an infrastructure environment using the API or have created a cluster using the web console.
- You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID.
- You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.
- You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml.
Procedure
- Create a temporary file /tmp/request-body.txt with the API request (a hedged sketch follows):
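  The original example is not reproduced in this extract. A minimal sketch, assuming the static_network_config field with network_yaml and mac_interface_map entries built from server-a.yaml and server-b.yaml; the MAC addresses and NIC names are illustrative only:
  $ jq --null-input \
      --arg NMSTATE_YAML1 "$(cat server-a.yaml)" \
      --arg NMSTATE_YAML2 "$(cat server-b.yaml)" \
      '{
        "static_network_config": [
          {
            "network_yaml": $NMSTATE_YAML1,
            "mac_interface_map": [ { "mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0" } ]
          },
          {
            "network_yaml": $NMSTATE_YAML2,
            "mac_interface_map": [ { "mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth0" } ]
          }
        ]
      }' > /tmp/request-body.txt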
- Refresh the API token:
  $ source refresh-token
- Send the request to the Assisted Service API endpoint:
  $ curl -H "Content-Type: application/json" \
    -X PATCH -d @/tmp/request-body.txt \
    -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID
5.10. Adding hosts
After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:
- Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1 GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM.
- Minimal ISO image: Use the minimal ISO image when the virtual media connection has limited bandwidth. This is the default setting. The image includes only what the agent requires to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100 MB in size.
This option is mandatory in the following scenarios:
- If you are installing OpenShift Container Platform on Oracle Cloud Infrastructure.
- If you are installing OpenShift Container Platform on iSCSI boot volumes.
Currently, ISO images are supported on IBM Z® (s390x) with KVM, iPXE with z/VM, and LPAR (both static and DPM). For details, see Booting hosts using iPXE.
You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image.
Prerequisites
- You have created a cluster.
- You have created an infrastructure environment.
- You have completed the configuration.
- If the cluster hosts require the use of a proxy, select Configure cluster-wide proxy settings. Enter the username, password, required domains or IP addresses, and port for the HTTP and HTTPS URLs of the proxy server. If the cluster hosts are behind a firewall, allow the nodes to access the required domains or IP addresses through the firewall. See Configuring your firewall for OpenShift Container Platform for more information.
  Note: The proxy username and password must be URL-encoded.
- You have selected an image type or will use the default minimal-iso.
Procedure
- Configure the discovery image if needed. For details, see Configuring the discovery image.
- Refresh the API token:
  $ source refresh-token
- Get the download URL:
  $ curl -H "Authorization: Bearer ${API_TOKEN}" \
    https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url
  Example output
  { "expires_at": "2024-02-07T20:20:23.000Z", "url": "https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso" }
- Download the discovery image:
  $ wget -O discovery.iso <url>
  Replace <url> with the download URL from the previous step.
- Boot the host(s) with the discovery image.
- Assign a role to host(s).
5.10.1. Selecting a role
You can select a role for the host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of the following roles:
- master - Assigns the control plane role to the host, allowing the host to manage and coordinate the cluster.
- arbiter - Assigns the arbiter role to the host, providing a cost-effective solution for components that require a quorum.
- worker - Assigns the compute role to the host, enabling the host to run application workloads.
- auto-assign - Automatically determines whether the host is a master, worker, or arbiter. This is the default setting.
Use this procedure to assign a role to the host. If the host_role setting is omitted, the host defaults to auto-assign.
Prerequisites
- You have added hosts to the cluster.
Procedure
- Refresh the API token:
  $ source refresh-token
- Get the host IDs:
  $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
    -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
  Example output
  [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ]
- Add the host_role setting (a hedged sketch follows):
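  The original request is not reproduced in this extract. A minimal sketch, assuming the host-update-params host_role field:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "host_role": "worker"
      }' | jq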
  Where:
  - <host_id> is the ID of the host.
  - host_role is "master", "arbiter", or "worker". For details, see About assigning roles to hosts.
5.11. Modifying hosts
After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.
You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.
A host can have one of the following roles:
- master - Assigns the control plane role to the host, allowing the host to manage and coordinate the cluster.
- arbiter - Assigns the arbiter role to the host, providing a cost-effective solution for components that require a quorum.
- worker - Assigns the compute role to the host, enabling the host to run application workloads.
- auto-assign - Automatically determines whether the host is a master, worker, or arbiter node.
Use the following procedure to set the host's role. If the host_role setting is omitted, the host defaults to auto-assign.
Prerequisites
- You have added hosts to the cluster.
Procedure
- Refresh the API token:
  $ source refresh-token
- Get the host IDs:
  $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    --header "Content-Type: application/json" \
    -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
- Modify the host settings, as in the hedged sketch below:
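  The original example is not reproduced in this extract. A minimal sketch that sets both a host name and a role; the values are illustrative:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "host_name": "master-0.example.com",
        "host_role": "master"
      }' | jq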
  Replace <host_id> with the ID of the host.
5.11.1. Modifying storage disk configuration
Each host retrieved during host discovery can have multiple storage disks. You can optionally change the default configurations for each disk.
- Starting from OpenShift Container Platform 4.14, you can configure nodes with Intel® Virtual RAID on CPU (VROC) to manage NVMe RAIDs. For details, see Configuring an Intel® Virtual RAID on CPU (VROC) data volume.
- Starting from OpenShift Container Platform 4.15, you can install a cluster on a single or multipath iSCSI boot device using the Assisted Installer.
Prerequisites
- Configure the cluster and discover the hosts. For details, see Additional resources.
5.11.1.1. Viewing the storage disks
You can view the hosts in your cluster, and the disks on each host. You can then perform actions on a specific disk.
Procedure
- Refresh the API token:
  $ source refresh-token
- Get the host IDs for the cluster:
  $ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer $API_TOKEN" \
    | jq '.host_networks[].host_ids'
  Example output
  "1022623e-7689-8b2d-7fbd-e6f4d5bb28e5"
  Note: This is the ID of a single host. Multiple host IDs are separated by commas.
- Get the disks for a specific host:
  $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.inventory | fromjson | .disks'
  Replace <host_id> with the ID of the relevant host.
  The output for each disk includes the disk_id and installation_eligibility properties.
5.11.1.2. Changing the installation disk
The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.
You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the installation disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see Modifying the DM Multipath configuration file.
Procedure
- Get the host and storage disk IDs. For details, see Viewing the storage disks.
- Optional: Identify the current installation disk:
  $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.installation_disk_id'
  Replace <host_id> with the ID of the relevant host.
- Assign a new installation disk (a hedged sketch follows):
  Note: Multipath devices are automatically discovered and listed in the host's inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with "drive_type" set to "Multipath", rather than to "FC", which indicates a single path.
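  The original request is not reproduced in this extract. A minimal sketch, assuming the disks_selected_config field; replace <host_id> and <disk_id> with the values retrieved earlier:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "disks_selected_config": [
          { "id": "<disk_id>", "role": "install" }
        ]
      }' | jq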
5.11.1.3. Disabling disk formatting
The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.
You can choose to disable the formatting of a specific disk. Disable formatting with caution, as bootable disks can interfere with the installation process, specifically the boot order.
You cannot disable formatting for the installation disk.
Procedure
- Get the host and storage disk IDs. For details, see Viewing the storage disks.
- Run the following command (a hedged sketch follows):
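  The original request is not reproduced in this extract. A minimal sketch, assuming the disks_skip_formatting field; replace <host_id> and <disk_id> with the values retrieved earlier:
  $ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "disks_skip_formatting": [
          { "disk_id": "<disk_id>", "skip_formatting": true }
        ]
      }' | jq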
5.12. Adding custom manifests
A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.
You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.
You can only upload one base64-encoded JSON manifest at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.
For a file containing a single custom manifest, accepted file extensions include .yaml, .yml, or .json.
Single custom manifest example
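The original example is not reproduced in this extract. A hypothetical single-manifest file, written here with a shell heredoc; the MachineConfig shown is illustrative only:
$ cat << EOF > manifest.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-example-worker-setting
spec:
  config:
    ignition:
      version: 3.2.0
EOF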
For a file containing multiple custom manifests, accepted file types include .yaml or .yml.
Multiple custom manifest example
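The original example is not reproduced in this extract. A hypothetical multi-document YAML file written with a shell heredoc; the two manifests are separated by ---, and their contents are illustrative only:
$ cat << EOF > manifests.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-example-master-setting
spec:
  config:
    ignition:
      version: 3.2.0
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-example-worker-setting
spec:
  config:
    ignition:
      version: 3.2.0
EOF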
- When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
- For more information about custom manifests, see Additional Resources.
Prerequisites
- You have generated a valid API_TOKEN. Tokens expire every 15 minutes.
- You have registered a new cluster definition and exported the cluster_id to the $CLUSTER_ID BASH variable.
Procedure
- Create a custom manifest file.
- Save the custom manifest file using the appropriate extension for the file format.
- Refresh the API token:
  $ source refresh-token
- Add the custom manifest to the cluster by executing the following command (a hedged sketch follows):
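  The original command is not reproduced in this extract. A minimal sketch, assuming the folder, file_name, and base64-encoded content fields accepted by the manifests endpoint; the manifest path is illustrative:
  $ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
      -H "Authorization: Bearer $API_TOKEN" \
      -H "Content-Type: application/json" \
      -d "{
        \"folder\": \"manifests\",
        \"file_name\": \"manifest.json\",
        \"content\": \"$(base64 -w 0 ~/manifests/manifest.json)\"
      }" | jq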
  Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure that the path is correct.
  Example output
{ "file_name": "manifest.json", "folder": "manifests" }
{ "file_name": "manifest.json", "folder": "manifests" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe
base64 -w 0
command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.Verify that the Assisted Installer added the manifest:
  $ curl -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json" -H "Authorization: Bearer $API_TOKEN"
  Replace manifest.json with the name of your manifest file.
5.13. Preinstallation validations
The Assisted Installer ensures that the cluster meets the prerequisites before installation. This eliminates complex postinstallation troubleshooting, saving significant time and effort. Before installing the cluster, ensure that the cluster and each host pass preinstallation validation.
5.14. Installing the cluster
Once the cluster hosts pass validation, you can install the cluster.
Prerequisites
- You have created a cluster and infrastructure environment.
- You have added hosts to the infrastructure environment.
- The hosts have passed validation.
Procedure
- Refresh the API token:
  $ source refresh-token
- Install the cluster:
  $ curl -H "Authorization: Bearer $API_TOKEN" \
    -X POST \
    https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/actions/install | jq
- Complete any postinstallation platform integration steps.