Installing IBM Cloud Bare Metal (Classic)
Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic)
Abstract
Chapter 1. Prerequisites
You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud® Bare Metal (Classic) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud nodes.
Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. A provisioning network is required.
Installer-provisioned installation of OpenShift Container Platform requires:
- One node with Red Hat Enterprise Linux (RHEL) 8.x installed, for running the provisioner
- Three control plane nodes
- One routable network
- One provisioning network
Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud® Bare Metal (Classic), address the following prerequisites and requirements.
1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure
To deploy an OpenShift Container Platform cluster on IBM Cloud® Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud nodes.
Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required.
You can customize IBM Cloud nodes using the IBM Cloud API. When creating IBM Cloud nodes, you must consider the following requirements.
Use one data center per cluster
All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud data center.
Create public and private VLANs
Create all nodes with a single public VLAN and a single private VLAN.
Ensure subnets have sufficient IP addresses
IBM Cloud public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix.
IBM Cloud private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud® Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix.
IP addresses | Prefix |
---|---|
32 | /27 |
64 | /26 |
128 | /25 |
256 | /24 |
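The address counts follow directly from the prefix length: a /N subnet provides 2^(32−N) IP addresses. A quick way to check the arithmetic in a shell:
$ echo $(( 2 ** (32 - 28) ))
16
$ echo $(( 2 ** (32 - 26) ))
64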
Configuring NICs
OpenShift Container Platform deploys with two networks:
- provisioning: The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster.
- baremetal: The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated to a node's bootMACAddress configuration setting for the provisioning network.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. For example:
NIC | Network | VLAN |
---|---|---|
NIC1 | provisioning | <provisioning_vlan> |
NIC2 | baremetal | <baremetal_vlan> |
In the previous example, NIC1 on all control plane and worker nodes connects to the non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network.
PXE | Boot order |
---|---|
NIC1 PXE-enabled | 1 |
NIC2 | 2 |
Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs.
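For reference, the provisioningNetworkInterface and bootMACAddress settings mentioned above live under the platform section of the install-config.yaml file. The following fragment is only a sketch with placeholder values; the full file is covered in "Configuring the install-config.yaml file":
platform:
  baremetal:
    provisioningNetworkInterface: <nic1_name>
    hosts:
    - name: openshift-master-0
      bootMACAddress: <nic1_mac_address>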
Configuring canonical names
Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud subdomains or subzones where the canonical name extension is the cluster name.
<cluster_name>.<domain>
For example:
test-cluster.example.com
Creating DNS entries
You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following:
Usage | Host Name | IP |
---|---|---|
API | api.<cluster_name>.<domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster_name>.<domain> | <ip> |
Control plane and worker nodes already have DNS entries after provisioning.
The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.
Usage | Host Name | IP |
---|---|---|
API | api.<cluster_name>.<domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster_name>.<domain> | <ip> |
Provisioner node | provisioner.<cluster_name>.<domain> | <ip> |
Master-0 | openshift-master-0.<cluster_name>.<domain> | <ip> |
Master-1 | openshift-master-1.<cluster_name>.<domain> | <ip> |
Master-2 | openshift-master-2.<cluster_name>.<domain> | <ip> |
Worker-0 | openshift-worker-0.<cluster_name>.<domain> | <ip> |
Worker-1 | openshift-worker-1.<cluster_name>.<domain> | <ip> |
Worker-n | openshift-worker-n.<cluster_name>.<domain> | <ip> |
OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
After provisioning the IBM Cloud nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster.
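Once the records exist, you can optionally spot-check them with dig from any host. The names below reuse the test-cluster.example.com example, and foo is an arbitrary application host name that should resolve through the wildcard record:
$ dig +short api.test-cluster.example.com
$ dig +short foo.apps.test-cluster.example.com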
Network Time Protocol (NTP)
Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.
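As an optional convenience check, not a documented requirement, you can confirm on the RHEL 8 provisioner node that chrony is active and the clock is synchronized:
$ timedatectl | grep -E 'synchronized|NTP service'
$ chronyc sources -v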
Configure a DHCP server
IBM Cloud® Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network.
The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud® Bare Metal (Classic) provisioning system.
See the "Configuring the public subnet" section for details.
Ensure BMC access privileges
The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes.
In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example:
ipmi://<IP>:<port>?privilegelevel=OPERATOR
Alternatively, contact IBM Cloud support and request that they increase the IPMI privileges to ADMINISTRATOR for each node.
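As an optional sanity check, not part of the documented procedure, you can confirm that the IPMI credentials and the OPERATOR privilege level work by querying the chassis status with ipmitool; the address and credentials are placeholders:
$ ipmitool -I lanplus -H <bmc_ip> -U <user> -P <password> -L OPERATOR chassis status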
Create bare metal servers
Create bare metal servers in the IBM Cloud dashboard by navigating to Create resource → Bare Metal Servers for Classic.
Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example:
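A sketch of creating a single server with the CLI; every value is a placeholder, and the sizes, OS key names, and data centers available to your account can be listed with ibmcloud sl hardware create-options:
$ ibmcloud sl hardware create --hostname <server_name> \
    --domain <domain> \
    --size <size> \
    --os <os_key_name> \
    --datacenter <datacenter> \
    --port-speed <port_speed> \
    --billing hourly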
See Installing the stand-alone IBM Cloud CLI for details on installing the IBM Cloud CLI.
IBM Cloud servers might take 3-5 hours to become available.
Chapter 2. Setting up the environment for an OpenShift Container Platform installation
2.1. Preparing the provisioner node on IBM Cloud Bare Metal (Classic) infrastructure
Perform the following steps to prepare the provisioner node.
Procedure
Log in to the provisioner node via ssh. Create a non-root user (kni) and provide that user with sudo privileges:
# useradd kni
# passwd kni
# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
# chmod 0440 /etc/sudoers.d/kni
Create an ssh key for the new user:
# su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''"
Log in as the new user on the provisioner node:
# su - kni
Use Red Hat Subscription Manager to register the provisioner node:
$ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhel-8-for-x86_64-baseos-rpms
Note: For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.
Install the following packages:
$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
Modify the user to add the libvirt group to the newly created user:
$ sudo usermod --append --groups libvirt kni
Start firewalld:
$ sudo systemctl start firewalld
Enable firewalld:
$ sudo systemctl enable firewalld
Start the http service:
$ sudo firewall-cmd --zone=public --add-service=http --permanent
$ sudo firewall-cmd --reload
Start and enable the libvirtd service:
$ sudo systemctl enable libvirtd --now
Set the ID of the provisioner node:
$ PRVN_HOST_ID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl hardware list
Set the ID of the public subnet:
$ PUBLICSUBNETID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl subnet list
Set the ID of the private subnet:
$ PRIVSUBNETID=<ID>
You can view the ID with the following ibmcloud command:
$ ibmcloud sl subnet list
Set the provisioner node public IP address:
$ PRVN_PUB_IP=$(ibmcloud sl hardware detail $PRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)
Set the CIDR for the public network:
$ PUBLICCIDR=$(ibmcloud sl subnet detail $PUBLICSUBNETID --output JSON | jq .cidr)
Set the IP address and CIDR for the public network:
$ PUB_IP_CIDR=$PRVN_PUB_IP/$PUBLICCIDR
Set the gateway for the public network:
$ PUB_GATEWAY=$(ibmcloud sl subnet detail $PUBLICSUBNETID --output JSON | jq .gateway -r)
Set the private IP address of the provisioner node:
$ PRVN_PRIV_IP=$(ibmcloud sl hardware detail $PRVN_HOST_ID --output JSON | \
    jq .primaryBackendIpAddress -r)
Set the CIDR for the private network:
$ PRIVCIDR=$(ibmcloud sl subnet detail $PRIVSUBNETID --output JSON | jq .cidr)
Set the IP address and CIDR for the private network:
$ PRIV_IP_CIDR=$PRVN_PRIV_IP/$PRIVCIDR
Set the gateway for the private network:
$ PRIV_GATEWAY=$(ibmcloud sl subnet detail $PRIVSUBNETID --output JSON | jq .gateway -r)
Set up the bridges for the baremetal and provisioning networks. One way to do this with nmcli is sketched below.
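The sketch assumes eth1 connects to the private (provisioning) VLAN, eth2 connects to the public VLAN, the provisioning services use the commonly used 172.22.0.0/24 range, and IBM Cloud private addressing falls within 10.0.0.0/8; it reuses the BASH variables set earlier in this procedure:
$ sudo nmcli connection add type bridge ifname provisioning con-name provisioning
$ sudo nmcli connection add type bridge-slave ifname eth1 master provisioning
$ sudo nmcli connection add type bridge ifname baremetal con-name baremetal
$ sudo nmcli connection add type bridge-slave ifname eth2 master baremetal
$ sudo nmcli connection modify baremetal ipv4.method manual ipv4.addresses $PUB_IP_CIDR ipv4.gateway $PUB_GATEWAY
$ sudo nmcli connection modify provisioning ipv4.method manual ipv4.addresses 172.22.0.1/24,$PRIV_IP_CIDR +ipv4.routes "10.0.0.0/8 $PRIV_GATEWAY"
$ sudo nmcli connection up baremetal
$ sudo nmcli connection up provisioning
Moving the public IP address onto the baremetal bridge can briefly drop the SSH session, which is why the next step shows how to reconnect.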
Note: For eth1 and eth2, substitute the appropriate interface name, as needed.
If required, SSH back into the provisioner node:
# ssh kni@provisioner.<cluster-name>.<domain>
Verify the connection bridges have been properly created:
$ sudo nmcli con show
The output lists the baremetal and provisioning bridge connections and their attached interfaces.
Create a pull-secret.txt file:
$ vim pull-secret.txt
In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure. In step 1, click Download pull secret. Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory.
2.2. Configuring the public subnet
All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud® Bare Metal (Classic) does not provide a DHCP server on the subnet. Set it up separately on the provisioner node.
Rebooting the provisioner node after preparing it deletes the BASH variables that were previously set. If the node was rebooted, you must set those variables again before proceeding.
Procedure
Install dnsmasq:
$ sudo dnf install dnsmasq
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Open the
dnsmasq
configuration file:sudo vi /etc/dnsmasq.conf
$ sudo vi /etc/dnsmasq.conf
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add the following configuration to the
dnsmasq
configuration file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
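The sketch reuses the interface and file names from this procedure; the option number 121 (classless static routes) and the exact argument layout are assumptions, so verify them for your environment:
# Serve DHCP and DNS on the baremetal bridge only
interface=baremetal
except-interface=lo
bind-dynamic
log-dhcp

dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr>
dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip>

# Static MAC-to-IP assignments, added later in this procedure
dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile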
1. Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same IP address. Replace <pub_cidr> with the CIDR of the public subnet.
2. Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the provisioner node's public IP address on the baremetal network.
To retrieve the value for <pub_cidr>, execute:
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr
Replace <publicsubnetid> with the ID of the public subnet.
To retrieve the value for <pub_gateway>, execute:
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r
Replace <publicsubnetid> with the ID of the public subnet.
To retrieve the value for <prvn_priv_ip>, execute:
$ ibmcloud sl hardware detail <id> --output JSON | \
    jq .primaryBackendIpAddress -r
Replace <id> with the ID of the provisioner node.
To retrieve the value for <prvn_pub_ip>, execute:
$ ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r
Replace <id> with the ID of the provisioner node.
Obtain the list of hardware for the cluster:
$ ibmcloud sl hardware list
Obtain the MAC addresses and IP addresses for each node:
$ ibmcloud sl hardware detail <id> --output JSON | \
    jq '.networkComponents[] | "\(.primaryIpAddress) \(.macAddress)"' | grep -v null
Replace <id> with the ID of the node.
Example output
"10.196.130.144 00:e0:ed:6a:ca:b4"
"141.125.65.215 00:e0:ed:6a:ca:b5"
Make a note of the MAC address and IP address of the public network. Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network.
Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file:
$ sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile
Example input
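An illustrative entry, reusing the public MAC and IP pair from the example output above with a hypothetical node name; dnsmasq accepts one <mac>,<ip>,<hostname> entry per line:
00:e0:ed:6a:ca:b5,141.125.65.215,openshift-master-0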
Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name.
Start dnsmasq:
$ sudo systemctl start dnsmasq
Enable dnsmasq so that it starts when booting the node:
$ sudo systemctl enable dnsmasq
Verify dnsmasq is running:
$ sudo systemctl status dnsmasq
The output should show the dnsmasq service as active (running).
Open ports 53 and 67 with UDP protocol:
$ sudo firewall-cmd --add-port 53/udp --permanent
$ sudo firewall-cmd --add-port 67/udp --permanent
Add provisioning to the external zone with masquerade:
$ sudo firewall-cmd --change-zone=provisioning --zone=external --permanent
This step ensures network address translation for IPMI calls to the management subnet.
Reload the firewalld configuration:
$ sudo firewall-cmd --reload
2.3. Retrieving the OpenShift Container Platform installer
Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform:
$ export VERSION=stable-4.13
$ export RELEASE_ARCH=<architecture>
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/$RELEASE_ARCH/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
2.4. Extracting the OpenShift Container Platform installer
After retrieving the installer, the next step is to extract it.
Procedure
Set the environment variables:
$ export cmd=openshift-baremetal-install
$ export pullsecret_file=~/pull-secret.txt
$ export extract_dir=$(pwd)
Get the oc binary:
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
Extract the installer:
$ sudo cp oc /usr/local/bin
$ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
$ sudo cp openshift-baremetal-install /usr/local/bin
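Optionally, confirm that the extracted binary runs; the output varies by release:
$ openshift-baremetal-install version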
2.5. Configuring the install-config.yaml file
The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud® Bare Metal (Classic) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud® Bare Metal (Classic) is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file.
Procedure
Configure the install-config.yaml file. Change the appropriate variables to match the environment, including pullSecret and sshKey. A sketch of the file is shown below.
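The sketch uses placeholder values and assumes three control plane nodes, two workers, and the OPERATOR privilege level in each IPMI BMC address; only one host entry is shown, and the host list, VIPs, CIDRs, credentials, and root device name must match your environment:
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster_name>
networking:
  machineNetwork:
  - cidr: <public_cidr>
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIPs:
    - <api_ip>
    ingressVIPs:
    - <wildcard_ip>
    provisioningNetworkInterface: <nic1_name>
    provisioningNetworkCIDR: <cidr>
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://<bmc_ip>?privilegelevel=OPERATOR
        username: <user>
        password: <password>
      bootMACAddress: <nic1_mac_address>
      rootDeviceHints:
        deviceName: "/dev/sda"
    # ...additional control plane and worker host entries...
pullSecret: '<pull_secret>'
sshKey: '<ssh_key>'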
Note: You can use the ibmcloud command-line utility to retrieve the password:
$ ibmcloud sl hardware detail <id> --output JSON | \
    jq '"\(.networkManagementIpAddress) \(.remoteManagementAccounts[0].password)"'
Replace <id> with the ID of the node.
Create a directory to store the cluster configuration:
$ mkdir ~/clusterconfigs
Copy the install-config.yaml file into the directory:
$ cp install-config.yaml ~/clusterconfigs
Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster:
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off
Remove old bootstrap resources if any are left over from a previous deployment attempt. One way to do this with virsh is sketched below.
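The sketch assumes any leftover bootstrap virtual machines and storage pools contain bootstrap in their names and that each pool holds an image volume and a matching .ign volume:
for i in $(sudo virsh list --all --name | grep bootstrap); do
    sudo virsh destroy "$i" 2>/dev/null        # stop the VM if it is still running
    sudo virsh undefine "$i"                   # remove the VM definition
    sudo virsh vol-delete "$i" --pool "$i" 2>/dev/null
    sudo virsh vol-delete "$i.ign" --pool "$i" 2>/dev/null
    sudo virsh pool-destroy "$i" 2>/dev/null   # stop the storage pool
    sudo virsh pool-undefine "$i"              # remove the storage pool definition
done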
2.6. Additional install-config parameters
See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.
Parameters | Default | Description |
---|---|---|
baseDomain | | The domain name for the cluster. For example, example.com. |
bootMode | UEFI | The boot mode for a node. Options are legacy, UEFI, and UEFISecureBoot. |
bootstrapExternalStaticIP | | The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. |
bootstrapExternalStaticGateway | | The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. |
sshKey | | The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane and worker nodes. |
pullSecret | | The pullSecret configuration setting contains a copy of the pull secret downloaded when preparing the provisioner node. |
metadata: name: | | The name to be given to the OpenShift Container Platform cluster. For example, openshift. |
networking: machineNetwork: - cidr: | | The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24. |
compute: - name: worker | | The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. |
compute: replicas: 2 | | Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. |
controlPlane: name: master | | The OpenShift Container Platform cluster requires a name for control plane (master) nodes. |
controlPlane: replicas: 3 | | Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. |
provisioningNetworkInterface | | The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. |
| | The default configuration used for machine pools without a platform configuration. |
apiVIPs | | (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file or preconfigured in the DNS so that the default name resolves correctly. Note: Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. |
ingressVIPs | | (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file or preconfigured in the DNS so that the default name resolves correctly. Note: Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. |
Parameters | Default | Description |
---|---|---|
provisioningDHCPRange | | Defines the IP range for nodes on the provisioning network. |
provisioningNetworkCIDR | | The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. |
clusterProvisioningIP | The third IP address of the provisioningNetworkCIDR. | The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3. |
bootstrapProvisioningIP | The second IP address of the provisioningNetworkCIDR. | The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2. |
externalBridge | baremetal | The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. |
provisioningBridge | provisioning | The name of the provisioning bridge on the provisioner host attached to the provisioning network. |
architecture | | Defines the host architecture for your cluster. Valid values are amd64 and arm64. |
defaultMachinePlatform | | The default configuration used for machine pools without a platform configuration. |
bootstrapOSImage | | A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. |
provisioningNetwork | | The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. |
httpProxy | | Set this parameter to the appropriate HTTP proxy used within your environment. |
httpsProxy | | Set this parameter to the appropriate HTTPS proxy used within your environment. |
noProxy | | Set this parameter to the appropriate list of exclusions for proxy usage within your environment. |
Hosts
The hosts parameter is a list of separate bare metal assets used to build the cluster.
Name | Default | Description |
---|---|---|
name | | The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0. |
role | | The role of the bare metal node. Either master or worker. |
bmc | | Connection details for the baseboard management controller. See the BMC addressing section for additional details. |
bootMACAddress | | The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Note: You must provide a valid MAC address from the host if you disabled the provisioning network. |
networkConfig | | Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. |
2.7. Root device hints
The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
Subfield | Description |
---|---|
deviceName | A string containing a Linux device name such as /dev/vda. The hint must match the actual value exactly. |
hctl | A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. |
model | A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. |
vendor | A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. |
serialNumber | A string containing the device serial number. The hint must match the actual value exactly. |
minSizeGigabytes | An integer representing the minimum size of the device in gigabytes. |
wwn | A string containing the unique storage identifier. The hint must match the actual value exactly. |
wwnWithExtension | A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. |
wwnVendorExtension | A string containing the unique vendor storage identifier. The hint must match the actual value exactly. |
rotational | A boolean indicating whether the device should be a rotating disk (true) or not (false). |
Example usage
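A sketch of a host entry in the platform.baremetal section of the install-config.yaml file that uses deviceName to select /dev/sda as the root device; all other values are placeholders:
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://<bmc_ip>?privilegelevel=OPERATOR
        username: <user>
        password: <password>
      bootMACAddress: <nic1_mac_address>
      rootDeviceHints:
        deviceName: "/dev/sda"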
2.8. Creating the OpenShift Container Platform manifests
Create the OpenShift Container Platform manifests.
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
2.9. Deploying the cluster via the OpenShift Container Platform installer
Run the OpenShift Container Platform installer:
$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
2.10. Following the progress of the installation
During the deployment process, you can check the installation's overall status by running the tail command on the .openshift_install.log log file in the installation directory:
$ tail -f /path/to/install-dir/.openshift_install.log
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.