Chapter 4. Configuring multi-architecture compute machines on an OpenShift Container Platform cluster
An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. You can deploy a cluster with multi-architecture compute machines by creating an Azure installer-provisioned cluster using the multi-architecture installer binary. For Azure installation, see Installing a cluster on Azure with customizations.
The multi-architecture compute machines Technology Preview feature has limited usability with installing, upgrading, and running payloads.
The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set with the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the desired number of ARM64 virtual machines (VMs). This section also shows how to upgrade your existing cluster to a cluster that supports multi-architecture compute machines. Clusters with multi-architecture compute machines are only available on Azure installer-provisioned infrastructures with x86_64 control plane machines.
OpenShift Container Platform clusters with multi-architecture compute machines on Azure installer-provisioned infrastructure are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.1. Creating an ARM64 boot image using the Azure image gallery
To configure your cluster with multi-architecture compute machines, you must create an ARM64 boot image and add it to your Azure compute machine set. The following procedure describes how to manually generate an ARM64 boot image.
Prerequisites
- You installed the Azure CLI (az).
- You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
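The commands in this procedure reference several environment variables without defining them. A minimal setup sketch follows; every value shown is an illustrative assumption, not a requirement:

```shell
# Example values only; substitute names that fit your environment.
export RESOURCE_GROUP="myresourcegroup"
# Storage account names must be globally unique, 3-24 lowercase letters and digits.
export STORAGE_ACCOUNT_NAME="mystorageacct123"
export CONTAINER_NAME="rhcos-vhd"
export GALLERY_NAME="rhcosgallery"
echo "${RESOURCE_GROUP} ${STORAGE_ACCOUNT_NAME} ${CONTAINER_NAME} ${GALLERY_NAME}"
```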
Procedure
Log in to your Azure account:
$ az login

Create a storage account and upload the ARM64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:

$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS

The westus value is an example region.
Create a storage container using the storage account you generated:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}

Use the OpenShift Container Platform boot image stream metadata, stored in the coreos-bootimages config map, to extract the URL and the aarch64 VHD name:

Extract the URL field and set it as the RHCOS_VHD_ORIGIN_URL variable by running the following command:

$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')

Extract the aarch64 VHD name and set it as the BLOB_NAME variable by running the following command:

$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`

$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
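The expiry timestamp above relies on GNU date syntax. A small sketch of the same computation, showing the format it produces; the BSD/macOS variant in the comment is an alternative for non-Linux workstations:

```shell
# 30 minutes from now, in UTC, in the compact ISO 8601 form the --expiry flag accepts.
# GNU date (Linux); on BSD/macOS use: date -u -v+30M '+%Y-%m-%dT%H:%MZ'
end=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')

# Prints a value such as 2024-06-01T12:30Z
echo "$end"
```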
Copy the RHCOS VHD into the storage container:

$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
  --source-uri "${RHCOS_VHD_ORIGIN_URL}" \
  --destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process with the following command:

$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy

If the status field displays success, the copying process is complete.
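The copy-status command prints the blob's copy properties as JSON. The following output is illustrative only (ID, time, progress, and source values are placeholder assumptions); the status field is what this step checks:

```json
{
  "completionTime": "2024-06-01T12:31:07+00:00",
  "id": "aabbccdd-1122-3344-5566-77889900aabb",
  "progress": "17179869696/17179869696",
  "source": "<rhcos_vhd_origin_url>",
  "status": "success"
}
```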
Create an image gallery using the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}

Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition:

$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
To get the URL of the VHD and set it as the RHCOS_VHD_URL variable, run the following command:

$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)

Use the RHCOS_VHD_URL variable, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version:

$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
Your ARM64 boot image is now generated. You can access the ID of your image with the following command:

$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0

The following example image ID is used in the resourceID parameter of the compute machine set:

Example resourceID

/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
4.2. Adding a multi-architecture compute machine set to your cluster using the ARM64 boot image
To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure".
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Create a machine set and modify the resourceID and vmSize parameters with the following command. This machine set controls the ARM64 worker nodes in your cluster:

$ oc create -f arm64-machine-set-0.yaml
Sample YAML machine set with ARM64 boot image
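A minimal sketch of such a machine set follows. The <infrastructure_id> placeholder, replica count, location, and vmSize (Standard_D4ps_v5 is one of Azure's ARM64-capable Dpsv5 sizes) are illustrative assumptions; the image resourceID follows the format generated in the previous section:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-arm64-machine-set-0
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
    spec:
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          kind: AzureMachineProviderSpec
          # ARM64 boot image created in the previous section
          image:
            resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
          # An ARM64-capable instance size (illustrative choice)
          vmSize: Standard_D4ps_v5
          location: westus
```

After substituting real values, save the manifest as arm64-machine-set-0.yaml and create it with the oc create command shown above.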
Verification
Verify that the new ARM64 machines are running by entering the following command:
$ oc get machineset -n openshift-machine-api

Example output

NAME                                      DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-arm64-machine-set-0   2         2         2       2           10m

You can check that the nodes are ready and schedulable with the following command:

$ oc get nodes
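Output similar to the following indicates that the ARM64 nodes are Ready; the node names, ages, and version here are illustrative:

```
NAME                                            STATUS   ROLES    AGE   VERSION
<infrastructure_id>-arm64-machine-set-0-abc12   Ready    worker   9m    v1.25.0
<infrastructure_id>-arm64-machine-set-0-def34   Ready    worker   9m    v1.25.0
```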
4.3. Upgrading a cluster with multi-architecture compute machines
To upgrade your cluster with multi-architecture compute machines, use the candidate-4.12 update channel. For more information, see "Understanding upgrade channels".

Only OpenShift Container Platform clusters that are already using a multi-architecture payload can update with the candidate-4.12 channel.
If you want to upgrade an existing cluster to support multi-architecture compute machines, you can run an explicit upgrade command, as shown in the following procedure. This updates your current single-architecture cluster to a cluster that uses the multi-architecture payload.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
To manually upgrade your cluster, use the following command:
$ oc adm upgrade --allow-explicit-upgrade --to-image <image-pullspec>
4.4. Importing manifest lists in image streams on your multi-architecture compute machines
On an OpenShift Container Platform 4.12 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode option to the PreserveOriginal option in order to import the manifest list.

The referencePolicy.type field of your ImageStream object must be set to the Source type for this procedure to run successfully.

referencePolicy:
  type: Source
Prerequisites
- You installed the OpenShift Container Platform CLI (oc).
Procedure
The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list:

$ oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}'
Verification
You can check that the manifest lists imported properly by inspecting the image stream tag. The following command lists the individual architecture manifests for a particular tag:

$ oc get istag cli-artifacts:latest -n openshift -oyaml

If the dockerImageManifests object is present, then the manifest list import was successful.
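When the import succeeds, the output contains a dockerImageManifests list with one entry per architecture. The following excerpt is illustrative; the digests and manifest sizes are placeholder assumptions:

```yaml
dockerImageManifests:
- architecture: amd64
  digest: sha256:<manifest_digest>
  manifestSize: 1376
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
- architecture: arm64
  digest: sha256:<manifest_digest>
  manifestSize: 1376
  mediaType: application/vnd.docker.distribution.manifest.v2+json
  os: linux
```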