Installing on Azure
Installing OpenShift Container Platform on Azure
Chapter 1. Preparing to install on Azure
1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
1.2. Requirements for installing OpenShift Container Platform on Azure
Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, see Alternatives to storing administrator-level secrets in the kube-system project for other options.
1.3. Choosing a method to install OpenShift Container Platform on Azure
You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.
See Installation process for more information about installer-provisioned and user-provisioned installation processes.
1.3.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on Azure: You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on Azure: You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
- Installing a cluster on Azure with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.
- Installing a cluster on Azure into an existing VNet: You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure.
- Installing a private cluster on Azure: You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
- Installing a cluster on Azure into a government region: OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure.
1.3.2. Installing a cluster on user-provisioned infrastructure
You can install a cluster on Azure infrastructure that you provision, by using the following method:
- Installing a cluster on Azure using ARM templates: You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation.
1.4. Next steps
- Configuring an Azure account
Chapter 2. Configuring an Azure account
Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements.
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
2.1. Azure account limits
The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters.
Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores.
Check the limits for your subscription type and, if necessary, increase quota limits for your account before you install a default cluster on Azure.
The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
- vCPU: 44 required by default; the default Azure limit is 20 per region. A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: one bootstrap machine, which is removed after installation, three control plane machines, and three compute machines. Because the bootstrap and control plane machines use 8 vCPUs each and the compute machines use 4 vCPUs each, a default cluster requires 44 vCPUs. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.
- OS Disk: 7 required by default. Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section.
- VNet: 1 required by default; the default Azure limit is 1,000 per region. Each default cluster requires one Virtual Network (VNet), which contains two subnets.
- Network interfaces: 7 required by default; the default Azure limit is 65,536 per region. Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.
- Network security groups: 2 required by default; the default Azure limit is 5,000. Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: `controlplane` allows the control plane machines to be reached on port 6443 from anywhere, and `node` allows worker machines to be reached from the internet on ports 80 and 443.
- Network load balancers: 3 required by default; the default Azure limit is 1,000 per region. Each cluster creates the following load balancers: `default`, a public IP address that load balances requests to ports 80 and 443 across worker machines; `internal`, a private IP address that load balances requests to ports 6443 and 22623 across control plane machines; and `external`, a public IP address that load balances requests to port 6443 across control plane machines. If your applications create more Kubernetes `LoadBalancer` service objects, your cluster uses more load balancers.
- Public IP addresses: 3 required by default. Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.
- Private IP addresses: 7 required by default. The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
- Spot VM vCPUs (optional): 0 required by default (if you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node); the default Azure limit is 20 per region. This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note: Using spot VMs for control plane nodes is not recommended.
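Before installing, you can check current usage against these quotas with the Azure CLI. This is an optional convenience step, not part of the documented procedure; the region name is a placeholder:

$ az vm list-usage --location <region> -o table

The output lists each compute quota for the region together with its current value and limit, so you can confirm that at least 44 vCPUs are available before you deploy a default cluster.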
Additional resources
- Optimizing storage
2.2. Configuring a public DNS zone in Azure
To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source.
Note: For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation.
- If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation.
Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses.
Use an appropriate root domain, such as `openshiftcorp.com`, or subdomain, such as `clusters.openshiftcorp.com`.
- If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain.
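If you manage the zone with the Azure CLI instead of the portal, the zone creation and name-server lookup might look like the following sketch; the resource group and domain names are placeholders:

$ az network dns zone create -g <resource_group> -n <domain_or_subdomain>
$ az network dns zone show -g <resource_group> -n <domain_or_subdomain> --query nameServers

Use the returned name servers to update the registrar records, or the delegation records in the parent domain if you use a subdomain.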
2.3. Increasing Azure account limits
To increase an account limit, file a support request on the Azure portal.
You can increase only one type of quota per support request.
Procedure
- From the Azure portal, click Help + support in the lower left corner.
Click New support request and then select the required values:
- From the Issue type list, select Service and subscription limits (quotas).
- From the Subscription list, select the subscription to modify.
- From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster.
- Click Next: Solutions.
On the Problem Details page, provide the required information for your quota increase:
- Click Provide details and provide the required details in the Quota details window.
- In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details.
- Click Next: Review + create and then click Create.
2.4. Recording the subscription and tenant IDs
The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information.
Prerequisites
- You have installed or updated the Azure CLI.
Procedure
Log in to the Azure CLI by running the following command:
$ az login
Ensure that you are using the right subscription:
View a list of available subscriptions by running the following command:
$ az account list --refresh
Example output
[
  {
    "cloudName": "AzureCloud",
    "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "name": "Subscription Name 1",
    "state": "Enabled",
    "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user": {
      "name": "you@example.com",
      "type": "user"
    }
  },
  {
    "cloudName": "AzureCloud",
    "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": false,
    "name": "Subscription Name 2",
    "state": "Enabled",
    "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user": {
      "name": "you2@example.com",
      "type": "user"
    }
  }
]
View the details of the active account, and confirm that this is the subscription you want to use, by running the following command:
$ az account show
Example output
{ "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "you@example.com", "type": "user" } }
If you are not using the right subscription:
Change the active subscription by running the following command:
$ az account set -s <subscription_id>
Verify that you are using the subscription you need by running the following command:
$ az account show
Example output
{ "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "you2@example.com", "type": "user" } }
Record the `id` and `tenantId` parameter values from the output. You require these values to install an OpenShift Container Platform cluster.
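As an optional shortcut, not part of the documented procedure, you can capture both values with a single JMESPath query:

$ az account show --query "{subscription_id:id, tenant_id:tenantId}" -o json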
2.5. Supported identities to access Azure resources
An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation:
- A service principal
- A system-assigned managed identity
- A user-assigned managed identity
2.5.1. Required Azure roles
An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements:
- The Azure account that you use to create the identity is assigned the `User Access Administrator` and `Contributor` roles. These roles are required when:
  - Creating a service principal or user-assigned managed identity.
  - Enabling a system-assigned managed identity on a virtual machine.
- If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the `microsoft.directory/servicePrincipals/createAsOwner` permission in Microsoft Entra ID.
To set roles on the Azure portal, see Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
2.5.2. Required Azure permissions for installer-provisioned infrastructure
The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity.
The following options are available to you:
- You can assign the identity the `Contributor` and `User Access Administrator` roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal.
- If your organization’s security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions.
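As a sketch of the custom-role approach, assuming a hypothetical role definition file named custom-role.json: populate its Actions array with the permissions listed in the examples that follow, then create the role with the Azure CLI. The two actions shown are only the start of the list; copy the remaining entries from the required-permission examples below.

$ cat custom-role.json
{
  "Name": "<custom_role_name>",
  "IsCustom": true,
  "Description": "Minimal permissions for OpenShift Container Platform installation",
  "AssignableScopes": ["/subscriptions/<subscription_id>"],
  "Actions": [
    "Microsoft.Authorization/roleAssignments/read",
    "Microsoft.Authorization/roleAssignments/write"
  ]
}
$ az role definition create --role-definition @custom-role.json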
The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure.
Example 2.1. Required permissions for creating authorization resources
- `Microsoft.Authorization/policies/audit/action`
- `Microsoft.Authorization/policies/auditIfNotExists/action`
- `Microsoft.Authorization/roleAssignments/read`
- `Microsoft.Authorization/roleAssignments/write`
Example 2.2. Required permissions for creating compute resources
- `Microsoft.Compute/availabilitySets/read`
- `Microsoft.Compute/availabilitySets/write`
- `Microsoft.Compute/disks/beginGetAccess/action`
- `Microsoft.Compute/disks/delete`
- `Microsoft.Compute/disks/read`
- `Microsoft.Compute/disks/write`
- `Microsoft.Compute/galleries/images/read`
- `Microsoft.Compute/galleries/images/versions/read`
- `Microsoft.Compute/galleries/images/versions/write`
- `Microsoft.Compute/galleries/images/write`
- `Microsoft.Compute/galleries/read`
- `Microsoft.Compute/galleries/write`
- `Microsoft.Compute/snapshots/read`
- `Microsoft.Compute/snapshots/write`
- `Microsoft.Compute/snapshots/delete`
- `Microsoft.Compute/virtualMachines/delete`
- `Microsoft.Compute/virtualMachines/powerOff/action`
- `Microsoft.Compute/virtualMachines/read`
- `Microsoft.Compute/virtualMachines/write`
Example 2.3. Required permissions for creating identity management resources
- `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action`
- `Microsoft.ManagedIdentity/userAssignedIdentities/read`
- `Microsoft.ManagedIdentity/userAssignedIdentities/write`
Example 2.4. Required permissions for creating network resources
- `Microsoft.Network/dnsZones/A/write`
- `Microsoft.Network/dnsZones/CNAME/write`
- `Microsoft.Network/dnszones/CNAME/read`
- `Microsoft.Network/dnszones/read`
- `Microsoft.Network/loadBalancers/backendAddressPools/join/action`
- `Microsoft.Network/loadBalancers/backendAddressPools/read`
- `Microsoft.Network/loadBalancers/backendAddressPools/write`
- `Microsoft.Network/loadBalancers/read`
- `Microsoft.Network/loadBalancers/write`
- `Microsoft.Network/networkInterfaces/delete`
- `Microsoft.Network/networkInterfaces/join/action`
- `Microsoft.Network/networkInterfaces/read`
- `Microsoft.Network/networkInterfaces/write`
- `Microsoft.Network/networkSecurityGroups/join/action`
- `Microsoft.Network/networkSecurityGroups/read`
- `Microsoft.Network/networkSecurityGroups/securityRules/delete`
- `Microsoft.Network/networkSecurityGroups/securityRules/read`
- `Microsoft.Network/networkSecurityGroups/securityRules/write`
- `Microsoft.Network/networkSecurityGroups/write`
- `Microsoft.Network/privateDnsZones/A/read`
- `Microsoft.Network/privateDnsZones/A/write`
- `Microsoft.Network/privateDnsZones/A/delete`
- `Microsoft.Network/privateDnsZones/SOA/read`
- `Microsoft.Network/privateDnsZones/read`
- `Microsoft.Network/privateDnsZones/virtualNetworkLinks/read`
- `Microsoft.Network/privateDnsZones/virtualNetworkLinks/write`
- `Microsoft.Network/privateDnsZones/write`
- `Microsoft.Network/publicIPAddresses/delete`
- `Microsoft.Network/publicIPAddresses/join/action`
- `Microsoft.Network/publicIPAddresses/read`
- `Microsoft.Network/publicIPAddresses/write`
- `Microsoft.Network/virtualNetworks/join/action`
- `Microsoft.Network/virtualNetworks/read`
- `Microsoft.Network/virtualNetworks/subnets/join/action`
- `Microsoft.Network/virtualNetworks/subnets/read`
- `Microsoft.Network/virtualNetworks/subnets/write`
- `Microsoft.Network/virtualNetworks/write`
The following permissions are not required to create the private OpenShift Container Platform cluster on Azure.
- `Microsoft.Network/dnsZones/A/write`
- `Microsoft.Network/dnsZones/CNAME/write`
- `Microsoft.Network/dnszones/CNAME/read`
- `Microsoft.Network/dnszones/read`
Example 2.5. Required permissions for checking the health of resources
- `Microsoft.Resourcehealth/healthevent/Activated/action`
- `Microsoft.Resourcehealth/healthevent/InProgress/action`
- `Microsoft.Resourcehealth/healthevent/Pending/action`
- `Microsoft.Resourcehealth/healthevent/Resolved/action`
- `Microsoft.Resourcehealth/healthevent/Updated/action`
Example 2.6. Required permissions for creating a resource group
- `Microsoft.Resources/subscriptions/resourceGroups/read`
- `Microsoft.Resources/subscriptions/resourcegroups/write`
Example 2.7. Required permissions for creating resource tags
- `Microsoft.Resources/tags/write`
Example 2.8. Required permissions for creating storage resources
- `Microsoft.Storage/storageAccounts/blobServices/read`
- `Microsoft.Storage/storageAccounts/blobServices/containers/write`
- `Microsoft.Storage/storageAccounts/fileServices/read`
- `Microsoft.Storage/storageAccounts/fileServices/shares/read`
- `Microsoft.Storage/storageAccounts/fileServices/shares/write`
- `Microsoft.Storage/storageAccounts/fileServices/shares/delete`
- `Microsoft.Storage/storageAccounts/listKeys/action`
- `Microsoft.Storage/storageAccounts/read`
- `Microsoft.Storage/storageAccounts/write`
Example 2.9. Optional permissions for creating marketplace virtual machine resources
- `Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read`
- `Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write`
Example 2.10. Optional permissions for creating compute resources
- `Microsoft.Compute/availabilitySets/delete`
- `Microsoft.Compute/images/read`
- `Microsoft.Compute/images/write`
- `Microsoft.Compute/images/delete`
Example 2.11. Optional permissions for enabling user-managed encryption
- `Microsoft.Compute/diskEncryptionSets/read`
- `Microsoft.Compute/diskEncryptionSets/write`
- `Microsoft.Compute/diskEncryptionSets/delete`
- `Microsoft.KeyVault/vaults/read`
- `Microsoft.KeyVault/vaults/write`
- `Microsoft.KeyVault/vaults/delete`
- `Microsoft.KeyVault/vaults/deploy/action`
- `Microsoft.KeyVault/vaults/keys/read`
- `Microsoft.KeyVault/vaults/keys/write`
- `Microsoft.Features/providers/features/register/action`
Example 2.12. Optional permissions for installing a cluster using the `NatGateway` outbound type

- `Microsoft.Network/natGateways/read`
- `Microsoft.Network/natGateways/write`
Example 2.13. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT)
- `Microsoft.Network/natGateways/join/action`
- `Microsoft.Network/natGateways/read`
- `Microsoft.Network/natGateways/write`
Example 2.14. Optional permissions for installing a private cluster with Azure firewall
- `Microsoft.Network/azureFirewalls/applicationRuleCollections/write`
- `Microsoft.Network/azureFirewalls/read`
- `Microsoft.Network/azureFirewalls/write`
- `Microsoft.Network/routeTables/join/action`
- `Microsoft.Network/routeTables/read`
- `Microsoft.Network/routeTables/routes/read`
- `Microsoft.Network/routeTables/routes/write`
- `Microsoft.Network/routeTables/write`
- `Microsoft.Network/virtualNetworks/peer/action`
- `Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read`
- `Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write`
Example 2.15. Optional permission for running gather bootstrap
- `Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action`
The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure.
Example 2.16. Required permissions for deleting authorization resources
- `Microsoft.Authorization/roleAssignments/delete`
Example 2.17. Required permissions for deleting compute resources
- `Microsoft.Compute/disks/delete`
- `Microsoft.Compute/galleries/delete`
- `Microsoft.Compute/galleries/images/delete`
- `Microsoft.Compute/galleries/images/versions/delete`
- `Microsoft.Compute/virtualMachines/delete`
Example 2.18. Required permissions for deleting identity management resources
- `Microsoft.ManagedIdentity/userAssignedIdentities/delete`
Example 2.19. Required permissions for deleting network resources
- `Microsoft.Network/dnszones/read`
- `Microsoft.Network/dnsZones/A/read`
- `Microsoft.Network/dnsZones/A/delete`
- `Microsoft.Network/dnsZones/CNAME/read`
- `Microsoft.Network/dnsZones/CNAME/delete`
- `Microsoft.Network/loadBalancers/delete`
- `Microsoft.Network/networkInterfaces/delete`
- `Microsoft.Network/networkSecurityGroups/delete`
- `Microsoft.Network/privateDnsZones/read`
- `Microsoft.Network/privateDnsZones/A/read`
- `Microsoft.Network/privateDnsZones/delete`
- `Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete`
- `Microsoft.Network/publicIPAddresses/delete`
- `Microsoft.Network/virtualNetworks/delete`
The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure.
- `Microsoft.Network/dnszones/read`
- `Microsoft.Network/dnsZones/A/read`
- `Microsoft.Network/dnsZones/A/delete`
- `Microsoft.Network/dnsZones/CNAME/read`
- `Microsoft.Network/dnsZones/CNAME/delete`
Example 2.20. Required permissions for checking the health of resources
- `Microsoft.Resourcehealth/healthevent/Activated/action`
- `Microsoft.Resourcehealth/healthevent/Resolved/action`
- `Microsoft.Resourcehealth/healthevent/Updated/action`
Example 2.21. Required permissions for deleting a resource group
- `Microsoft.Resources/subscriptions/resourcegroups/delete`
Example 2.22. Required permissions for deleting storage resources
- `Microsoft.Storage/storageAccounts/delete`
- `Microsoft.Storage/storageAccounts/listKeys/action`
To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer-created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the `Contributor` role.
You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster.
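For example, re-scoping a role assignment from the subscription to the installer-created resource group might look like the following sketch, where the role and resource group names are placeholders:

$ az role assignment create --assignee <app_id> --role "<custom_role_name>" \
    --scope "/subscriptions/<subscription_id>/resourceGroups/<cluster_resource_group>"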
2.5.3. Using Azure managed identities
The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity.
If you are unable to use a managed identity, you can use a service principal.
Procedure
- If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from.
If you are using a user-assigned managed identity:
- Assign it to the virtual machine that you will run the installation program from.
Record its client ID. You require this value when installing the cluster.
For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities.
- Verify that the required permissions are assigned to the managed identity.
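As a sketch of these steps with the Azure CLI, using placeholder resource, VM, and identity names: the first command enables a system-assigned identity on the installation VM; the remaining commands create a user-assigned identity, assign it to the VM, and read back its client ID:

$ az vm identity assign -g <resource_group> -n <vm_name>
$ az identity create -g <resource_group> -n <identity_name>
$ az vm identity assign -g <resource_group> -n <vm_name> --identities <identity_name>
$ az identity show -g <resource_group> -n <identity_name> --query clientId -o tsv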
2.5.4. Creating a service principal
The installation program requires an Azure identity to complete the installation. You can use a service principal.
If you are unable to use a service principal, you can use a managed identity.
Prerequisites
- You have installed or updated the Azure CLI.
- You have an Azure subscription ID.
- If you are not going to assign the `Contributor` and `User Access Administrator` roles to the service principal, you have created a custom role with the required Azure permissions.
Procedure
Create the service principal for your account by running the following command:
$ az ad sp create-for-rbac --role <role_name> \ 1
    --name <service_principal> \ 2
    --scopes /subscriptions/<subscription_id> 3

1. Defines the role name. You can use the `Contributor` role, or you can specify a custom role.
2. Defines the service principal name.
3. Specifies the subscription ID.
Example output
Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>'
The output includes credentials that you must protect. Be sure that you do not
include these credentials in your code or check the credentials into your source
control. For more information, see https://aka.ms/azadsp-cli
{
  "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "<service_principal>",
  "password": "00000000-0000-0000-0000-000000000000",
  "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
- Record the values of the `appId` and `password` parameters from the output. You require these values when installing the cluster.
- If you applied the `Contributor` role to your service principal, assign the `User Access Administrator` role by running the following command:

$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp show --id <appId> --query id -o tsv) \ 1
    --scope /subscriptions/<subscription_id> 2

1. Specifies the service principal, identified by its object ID.
2. Specifies the subscription ID.
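To confirm the resulting assignments, you can list them for the service principal; for example:

$ az role assignment list --assignee <appId> -o table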
2.6. Supported Azure Marketplace regions
Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA.
While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports.
Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions.
2.7. Supported Azure regions
The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
Supported Azure public regions
- `australiacentral` (Australia Central)
- `australiaeast` (Australia East)
- `australiasoutheast` (Australia South East)
- `brazilsouth` (Brazil South)
- `canadacentral` (Canada Central)
- `canadaeast` (Canada East)
- `centralindia` (Central India)
- `centralus` (Central US)
- `eastasia` (East Asia)
- `eastus` (East US)
- `eastus2` (East US 2)
- `francecentral` (France Central)
- `germanywestcentral` (Germany West Central)
- `israelcentral` (Israel Central)
- `italynorth` (Italy North)
- `japaneast` (Japan East)
- `japanwest` (Japan West)
- `koreacentral` (Korea Central)
- `koreasouth` (Korea South)
- `mexicocentral` (Mexico Central)
- `northcentralus` (North Central US)
- `northeurope` (North Europe)
- `norwayeast` (Norway East)
- `polandcentral` (Poland Central)
- `qatarcentral` (Qatar Central)
- `southafricanorth` (South Africa North)
- `southcentralus` (South Central US)
- `southeastasia` (Southeast Asia)
- `southindia` (South India)
- `spaincentral` (Spain Central)
- `swedencentral` (Sweden Central)
- `switzerlandnorth` (Switzerland North)
- `uaenorth` (UAE North)
- `uksouth` (UK South)
- `ukwest` (UK West)
- `westcentralus` (West Central US)
- `westeurope` (West Europe)
- `westindia` (West India)
- `westus` (West US)
- `westus2` (West US 2)
- `westus3` (West US 3)
Supported Azure Government regions
Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6:
- `usgovtexas` (US Gov Texas)
- `usgovvirginia` (US Gov Virginia)
You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.
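You can also list the region names that your subscription can use by running the following Azure CLI command; this is an optional check, not part of the documented procedure:

$ az account list-locations --query "[].name" -o tsv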
2.8. Next steps
- Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
Chapter 3. Enabling user-managed encryption for Azure
In OpenShift Container Platform version 4.14, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the `install-config.yaml` file, and then complete the installation.
3.1. Preparing an Azure Disk Encryption Set
The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer.
Procedure
Set the following environment variables for the Azure resource group by running the following command:
$ export RESOURCEGROUP="<resource_group>" \ 1
         LOCATION="<location>" 2
1. Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster.
2. Specifies the Azure location where you will create the resource group.
Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command:
$ export KEYVAULT_NAME="<keyvault_name>" \ 1
         KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2
         DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3

1. Specifies the name of the key vault that you will create.
2. Specifies the name of the encryption key that you will create in the key vault.
3. Specifies the name of the disk encryption set that you will create.
Set the environment variable for the ID of your Azure Service Principal by running the following command:
$ export CLUSTER_SP_ID="<service_principal_id>" 1
1. Specifies the ID of the service principal you will use for this installation.
Enable host-level encryption in Azure by running the following commands:
$ az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost"
$ az feature show --namespace Microsoft.Compute --name EncryptionAtHost
$ az provider register -n Microsoft.Compute
Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command:
$ az group create --name $RESOURCEGROUP --location $LOCATION
Create an Azure key vault by running the following command:
$ az keyvault create -n $KEYVAULT_NAME -g $RESOURCEGROUP -l $LOCATION \ --enable-purge-protection true
Create an encryption key in the key vault by running the following command:
$ az keyvault key create --vault-name $KEYVAULT_NAME -n $KEYVAULT_KEY_NAME \ --protection software
Capture the ID of the key vault by running the following command:
$ KEYVAULT_ID=$(az keyvault show --name $KEYVAULT_NAME --query "[id]" -o tsv)
Capture the key URL in the key vault by running the following command:
$ KEYVAULT_KEY_URL=$(az keyvault key show --vault-name $KEYVAULT_NAME --name \ $KEYVAULT_KEY_NAME --query "[key.kid]" -o tsv)
Create a disk encryption set by running the following command:
$ az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME -l $LOCATION -g \ $RESOURCEGROUP --source-vault $KEYVAULT_ID --key-url $KEYVAULT_KEY_URL
Grant the DiskEncryptionSet resource access to the key vault by running the following commands:
$ DES_IDENTITY=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \ $RESOURCEGROUP --query "[identity.principalId]" -o tsv)
$ az keyvault set-policy -n $KEYVAULT_NAME -g $RESOURCEGROUP --object-id \ $DES_IDENTITY --key-permissions wrapkey unwrapkey get
Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands:
$ DES_RESOURCE_ID=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \ $RESOURCEGROUP --query "[id]" -o tsv)
$ az role assignment create --assignee $CLUSTER_SP_ID --role "<reader_role>" \ 1
    --scope $DES_RESOURCE_ID -o jsonc
1. Specifies an Azure role with read permissions to the disk encryption set. You can use the `Owner` role or a custom role with the necessary permissions.
3.2. Next steps
Install an OpenShift Container Platform cluster:
- Install a cluster with customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
- Install a cluster into an existing VNet on installer-provisioned infrastructure
- Install a private cluster on installer-provisioned infrastructure
- Install a cluster into a government region on installer-provisioned infrastructure
Chapter 4. Installing a cluster quickly on Azure
In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure that uses the default configuration options.
4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.14, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the `~/.ssh/authorized_keys` list for the `core` user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user `core`. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The `./openshift-install gather` command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1. Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.

Note: On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.

If the `ssh-agent` process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name> 1
1. Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important:
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the `create cluster` command of the installation program only once, during initial installation.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
- You have the application ID and password of a service principal.
Procedure
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the `~/.azure/` directory and delete the `osServicePrincipal.json` configuration file.

Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

1. For `<installation_directory>`, specify the directory name to store the files that the installation program creates.
2. To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.
When specifying the directory:
- Verify that the directory has the `execute` permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.

Select azure as the platform to target.
If the installation program cannot locate the `osServicePrincipal.json` configuration file from a previous installation, you are prompted for Azure subscription and authentication values.

Specify the following Azure parameter values for your subscription and service principal:
- azure subscription id: Enter the subscription ID to use for the cluster.
- azure tenant id: Enter the tenant ID.
- azure service principal client id: Enter its application ID.
- azure service principal client secret: Enter its password.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from Red Hat OpenShift Cluster Manager.
If not previously detected, the installation program creates an `osServicePrincipal.json` configuration file and stores this file in the `~/.azure/` directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the `kubeadmin` user.
- Credential information also outputs to `<installation_directory>/.openshift_install.log`.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (`oc`) to interact with OpenShift Container Platform from a command-line interface. You can install `oc` on Linux, Windows, or macOS.

If you installed an earlier version of `oc`, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of `oc`.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (`oc`) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the `oc` binary in a directory that is on your `PATH`.

To check your `PATH`, execute the following command:

$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the `oc` command:

$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (`oc`) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the `oc` binary to a directory that is on your `PATH`.

To check your `PATH`, open the command prompt and execute the following command:

C:\> path
Verification
After you install the OpenShift CLI, it is available using the `oc` command:

C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (`oc`) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file.
Note: For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the `oc` binary to a directory on your `PATH`.

To check your `PATH`, open a terminal and execute the following command:

$ echo $PATH
Verification
Verify your installation by using an `oc` command:

$ oc <command>
4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster `kubeconfig` file. The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the `oc` CLI.
Procedure
Export the `kubeadmin` credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1. For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
Verify you can run `oc` commands successfully using the exported configuration:

$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
4.8. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
4.9. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
Chapter 5. Installing a cluster on Azure with customizations
In OpenShift Container Platform version 4.14, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the `install-config.yaml` file before you install the cluster.
5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If you use customer-managed encryption keys, you prepared your Azure environment for encryption.
5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.14, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the `~/.ssh/authorized_keys` list for the `core` user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user `core`. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The `./openshift-install gather` command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1. Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.

Note: On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.

If the `ssh-agent` process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name> 1
1. Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.4. Using the Azure Marketplace offering
Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on a pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat.
To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following:
- While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify `redhat` as the publisher. If you are located in EMEA, specify `redhat-limited` as the publisher.
- The offer includes a `rh-ocp-worker` SKU and a `rh-ocp-worker-gen1` SKU. The `rh-ocp-worker` SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the `rh-ocp-worker-gen1` SKU. The `rh-ocp-worker-gen1` SKU represents a Hyper-V version 1 VM image.
Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances.
Prerequisites
- You have installed the Azure CLI client (`az`).
- Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
Procedure
Display all of the available OpenShift Container Platform images by running one of the following commands:
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Example output
Offer          Publisher  Sku                 Urn                                                         Version
-------------  ---------  ------------------  ----------------------------------------------------------  -----------------
rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700        413.92.2023101700
rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700   413.92.2023101700
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Example output
Offer          Publisher       Sku                 Urn                                                                Version
-------------  --------------  ------------------  -----------------------------------------------------------------  -----------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700       413.92.2023101700
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700  413.92.2023101700
Note: Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process.
Inspect the image for your offer by running one of the following commands:
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Review the terms of the offer by running one of the following commands:
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Accept the terms of the offering by running one of the following commands:
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
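If you script these steps, a sketch like the following captures an image URN from the list output and accepts its terms in one pass. It assumes that the last entry returned is the version you want, which you should confirm against the list output; substitute redhat-limited as the publisher for EMEA:
$ URN=$(az vm image list --all --offer rh-ocp-worker --publisher redhat \
    --query '[-1].urn' -o tsv)
$ az vm image terms accept --urn "$URN"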
- Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher, offer, sku, and version before deploying the cluster.
Sample install-config.yaml file with the Azure Marketplace worker nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D4s_v5
      osImage:
        publisher: redhat
        offer: rh-ocp-worker
        sku: rh-ocp-worker
        version: 413.92.2023101700
  replicas: 3
5.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
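After extraction, you can optionally confirm that the installation program runs and check its version:
$ ./openshift-install version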
5.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
- If you are installing the cluster using a service principal, you have its application ID and password.
- If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
- You have its client ID.
- You have assigned it to the virtual machine that you will run the installation program from.
Procedure
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Note: Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:
$ rm -rf ~/.powervs
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select azure as the platform to target.
If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
Enter the following Azure parameter values for your subscription:
- azure subscription id: Enter the subscription ID to use for the cluster.
- azure tenant id: Enter the tenant ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
- If you are using a service principal, enter its application ID.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, specify its client ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
- If you are using a service principal, enter its password.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, leave this value blank.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Note: If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0. This ensures that the cluster’s control plane nodes are schedulable. For more information, see "Installing a three-node cluster on Azure".
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
If the file was not detected previously, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
5.6.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
---|---|---|---|---|---
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
- x86-64 architecture requires x86-64-v2 ISA
- ARM64 architecture requires ARMv8.0-A ISA
- IBM Power architecture requires Power 9 ISA
- s390x architecture requires z14 ISA
For more information, see RHEL Architectures.
You are required to use Azure virtual machines that have the premiumIO parameter set to true.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
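One way to confirm that a VM size reports this capability is to filter the SKU list with the Azure CLI. This is a sketch; centralus is an example region:
$ az vm list-skus --location centralus --resource-type virtualMachines \
    --query "[?capabilities[?name=='PremiumIO' && value=='True']].name" -o tsv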
5.6.2. Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 5.1. Machine types based on 64-bit x86 architecture
- standardBSFamily
- standardDADSv5Family
- standardDASv4Family
- standardDASv5Family
- standardDCSv3Family
- standardDCSv2Family
- standardDDCSv3Family
- standardDDSv4Family
- standardDDSv5Family
- standardDLDSv5Family
- standardDLSv5Family
- standardDSFamily
- standardDSv2Family
- standardDSv2PromoFamily
- standardDSv3Family
- standardDSv4Family
- standardDSv5Family
- standardEADSv5Family
- standardEASv4Family
- standardEASv5Family
- standardEBDSv5Family
- standardEBSv5Family
- standardEDSv4Family
- standardEDSv5Family
- standardEIADSv5Family
- standardEIASv4Family
- standardEIASv5Family
- standardEIDSv5Family
- standardEISv3Family
- standardEISv5Family
- standardESv3Family
- standardESv4Family
- standardESv5Family
- standardFXMDVSFamily
- standardFSFamily
- standardFSv2Family
- standardGSFamily
- standardHBrsv2Family
- standardHBSFamily
- standardHCSFamily
- standardLASv3Family
- standardLSFamily
- standardLSv2Family
- standardLSv3Family
- standardMDSMediumMemoryv2Family
- standardMIDSMediumMemoryv2Family
- standardMISMediumMemoryv2Family
- standardMSFamily
- standardMSMediumMemoryv2Family
- StandardNCADSA100v4Family
- Standard NCASv3_T4 Family
- standardNCSv3Family
- standardNDSv2Family
- standardNPSFamily
- StandardNVADSA10v5Family
- standardNVSv3Family
- standardXEISv4Family
5.6.3. Tested instance types for Azure on 64-bit ARM infrastructures
The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.
Example 5.2. Machine types based on 64-bit ARM architecture
- standardDPSv5Family
- standardDPDSv5Family
- standardDPLDSv5Family
- standardDPLSv5Family
- standardEPSv5Family
- standardEPDSv5Family
5.6.4. Enabling trusted launch for Azure VMs
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features.
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: TrustedLaunch 2
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes.
- 2
- Enable trusted launch features.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
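For example, based on callout 1 above, a stanza such as the following sketch enables trusted launch on all nodes by using platform.azure.defaultMachinePlatform instead of controlPlane:
platform:
  azure:
    defaultMachinePlatform:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled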
5.6.5. Enabling confidential VMs
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use confidential VMs with the following VM sizes:
- DCasv5-series
- DCadsv5-series
- ECasv5-series
- ECadsv5-series
Confidential VMs are currently not supported on 64-bit ARM architectures.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: ConfidentialVM 2
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly 5
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes.
- 2
- Enable confidential VMs.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
- 5
- Specify VMGuestStateOnly to encrypt the VM guest state.
5.6.6. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 11
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 12
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 13
    region: centralus 14
    resourceGroupName: existing_resource_group 15
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 16
fips: false 17
sshKey: ssh-ed25519 AAAA... 18
- 1 10 14 16
- Required. The installation program prompts you for this value.
- 2 6
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB.
- 9
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
- 13
- Specify the name of the resource group that contains the DNS zone for your base domain.
- 15
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 17
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- 18
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.6.7. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
- 5
- Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note: Only the Proxy object named cluster is supported, and no additional proxies can be created.
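After the cluster is running, you can inspect the resulting Proxy object, for example:
$ oc get proxy/cluster -o yaml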
Additional resources
- For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
5.7. Configuring user-defined tags for Azure
In OpenShift Container Platform, you can use tags for grouping resources and for managing resource access and cost. You can define the tags on the Azure resources in the install-config.yaml file only during OpenShift Container Platform cluster creation. You cannot modify the user-defined tags after cluster creation.
Support for user-defined tags is available only for the resources created in the Azure Public Cloud. User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.14.
User-defined and OpenShift Container Platform specific tags are applied only to the resources created by the OpenShift Container Platform installer and its core operators, such as the Machine API Provider Azure Operator, the Cluster Ingress Operator, and the Cluster Image Registry Operator.
By default, the OpenShift Container Platform installer attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible to users.
You can use the .platform.azure.userTags field in the install-config.yaml file to define the list of user-defined tags, as shown in the following install-config.yaml file.
Sample install-config.yaml file
additionalTrustBundlePolicy: Proxyonly 1
apiVersion: v1
baseDomain: catchall.azure.devcluster.openshift.com 2
compute: 3
- architecture: amd64
  hyperthreading: Enabled 4
  name: worker
  platform: {}
  replicas: 3
controlPlane: 5
  architecture: amd64
  hyperthreading: Enabled 6
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: user 7
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 8
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: os4-common 9
    cloudName: AzurePublicCloud 10
    outboundType: Loadbalancer
    region: southindia 11
    userTags: 12
      createdBy: user
      environment: dev
- 1
- Defines the trust bundle policy.
- 2
- Required. The baseDomain parameter specifies the base domain of your cloud provider. The installation program prompts you for this value.
- 3
- The configuration for the machines that comprise compute. The compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -. If you do not provide these parameters and values, the installation program provides the default value.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
- 5
- The configuration for the machines that comprise the control plane. The controlPlane section is a single mapping. The first line of the controlPlane section must not begin with a hyphen, -. You can use only one control plane pool. If you do not provide these parameters and values, the installation program provides the default value.
- 6
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
- 7
- The installation program prompts you for this value.
- 8
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 9
- Specifies the resource group for the base domain of the Azure DNS zone.
- 10
- Specifies the name of the Azure cloud environment. You can use the cloudName field to configure the Azure SDK with the Azure API endpoints. If you do not provide a value, the default value is Azure Public Cloud.
- 11
- Required. Specifies the name of the Azure region that hosts your cluster. The installation program prompts you for this value.
- 12
- Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates.
The user-defined tags have the following limitations:
- A tag key can have a maximum of 128 characters.
- A tag key must begin with a letter, end with a letter, number or underscore, and can contain only letters, numbers, underscores, periods, and hyphens.
- Tag keys are case-insensitive.
- Tag keys cannot be name. Tag keys cannot have prefixes such as kubernetes.io, openshift.io, microsoft, azure, and windows.
- A tag value can have a maximum of 256 characters.
- You can configure a maximum of 10 tags for resource group and resources.
For more information about Azure tags, see Azure user-defined tags.
5.8. Querying user-defined tags for Azure
After creating the OpenShift Container Platform cluster, you can access the list of defined tags for the Azure resources. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>:owned. The cluster_id parameter is the value of .status.infrastructureName present in config.openshift.io/Infrastructure.
Query the tags defined for Azure resources by running the following command:
$ oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'
Example output
[
    [
        {
            "key": "createdBy",
            "value": "user"
        },
        {
            "key": "environment",
            "value": "dev"
        }
    ]
]
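You can also cross-check the tags from the Azure side. For example, the following Azure CLI command lists resources that carry one of the user-defined tags shown above:
$ az resource list --tag createdBy=user -o table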
5.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
Important: If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file.
Note: For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
Verification
Verify your installation by using an oc command:
$ oc <command>
5.10. Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:
- To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
- To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials.
5.10.1. Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
    --from=$RELEASE_IMAGE \
    --credentials-requests \
    --included \ 1
    --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
    --to=<path_to_directory_for_credentials_requests> 3
- 1
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- 2
- Specify the location of the install-config.yaml file.
- 3
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
5.10.2. Configuring an Azure cluster to use short-term credentials
To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.
5.10.2.1. Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
Note: The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
- You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions:
Example 5.3. Required Azure permissions
- Microsoft.Resources/subscriptions/resourceGroups/read
- Microsoft.Resources/subscriptions/resourceGroups/write
- Microsoft.Resources/subscriptions/resourceGroups/delete
- Microsoft.Authorization/roleAssignments/read
- Microsoft.Authorization/roleAssignments/delete
- Microsoft.Authorization/roleAssignments/write
- Microsoft.Authorization/roleDefinitions/read
- Microsoft.Authorization/roleDefinitions/write
- Microsoft.Authorization/roleDefinitions/delete
- Microsoft.Storage/storageAccounts/listkeys/action
- Microsoft.Storage/storageAccounts/delete
- Microsoft.Storage/storageAccounts/read
- Microsoft.Storage/storageAccounts/write
- Microsoft.Storage/storageAccounts/blobServices/containers/write
- Microsoft.Storage/storageAccounts/blobServices/containers/delete
- Microsoft.Storage/storageAccounts/blobServices/containers/read
- Microsoft.ManagedIdentity/userAssignedIdentities/delete
- Microsoft.ManagedIdentity/userAssignedIdentities/read
- Microsoft.ManagedIdentity/userAssignedIdentities/write
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
- Microsoft.Storage/register/action
- Microsoft.ManagedIdentity/register/action
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note: Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  alibabacloud Manage credentials objects for alibaba cloud
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
5.10.2.2. Creating Azure resources with the Cloud Credential Operator utility
You can use the ccoctl azure create-all command to automate the creation of Azure resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the ccoctl binary.
- Access to your Microsoft Azure account by using the Azure CLI.
Procedure
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
    --from=$RELEASE_IMAGE \
    --credentials-requests \
    --included \ 1
    --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
    --to=<path_to_directory_for_credentials_requests> 3
- 1
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- 2
- Specify the location of the install-config.yaml file.
- 3
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note: This command might take a few moments to run.
To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:
$ az login
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl azure create-all \
    --name=<azure_infra_name> \ 1
    --output-dir=<ccoctl_output_dir> \ 2
    --region=<azure_region> \ 3
    --subscription-id=<azure_subscription_id> \ 4
    --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5
    --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6
    --tenant-id=<azure_tenant_id> 7
- 1
- Specify the user-defined name for all created Azure resources used for tracking.
- 2
- Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
- 3
- Specify the Azure region in which cloud resources will be created.
- 4
- Specify the Azure subscription ID to use.
- 5
- Specify the directory containing the files for the component CredentialsRequest objects.
- 6
- Specify the name of the resource group containing the cluster’s base domain Azure DNS zone.
- 7
- Specify the Azure tenant ID to use.
Note: If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.
Verification
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml
You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.
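As a quick cross-check from the Azure CLI, assuming that ccoctl created its user-assigned identities in a resource group named after the --name value that you supplied, you can list them:
$ az identity list --resource-group <azure_infra_name> -o table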
5.10.2.3. Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have configured the Cloud Credential Operator utility (ccoctl).
- You have created the cloud provider resources that are required for your cluster with the ccoctl utility.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml file as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...
- 1
- This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command.
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
5.11. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
5.12. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
5.13. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
5.14. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
Chapter 6. Installing a cluster on Azure with network customizations
In OpenShift Container Platform version 4.14, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If you use customer-managed encryption keys, you prepared your Azure environment for encryption.
6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.14, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
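If you customize the install-config.yaml file rather than answering the interactive prompts, the public key is supplied through the sshKey field, as in this minimal snippet; the key material shown is a truncated placeholder:
sshKey: ssh-ed25519 AAAA... # truncated placeholder public key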
6.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
6.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
- If you are installing the cluster using a service principal, you have its application ID and password.
- If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
- You have its client ID.
- You have assigned it to the virtual machine that you will run the installation program from.
Procedure
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
Create the install-config.yaml file:
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
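For example, you might prepare a fresh, empty installation directory with the execute permission before running the command; the directory name is a placeholder:
# Hypothetical directory name; any empty directory with execute permission works
$ mkdir <installation_directory>
$ chmod 755 <installation_directory>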
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select azure as the platform to target.
If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
Enter the following Azure parameter values for your subscription:
- azure subscription id: Enter the subscription ID to use for the cluster.
- azure tenant id: Enter the tenant ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
- If you are using a service principal, enter its application ID.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, specify its client ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
- If you are using a service principal, enter its password.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, leave this value blank.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
If it did not detect one previously, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
Additional resources
6.5.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
---|---|---|---|---|---
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a machine with two threads per core, four cores, and one socket provides (2 × 4) × 1 = 8 vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
- x86-64 architecture requires x86-64-v2 ISA
- ARM64 architecture requires ARMv8.0-A ISA
- IBM Power architecture requires Power 9 ISA
- s390x architecture requires z14 ISA
For more information, see RHEL Architectures.
You are required to use Azure virtual machines that have the premiumIO parameter set to true.
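One way to confirm that a VM size reports the PremiumIO capability is to query it with the Azure CLI; this is a sketch only, and the region and size shown are placeholders:
# Sketch: replace the location and size with your planned values
$ az vm list-skus --location <region> --size Standard_D8s_v3 \
    --query "[].capabilities[?name=='PremiumIO']" --output json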
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
Additional resources
6.5.2. Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 6.1. Machine types based on 64-bit x86 architecture
- standardBSFamily
- standardDADSv5Family
- standardDASv4Family
- standardDASv5Family
- standardDCSv3Family
- standardDCSv2Family
- standardDDCSv3Family
- standardDDSv4Family
- standardDDSv5Family
- standardDLDSv5Family
- standardDLSv5Family
- standardDSFamily
- standardDSv2Family
- standardDSv2PromoFamily
- standardDSv3Family
- standardDSv4Family
- standardDSv5Family
- standardEADSv5Family
- standardEASv4Family
- standardEASv5Family
- standardEBDSv5Family
- standardEBSv5Family
- standardEDSv4Family
- standardEDSv5Family
- standardEIADSv5Family
- standardEIASv4Family
- standardEIASv5Family
- standardEIDSv5Family
- standardEISv3Family
- standardEISv5Family
- standardESv3Family
- standardESv4Family
- standardESv5Family
- standardFXMDVSFamily
- standardFSFamily
- standardFSv2Family
- standardGSFamily
- standardHBrsv2Family
- standardHBSFamily
- standardHCSFamily
- standardLASv3Family
- standardLSFamily
- standardLSv2Family
- standardLSv3Family
- standardMDSMediumMemoryv2Family
- standardMIDSMediumMemoryv2Family
- standardMISMediumMemoryv2Family
- standardMSFamily
- standardMSMediumMemoryv2Family
- StandardNCADSA100v4Family
- Standard NCASv3_T4 Family
- standardNCSv3Family
- standardNDSv2Family
- standardNPSFamily
- StandardNVADSA10v5Family
- standardNVSv3Family
- standardXEISv4Family
6.5.3. Tested instance types for Azure on 64-bit ARM infrastructures
The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.
Example 6.2. Machine types based on 64-bit ARM architecture
- standardDPSv5Family
- standardDPDSv5Family
- standardDPLDSv5Family
- standardDPLSv5Family
- standardEPSv5Family
- standardEPDSv5Family
6.5.4. Enabling trusted launch for Azure VMs
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features.
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: TrustedLaunch 2
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes.
- 2
- Enable trusted launch features.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
6.5.5. Enabling confidential VMs
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use confidential VMs with the following VM sizes:
- DCasv5-series
- DCadsv5-series
- ECasv5-series
- ECadsv5-series
Confidential VMs are currently not supported on 64-bit ARM architectures.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: ConfidentialVM 2
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly 5
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes.
- 2
- Enable confidential VMs.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
- 5
- Specify VMGuestStateOnly to encrypt the VM guest state.
6.5.6. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: centralus 15
    resourceGroupName: existing_resource_group 16
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 17
fips: false 18
sshKey: ssh-ed25519 AAAA... 19
- 1 10 15 17
- Required. The installation program prompts you for this value.
- 2 6 11
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8
- You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB.
- 9
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 12
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 13
- Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
- 14
- Specify the name of the resource group that contains the DNS zone for your base domain.
- 16
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 18
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- 19
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
6.5.7. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- 5
- Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
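After installation, you can confirm the proxy settings the cluster is using by inspecting this object; for example:
# Requires a running cluster and a valid kubeconfig
$ oc get proxy/cluster -o yaml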
6.6. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
- You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Note
Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
Important
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
- Phase 2
- After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2.
6.7. Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> 1
- 1
- <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:
Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
6.8. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
clusterNetwork
- IP address pools from which pod IP addresses are allocated.
serviceNetwork
- IP address pool for services.
defaultNetwork.type
- Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
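On a running cluster, you can review the resulting CNO configuration by retrieving this CR; for example:
# Requires a running cluster and cluster-admin access
$ oc get network.operator.openshift.io cluster -o yaml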
6.8.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following list:
metadata.name (string)
- The name of the CNO object. This name is always cluster.
spec.clusterNetwork (array)
- A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23
spec.serviceNetwork (array)
- A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:
spec:
  serviceNetwork:
  - 172.30.0.0/14
You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file.
spec.defaultNetwork (object)
- Configures the network plugin for the cluster network.
spec.kubeProxyConfig (object)
- The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
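As a worked example of how these values interact: with a clusterNetwork cidr of 10.128.0.0/14 and a hostPrefix of 23, each node receives a /23 subnet containing 512 addresses (510 usable pod IP addresses), and the /14 block can supply 2^(23-14) = 512 such node subnets.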
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following list:
type (string)
- Either OpenShiftSDN or OVNKubernetes. The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation.
Note
OpenShift Container Platform uses the OVN-Kubernetes network plugin by default.
openshiftSDNConfig (object)
- This object is only valid for the OpenShift SDN network plugin.
ovnKubernetesConfig (object)
- This object is only valid for the OVN-Kubernetes network plugin.
Configuration for the OpenShift SDN network plugin
The following list describes the configuration fields for the OpenShift SDN network plugin:
mode (string)
- Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.
The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.
mtu (integer)
- The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.
If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.
If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450. This value cannot be changed after cluster installation.
vxlanPort (integer)
- The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes network plugin
The following list describes the configuration fields for the OVN-Kubernetes network plugin:
mtu (integer)
- The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.
If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.
genevePort (integer)
- The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.
ipsecConfig (object)
- Specify an empty object to enable IPsec encryption.
policyAuditConfig (object)
- Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.
gatewayConfig (object)
- Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
Note
While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.
v4InternalSubnet
- If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. This field cannot be changed after installation.
The default value is 100.64.0.0/16.
v6InternalSubnet
- If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. This field cannot be changed after installation.
The default value is fd98::/48.
policyAuditConfig object configuration
rateLimit (integer)
- The maximum number of messages to generate every second per node. The default value is 20 messages per second.
maxFileSize (integer)
- The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB.
maxLogFiles (integer)
- The maximum number of log files that are retained.
destination (string)
- One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file specified by <file>), or null (do not send the audit logs to any additional target).
syslogFacility (string)
- The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
gatewayConfig object configuration
routingViaHost (boolean)
- Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configuring routing in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false.
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
ipForwarding (object)
- You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted.
Example OVN-Kubernetes configuration with IPSec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration (OpenShiftSDN container network interface only)
The values for the kubeProxyConfig object are defined in the following list:
iptablesSyncPeriod (string)
- The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.
Note
Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.
proxyArguments.iptables-min-sync-period (array)
- The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is:
kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
6.9. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
- Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>
- Specifies the directory name that contains the manifests/ directory for your cluster.
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2
- 1
- Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
- 2
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Note
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows Server version does not support selecting a custom VXLAN port.
- Save the cluster-network-03-config.yml file and quit the text editor.
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
Additional resources
- For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
6.10. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file.
Note
For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
Verification
Verify your installation by using an oc command:
$ oc <command>
6.11. Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:
- To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
- To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials.
6.11.1. Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
- 1
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- 2
- Specify the location of the install-config.yaml file.
- 3
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
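When building these Secret objects, you can produce the base64-encoded values with the base64 utility on Linux; for example, to encode a subscription ID placeholder:
# The angle-bracket value is a placeholder for your actual subscription ID
$ echo -n '<azure_subscription_id>' | base64 -w0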
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
6.11.2. Configuring an Azure cluster to use short-term credentials
To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.
6.11.2.1. Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
- You have access to an OpenShift Container Platform account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
- You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions:
Example 6.3. Required Azure permissions
- Microsoft.Resources/subscriptions/resourceGroups/read
- Microsoft.Resources/subscriptions/resourceGroups/write
- Microsoft.Resources/subscriptions/resourceGroups/delete
- Microsoft.Authorization/roleAssignments/read
- Microsoft.Authorization/roleAssignments/delete
- Microsoft.Authorization/roleAssignments/write
- Microsoft.Authorization/roleDefinitions/read
- Microsoft.Authorization/roleDefinitions/write
- Microsoft.Authorization/roleDefinitions/delete
- Microsoft.Storage/storageAccounts/listkeys/action
- Microsoft.Storage/storageAccounts/delete
- Microsoft.Storage/storageAccounts/read
- Microsoft.Storage/storageAccounts/write
- Microsoft.Storage/storageAccounts/blobServices/containers/write
- Microsoft.Storage/storageAccounts/blobServices/containers/delete
- Microsoft.Storage/storageAccounts/blobServices/containers/read
- Microsoft.ManagedIdentity/userAssignedIdentities/delete
- Microsoft.ManagedIdentity/userAssignedIdentities/read
- Microsoft.ManagedIdentity/userAssignedIdentities/write
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
- Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
- Microsoft.Storage/register/action
- Microsoft.ManagedIdentity/register/action
Procedure
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  alibabacloud Manage credentials objects for alibaba cloud
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
6.11.2.2. Creating Azure resources with the Cloud Credential Operator utility
You can use the ccoctl azure create-all command to automate the creation of Azure resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the ccoctl binary.
- Access to your Microsoft Azure account by using the Azure CLI.
Procedure
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
- 1
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- 2
- Specify the location of the install-config.yaml file.
- 3
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note
This command might take a few moments to run.
To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:
$ az login
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl azure create-all \
  --name=<azure_infra_name> \ 1
  --output-dir=<ccoctl_output_dir> \ 2
  --region=<azure_region> \ 3
  --subscription-id=<azure_subscription_id> \ 4
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6
  --tenant-id=<azure_tenant_id> 7
- 1
- Specify the user-defined name for all created Azure resources used for tracking.
- 2
- Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
- 3
- Specify the Azure region in which cloud resources will be created.
- 4
- Specify the Azure subscription ID to use.
- 5
- Specify the directory containing the files for the component CredentialsRequest objects.
- 6
- Specify the name of the resource group containing the cluster's base domain Azure DNS zone.
- 7
- Specify the Azure tenant ID to use.
Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.
Verification
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml
You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.
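For example, one way to spot-check the created identities from the Azure CLI is to list the user-assigned managed identities in the resource group that ccoctl created; this is a sketch, and the resource group name shown is the value that you passed to the --name argument:
# Sketch: <azure_infra_name> matches the --name value from ccoctl azure create-all
$ az identity list --resource-group <azure_infra_name> --output table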
6.11.2.3. Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have configured the Cloud Credential Operator utility (ccoctl).
You have created the cloud provider resources that are required for your cluster with the
ccoctl
utility.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml file as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...
- 1
- This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command.
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
6.12. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
6.13. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the
kubeadmin
credentials:$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
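Optionally, to exercise the connection to the API server beyond oc whoami, you can also list the cluster nodes. This is a generic verification, not a required step:
$ oc get nodes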
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
6.14. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
6.15. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
Chapter 7. Installing a cluster on Azure into an existing VNet
In OpenShift Container Platform version 4.14, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If you use customer-managed encryption keys, you prepared your Azure environment for encryption.
7.2. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.14, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
7.2.1. Requirements for using your VNet
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
- Subnets
- Route tables
- VNets
- Network Security Groups
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.
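How you grant this access depends on your identity setup. As a minimal sketch, assuming you use a service principal and placeholder subscription and resource group names, you could assign the Network Contributor role on the networking resource group with the Azure CLI:
# Sketch only: the identity and scope values are placeholders
$ az role assignment create --role "Network Contributor" \
    --assignee <service_principal_app_id> \
    --scope "/subscriptions/<subscription_id>/resourceGroups/<vnet_resource_group>"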
Your VNet must meet the following characteristics:
- The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
- The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the specified subnets exist.
- There are two private subnets, one for the control plane machines and one for the compute machines.
- The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
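For orientation, these requirements map to fields that you set later in the install-config.yaml file. The following minimal sketch uses placeholder values; the full sample appears in the "Sample customized install-config.yaml file for Azure" section:
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16   # must fall within the VNet's CIDR block
platform:
  azure:
    networkResourceGroupName: vnet_resource_group   # resource group that contains the existing VNet
    virtualNetwork: vnet                            # name of the existing VNet
    controlPlaneSubnet: control_plane_subnet        # existing subnet for the control plane machines
    computeSubnet: compute_subnet                   # existing subnet for the compute machines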
7.2.1.1. Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
Port | Description | Control plane | Compute |
---|---|---|---|
80 | Allows HTTP traffic | | x |
443 | Allows HTTPS traffic | | x |
6443 | Allows communication to the control plane machines | x | |
22623 | Allows internal communication to the machine config server for provisioning machines | x | |
- If you are using Azure Firewall to restrict internet access, you can configure Azure Firewall to allow the Azure APIs. In that case, a network security group rule is not needed.
Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.
To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.
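The exact rule set depends on your network design. As a minimal sketch, assuming placeholder resource group and network security group names, the following Azure CLI command adds an inbound rule that admits machine config server traffic from within the VNet; adapt the priority, source, and ports to your own policy:
# Sketch only: resource group, NSG name, and priority are placeholders
$ az network nsg rule create --resource-group <vnet_resource_group> \
    --nsg-name <nsg_name> --name allow-machine-config-server \
    --priority 200 --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes VirtualNetwork --destination-port-ranges 22623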
Because cluster components do not modify the user-provided network security groups, a pseudo-network security group is created for the Kubernetes controllers to modify without impacting the rest of the environment.
Ports used for all-machine to all-machine communications
Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 1936 | Metrics |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
UDP | 4789 | VXLAN |
UDP | 6081 | Geneve |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
UDP | 500 | IPsec IKE packets |
UDP | 4500 | IPsec NAT-T packets |
UDP | 123 | Network Time Protocol (NTP) on UDP port 123. If you configure an external NTP time server, you must open UDP port 123 |
TCP/UDP | 30000-32767 | Kubernetes node port |
ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
Ports used for control plane machine to control plane machine communications
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server and peer ports |
Additional resources
7.2.2. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
7.2.3. Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
7.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.14, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
7.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
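Once the cluster is installed and your network allows it, you can then reach a node with this key pair as the core user; the host name below is a placeholder:
$ ssh core@<node_ip_or_host_name>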
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
7.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important:
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
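Optionally, you can confirm that the extracted binary runs and reports the release that you expect before continuing:
$ ./openshift-install version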
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
7.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have an Azure subscription ID and tenant ID.
- If you are installing the cluster using a service principal, you have its application ID and password.
- If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
- You have its client ID.
- You have assigned it to the virtual machine that you will run the installation program from.
Procedure
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Note: Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:
$ rm -rf ~/.powervs
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select azure as the platform to target.
If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
Enter the following Azure parameter values for your subscription:
- azure subscription id: Enter the subscription ID to use for the cluster.
- azure tenant id: Enter the tenant ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
- If you are using a service principal, enter its application ID.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, specify its client ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
- If you are using a service principal, enter its password.
- If you are using a system-assigned managed identity, leave this value blank.
- If you are using a user-assigned managed identity, leave this value blank.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
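A plain file copy is sufficient for this backup; the backup file name is only an example:
$ cp install-config.yaml install-config.yaml.backup  # run from your installation directory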
If it was not detected previously, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
Additional resources
7.6.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
- x86-64 architecture requires x86-64-v2 ISA
- ARM64 architecture requires ARMv8.0-A ISA
- IBM Power architecture requires Power 9 ISA
- s390x architecture requires z14 ISA
For more information, see RHEL Architectures.
You are required to use Azure virtual machines that have the premiumIO parameter set to true.
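One way to check this capability, sketched here with a placeholder region and an example size, is to query the SKU capabilities with the Azure CLI and confirm that PremiumIO is True:
# Sketch only: the region is a placeholder and the size is an example
$ az vm list-skus --location <region> --size Standard_D8s_v3 \
    --query "[].capabilities[?name=='PremiumIO']" --output table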
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
Additional resources
7.6.2. Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 7.1. Machine types based on 64-bit x86 architecture
- standardBSFamily
- standardDADSv5Family
- standardDASv4Family
- standardDASv5Family
- standardDCSv3Family
- standardDCSv2Family
- standardDDCSv3Family
- standardDDSv4Family
- standardDDSv5Family
- standardDLDSv5Family
- standardDLSv5Family
- standardDSFamily
- standardDSv2Family
- standardDSv2PromoFamily
- standardDSv3Family
- standardDSv4Family
- standardDSv5Family
- standardEADSv5Family
- standardEASv4Family
- standardEASv5Family
- standardEBDSv5Family
- standardEBSv5Family
- standardEDSv4Family
- standardEDSv5Family
- standardEIADSv5Family
- standardEIASv4Family
- standardEIASv5Family
- standardEIDSv5Family
- standardEISv3Family
- standardEISv5Family
- standardESv3Family
- standardESv4Family
- standardESv5Family
- standardFXMDVSFamily
- standardFSFamily
- standardFSv2Family
- standardGSFamily
- standardHBrsv2Family
- standardHBSFamily
- standardHCSFamily
- standardLASv3Family
- standardLSFamily
- standardLSv2Family
- standardLSv3Family
- standardMDSMediumMemoryv2Family
- standardMIDSMediumMemoryv2Family
- standardMISMediumMemoryv2Family
- standardMSFamily
- standardMSMediumMemoryv2Family
- StandardNCADSA100v4Family
- Standard NCASv3_T4 Family
- standardNCSv3Family
- standardNDSv2Family
- standardNPSFamily
- StandardNVADSA10v5Family
- standardNVSv3Family
- standardXEISv4Family
7.6.3. Tested instance types for Azure on 64-bit ARM infrastructures
The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.
Example 7.2. Machine types based on 64-bit ARM architecture
- standardDPSv5Family
- standardDPDSv5Family
- standardDPLDSv5Family
- standardDPLSv5Family
- standardEPSv5Family
- standardEPDSv5Family
7.6.4. Enabling trusted launch for Azure VMs
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features.
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: TrustedLaunch 2
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes.
- 2
- Enable trusted launch features.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
7.6.5. Enabling confidential VMs
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use confidential VMs with the following VM sizes:
- DCasv5-series
- DCadsv5-series
- ECasv5-series
- ECadsv5-series
Confidential VMs are currently not supported on 64-bit ARM architectures.
Prerequisites
- You have created an install-config.yaml file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza:
controlPlane: 1
  platform:
    azure:
      settings:
        securityType: ConfidentialVM 2
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled 3
            virtualizedTrustedPlatformModule: Enabled 4
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly 5
- 1
- Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes.
- 2
- Enable confidential VMs.
- 3
- Enable secure boot. For more information, see the Azure documentation about secure boot.
- 4
- Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules.
- 5
- Specify VMGuestStateOnly to encrypt the VM guest state.
7.6.6. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 11
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 12
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 13
    region: centralus 14
    resourceGroupName: existing_resource_group 15
    networkResourceGroupName: vnet_resource_group 16
    virtualNetwork: vnet 17
    controlPlaneSubnet: control_plane_subnet 18
    computeSubnet: compute_subnet 19
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 20
fips: false 21
sshKey: ssh-ed25519 AAAA... 22
- 1 10 14 20
- Required. The installation program prompts you for this value.
- 2 6
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8
- You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
- 9
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 12
- Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
- 13
- Specify the name of the resource group that contains the DNS zone for your base domain.
- 15
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 16
- If you use an existing VNet, specify the name of the resource group that contains it.
- 17
- If you use an existing VNet, specify its name.
- 18
- If you use an existing VNet, specify the name of the subnet to host the control plane machines.
- 19
- If you use an existing VNet, specify the name of the subnet to host the compute machines.
- 21
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- 22
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
7.6.7. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
- 5
- Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
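After the cluster is up, you can inspect the resulting Proxy object to confirm which settings were applied:
$ oc get proxy cluster -o yaml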
Additional resources
- For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
7.7. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc.