7.7. Installing a cluster on Azure with network customizations
In OpenShift Container Platform version 4.12, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
7.7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
- If you use customer-managed encryption keys, you prepared your Azure environment for encryption.
7.7.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in does not require internet access. Before you update the cluster, you update the content of the mirror registry.
7.7.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
NoteIf you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
NoteOn some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
7.7.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
7.7.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory (see the example after this list).
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
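For example, you might check the directory permissions and add the execute permission if it is missing; the directory name here is a placeholder:
$ ls -ld <installation_directory>
$ chmod u+x <installation_directory>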
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
NoteFor production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select azure as the platform to target.
If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal (see the Azure CLI example after this procedure for one way to look up these values):
- azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
- azure tenant id: The tenant ID. Specify the tenantId value in your account output.
- azure service principal client id: The value of the appId parameter for the service principal.
- azure service principal client secret: The value of the password parameter for the service principal.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
ImportantAll Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
ImportantThe install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
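If you are unsure of the subscription and tenant values that the prompts request, you can usually read them from the Azure CLI. The following query is only an illustration; the field names reflect typical az account show output:
$ az account show --query '{subscriptionId:id, tenantId:tenantId}' --output json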
7.7.5.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml
file.
7.7.5.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
Parameters | Description | Values |
---|---|---|
|
The API version for the | String |
|
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
Kubernetes resource | Object |
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
The configuration for the specific platform upon which to perform the installation: | Object |
| Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. |
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } |
7.7.5.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.
Parameters | Description | Values |
---|---|---|
| The configuration for the cluster network. | Object Note
You cannot modify parameters specified by the |
| The Red Hat OpenShift Networking network plugin to install. |
Either |
| The IP address blocks for pods.
The default value is If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between |
|
The subnet prefix length to assign to each individual node. For example, if | A subnet prefix.
The default value is |
|
The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
|
Required if you use | An IP network block in CIDR notation.
For example, Note
Set the |
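Taken together, these parameters form the networking stanza of the install-config.yaml file. The following sketch uses the default values described in this table:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16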
7.7.5.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Parameters | Description | Values |
---|---|---|
| A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. | String array |
|
Selects an initial set of optional capabilities to enable. Valid values are | String |
|
Extends the set of optional capabilities beyond what you specify in | String array |
| The configuration for the machines that comprise the compute nodes. |
Array of |
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
Required if you use |
|
|
Required if you use |
|
| The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
| Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". |
String. The name of the feature set to enable, such as |
| The configuration for the machines that comprise the control plane. |
Array of |
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String |
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
Required if you use |
|
|
Required if you use |
|
| The number of control plane machines to provision. |
The only supported value is |
| The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note
If your AWS account has service control policies (SCP) enabled, you must configure the |
|
|
Enable or disable FIPS mode. The default is Important
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the Note If you are using Azure File storage, you cannot enable FIPS mode. |
|
| Sources and repositories for the release-image content. |
Array of objects. Includes a |
|
Required if you use | String |
| Specify one or more repositories that may also contain the same images. | Array of strings |
| How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
|
| The SSH key or keys to authenticate access your cluster machines. Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your | One or more keys. For example: sshKey: <key1> <key2> <key3> |
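As an illustration only, a fragment that combines several of the optional parameters from this table might look like the following sketch; the capability names and replica counts are example values:
capabilities:
  baselineCapabilitySet: v4.12
  additionalEnabledCapabilities:
  - marketplace
compute:
- name: worker
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  hyperthreading: Enabled
  replicas: 3
fips: false
publish: External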
7.7.5.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
Parameters | Description | Values |
---|---|---|
| Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. |
|
| The Azure disk size for the VM. |
Integer that represents the size of the disk in GB. The default is |
| Defines the type of disk. |
|
| Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. |
|
| The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. |
String, for example |
| The name of the disk encryption set that contains the encryption key from the installation prerequisites. |
String, for example |
| Optional. The ID of a disk encryption set in another Azure subscription. This secondary disk encryption set will be used to encrypt compute machines. By default, the installation program will use the disk encryption set from the Azure subscription ID that you provided to the installation program prompts. |
String, in the format |
| Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. |
|
| The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. |
String, for example |
| The name of the disk encryption set that contains the encryption key from the installation prerequisites. |
String, for example |
| Optional. The ID of a disk encryption set in another Azure subscription. This secondary disk encryption set will be used to encrypt control plane machines. By default, the installation program will use the disk encryption set from the Azure subscription ID that you provided to the installation program prompts. |
String, in the format |
| The Azure disk size for the VM. |
Integer that represents the size of the disk in GB. The default is |
| Defines the type of disk. |
|
| Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. |
|
| The name of the resource group that contains the DNS zone for your base domain. |
String, for example |
| The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. |
String, for example |
| The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. |
|
| The name of the Azure region that hosts your cluster. |
Any valid region name, such as |
| List of availability zones to place machines in. For high availability, specify at least two zones. |
List of zones, for example |
| Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. |
|
|
The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the | String. |
| The name of the existing VNet that you want to deploy your cluster to. | String. |
| The name of the existing subnet in your VNet that you want to deploy your control plane machines to. |
Valid CIDR, for example |
| The name of the existing subnet in your VNet that you want to deploy your compute machines to. |
Valid CIDR, for example |
|
The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value |
Any valid cloud environment, such as |
You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.
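For example, to deploy into an existing VNet by using the parameters from this table, the platform section of install-config.yaml might resemble the following sketch; the resource group, network, and subnet names are placeholders:
platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: dns_resource_group
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: existing_vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud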
7.7.5.2. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.4, or RHEL 8.5 [3] | 2 | 8 GB | 100 GB | 300 |
- One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
- OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
- As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
You are required to use Azure virtual machines that have premiumIO set to true. The machines must also have a hyperVGeneration property that contains V1.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
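You can check these properties with the Azure CLI. The following query is a sketch only; adjust the region and VM size for your environment, and note that the premium I/O and Hyper-V generation values appear in the capabilities list of the returned SKU:
$ az vm list-skus --location centralus --size Standard_D8s_v3 --query '[].{name:name, capabilities:capabilities}' --output json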
7.7.5.3. Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 7.24. Machine types
- c4.*
- c5.*
- c5a.*
- i3.*
- m4.*
- m5.*
- m5a.*
- m6i.*
- r4.*
- r5.*
- r5a.*
- r6i.*
- t3.*
- t3a.*
7.7.5.4. Tested instance types for Azure ARM
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 7.25. Machine types
- c6g.*
- m6g.*
7.7.5.5. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 13
    region: centralus 14
    resourceGroupName: existing_resource_group 15
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 16
fips: false 17
sshKey: ssh-ed25519 AAAA... 18
- 1 10 14 16
- Required. The installation program prompts you for this value.
- 2 6 11
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
ImportantIf you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8
- You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
- 9
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 12
- The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 13
- Specify the name of the resource group that contains the DNS zone for your base domain.
- 15
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 17
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
ImportantThe use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 18
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
NoteFor production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
7.7.5.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
NoteThe Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- 5
- Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
NoteThe installation program does not support the proxy readinessEndpoints field.
NoteIf the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
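After the installation completes, you can review the resulting proxy configuration, for example:
$ oc get proxy/cluster -o yaml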
7.7.6. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
NoteSet the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
ImportantThe CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
- Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml
file during phase 2. However, you can further customize the network plugin during phase 2.
7.7.7. Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> 1
- 1
- <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:
Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
7.7.8. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group and these fields cannot be changed:
clusterNetwork
- IP address pools from which pod IP addresses are allocated.
serviceNetwork
- IP address pool for services.
defaultNetwork.type
- Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
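You can view the resulting CNO configuration on a running cluster, for example:
$ oc get network.operator.openshift.io cluster -o yaml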
7.7.8.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description |
---|---|---|
|
|
The name of the CNO object. This name is always |
|
| A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23
You can customize this field only in the |
|
| A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14
You can customize this field only in the |
|
| Configures the network plugin for the cluster network. |
|
| The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description |
---|---|---|
|
|
Either Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. |
|
| This object is only valid for the OpenShift SDN network plugin. |
|
| This object is only valid for the OVN-Kubernetes network plugin. |
Configuration for the OpenShift SDN network plugin
The following table describes the configuration fields for the OpenShift SDN network plugin:
Field | Type | Description |
---|---|---|
|
|
Configures the network isolation mode for OpenShift SDN. The default value is
The values |
|
| The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.
If your cluster requires different MTU values for different nodes, you must set this value to This value cannot be changed after cluster installation. |
|
|
The port to use for all VXLAN packets. The default value is If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port |
Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description |
---|---|---|
|
| The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.
If your cluster requires different MTU values for different nodes, you must set this value to |
|
|
The port to use for all Geneve packets. The default value is |
|
| Specify an empty object to enable IPsec encryption. |
|
| Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. |
|
| Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
|
If your existing network infrastructure overlaps with the
For example, if the This field cannot be changed after installation. |
The default value is |
|
If your existing network infrastructure overlaps with the This field cannot be changed after installation. |
The default value is |
Field | Type | Description |
---|---|---|
| integer |
The maximum number of messages to generate every second per node. The default value is |
| integer |
The maximum size for the audit log in bytes. The default value is |
| string | One of the following additional audit log targets:
|
| string |
The syslog facility, such as |
Field | Type | Description |
---|---|---|
|
|
Set this field to
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to |
Example OVN-Kubernetes configuration with IPSec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig
object are defined in the following table:
Field | Type | Description |
---|---|---|
|
|
The refresh period for Note
Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the |
|
|
The minimum duration before refreshing kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s |
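Combining the fields in this table, a kubeProxyConfig stanza in the CNO custom resource might look like the following sketch; these settings apply only when the OpenShift SDN network plugin is used:
spec:
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s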
7.7.9. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
- Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>
- Specifies the directory name that contains the manifests/ directory for your cluster.
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2
- 1
- Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.
- 2
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
NoteWindows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
- Save the cluster-network-03-config.yml file and quit the text editor.
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
7.7.10. Enabling Accelerated Networking during installation
You can enable Accelerated Networking on Microsoft Azure by adding acceleratedNetworking
to your compute machine set YAML file before you install the cluster.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
- You have created the manifests for your cluster.
Procedure
Change to the openshift directory within the directory that contains the installation program. The openshift directory contains the Kubernetes manifest files that define the worker machines. These are the three default compute machine set files for an Azure cluster:
Machine set files in openshift directory listing
99_openshift-cluster-api_worker-machineset-0.yaml
99_openshift-cluster-api_worker-machineset-1.yaml
99_openshift-cluster-api_worker-machineset-2.yaml
Add the following to the providerSpec field in each compute machine set file:
providerSpec:
  value:
    ...
    acceleratedNetworking: true 1
    ...
    vmSize: <azure-vm-size> 2
    ...
- 1
- This line enables Accelerated Networking.
- 2
- Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.
Additional resources
- For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
7.7.11. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster
command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
NoteIf the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
7.7.12. Finalizing user-managed encryption after installation
If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group.
Procedure
Obtain the identity of the cluster resource group used by the installer:
If you specified an existing resource group in install-config.yaml, obtain its Azure identity by running the following command:
$ az identity list --resource-group "<existing_resource_group>"
If you did not specify an existing resource group in install-config.yaml, locate the resource group that the installer created, and then obtain its Azure identity by running the following commands:
$ az group list
$ az identity list --resource-group "<installer_created_resource_group>"
Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command:
$ az role assignment create --role "<privileged_role>" \ 1
    --assignee "<resource_group_identity>" 2
Obtain the id of the disk encryption set you created prior to installation by running the following command:
$ az disk-encryption-set show -n <disk_encryption_set_name> \ 1
    --resource-group <resource_group_name> 2
Obtain the identity of the cluster service principal by running the following command:
$ az identity show -g <cluster_resource_group> \ 1
    -n <cluster_service_principal_name> \ 2
    --query principalId --out tsv
Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command:
$ az role assignment create --assignee <cluster_service_principal_id> \ 1
    --role <privileged_role> \ 2
    --scope <disk_encryption_set_id> 3
Create a storage class that uses the user-managed disk encryption set:
Save the following storage class definition to a file, for example storage-class-definition.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Premium_LRS
  kind: Managed
  diskEncryptionSetID: "<disk_encryption_set_ID>" 1
  resourceGroup: "<resource_group_name>" 2
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- 1
- Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx".
- 2
- Specifies the name of the resource group used by the installer. This is the same resource group from the first step.
Create the storage class managed-premium from the file you created by running the following command:
$ oc create -f storage-class-definition.yaml
- Select the managed-premium storage class when you create persistent volumes to use encrypted storage.
7.7.13. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.
NoteFor macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
7.7.14. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
7.7.15. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See About remote health monitoring for more information about the Telemetry service
7.7.16. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.