Chapter 6. Installing a cluster on Google Cloud with network customizations


In OpenShift Container Platform version 4.12, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

6.1. Prerequisites

6.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

6.3. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
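
For example, to collect debugging data from the bootstrap host, you might run a command like the following (a sketch: the gather bootstrap subcommand and --dir flag follow the installer's usual conventions, and <installation_directory> is the directory that holds your installation assets):

$ ./openshift-install gather bootstrap --dir <installation_directory>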

Important

Do not skip this procedure in production environments, where disaster recovery and debugging are required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

6.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

6.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Google Cloud.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service account permissions at the project level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1

      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select gcp as the platform to target.
      3. If you have not configured the service account key for your Google Cloud account on your computer, you must obtain it from Google Cloud and paste the contents of the file or enter the absolute path to the file.
      4. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
      5. Select the region to deploy the cluster to.
      6. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
      7. Enter a descriptive name for your cluster.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

6.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

6.5.1.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 6.1. Required parameters

apiVersion
  The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions.
  Values: String

baseDomain
  The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
  Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
  Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
  Values: Object

metadata.name
  The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
  Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
  The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.
  Values: Object

pullSecret
  Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
  Values: For example:

  {
     "auths":{
        "cloud.openshift.com":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        },
        "quay.io":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        }
     }
  }

6.5.1.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Note

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 6.2. Network parameters

networking
  The configuration for the cluster network.
  Values: Object

  Note
  You cannot modify parameters specified by the networking object after installation.

networking.networkType
  The Red Hat OpenShift Networking network plugin to install.
  Values: Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes.

networking.clusterNetwork
  The IP address blocks for pods.

  The default value is 10.128.0.0/14 with a host prefix of /23.

  If you specify multiple IP address blocks, the blocks must not overlap.
  Values: An array of objects. For example:

  networking:
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23

networking.clusterNetwork.cidr
  Required if you use networking.clusterNetwork. An IP address block.

  An IPv4 network.
  Values: An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix
  The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.
  Values: A subnet prefix.

  The default value is 23.

networking.serviceNetwork
  The IP address block for services. The default value is 172.30.0.0/16.

  The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network.
  Values: An array with an IP address block in CIDR format. For example:

  networking:
    serviceNetwork:
    - 172.30.0.0/16

networking.machineNetwork
  The IP address blocks for machines.

  If you specify multiple IP address blocks, the blocks must not overlap.
  Values: An array of objects. For example:

  networking:
    machineNetwork:
    - cidr: 10.0.0.0/16

networking.machineNetwork.cidr
  Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.
  Values: An IP network block in CIDR notation. For example, 10.0.0.0/16.

  Note
  Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

6.5.1.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 6.3. Optional parameters

additionalTrustBundle
  A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
  Values: String

capabilities
  Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing.
  Values: String array

capabilities.baselineCapabilitySet
  Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, v4.12 and vCurrent. The default value is vCurrent.
  Values: String

capabilities.additionalEnabledCapabilities
  Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You may specify multiple capabilities in this parameter.
  Values: String array

compute
  The configuration for the machines that comprise the compute nodes.
  Values: Array of MachinePool objects.

compute.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

compute.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

  Important
  If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

  Values: Enabled or Disabled

compute.name
  Required if you use compute. The name of the machine pool.
  Values: worker

compute.platform
  Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
  Values: alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

compute.replicas
  The number of compute machines, which are also known as worker machines, to provision.
  Values: A positive integer greater than or equal to 2. The default value is 3.

featureSet
  Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".
  Values: String. The name of the feature set to enable, such as TechPreviewNoUpgrade.

controlPlane
  The configuration for the machines that comprise the control plane.
  Values: Array of MachinePool objects.

controlPlane.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

controlPlane.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

  Important
  If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

  Values: Enabled or Disabled

controlPlane.name
  Required if you use controlPlane. The name of the machine pool.
  Values: master

controlPlane.platform
  Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
  Values: alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

controlPlane.replicas
  The number of control plane machines to provision.
  Values: The only supported value is 3, which is the default value.

credentialsMode
  The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on Google Cloud into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough.

  Note
  Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

  Note
  If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

  Values: Mint, Passthrough, Manual or an empty string ("").

fips
  Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

  Important
  To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

  Note
  If you are using Azure File storage, you cannot enable FIPS mode.

  Values: false or true

imageContentSources
  Sources and repositories for the release-image content.
  Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
  Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
  Values: String

imageContentSources.mirrors
  Specify one or more repositories that may also contain the same images.
  Values: Array of strings

publish
  How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API or OpenShift routes.
  Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
  The SSH key to authenticate access to your cluster machines.

  Note
  For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

  Values: For example, sshKey: ssh-ed25519 AAAA....
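
For illustration, a hedged sketch of an imageContentSources entry for a mirrored release image (the mirror registry host is a placeholder; the source repository shown is the usual OpenShift release image repository):

imageContentSources:
- mirrors:
  - mirror.example.com/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release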

6.5.1.4. Additional Google Cloud configuration parameters

Additional Google Cloud configuration parameters are described in the following table:

Table 6.4. Additional Google Cloud parameters

platform.gcp.network
  The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the Google Cloud project that contains the shared VPC.
  Values: String.

platform.gcp.networkProjectID
  Optional. The name of the Google Cloud project that contains the shared VPC where you want to deploy your cluster.
  Values: String.

platform.gcp.projectID
  The name of the Google Cloud project where the installation program installs the cluster.
  Values: String.

platform.gcp.region
  The name of the Google Cloud region that hosts your cluster.
  Values: Any valid region name, such as us-central1.

platform.gcp.controlPlaneSubnet
  The name of the existing subnet where you want to deploy your control plane machines.
  Values: The subnet name.

platform.gcp.computeSubnet
  The name of the existing subnet where you want to deploy your compute machines.
  Values: The subnet name.

platform.gcp.createFirewallRules
  Optional. Set this value to Disabled if you want to create and manage your firewall rules using network tags. By default, the cluster will automatically create and manage the firewall rules that are required for cluster communication. Your service account must have roles/compute.networkAdmin and roles/compute.securityAdmin privileges in the host project to perform these tasks automatically. If your service account does not have the roles/dns.admin privilege in the host project, it must have the dns.networks.bindPrivateDNSZone permission.
  Values: Enabled or Disabled. The default value is Enabled.

platform.gcp.publicDNSZone.project
  Optional. The name of the project that contains the public DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the specified project. If you do not set this value, it defaults to gcp.projectId.
  Values: The name of the project that contains the public DNS zone.

platform.gcp.publicDNSZone.id
  Optional. The ID or name of an existing public DNS zone. The public DNS zone domain must match the baseDomain parameter. If you do not set this value, the installation program will use a public DNS zone in the service project.
  Values: The public DNS zone name.

platform.gcp.privateDNSZone.project
  Optional. The name of the project that contains the private DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the host project. If you do not set this value, it defaults to gcp.projectId.
  Values: The name of the project that contains the private DNS zone.

platform.gcp.privateDNSZone.id
  Optional. The ID or name of an existing private DNS zone. If you do not set this value, the installation program will create a private DNS zone in the service project.
  Values: The private DNS zone name.

platform.gcp.licenses
  A list of license URLs that must be applied to the compute images.

  Important
  The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field.

  Values: Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use.

platform.gcp.defaultMachinePlatform.zones
  The availability zones where the installation program creates machines.
  Values: A list of valid Google Cloud availability zones, such as us-central1-a, in a YAML sequence.

platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB
  The size of the disk in gigabytes (GB).
  Values: Any size between 16 GB and 65536 GB.

platform.gcp.defaultMachinePlatform.osDisk.diskType
  The Google Cloud disk type.
  Values: Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. Compute nodes can be either type.

platform.gcp.defaultMachinePlatform.osImage.project
  Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines.
  Values: String. The name of the Google Cloud project where the image is located.

platform.gcp.defaultMachinePlatform.osImage.name
  The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project, this field is required.
  Values: String. The name of the RHCOS image.

platform.gcp.defaultMachinePlatform.tags
  Optional. Additional network tags to add to the control plane and compute machines.
  Values: One or more strings, for example network-tag1.

platform.gcp.defaultMachinePlatform.type
  The Google Cloud machine type for control plane and compute machines.
  Values: The Google Cloud machine type, for example n1-standard-4.

platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name
  The name of the customer managed encryption key to be used for machine disk encryption.
  Values: The encryption key name.

platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing
  The name of the Key Management Service (KMS) key ring to which the KMS key belongs.
  Values: The KMS key ring name.

platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.location
  The Google Cloud location in which the KMS key ring exists.
  Values: The Google Cloud location.

platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.projectID
  The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set.
  Values: The Google Cloud project ID.

platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount
  The Google Cloud service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about Google Cloud service accounts, see Google’s documentation on service accounts.
  Values: The Google Cloud service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com.

controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name
  The name of the customer managed encryption key to be used for control plane machine disk encryption.
  Values: The encryption key name.

controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing
  For control plane machines, the name of the KMS key ring to which the KMS key belongs.
  Values: The KMS key ring name.

controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location
  For control plane machines, the Google Cloud location in which the key ring exists. For more information about KMS locations, see Google’s documentation on Cloud KMS locations.
  Values: The Google Cloud location for the key ring.

controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID
  For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set.
  Values: The Google Cloud project ID.

controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount
  The Google Cloud service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about Google Cloud service accounts, see Google’s documentation on service accounts.
  Values: The Google Cloud service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com.

controlPlane.platform.gcp.osDisk.diskSizeGB
  The size of the disk in gigabytes (GB). This value applies to control plane machines.
  Values: Any integer between 16 and 65536.

controlPlane.platform.gcp.osDisk.diskType
  The Google Cloud disk type for control plane machines.
  Values: Control plane machines must use the pd-ssd disk type, which is the default.

controlPlane.platform.gcp.osImage.project
  Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only.
  Values: String. The name of the Google Cloud project where the image is located.

controlPlane.platform.gcp.osImage.name
  The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project, this field is required.
  Values: String. The name of the RHCOS image.

controlPlane.platform.gcp.tags
  Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines.
  Values: One or more strings, for example control-plane-tag1.

controlPlane.platform.gcp.type
  The Google Cloud machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter.
  Values: The Google Cloud machine type, for example n1-standard-4.

controlPlane.platform.gcp.zones
  The availability zones where the installation program creates control plane machines.
  Values: A list of valid Google Cloud availability zones, such as us-central1-a, in a YAML sequence.

compute.platform.gcp.osDisk.encryptionKey.kmsKey.name
  The name of the customer managed encryption key to be used for compute machine disk encryption.
  Values: The encryption key name.

compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing
  For compute machines, the name of the KMS key ring to which the KMS key belongs.
  Values: The KMS key ring name.

compute.platform.gcp.osDisk.encryptionKey.kmsKey.location
  For compute machines, the Google Cloud location in which the key ring exists. For more information about KMS locations, see Google’s documentation on Cloud KMS locations.
  Values: The Google Cloud location for the key ring.

compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID
  For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set.
  Values: The Google Cloud project ID.

compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount
  The Google Cloud service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about Google Cloud service accounts, see Google’s documentation on service accounts.
  Values: The Google Cloud service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com.

compute.platform.gcp.osDisk.diskSizeGB
  The size of the disk in gigabytes (GB). This value applies to compute machines.
  Values: Any integer between 16 and 65536.

compute.platform.gcp.osDisk.diskType
  The Google Cloud disk type for compute machines.
  Values: Either the default pd-ssd or the pd-standard disk type.

compute.platform.gcp.osImage.project
  Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only.
  Values: String. The name of the Google Cloud project where the image is located.

compute.platform.gcp.osImage.name
  The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use compute.platform.gcp.osImage.project, this field is required.
  Values: String. The name of the RHCOS image.

compute.platform.gcp.tags
  Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines.
  Values: One or more strings, for example compute-network-tag1.

compute.platform.gcp.type
  The Google Cloud machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter.
  Values: The Google Cloud machine type, for example n1-standard-4.

compute.platform.gcp.zones
  The availability zones where the installation program creates compute machines.
  Values: A list of valid Google Cloud availability zones, such as us-central1-a, in a YAML sequence.
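
As a sketch, deploying into an existing VPC by combining several of these parameters might look like the following install-config.yaml fragment (the project, network, and subnet names are placeholders):

platform:
  gcp:
    projectID: example-project
    region: us-central1
    network: example-vpc
    controlPlaneSubnet: example-control-plane-subnet
    computeSubnet: example-compute-subnet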

6.5.2. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 6.5. Minimum resource requirements

Machine         Operating System                vCPU [1]   Virtual RAM   Storage   Input/Output Per Second (IOPS) [2]
Bootstrap       RHCOS                           4          16 GB         100 GB    300
Control plane   RHCOS                           4          16 GB         100 GB    300
Compute         RHCOS, RHEL 8.6 and later [3]   2          8 GB          100 GB    300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a machine with 1 socket, 2 cores per socket, and 2 threads per core provides (2 × 2) × 1 = 4 vCPUs.
  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate the storage volume to obtain sufficient performance.
  3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.

6.5.3. Tested instance types for Google Cloud

The following Google Cloud instance types have been tested with OpenShift Container Platform.

Example 6.1. Machine series

  • A2
  • A3
  • C2
  • C2D
  • C3
  • C3D
  • C4
  • E2
  • M1
  • N1
  • N2
  • N2D
  • N4
  • Tau T2D

6.5.4. Using custom machine types

Using a custom machine type to install an OpenShift Container Platform cluster is supported.

Consider the following when using a custom machine type:

  • Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation".
  • The name of the custom machine type must adhere to the following syntax:

    custom-<number_of_cpus>-<amount_of_memory_in_mb>

    For example, custom-6-20480 specifies a machine type with 6 vCPUs and 20480 MB of memory.

As part of the installation process, you specify the custom machine type in the install-config.yaml file.

Sample install-config.yaml file with a custom machine type

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: custom-6-20480
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: custom-6-20480
  replicas: 3

6.5.5. Sample customized install-config.yaml file for Google Cloud

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 1024
        encryptionKey: 5
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
      tags: 6
      - control-plane-tag1
      - control-plane-tag2
      osImage: 7
        project: example-project-name
        name: example-image-name
  replicas: 3
compute: 8 9
- hyperthreading: Enabled 10
  name: worker
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
        encryptionKey: 11
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
      tags: 12
      - compute-tag1
      - compute-tag2
      osImage: 13
        project: example-project-name
        name: example-image-name
  replicas: 3
metadata:
  name: test-cluster 14
networking: 15
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-production 17
    region: us-central1 18
    defaultMachinePlatform:
      tags: 19
      - global-tag1
      - global-tag2
      osImage: 20
        project: example-project-name
        name: example-image-name
pullSecret: '{"auths": ...}' 21
fips: false 22
sshKey: ssh-ed25519 AAAA... 23
1 14 17 18 21 Required. The installation program prompts you for this value.
2 8 15 If you do not provide these parameters and values, the installation program provides the default value.
3 9 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4 10 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8, for your machines if you disable simultaneous multithreading.

5 11 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" → "Creating compute machine sets" → "Creating a compute machine set on Google Cloud".
6 12 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter.
7 13 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image for the installation program to use to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters.
16 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

23 You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

6.5.6. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

6.6. Network configuration phases

There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.

Phase 1

You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:

  • networking.networkType
  • networking.clusterNetwork
  • networking.serviceNetwork
  • networking.machineNetwork

    For more information on these fields, refer to Installation configuration parameters.

    Note

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

    Important

    The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
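
    As a sketch, a Phase 1 customization might set these fields together in install-config.yaml (the values shown are the defaults from this document's parameter tables, not recommendations):

    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      machineNetwork:
      - cidr: 10.0.0.0/16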

Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration.

You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2.

6.7. Specifying advanced network configuration

You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.

Important

Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.

Prerequisites

  • You have created the install-config.yaml file and completed any modifications to it.

Procedure

  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory> 1

    1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
  3. Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

    Specify a different VXLAN port for the OpenShift SDN network provider

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        openshiftSDNConfig:
          vxlanPort: 4800

    Enable IPsec for the OVN-Kubernetes network provider

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        ovnKubernetesConfig:
          ipsecConfig: {}

  4. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.

6.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:

clusterNetwork
  IP address pools from which pod IP addresses are allocated.
serviceNetwork
  IP address pool for services.
defaultNetwork.type
  Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.

You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.

6.8.1. Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 6.6. Cluster Network Operator configuration object

metadata.name (string)
  The name of the CNO object. This name is always cluster.

spec.clusterNetwork (array)
  A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

  spec:
    clusterNetwork:
    - cidr: 10.128.0.0/19
      hostPrefix: 23
    - cidr: 10.128.32.0/19
      hostPrefix: 23

  You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file.

spec.serviceNetwork (array)
  A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:

  spec:
    serviceNetwork:
    - 172.30.0.0/14

  You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file.

spec.defaultNetwork (object)
  Configures the network plugin for the cluster network.

spec.kubeProxyConfig (object)
  The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.

Important

For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 6.7. defaultNetwork object

type (string)
  Either OpenShiftSDN or OVNKubernetes. The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation.

  Note
  OpenShift Container Platform uses the OVN-Kubernetes network plugin by default.

openshiftSDNConfig (object)
  This object is only valid for the OpenShift SDN network plugin.

ovnKubernetesConfig (object)
  This object is only valid for the OVN-Kubernetes network plugin.

Configuration for the OpenShift SDN network plugin

The following table describes the configuration fields for the OpenShift SDN network plugin:

Table 6.8. openshiftSDNConfig object

mode (string)
  Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.

  The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.

mtu (integer)
  The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

  If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

  If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

  You can set the value during cluster installation or as a post-installation task. For more information, see "Changing the MTU for the cluster network" in the OpenShift Container Platform Networking document.

vxlanPort (integer)
  The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation.

  If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.

  On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
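
Combining these fields, a hedged sketch of a cluster-network-03-config.yml manifest for OpenShift SDN (the mtu and vxlanPort values are illustrative, taken from the examples above):

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4800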

Configuration for the OVN-Kubernetes network plugin

The following table describes the configuration fields for the OVN-Kubernetes network plugin:

Table 6.9. ovnKubernetesConfig object

mtu (integer)
    The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

    If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

    If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

genevePort (integer)
    The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.

ipsecConfig (object)
    Specify an empty object to enable IPsec encryption.

policyAuditConfig (object)
    Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.

gatewayConfig (object)
    Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.

    Note: While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.

v4InternalSubnet (string)
    If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23, then the maximum number of nodes is 2^(23-14) = 512.

    This field cannot be changed after installation.

    The default value is 100.64.0.0/16.

v6InternalSubnet (string)
    If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster.

    This field cannot be changed after installation.

    The default value is fd98::/48.
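
If you need to move the internal subnet, a minimal sketch follows; the replacement range 100.65.0.0/16 is an arbitrary illustration, and by the arithmetic above it must still comfortably exceed your maximum node count:

Example OVN-Kubernetes configuration with a custom internal subnet

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    v4InternalSubnet: 100.65.0.0/16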

Table 6.10. policyAuditConfig object

rateLimit (integer)
    The maximum number of messages to generate every second per node. The default value is 20 messages per second.

maxFileSize (integer)
    The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB.

destination (string)
    One of the following additional audit log targets:

    libc
        The libc syslog() function of the journald process on the host.
    udp:<host>:<port>
        A syslog server. Replace <host>:<port> with the host and port of the syslog server.
    unix:<file>
        A Unix Domain Socket file specified by <file>.
    null
        Do not send the audit logs to any additional target.

syslogFacility (string)
    The syslog facility, such as kern, as defined by RFC 5424. The default value is local0.
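
As a hedged sketch, the following stanza sends audit messages to a syslog server in addition to the node log; <host>:<port> is a placeholder that you would replace with your own collector, and the remaining values restate the defaults:

Example policy audit logging configuration

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    policyAuditConfig:
      rateLimit: 20
      maxFileSize: 50000000
      destination: "udp:<host>:<port>"
      syslogFacility: local0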

Table 6.11. gatewayConfig object

routingViaHost (boolean)
    Set this field to true to send egress traffic from pods to the host networking stack.

    Note: In OpenShift Container Platform 4.12, egress IP is assigned only to the primary interface. Consequently, setting routingViaHost to true does not work for egress IP in OpenShift Container Platform 4.12.

    For highly specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false.

    This field interacts with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
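
A minimal sketch that routes egress traffic through the host networking stack; set this only if you rely on manually configured kernel routes, for the reasons described above:

Example gatewayConfig configuration

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true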

Example OVN-Kubernetes configuration with IPsec enabled

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}

kubeProxyConfig object configuration

The values for the kubeProxyConfig object are defined in the following table:

Table 6.12. kubeProxyConfig object

iptablesSyncPeriod (string)
    The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

    Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

proxyArguments.iptables-min-sync-period (array)
    The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is:

    kubeProxyConfig:
      proxyArguments:
        iptables-min-sync-period:
        - 0s
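
Pulling the two fields together, a hedged sketch of a full kubeProxyConfig stanza with explicitly set, illustrative values; per the note above, tuning these is rarely necessary on OpenShift Container Platform 4.3 and later:

Example kubeProxyConfig configuration

kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 30s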

6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure

  1. Remove any existing Google Cloud credentials that do not use the service account key for the Google Cloud account that you configured for your cluster and that are stored in the following locations (see the sketch after this procedure for one way to clear them):

    • The GOOGLE_CREDENTIALS, GOOGLE_CLOUD_KEYFILE_JSON, or GCLOUD_KEYFILE_JSON environment variables
    • The ~/.gcp/osServiceAccount.json file
    • The gcloud CLI default credentials
  2. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \
        --log-level=info

    For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

  3. Optional: You can reduce the number of permissions for the service account that you used to install the cluster.

    • If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role.
    • If you included the Service Account Key Admin role, you can remove it.
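
As referenced in step 1, the following shell sketch shows one way to clear each credential location; the commands are illustrative, and note that gcloud auth revoke without an argument revokes only the active gcloud account:

$ unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON
$ rm -f ~/.gcp/osServiceAccount.json
$ gcloud auth revoke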

Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
  • Credential information is also output to <installation_directory>/.openshift_install.log.
Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

6.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the architecture from the Product Variant drop-down list.
  3. Select the appropriate version from the Version drop-down list.
  4. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
  5. Unpack the archive:

    $ tar xvf <file>
  6. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
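
    For example, assuming /usr/local/bin is already on your PATH (a common default, but verify on your system), a minimal sketch is:

    $ chmod +x oc
    $ sudo mv oc /usr/local/bin/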

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.

    Note

    For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry.

  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

6.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin
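
    As an additional check that is not part of this procedure, you can list the cluster nodes; a healthy cluster reports its control plane and worker nodes as Ready:

    $ oc get nodes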

6.12. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.

After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

6.13. Next steps
