
Chapter 18. Installing on OpenStack


18.1. Preparing to install on OpenStack

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP).

18.1.1. Prerequisites

18.1.2. Choosing a method to install OpenShift Container Platform on OpenStack

You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.

See Installation process for more information about installer-provisioned and user-provisioned installation processes.

18.1.2.1. Installing a cluster on installer-provisioned infrastructure

You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:

  • Installing a cluster on OpenStack with customizations: You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
  • Installing a cluster on OpenStack with Kuryr: You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances.
  • Installing a cluster on OpenStack in a restricted network: You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.

18.1.2.2. Installing a cluster on user-provisioned infrastructure

You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods:

18.1.3. Scanning RHOSP endpoints for legacy HTTPS certificates

Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field.

Important

OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction.

Prerequisites

Procedure

  1. Save the following script to your machine:

    #!/usr/bin/env bash
    
    set -Eeuo pipefail
    
    declare catalog san
    catalog="$(mktemp)"
    san="$(mktemp)"
    readonly catalog san
    
    declare invalid=0
    
    openstack catalog list --format json --column Name --column Endpoints \
    	| jq -r '.[] | .Name as $name | .Endpoints[] | select(.interface=="public") | [$name, .interface, .url] | join(" ")' \
    	| sort \
    	> "$catalog"
    
    while read -r name interface url; do
    	# Ignore HTTP
    	if [[ ${url#"http://"} != "$url" ]]; then
    		continue
    	fi
    
    	# Remove the schema from the URL
    	noschema=${url#"https://"}
    
    	# If the schema was not HTTPS, error
    	if [[ "$noschema" == "$url" ]]; then
    		echo "ERROR (unknown schema): $name $interface $url"
    		exit 2
    	fi
    
    	# Remove the path and only keep host and port
    	noschema="${noschema%%/*}"
    	host="${noschema%%:*}"
    	port="${noschema##*:}"
    
    	# Add the port if was implicit
    	if [[ "$port" == "$host" ]]; then
    		port='443'
    	fi
    
    	# Get the SAN fields
    	openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
    		| openssl x509 -noout -ext subjectAltName \
    		> "$san"
    
    	# openssl returns the empty string if no SAN is found.
    	# If a SAN is found, openssl is expected to return something like:
    	#
    	#    X509v3 Subject Alternative Name:
    	#        DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2
    	if [[ "$(grep -c "Subject Alternative Name" "$san" || true)" -gt 0 ]]; then
    		echo "PASS: $name $interface $url"
    	else
    		invalid=$((invalid+1))
    		echo "INVALID: $name $interface $url"
    	fi
    done < "$catalog"
    
    # clean up temporary files
    rm "$catalog" "$san"
    
    if [[ $invalid -gt 0 ]]; then
    	echo "${invalid} legacy certificates were detected. Update your certificates to include a SAN field."
    	exit 1
    else
    	echo "All HTTPS certificates for this cloud are valid."
    fi
  2. Run the script.
  3. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields.
Important

You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message:

x509: certificate relies on legacy Common Name field, use SANs instead

18.1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually

Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field.

Important

OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction.

Procedure

  1. On a command line, run the following command to view the URL of RHOSP public endpoints:

    $ openstack catalog list

    Record the URL for each HTTPS endpoint that the command returns.

  2. For each public endpoint, note the host and the port.

    Tip

    Determine the host of an endpoint by removing the scheme, the port, and the path.
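
    For example, the following Bash sketch derives the host and port from an endpoint URL; the URL shown is only a placeholder:

    $ url="https://osp.example.net:13000/v3"  # placeholder endpoint URL
    $ noschema="${url#*://}"                  # strip the scheme
    $ hostport="${noschema%%/*}"              # strip the path
    $ host="${hostport%%:*}"
    $ port="${hostport##*:}"
    $ [[ "$port" == "$host" ]] && port=443    # default to 443 if the URL has no port
    $ echo "$host $port"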

  3. For each endpoint, run the following commands to extract the SAN field of the certificate:

    1. Set a host variable:

      $ host=<host_name>
    2. Set a port variable:

      $ port=<port_number>

      If the URL of the endpoint does not have a port, use the value 443.

    3. Retrieve the SAN field of the certificate:

      $ openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
          | openssl x509 -noout -ext subjectAltName

      Example output

      X509v3 Subject Alternative Name:
          DNS:your.host.example.net

      For each endpoint, look for output that resembles the previous example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued.
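
      Optionally, to see the legacy CommonName that such a certificate relies on, print the certificate subject by using the same host and port variables:

      $ openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
          | openssl x509 -noout -subject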

Important

You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message:

x509: certificate relies on legacy Common Name field, use SANs instead

18.2. Installing a cluster on OpenStack with customizations

In OpenShift Container Platform version 4.10, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster.

18.2.1. Prerequisites

18.2.2. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource               Value
Floating IP addresses  3
Ports                  15
Routers                1
Subnets                1
RAM                    88 GB
vCPUs                  22
Volume storage         275 GB
Instances              7
Security groups        3
Security group rules   60
Server groups          2 - plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
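
To review the current quota values before you change them, you can run the following command; the project name is a placeholder:

$ openstack quota show <project>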

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.2.2.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.2.2.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

18.2.2.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota
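
To check whether a flavor satisfies the memory, vCPU, and disk requirements listed above, you can inspect it with the OpenStack CLI. The flavor name here is only an example:

$ openstack flavor show m1.xlarge -c ram -c vcpus -c disk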

18.2.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.2.4. Enabling Swift on RHOSP

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

Important

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

Prerequisites

  • You have a RHOSP administrator account on the target environment.
  • The Swift service is installed.
  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on RHOSP:

  1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.
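
To confirm the role assignment, you can list role assignments for the account; the user and project values are placeholders:

$ openstack role assignment list --user <user> --project <project> --names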

18.2.5. Configuring an image registry with custom storage on clusters that run on RHOSP

After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage.

Procedure

  1. Create a YAML file that specifies the storage class and availability zone to use. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-csi-storageclass
    provisioner: cinder.csi.openstack.org
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    parameters:
      availability: <availability_zone_name>
    Note

    OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration.

  2. From a command line, apply the configuration:

    $ oc apply -f <storage_class_file_name>

    Example output

    storageclass.storage.k8s.io/custom-csi-storageclass created

  3. Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-pvc-imageregistry
      namespace: openshift-image-registry 1
      annotations:
        imageregistry.openshift.io: "true"
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 100Gi 2
      storageClassName: <your_custom_storage_class> 3
    1
    Enter the namespace openshift-image-registry. This namespace allows the Cluster Image Registry Operator to consume the PVC.
    2
    Optional: Adjust the volume size.
    3
    Enter the name of the storage class that you created.
  4. From a command line, apply the configuration:

    $ oc apply -f <pvc_file_name>

    Example output

    persistentvolumeclaim/csi-pvc-imageregistry created

  5. Replace the original persistent volume claim in the image registry configuration with the new claim:

    $ oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'

    Example output

    config.imageregistry.operator.openshift.io/cluster patched

    Over the next several minutes, the configuration is updated.

Verification

To confirm that the registry is using the resources that you defined:

  1. Verify that the PVC claim value is identical to the name that you provided in your PVC definition:

    $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

    Example output

    ...
    status:
        ...
        managementState: Managed
        pvc:
          claim: csi-pvc-imageregistry
    ...

  2. Verify that the status of the PVC is Bound:

    $ oc get pvc -n openshift-image-registry csi-pvc-imageregistry

    Example output

    NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    csi-pvc-imageregistry  Bound    pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5   100Gi      RWO            custom-csi-storageclass  11m

18.2.6. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

Verify that a network with an external router type appears in the network list. If no such network exists, see Creating a default floating IP network and Creating a default provider network.

Important

If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.

The default network ranges are:

Network         Range
machineNetwork  10.0.0.0/16
serviceNetwork  172.30.0.0/16
clusterNetwork  10.128.0.0/14

Warning

If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.2.7. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
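
      For example, as a quick check that the file is found and the credentials work, you can point the OpenStack client at a non-default location and request a token. The file path and cloud name are assumptions:

      $ export OS_CLIENT_CONFIG_FILE=/home/user/config/clouds.yaml  # assumed custom location
      $ export OS_CLOUD=shiftstack                                  # cloud entry name from clouds.yaml
      $ openstack token issue                                       # succeeds if authentication works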

18.2.8. Setting cloud provider options

Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).

For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation.

Procedure

  1. If you have not already generated manifest files for your cluster, generate them by running the following command:

    $ openshift-install --dir <destination_directory> create manifests
  2. In a text editor, open the cloud-provider configuration manifest file. For example:

    $ vi openshift/manifests/cloud-provider-config.yaml
  3. Modify the options based on the cloud configuration specification.

    Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

    #...
    [LoadBalancer]
    use-octavia=true 1
    lb-provider = "amphora" 2
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
    create-monitor = True 4
    monitor-delay = 10s 5
    monitor-timeout = 10s 6
    monitor-max-retries = 1 7
    #...
    1
    This property enables Octavia integration.
    2
    This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
    3
    This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
    4
    This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider.
    5
    This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    6
    This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    7
    This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.
    Important

    Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.

    Important

    You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn".

    Important

    For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.

  4. Save the changes to the file and proceed with installation.

    Tip

    You can update your cloud provider configuration after you run the installer. On a command line, run:

    $ oc edit configmap -n openshift-config cloud-provider-config

    After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.
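
    For example, you can watch the node status until no node reports SchedulingDisabled:

    $ oc get nodes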

18.2.8.1. External load balancers that use pre-defined floating IP addresses

Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers.

If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method.

Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses.
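
A minimal sketch of the pre-created floating IP approach described above: create the floating IP address on the external network, then reference it from the service's loadBalancerIP field. The network name, IP address, and service details below are placeholders:

$ openstack floating ip create --floating-ip-address 203.0.113.10 <external_network>

apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10  # the pre-created floating IP
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080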

Additional resources

18.2.9. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.2.10. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
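
    For example, a simple copy is enough; the backup file name is only a suggestion:

    $ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.backup.yaml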

Additional resources

See Installation configuration parameters section for more information about the available parameters.

18.2.10.1. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
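
After installation, you can inspect the generated proxy configuration:

$ oc get proxy/cluster -o yaml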

18.2.11. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.2.11.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.2. Required parameters
Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

18.2.11.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.3. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.2.11.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.4. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

18.2.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.5. Additional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.

18.2.11.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.6. Optional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

18.2.11.6. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
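
A minimal install-config.yaml excerpt that combines a custom subnet with explicit VIPs might look like the following; the UUID, CIDR, and addresses are placeholders that must come from your own RHOSP environment:

platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf  # UUID of the pre-existing subnet
    apiVIP: 192.0.2.8                                     # outside the DHCP allocation pool
    ingressVIP: 192.0.2.9                                 # outside the DHCP allocation pool
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24                                    # must match the CIDR of the custom subnet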

18.2.11.7. Deploying a cluster with bare metal machines

If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.

Bare-metal compute machines are not supported on clusters that use Kuryr.

Note

Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.

Prerequisites

  • The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
  • Bare metal is available as a RHOSP flavor.
  • The RHOSP network supports both VM and bare metal server attachment.
  • Your network configuration does not rely on a provider network. Provider networks are not supported.
  • If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
  • If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
  • You created an install-config.yaml file as part of the OpenShift Container Platform installation process.

Procedure

  1. In the install-config.yaml file, edit the flavors for machines:

    1. If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor.
    2. Change the value of compute.platform.openstack.type to a bare metal flavor.
    3. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet.

      An example bare metal install-config.yaml file

      controlPlane:
          platform:
            openstack:
              type: <bare_metal_control_plane_flavor> 1
      ...
      
      compute:
        - architecture: amd64
          hyperthreading: Enabled
          name: worker
          platform:
            openstack:
              type: <bare_metal_compute_flavor> 2
          replicas: 3
      ...
      
      platform:
          openstack:
            machinesSubnet: <subnet_UUID> 3
      ...

      1
      If you want to have bare-metal control plane machines, change this value to a bare metal flavor.
      2
      Change this value to a bare metal flavor to use for compute machines.
      3
      If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet.

Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file.

Note

The installer might time out while waiting for bare metal machines to boot.

If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

18.2.11.8. Cluster deployment on RHOSP provider networks

You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.

In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network:

A diagram that depicts four OpenShift workloads on OpenStack. Each workload is connected by its NIC to an external data center by using a provider network.

OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

Note

A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the RHOSP documentation.

18.2.11.8.1. RHOSP provider network requirements for cluster installation

Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:

  • The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
  • The RHOSP networking service has the port security and allowed address pairs extensions enabled.
  • The provider network can be shared with other tenants.

    Tip

    Use the openstack network create command with the --share flag to create a network that can be shared.

  • The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

    Tip

    To create a network for a project that is named "openshift," enter the following command:

    $ openstack network create --project openshift <network_name>

    To create a subnet for a project that is named "openshift," enter the following command:

    $ openstack subnet create --project openshift --network <network_name> --subnet-range <CIDR> <subnet_name>

    To learn more about creating networks on RHOSP, read the provider networks documentation.

    If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

    Important

    Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.

  • Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.

    Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

    $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
  • Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
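
    For example, the following command sketches one way to share an existing provider network with only a single project; the network and project names are placeholders:

    $ openstack network rbac create --type network --action access_as_shared --target-project <project> <provider_network>
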
18.2.11.8.2. Deploying a cluster that has a primary interface on a provider network

You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network.

Prerequisites

  • Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation".

Procedure

  1. In a text editor, open the install-config.yaml file.
  2. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP.
  3. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP.
  4. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
  5. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
Important

The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on a RHOSP provider network

        ...
        platform:
          openstack:
            apiVIP: 192.0.2.13
            ingressVIP: 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

Tip

You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list.

After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.
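
As a hedged sketch, additional networks are listed by UUID in the install-config.yaml file; the UUID below is a placeholder for a network that already exists in RHOSP:

platform:
  openstack:
    additionalNetworkIDs:
    - <additional_network_UUID>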

18.2.11.9. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.2.12. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.2.13. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP) tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.2.13.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc commands. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP
    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
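
For example, the relevant section of the install-config.yaml file might look like the following sketch; the FIP and network values are placeholders for the addresses and external network in your environment:

platform:
  openstack:
    externalNetwork: <external_network>
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>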

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.2.13.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

  • platform.openstack.ingressFloatingIP
  • platform.openstack.apiFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.2.14. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

18.2.15. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

18.2.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

18.2.17. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.2.18. Next steps

18.3. Installing a cluster on OpenStack with Kuryr

In OpenShift Container Platform version 4.10, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster.

18.3.1. Prerequisites

18.3.2. About Kuryr SDN

Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services.

Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances.

Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace:

  • kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object.
  • kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object.

The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs.

Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

  • The RHOSP version is less than 16.
  • The deployment uses UDP services, or a large number of TCP services on few hypervisors.

or

  • The ovn-octavia Octavia driver is disabled.
  • The deployment uses a large number of TCP services on few hypervisors.

18.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr

When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 18.7. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr
Resource                   Value
Floating IP addresses      3 - plus the expected number of Services of LoadBalancer type
Ports                      1500 - 1 needed per Pod
Routers                    1
Subnets                    250 - 1 needed per Namespace/Project
Networks                   250 - 1 needed per Namespace/Project
RAM                        112 GB
vCPUs                      28
Volume storage             275 GB
Instances                  7
Security groups            250 - 1 needed per Service and per NetworkPolicy
Security group rules       1000
Server groups              2 - plus 1 for each additional availability zone in each machine pool
Load balancers             100 - 1 needed per Service
Load balancer listeners    500 - 1 needed per Service-exposed port
Load balancer pools        500 - 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Important

If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects.

Take the following notes into consideration when setting resources:

  • The number of ports that are required is larger than the number of pods. Kuryr uses port pools to keep pre-created ports ready for use by pods, which speeds up pod boot time.
  • Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group.
  • Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota.

    If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a security group with the user project.

  • The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment’s size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them.

    If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows.

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

  • Run RHOSP 13+.
  • Have Overcloud with Octavia.
  • Use Neutron Trunk ports extension.
  • Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid.

18.3.3.1. Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies.

Procedure

  • Increase the quotas for a project by running the following command:

    $ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>

18.3.3.2. Configuring Neutron

Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.

In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies.
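
On director-based deployments, one way to set this is through a custom environment file that you pass to the overcloud deploy command. The parameter name below assumes the TripleO heat templates and is provided only as a sketch:

parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch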

18.3.3.3. Configuring Octavia

Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update.

Note

The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure

  1. If you are using the local registry, create a template to upload the images to the registry. For example:

    (undercloud) $ openstack overcloud container image prepare \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
    --namespace=registry.access.redhat.com/rhosp13 \
    --push-destination=<local-ip-from-undercloud.conf>:8787 \
    --prefix=openstack- \
    --tag-from-label {version}-{product-version} \
    --output-env-file=/home/stack/templates/overcloud_images.yaml \
    --output-images-file /home/stack/local_registry_images.yaml
  2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

    ...
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
      push_destination: <local-ip-from-undercloud.conf>:8787
    Note

    The Octavia container versions vary depending upon the specific RHOSP release installed.

  3. Pull the container images from registry.redhat.io to the Undercloud node:

    (undercloud) $ sudo openstack overcloud container image upload \
      --config-file  /home/stack/local_registry_images.yaml \
      --verbose

    This may take some time depending on the speed of your network and Undercloud disk.

  4. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase its listeners' default timeouts for the connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command:

    (undercloud) $ cat octavia_timeouts.yaml
    parameter_defaults:
      OctaviaTimeoutClientData: 1200000
      OctaviaTimeoutMemberData: 1200000
    Note

    This is not needed for RHOSP 13.0.13+.

  5. Install or update your Overcloud environment with Octavia:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
      -e octavia_timeouts.yaml
    Note

    This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director.

    Note

    When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN.

  6. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project.

    • To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project.

      This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation.

      Note

      This task is unnecessary in RHOSP version 13.0.13 or later.

      Octavia implements a new ACL API that restricts access to the load balancer VIPs.

      1. Get the project ID:

        $ openstack project show <project>

        Example output

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description |                                  |
        | domain_id   | default                          |
        | enabled     | True                             |
        | id          | PROJECT_ID                       |
        | is_domain   | False                            |
        | name        | *<project>*                      |
        | parent_id   | default                          |
        | tags        | []                               |
        +-------------+----------------------------------+

      2. Add the project ID to octavia.conf for the controllers.

        1. Source the stackrc file:

          $ source stackrc  # Undercloud credentials
        2. List the Overcloud controllers:

          $ openstack server list

          Example output

          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | ID                                   | Name         | Status | Networks              | Image          | Flavor     |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
          | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+

        3. SSH into the controller(s).

          $ ssh heat-admin@192.168.24.8
        4. Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user’s account.

          # List of project IDs that are allowed to have Load balancer security groups
          # belonging to them.
          amp_secgroup_allowed_projects = PROJECT_ID
      3. Restart the Octavia worker so the new configuration loads.

        controller-0$ sudo docker restart octavia_worker
Note

Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 and later support UDP.

18.3.3.3.1. The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list

Example output

+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift Container Platform on RHOSP deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:

  • Decreased resource requirements. Kuryr does not require a load balancer VM for each service.
  • Reduced network latency.
  • Increased service creation speed by using OpenFlow rules instead of a VM for each service.
  • Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16.

18.3.3.4. Known limitations of installing with Kuryr

Using OpenShift Container Platform with Kuryr SDN has several known limitations.

RHOSP general limitations

Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments:

  • Service objects with the NodePort type are not supported.
  • Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods.
  • If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.
  • Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting.
RHOSP version limitations

Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version.

  • RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources.

    Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP.

  • Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported.
  • Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported.
  • Kuryr SDN does not support automatic unidling by a service.
RHOSP environment limitations

There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.

To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
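
For example, assuming a Go application in the current directory, a build that keeps CGO enabled might look like the following; the output name is only a placeholder:

$ CGO_ENABLED=1 go build -o myapp .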

In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.

Note

musl-based containers, including Alpine-based containers, do not support the use-vc option.

RHOSP upgrade limitations

As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways:

  • Upgrade each VM by triggering a load balancer failover.
  • Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.

If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features.

Important

If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service supports UDP load balancing by default.

The recreation causes approximately one minute of downtime for the DNS service.

18.3.3.5. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.3.3.6. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

18.3.3.7. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.3.4. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.3.5. Enabling Swift on RHOSP

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

Important

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

Prerequisites

  • You have a RHOSP administrator account on the target environment.
  • The Swift service is installed.
  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on RHOSP:

  1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.
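
Optionally, you can confirm that the role assignment took effect; the following check is a sketch that assumes the same user and project names:

$ openstack role assignment list --user <user> --project <project> --names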

18.3.6. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Important

If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.

The default network ranges are:

Network          Range
machineNetwork   10.0.0.0/16
serviceNetwork   172.30.0.0/16
clusterNetwork   10.128.0.0/14
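
For example, if the external network overlaps the default machineNetwork range, you can set a different block in the install-config.yaml file before you start the installation; the CIDR shown is only a placeholder:

networking:
  machineNetwork:
  - cidr: <non_overlapping_CIDR>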

Warning

If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.3.7. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
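
      For example, to point the installation program at a clouds.yaml file in a non-default location, you can set the environment variable before you run the installer; the path is a placeholder:

      $ export OS_CLIENT_CONFIG_FILE=<path_to_clouds.yaml>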

18.3.8. Setting cloud provider options

Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).

For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation.

Procedure

  1. If you have not already generated manifest files for your cluster, generate them by running the following command:

    $ openshift-install --dir <destination_directory> create manifests
  2. In a text editor, open the cloud-provider configuration manifest file. For example:

    $ vi openshift/manifests/cloud-provider-config.yaml
  3. Modify the options based on the cloud configuration specification.

    Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

    #...
    [LoadBalancer]
    use-octavia=true 1
    lb-provider = "amphora" 2
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
    create-monitor = True 4
    monitor-delay = 10s 5
    monitor-timeout = 10s 6
    monitor-max-retries = 1 7
    #...
    1
    This property enables Octavia integration.
    2
    This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
    3
    This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
    4
    This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider.
    5
    This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    6
    This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    7
    This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.
    Important

    Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.

    Important

    You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn".

    Important

    For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.

  4. Save the changes to the file and proceed with installation.

    Tip

    You can update your cloud provider configuration after you run the installer. On a command line, run:

    $ oc edit configmap -n openshift-config cloud-provider-config

    After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.

18.3.8.1. External load balancers that use pre-defined floating IP addresses

Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers.

If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method.
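
As a sketch of what specifying a pre-created address in your service specification can look like, the following Service manifest pins a floating IP by using the loadBalancerIP field; the name, selector, ports, and address are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  loadBalancerIP: <pre_created_floating_IP>
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080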

Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses.
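
One hedged example of such a change is to relax the default Neutron policy that restricts creating floating IPs with a specific address; the exact policy file location and syntax depend on your RHOSP deployment and release:

# Sketch of a Neutron policy override; verify the policy target for your RHOSP release
"create_floatingip:floating_ip_address": "rule:admin_or_owner"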

Additional resources

18.3.9. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.3.10. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
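
    For example, a simple copy alongside the original is enough; the backup file name is only a suggestion:

    $ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup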

18.3.10.1. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Note

Kuryr installations default to HTTP proxies.

Prerequisites

  • For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:

    $ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
  • The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates.
  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
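
After installation, you can inspect the generated Proxy object. For example, the following command is a post-installation check; the output depends on your settings:

$ oc get proxy cluster -o yaml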

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

18.3.11. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.3.11.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.8. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
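
As an illustration only, the required parameters fit together in an install-config.yaml fragment like the following; the base domain, cluster name, cloud name, and pull secret are placeholders, and the platform stanza is abbreviated (see the RHOSP parameter tables later in this section):

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  openstack:
    cloud: mycloud
pullSecret: '{"auths": ...}'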

18.3.11.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.9. Network parameters
ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
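
Taken together, these parameters form the networking stanza of install-config.yaml. The following fragment is an illustrative sketch that combines the default values described in this table:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16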

18.3.11.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.10. Optional parameters
ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
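
As a hedged illustration, the following fragment combines several of these optional parameters; adjust or omit values to suit your environment, and note that the SSH key shown is a placeholder:

compute:
- name: worker
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  hyperthreading: Enabled
  replicas: 3
fips: false
sshKey: ssh-ed25519 AAAA...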

18.3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.11. Additional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.
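
For example, the following hedged fragment sets these values; because computeFlavor is deprecated, the flavor is given as the type key of platform.openstack.defaultMachinePlatform, and the cloud and network names are placeholders:

platform:
  openstack:
    cloud: MyCloud
    externalNetwork: external
    defaultMachinePlatform:
      type: m1.xlarge
controlPlane:
  platform:
    openstack:
      rootVolume:
        size: 30
        type: performance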

18.3.11.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.12. Optional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
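
The following fragment is a hedged sketch of a few of these optional values; the zones, floating IP addresses, and DNS servers are placeholders, and platform.openstack.externalNetwork is included because the floating IP properties require it:

compute:
- name: worker
  platform:
    openstack:
      zones: ["zone-1", "zone-2"]
      serverGroupPolicy: soft-anti-affinity
platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: 128.0.0.1
    ingressFloatingIP: 128.0.0.2
    externalDNS: ["8.8.8.8", "192.168.1.12"]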

18.3.11.6. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.
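
As a hedged check of the first two requirements, you can inspect the subnet with the RHOSP CLI; the UUID is a placeholder, and the enable_dhcp and cidr columns carry the relevant values:

$ openstack subnet show fa806b2f-ac49-4bce-b9db-124bc64209bf -c enable_dhcp -c cidr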

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
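
For example, the following hedged fragment overrides the defaults with addresses outside the DHCP allocation pool of a 192.0.2.0/24 machine network; the addresses are placeholders:

platform:
  openstack:
    apiVIP: 192.0.2.200
    ingressVIP: 192.0.2.201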

18.3.11.7. Sample customized install-config.yaml file for RHOSP with Kuryr

To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 1
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true 2
    octaviaSupport: true 3
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1
The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts.
2 3
Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not work properly. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services.

18.3.11.8. Cluster deployment on RHOSP provider networks

You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.

In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network:

A diagram that depicts four OpenShift workloads on OpenStack. Each workload is connected by its NIC to an external data center by using a provider network.

OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

Note

A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the RHOSP documentation.

18.3.11.8.1. RHOSP provider network requirements for cluster installation

Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:

  • The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
  • The RHOSP networking service has the port security and allowed address pairs extensions enabled.
  • The provider network can be shared with other tenants.

    Tip

    Use the openstack network create command with the --share flag to create a network that can be shared.

  • The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

    Tip

    To create a network for a project that is named "openshift," enter the following command:

    $ openstack network create --project openshift

    To create a subnet for a project that is named "openshift," enter the following command:

    $ openstack subnet create --project openshift

    To learn more about creating networks on RHOSP, read the provider networks documentation.

    If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

    Important

    Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.

  • Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.

    Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

    $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
  • Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
18.3.11.8.2. Deploying a cluster that has a primary interface on a provider network

You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network.

Prerequisites

  • Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described in "RHOSP provider network requirements for cluster installation".

Procedure

  1. In a text editor, open the install-config.yaml file.
  2. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP.
  3. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP.
  4. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
  5. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
Important

The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on a RHOSP provider network

        ...
        platform:
          openstack:
            apiVIP: 192.0.2.13
            ingressVIP: 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

Tip

You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list.

After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.
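
For example, the following hedged fragment attaches one extra network to the compute machine pool by using the compute.platform.openstack.additionalNetworkIDs property from the optional parameters table; the UUID is a placeholder:

compute:
- name: worker
  platform:
    openstack:
      additionalNetworkIDs:
      - fa806b2f-ac49-4bce-b9db-124bc64209bf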

18.3.11.9. Kuryr ports pools

A Kuryr ports pool maintains a number of ports on standby for pod creation.

Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.

The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.

Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.

Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:

  • The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.
  • The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
  • The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting.

    If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.

  • The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.

18.3.11.10. Adjusting Kuryr ports pools during installation

During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation.

Prerequisites

  • Create and modify the install-config.yaml file.

Procedure

  1. From a command line, create the manifest files:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ touch <installation_directory>/manifests/cluster-network-03-config.yml 1
    1
    For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

    After creating the file, several network configuration files are in the manifests/ directory, as shown:

    $ ls <installation_directory>/manifests/cluster-network-*

    Example output

    cluster-network-01-crd.yml
    cluster-network-02-config.yml
    cluster-network-03-config.yml

  3. Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want.
  4. Edit the settings to meet your requirements. The following file is provided as an example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      defaultNetwork:
        type: Kuryr
        kuryrConfig:
          enablePortPoolsPrepopulation: false 1
          poolMinPorts: 1 2
          poolBatchPorts: 3 3
          poolMaxPorts: 5 4
          openStackServiceNetwork: 172.30.0.0/15 5
    1
    Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
    2
    Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
    3
    poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
    4
    If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
    5
    The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia’s LoadBalancers.

    If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork, and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork.

    The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter.

    If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1.

  5. Save the cluster-network-03-config.yml file, and exit the text editor.
  6. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster.

18.3.12. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.3.13. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.3.13.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc commands. You can access the user applications by using the additional entries that point to the <application_floating_ip>. This action makes the API and applications accessible only to you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP
    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
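
For example, the following hedged fragment records the two FIPs and the external network; the values are placeholders:

platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: 192.0.2.10
    ingressFloatingIP: 192.0.2.11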

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.3.13.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

  • platform.openstack.ingressFloatingIP
  • platform.openstack.apiFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.3.14. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

18.3.15. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

18.3.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

18.3.17. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.3.18. Next steps

18.4. Installing a cluster on OpenStack that supports SR-IOV-connected compute machines

In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that can use compute machines with single-root I/O virtualization (SR-IOV) technology.

18.4.1. Prerequisites

  • Review details about the OpenShift Container Platform installation and update processes.

    • Verify that OpenShift Container Platform 4.10 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix.
  • Verify that your network configuration does not rely on a provider network. Provider networks are not supported.
  • Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage.
  • Have the metadata service enabled in RHOSP.

18.4.2. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.13. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
ResourceValue

Floating IP addresses

3

Ports

15

Routers

1

Subnets

1

RAM

88 GB

vCPUs

22

Volume storage

275 GB

Instances

7

Security groups

3

Security group rules

60

Server groups

2 - plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
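
To review your current project quotas before you begin, you can run a command like the following; this is a sketch, and the exact fields in the output vary by RHOSP version:

$ openstack quota show <project>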

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.4.2.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.4.2.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages.

Important

SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure.

Additional resources

18.4.2.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.4.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.4.4. Enabling Swift on RHOSP

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

Important

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

Prerequisites

  • You have a RHOSP administrator account on the target environment.
  • The Swift service is installed.
  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on RHOSP:

  1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.

18.4.5. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.4.6. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
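
For example, if you keep the clouds.yaml file outside of the default search locations, you might point the tooling at it and select the shiftstack cloud entry from the earlier example by exporting the following environment variables. The file path is illustrative; the cloud entry that the cluster uses is ultimately set by the platform.openstack.cloud parameter in the install-config.yaml file:

$ export OS_CLIENT_CONFIG_FILE=/home/<user>/openstack/clouds.yaml
$ export OS_CLOUD=shiftstack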

18.4.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.4.8. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Enter a descriptive name for your cluster.
      3. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

18.4.8.1. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
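
After the cluster is installed, you can inspect the resulting cluster-wide proxy configuration. For example:

$ oc get proxy/cluster -o yaml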

18.4.9. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.4.9.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.14. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

18.4.9.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.15. Network parameters
ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.4.9.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.16. Optional parameters
ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

18.4.9.4. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
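
The following install-config.yaml excerpt is a sketch of a custom subnet configuration with overridden VIPs. The UUID, CIDR, and addresses are placeholders; choose VIP values that are inside the subnet CIDR but outside of its DHCP allocation pool:

networking:
  machineNetwork:
  - cidr: <custom_subnet_CIDR>
platform:
  openstack:
    machinesSubnet: <custom_subnet_UUID>
    apiVIP: <api_VIP>
    ingressVIP: <ingress_VIP>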

18.4.9.5. Deploying a cluster with bare metal machines

If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.

Bare-metal compute machines are not supported on clusters that use Kuryr.

Note

Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.

Prerequisites

  • The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
  • Bare metal is available as a RHOSP flavor.
  • The RHOSP network supports both VM and bare metal server attachment.
  • Your network configuration does not rely on a provider network. Provider networks are not supported.
  • If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
  • If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
  • You created an inventory.yaml file as part of the OpenShift Container Platform installation process.

Procedure

  1. In the inventory.yaml file, edit the flavors for machines:

    1. If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor.
    2. Change the value of os_flavor_worker to a bare metal flavor.

      An example bare metal inventory.yaml file

      all:
        hosts:
          localhost:
            ansible_connection: local
            ansible_python_interpreter: "{{ansible_playbook_python}}"
      
            # User-provided values
            os_subnet_range: '10.0.0.0/16'
            os_flavor_master: 'my-bare-metal-flavor' 1
            os_flavor_worker: 'my-bare-metal-flavor' 2
            os_image_rhcos: 'rhcos'
            os_external_network: 'external'
      ...

      1
      If you want to have bare-metal control plane machines, change this value to a bare metal flavor.
      2
      Change this value to a bare metal flavor to use for compute machines.

Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file.

Note

The installer may time out while waiting for bare metal machines to boot.

If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

18.4.9.6. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.4.10. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.4.11. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.4.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP
    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
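
For example, the relevant install-config.yaml excerpt might look like the following, with the FIPs that you created substituted for the placeholders:

platform:
  openstack:
    externalNetwork: <external_network>
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>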

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.4.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the install-config.yaml file, do not define the platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP parameters.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
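
For example, /etc/hosts entries that follow the patterns above might look like the following; replace the placeholders with the port IP addresses and names for your cluster:

<api_port_IP>     api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>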

18.4.12. Creating SR-IOV networks for compute machines

If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV), you can provision SR-IOV networks that compute machines run on.

Note

The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required.

Prerequisites

  • Your cluster supports SR-IOV.

    Note

    If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation.

  • You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.

Procedure

  1. On a command line, create a radio RHOSP network:

    $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external
  2. Create an uplink RHOSP network:

    $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external
  3. Create a subnet for the radio network:

    $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio
  4. Create a subnet for the uplink network:

    $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink
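
Optionally, verify that the networks and subnets were created. For example:

$ openstack subnet list --network radio
$ openstack subnet list --network uplink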

18.4.13. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
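
If you need the kubeadmin credentials again later, the installation program also stores them in the installation directory. For example, to print the kubeadmin password:

$ cat <installation_directory>/auth/kubeadmin-password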

18.4.14. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

18.4.15. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

The cluster is operational. However, before you can add SR-IOV compute machines, you must perform additional tasks.

18.4.16. Preparing a cluster that runs on RHOSP for SR-IOV

Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver.

18.4.16.1. Enabling the RHOSP metadata service as a mountable drive

You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive.

The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources.

Procedure

  1. Create a machine config file from the following template:

    A mountable metadata service machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 20-mount-config 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
            - name: create-mountpoint-var-config.service
              enabled: true
              contents: |
                [Unit]
                Description=Create mountpoint /var/config
                Before=kubelet.service
    
                [Service]
                ExecStart=/bin/mkdir -p /var/config
    
                [Install]
                WantedBy=var-config.mount
    
            - name: var-config.mount
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                Where=/var/config
                What=/dev/disk/by-label/config-2
                [Install]
                WantedBy=local-fs.target

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml
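
Optionally, watch the worker machine config pool to confirm that the new machine config has rolled out to the nodes. For example:

$ oc get machineconfigpool worker

When the UPDATED column reports True, the configuration is applied.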

18.4.16.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver

You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature.

Procedure

  1. Create a machine config file from the following template:

    A No-IOMMU VFIO machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 99-vfio-noiommu 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - path: /etc/modprobe.d/vfio-noiommu.conf
            mode: 0644
            contents:
              source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml

The cluster is installed and prepared for SR-IOV configuration. Complete the post-installation SR-IOV tasks that are listed in the "Next steps" section.

18.4.17. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.4.18. Next steps

18.5. Installing a cluster on OpenStack that supports OVS-DPDK-connected compute machines

If your Red Hat OpenStack Platform (RHOSP) deployment has Open vSwitch with the Data Plane Development Kit (OVS-DPDK) enabled, you can install an OpenShift Container Platform cluster on it. Clusters that run on such RHOSP deployments use OVS-DPDK features by providing access to poll mode drivers.

18.5.1. Prerequisites

18.5.2. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.17. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
ResourceValue

Floating IP addresses

3

Ports

15

Routers

1

Subnets

1

RAM

88 GB

vCPUs

22

Volume storage

275 GB

Instances

7

Security groups

3

Security group rules

60

Server groups

2 - plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.5.2.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.5.2.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages.

Important

SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure.

Additional resources

18.5.2.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.5.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.5.4. Enabling Swift on RHOSP

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

Important

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

Prerequisites

  • You have a RHOSP administrator account on the target environment.
  • The Swift service is installed.
  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on RHOSP:

  1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.

18.5.5. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.5.6. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.

18.5.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.5.8. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Enter a descriptive name for your cluster.
      3. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
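
For example, one way to keep a copy before the installation program consumes the file (the backup file name is arbitrary):

$ cp install-config.yaml install-config.yaml.bak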

18.5.8.1. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
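
After installation, you can inspect the resulting configuration. For example, the following command prints the cluster Proxy object:

$ oc get proxy/cluster -o yaml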

18.5.9. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.5.9.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.18. Required parameters
Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

18.5.9.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.19. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.5.9.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.20. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

18.5.9.4. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
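
For illustration, an install-config.yaml fragment that uses a custom subnet might look like the following. The subnet UUID and the CIDR are placeholders that you replace with values from your RHOSP environment:

platform:
  openstack:
    # UUID of the pre-existing RHOSP subnet to use as the primary subnet
    machinesSubnet: <subnet_UUID>
networking:
  machineNetwork:
  # Must match the CIDR of the subnet that machinesSubnet references
  - cidr: 10.0.0.0/16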

18.5.9.5. Deploying a cluster with bare metal machines

If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.

Bare-metal compute machines are not supported on clusters that use Kuryr.

Note

Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.

Prerequisites

  • The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
  • Bare metal is available as a RHOSP flavor.
  • The RHOSP network supports both VM and bare metal server attachment.
  • Your network configuration does not rely on a provider network. Provider networks are not supported.
  • If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
  • If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
  • You created an inventory.yaml file as part of the OpenShift Container Platform installation process.

Procedure

  1. In the inventory.yaml file, edit the flavors for machines:

    1. If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor.
    2. Change the value of os_flavor_worker to a bare metal flavor.

      An example bare metal inventory.yaml file

      all:
        hosts:
          localhost:
            ansible_connection: local
            ansible_python_interpreter: "{{ansible_playbook_python}}"
      
            # User-provided values
            os_subnet_range: '10.0.0.0/16'
            os_flavor_master: 'my-bare-metal-flavor' 1
            os_flavor_worker: 'my-bare-metal-flavor' 2
            os_image_rhcos: 'rhcos'
            os_external_network: 'external'
      ...

      1
      If you want to have bare-metal control plane machines, change this value to a bare metal flavor.
      2
      Change this value to a bare metal flavor to use for compute machines.

Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file.

Note

The installer may time out while waiting for bare metal machines to boot.

If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

18.5.9.6. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.5.10. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.5.11. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.5.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries that point to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP
    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
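
For example, the relevant install-config.yaml fragment might look like the following. The floating IP values and the network name are placeholders for values from your environment:

platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>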

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.5.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters: platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
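
For example, /etc/hosts entries that point at the fixed IP addresses of the API and Ingress ports might look like the following:

<api_port_IP> api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>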

18.5.12. Creating SR-IOV networks for compute machines

If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV), you can provision SR-IOV networks that compute machines run on.

Note

The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required.

Prerequisites

  • Your cluster supports SR-IOV.

    Note

    If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation.

  • You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.

Procedure

  1. On a command line, create a radio RHOSP network:

    $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external
  2. Create an uplink RHOSP network:

    $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external
  3. Create a subnet for the radio network:

    $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio
  4. Create a subnet for the uplink network:

    $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink
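
Optionally, you can confirm that the networks and subnets exist before you continue. For example:

$ openstack network show radio
$ openstack subnet show uplink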

18.5.13. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

18.5.14. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A
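
If you prefer to block until all cluster Operators report availability rather than polling manually, one option is the oc wait command. For example:

$ oc wait clusteroperators --all --for=condition=Available --timeout=30m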

18.5.15. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

The cluster is operational. However, before you can add OVS-DPDK compute machines, you must perform additional tasks.

18.5.16. Enabling the RHOSP metadata service as a mountable drive

You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive.

The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources.

Procedure

  1. Create a machine config file from the following template:

    A mountable metadata service machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 20-mount-config 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
            - name: create-mountpoint-var-config.service
              enabled: true
              contents: |
                [Unit]
                Description=Create mountpoint /var/config
                Before=kubelet.service
    
                [Service]
                ExecStart=/bin/mkdir -p /var/config
    
                [Install]
                WantedBy=var-config.mount
    
            - name: var-config.mount
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                Where=/var/config
                What=/dev/disk/by-label/config-2
                [Install]
                WantedBy=local-fs.target

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml
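
You can verify that the machine config was created and that the worker machine config pool is rolling it out. For example, substituting the name if you chose a different one:

$ oc get machineconfig 20-mount-config
$ oc get machineconfigpool worker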

18.5.17. Enabling the No-IOMMU feature for the RHOSP VFIO driver

You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature.

Procedure

  1. Create a machine config file from the following template:

    A No-IOMMU VFIO machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 99-vfio-noiommu 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - path: /etc/modprobe.d/vfio-noiommu.conf
            mode: 0644
            contents:
              source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml
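
The base64-encoded file contents in this machine config decode to a single modprobe option line, options vfio enable_unsafe_noiommu_mode=1, which is written to /etc/modprobe.d/vfio-noiommu.conf on each worker. You can confirm this locally, for example:

$ echo 'b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK' | base64 -d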

18.5.18. Binding the vfio-pci kernel driver to NICs

Compute machines that connect to a virtual function I/O (VFIO) network require the vfio-pci kernel driver to be bound to the ports that are attached to a configured network. Create a machine config for workers that attach to this VFIO network.

Procedure

  1. From a command line, retrieve VFIO network UUIDs:

    $ openstack network show <VFIO_network_name> -f value -c id
  2. Create a machine config on your cluster from the following template:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-vhostuser-bind
    spec:
      config:
        ignition:
          version: 2.2.0
        systemd:
          units:
          - name: vhostuser-bind.service
            enabled: true
            contents: |
              [Unit]
              Description=Vhostuser Interface vfio-pci Bind
              Wants=network-online.target
              After=network-online.target ignition-firstboot-complete.service
              [Service]
              Type=oneshot
              EnvironmentFile=/etc/vhostuser-bind.conf
              ExecStart=/usr/local/bin/vhostuser $ARG
              [Install]
              WantedBy=multi-user.target
        storage:
          files:
          - contents:
              inline: vfio-pci
            filesystem: root
            mode: 0644
            path: /etc/modules-load.d/vfio-pci.conf
          - contents:
              inline: |
                #!/bin/bash
                set -e
                if [[ "$#" -lt 1 ]]; then
                    echo "Nework ID not provided, nothing to do"
                    exit
                fi
    
                source /etc/vhostuser-bind.conf
    
                NW_DATA="/var/config/openstack/latest/network_data.json"
                if [ ! -f ${NW_DATA} ]; then
                    echo "Network data file not found, trying to download it from nova metadata"
                    if ! curl http://169.254.169.254/openstack/latest/network_data.json > /tmp/network_data.json; then
                        echo "Failed to download network data file"
                        exit 1
                    fi
                    NW_DATA="/tmp/network_data.json"
                fi
                function parseNetwork() {
                    local nwid=$1
                    local pcis=()
                    echo "Network ID is $nwid"
                    links=$(jq '.networks[] | select(.network_id == "'$nwid'") | .link' $NW_DATA)
                    if [ ${#links} -gt 0 ]; then
                        for link in $links; do
                            echo "Link Name: $link"
                            mac=$(jq -r '.links[] | select(.id == '$link') | .ethernet_mac_address'  $NW_DATA)
                            if [ -n $mac ]; then
                                pci=$(bindDriver $mac)
                                pci_ret=$?
                                if [[ "$pci_ret" -eq 0 ]]; then
                                    echo "$pci bind succesful"
                                fi
                            fi
                        done
                    fi
                }
    
                function bindDriver() {
                    local mac=$1
                    for file in /sys/class/net/*; do
                        dev_mac=$(cat $file/address)
                        if [[ "$mac" == "$dev_mac" ]]; then
                            name=${file##*\/}
                            bus_str=$(ethtool -i $name | grep bus)
                            dev_t=${bus_str#*:}
                            dev=${dev_t#[[:space:]]}
    
                            echo $dev
    
                            devlink="/sys/bus/pci/devices/$dev"
                            syspath=$(realpath "$devlink")
                            if [ ! -f "$syspath/driver/unbind" ]; then
                                echo "File $syspath/driver/unbind not found"
                                return 1
                            fi
                            if ! echo "$dev">"$syspath/driver/unbind"; then
                                return 1
                            fi
    
                            if [ ! -f "$syspath/driver_override" ]; then
                                echo "File $syspath/driver_override not found"
                                return 1
                            fi
                            if ! echo "vfio-pci">"$syspath/driver_override"; then
                                return 1
                            fi
    
                            if [ ! -f "/sys/bus/pci/drivers/vfio-pci/bind" ]; then
                                echo "File /sys/bus/pci/drivers/vfio-pci/bind not found"
                                return 1
                            fi
                            if ! echo "$dev">"/sys/bus/pci/drivers/vfio-pci/bind"; then
                              return 1
                            fi
                            return 0
                        fi
                    done
                    return 1
                }
    
                for nwid in "$@"; do
                    parseNetwork $nwid
                done
            filesystem: root
            mode: 0744
            path: /usr/local/bin/vhostuser
          - contents:
              inline: |
                ARG="be22563c-041e-44a0-9cbd-aa391b439a39,ec200105-fb85-4181-a6af-35816da6baf7" 1
            filesystem: root
            mode: 0644
            path: /etc/vhostuser-bind.conf
    1
    Replace this value with a comma-separated list of VFIO network UUIDs.

    On boot for machines that are part of this set, the MAC addresses of ports are translated into PCI bus IDs. The vfio-pci module is bound to any port that is associated with a network that is identified by the RHOSP network ID.
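
    Because the template is a MachineConfig object, you can apply it in the same way as the earlier machine configs. For example, assuming you saved the template as 99-vhostuser-bind.yaml:

    $ oc apply -f 99-vhostuser-bind.yaml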

Verification

  1. On a compute node, from a command line, retrieve the name of the node by entering:

    $ oc get nodes
  2. Create a shell to debug the node:

    $ oc debug node/<node_name>
  3. Change the root directory for the current running process:

    $ chroot /host
  4. Enter the following command to list the kernel drivers that are handling each device on your machine:

    $ lspci -k

    Example output

    00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device
    Subsystem: Red Hat, Inc. Device 0001
    Kernel driver in use: vfio-pci

    In the output of the command, VFIO ethernet controllers use the vfio-pci kernel driver.

18.5.19. Exposing the host-device interface to the pod

You can use the Container Network Interface (CNI) plugin to expose an interface that is on the host to the pod. The plugin moves the interface from the namespace of the host network to the namespace of the pod. The pod then has direct control of the interface.

Procedure

  • Create an additional network attachment with the host-device CNI plugin by using the following object as an example:

        apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
         name: vhostuser1
         namespace: default
        spec:
         config: '{ "cniVersion": "0.3.1", "name": "hostonly", "type": "host-device", "pciBusId": "0000:00:04.0", "ipam": { } }'

Verification

  • From a command line, run the following command to see if networks are created in the namespace:

    $ oc -n <your_cnf_namespace> get net-attach-def
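
To attach a pod to the additional network, reference the NetworkAttachmentDefinition by name in the pod's k8s.v1.cni.cncf.io/networks annotation. A minimal sketch, assuming the vhostuser1 attachment from the earlier example; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: vhostuser1-consumer
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: vhostuser1
spec:
  containers:
  - name: app
    image: <your_application_image>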

The cluster is installed and prepared for configuration. You must now perform the OVS-DPDK configuration tasks in Next steps.

18.5.20. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

18.5.21. Additional resources

18.5.22. Next steps

18.6. Installing a cluster on OpenStack on your own infrastructure

In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure.

Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process.

18.6.1. Prerequisites

18.6.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.21. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource | Value

Floating IP addresses

3

Ports

15

Routers

1

Subnets

1

RAM

88 GB

vCPUs

22

Volume storage

275 GB

Instances

7

Security groups

3

Security group rules

60

Server groups

2 - plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
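
To review your project's current quotas against these recommendations before you install, you can run, for example:

$ openstack quota show <project>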

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.6.3.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.6.3.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

18.6.3.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.6.4. Downloading playbook dependencies

The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them.

Note

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

  • Python 3 is installed on your machine.

Procedure

  1. On a command line, add the repositories:

    1. Register with Red Hat Subscription Manager:

      $ sudo subscription-manager register # If not done already
    2. Pull the latest subscription data:

      $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. Disable the current repositories:

      $ sudo subscription-manager repos --disable=* # If not done already
    4. Add the required repositories:

      $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  2. Install the modules:

    $ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr
  3. Ensure that the python command points to python3:

    $ sudo alternatives --set python /usr/bin/python3
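
Optionally, confirm that the Python modules that the playbooks rely on are importable. For example:

$ python3 -c 'import openstack, netaddr; print("modules OK")'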

18.6.5. Downloading the installation playbooks

Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure.

Prerequisites

  • The curl command-line tool is available on your machine.

Procedure

  • To download the playbooks to your working directory, run the following script from a command line:

    $ xargs -n 1 curl -O <<< '
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine.

Important

During the installation process, you can modify the playbooks to configure your deployment.

Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP.

Important

You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml, control-plane.yaml, network.yaml, and security-groups.yaml files to the corresponding playbooks that are prefixed with down-. For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail.

18.6.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.6.7. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.
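
For example, after the cluster is running and the private key is loaded in your SSH agent, you can connect to a node as the core user. The node address shown here is a placeholder:

$ ssh core@<node_IP_or_hostname>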

18.6.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image

The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

  • The RHOSP CLI is installed.

Procedure

  1. Log in to the Red Hat Customer Portal’s Product Downloads page.
  2. Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.

  3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
  4. Decompress the image.

    Note

    You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

    $ file <name_of_downloaded_file>
  5. From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:

    $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos
    Important

    Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.

    Warning

    If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

After you upload the image to RHOSP, it is usable in the installation process.
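
If your environment requires the .raw format, you can convert the image before you upload it. A minimal sketch, assuming qemu-img is installed and the file name from the previous step:

$ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw
$ openstack image create --container-format=bare --disk-format=raw --file rhcos-${RHCOS_VERSION}-openstack.raw rhcos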

18.6.9. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

Verify that a network with an external router type appears in the network list. If no such network appears, see Creating a default floating IP network and Creating a default provider network.
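
If you administer RHOSP yourself and need to create such a network, the following is a minimal sketch that assumes a flat provider network mapped to a physical network named datacentre; the names and address ranges are placeholders:

$ openstack network create --external --provider-network-type flat --provider-physical-network datacentre public_network
$ openstack subnet create --network public_network --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 --no-dhcp public_subnet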

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.6.10. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.6.10.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

    $ openstack floating ip create --description "bootstrap machine" <external_network>
  4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant local access to the web console and the monitoring interface of your cluster. You can also use the kubectl or oc commands. You can access user applications by using the additional entries that point to the <application_floating_ip>. This action makes the API and applications accessible only to you, which is not suitable for production deployment, but does allow installation for development and testing.

  5. Add the FIPs to the inventory.yaml file as the values of the following variables:

    • os_api_fip
    • os_bootstrap_fip
    • os_ingress_fip

If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file.

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
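
For example, you can capture the addresses in shell variables as you create them, and then copy the values into your DNS records and the inventory.yaml file. A minimal sketch, assuming the output options of the openstack CLI shown here:

$ export API_FIP=$(openstack floating ip create --description "API <cluster_name>.<base_domain>" -f value -c floating_ip_address <external_network>)
$ export INGRESS_FIP=$(openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" -f value -c floating_ip_address <external_network>)
$ echo "os_api_fip: '${API_FIP}'"
$ echo "os_ingress_fip: '${INGRESS_FIP}'"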

18.6.10.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

  • os_api_fip
  • os_bootstrap_fip
  • os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own.

If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.6.11. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
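
For example, you can point the openstack CLI and other tools that use the OpenStack SDK at a specific file and cloud entry by exporting environment variables. A minimal sketch, assuming the shiftstack entry shown above and a hypothetical file path:

$ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml
$ export OS_CLOUD=shiftstack
$ openstack network list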

18.6.12. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

18.6.13. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.6.13.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.22. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

18.6.13.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.23. Network parameters
ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.6.13.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.24. Optional parameters
ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

18.6.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.25. Additional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.

18.6.13.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.26. Optional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
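
For reference, a minimal sketch of how several of these optional parameters might appear together in the install-config.yaml file; all values are placeholders:

platform:
  openstack:
    defaultMachinePlatform:
      type: m1.xlarge
      rootVolume:
        size: 30
        type: performance
    clusterOSImageProperties:
      hw_scsi_model: virtio-scsi
      hw_disk_bus: scsi
    externalDNS:
      - "203.0.113.53"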

18.6.13.6. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
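
Before you set platform.openstack.machinesSubnet, you can confirm the subnet UUID, CIDR block, and DHCP status with the RHOSP CLI, for example:

$ openstack subnet show <subnet_name_or_ID> -c id -c cidr -c enable_dhcp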

18.6.13.7. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.6.13.8. Setting a custom subnet for machines

The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
      1
      Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24.
    • To set the value manually, open the file and set the cidr value under networking.machineNetwork to a block that matches your intended Neutron subnet.

18.6.13.9. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["compute"][0]["replicas"] = 0;
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    • To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0.

18.6.13.10. Cluster deployment on RHOSP provider networks

You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.

In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network:

Figure: Four OpenShift Container Platform workloads on RHOSP, each connected by its NIC to an external data center by using a provider network.

OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

Note

A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the RHOSP documentation.

18.6.13.10.1. RHOSP provider network requirements for cluster installation

Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:

  • The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
  • The RHOSP networking service has the port security and allowed address pairs extensions enabled.
  • The provider network can be shared with other tenants.

    Tip

    Use the openstack network create command with the --share flag to create a network that can be shared.

  • The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

    Tip

    To create a network for a project that is named "openshift," enter the following command:

    $ openstack network create --project openshift

    To create a subnet for a project that is named "openshift," enter the following command:

    $ openstack subnet create --project openshift

    To learn more about creating networks on RHOSP, read the provider networks documentation.

    If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

    Important

    Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.

  • Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.

    Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

    $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
  • Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
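
    A minimal sketch of such a rule, with placeholder names for the target project and the provider network:

    $ openstack network rbac create --type network --action access_as_shared --target-project <project_ID> <provider_network_name>
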
18.6.13.10.2. Deploying a cluster that has a primary interface on a provider network

You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network.

Prerequisites

  • Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation".

Procedure

  1. In a text editor, open the install-config.yaml file.
  2. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP.
  3. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP.
  4. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
  5. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
Important

The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on a RHOSP provider network

        ...
        platform:
          openstack:
            apiVIP: 192.0.2.13
            ingressVIP: 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

Tip

You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list.

After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.

18.6.14. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

Important
  • The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.
  3. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false.
    3. Save and exit the file.
  4. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
  5. Export the metadata file’s infraID key as an environment variable:

    $ export INFRA_ID=$(jq -r .infraID metadata.json)
Tip

Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project.

18.6.15. Preparing the bootstrap Ignition files

The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file.

Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites

  • You have the bootstrap Ignition file that the installation program generates, bootstrap.ign.
  • The infrastructure ID from the installer’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
  • You have an HTTP(S)-accessible way to store the bootstrap Ignition file.

    • The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server.

Procedure

  1. Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs:

    import base64
    import json
    import os
    
    with open('bootstrap.ign', 'r') as f:
        ignition = json.load(f)
    
    files = ignition['storage'].get('files', [])
    
    infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
    hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
    files.append(
    {
        'path': '/etc/hostname',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
        }
    })
    
    ca_cert_path = os.environ.get('OS_CACERT', '')
    if ca_cert_path:
        with open(ca_cert_path, 'r') as f:
            ca_cert = f.read().encode()
            ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    
        files.append(
        {
            'path': '/opt/openshift/tls/cloud-ca-cert.pem',
            'mode': 420,
            'contents': {
                'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
            }
        })
    
    ignition['storage']['files'] = files;
    
    with open('bootstrap.ign', 'w') as f:
        json.dump(ignition, f)
  2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

    $ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>
  3. Get the image’s details:

    $ openstack image show <image_name>

    Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

    Note

    Verify that the image you created is active.

  4. Retrieve the image service’s public address:

    $ openstack catalog show image
  5. Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file.
  6. Generate an auth token and save the token ID:

    $ openstack token issue -c id -f value
  7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:

    {
      "ignition": {
        "config": {
          "merge": [{
            "source": "<storage_url>", 1
            "httpHeaders": [{
              "name": "X-Auth-Token", 2
              "value": "<token_ID>" 3
            }]
          }]
        },
        "security": {
          "tls": {
            "certificateAuthorities": [{
              "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
            }]
          }
        },
        "version": "3.2.0"
      }
    }
    1
    Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL.
    2
    Set name in httpHeaders to "X-Auth-Token".
    3
    Set value in httpHeaders to your token’s ID.
    4
    If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate.
  8. Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.

Warning

The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process.
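
If you script this step, the following is a minimal sketch that writes the secondary file, assuming the storage location is stored in a hypothetical BOOTSTRAP_URL shell variable. The security.tls.certificateAuthorities section is omitted; add it as shown above if your file server uses a self-signed certificate:

$ export TOKEN_ID=$(openstack token issue -c id -f value)
$ cat > "$INFRA_ID-bootstrap-ignition.json" <<EOF
{
  "ignition": {
    "config": {
      "merge": [{
        "source": "${BOOTSTRAP_URL}",
        "httpHeaders": [{"name": "X-Auth-Token", "value": "${TOKEN_ID}"}]
      }]
    },
    "version": "3.2.0"
  }
}
EOF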

18.6.16. Creating control plane Ignition config files on RHOSP

Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files.

Note

As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine.

Prerequisites

  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files".

Procedure

  • On a command line, run the following Python script:

    $ for index in $(seq 0 2); do
        MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
        python -c "import base64, json, sys;
    ignition = json.load(sys.stdin);
    storage = ignition.get('storage', {});
    files = storage.get('files', []);
    files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
    storage['files'] = files;
    ignition['storage'] = storage
    json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
    done

    You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json, <INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.

18.6.17. Creating network resources on RHOSP

Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports.

Prerequisites

  • Python 3 is installed on your machine.
  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".

Procedure

  1. Optional: Add an external network value to the inventory.yaml playbook:

    Example external network value in the inventory.yaml Ansible playbook

    ...
          # The public network providing connectivity to the cluster. If not
          # provided, the cluster external connectivity must be provided in another
          # way.
    
          # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
          os_external_network: 'external'
    ...

    Important

    If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself.

  2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook:

    Example FIP values in the inventory.yaml Ansible playbook

    ...
          # OpenShift API floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the Control Plane to
          # serve the OpenShift API.
          os_api_fip: '203.0.113.23'
    
          # OpenShift Ingress floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the worker nodes to serve
          # the applications.
          os_ingress_fip: '203.0.113.19'
    
          # If this value is non-empty, the corresponding floating IP will be
          # attached to the bootstrap machine. This is needed for collecting logs
          # in case of install failure.
          os_bootstrap_fip: '203.0.113.20'

    Important

    If you do not define values for os_api_fip and os_ingress_fip, you must perform post-installation network configuration.

    If you do not define a value for os_bootstrap_fip, the installer cannot download debugging information from failed installations.

    See "Enabling access to the environment" for more information.

  3. On a command line, create security groups by running the security-groups.yaml playbook:

    $ ansible-playbook -i inventory.yaml security-groups.yaml
  4. On a command line, create a network, subnet, and router by running the network.yaml playbook:

    $ ansible-playbook -i inventory.yaml network.yaml
  5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command:

    $ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines.

18.6.17.1. Deploying a cluster with bare metal machines

If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.

Bare-metal compute machines are not supported on clusters that use Kuryr.

Note

Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.

Prerequisites

  • The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
  • Bare metal is available as a RHOSP flavor.
  • The RHOSP network supports both VM and bare metal server attachment.
  • Your network configuration does not rely on a provider network. Provider networks are not supported.
  • If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
  • If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
  • You created an inventory.yaml file as part of the OpenShift Container Platform installation process.

Procedure

  1. In the inventory.yaml file, edit the flavors for machines:

    1. If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor.
    2. Change the value of os_flavor_worker to a bare metal flavor.

      An example bare metal inventory.yaml file

      all:
        hosts:
          localhost:
            ansible_connection: local
            ansible_python_interpreter: "{{ansible_playbook_python}}"
      
            # User-provided values
            os_subnet_range: '10.0.0.0/16'
            os_flavor_master: 'my-bare-metal-flavor' 1
            os_flavor_worker: 'my-bare-metal-flavor' 2
            os_image_rhcos: 'rhcos'
            os_external_network: 'external'
      ...

      1
      If you want to have bare-metal control plane machines, change this value to a bare metal flavor.
      2
      Change this value to a bare metal flavor to use for compute machines.

Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file.

Note

The installer may time out while waiting for bare metal machines to boot.

If the installer times out, restart it and then complete the deployment by using the installer's wait-for command. For example:

$ ./openshift-install wait-for install-complete --log-level debug

18.6.18. Creating the bootstrap machine on RHOSP

Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and bootstrap.yaml Ansible playbooks are in a common directory.
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml bootstrap.yaml
  3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:

    $ openstack console log show "$INFRA_ID-bootstrap"

18.6.19. Creating the control plane machines on RHOSP

Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).
  • The inventory.yaml, common.yaml, and control-plane.yaml Ansible playbooks are in a common directory.
  • You have the three Ignition files that were created in "Creating control plane Ignition config files".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. If the control plane Ignition config files are not already in your working directory, copy them into it.
  3. On a command line, run the control-plane.yaml playbook:

    $ ansible-playbook -i inventory.yaml control-plane.yaml
  4. Run the following command to monitor the bootstrapping process:

    $ openshift-install wait-for bootstrap-complete

    You will see messages that confirm that the control plane machines are running and have joined the cluster:

    INFO API v1.23.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    ...
    INFO It is now safe to remove the bootstrap resources

18.6.20. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

18.6.21. Deleting bootstrap resources from RHOSP

Delete the bootstrap resources that you no longer need.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a common directory.
  • The control plane machines are running.

    • If you do not know the status of the machines, see "Verifying cluster status".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the down-bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.

Warning

If you did not disable the bootstrap Ignition file URL earlier, do so now.

18.6.22. Creating compute machines on RHOSP

After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and compute-nodes.yaml Ansible playbooks are in a common directory.
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.
  • The control plane is active.

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the playbook:

    $ ansible-playbook -i inventory.yaml compute-nodes.yaml

Next steps

  • Approve the certificate signing requests for the machines.

18.6.23. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.23.0
    master-1  Ready     master  63m  v1.23.0
    master-2  Ready     master  64m  v1.23.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, approve the CSRs for your cluster machines after all of the pending CSRs for the machines that you added are in the Pending status:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.23.0
    master-1  Ready     master  73m  v1.23.0
    master-2  Ready     master  74m  v1.23.0
    worker-0  Ready     worker  11m  v1.23.0
    worker-1  Ready     worker  11m  v1.23.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

18.6.24. Verifying a successful installation

Verify that the OpenShift Container Platform installation is complete.

Prerequisites

  • You have the installation program (openshift-install).

Procedure

  • On a command line, enter:

    $ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator’s login information.

18.6.25. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.6.26. Next steps

18.7. Installing a cluster on OpenStack with Kuryr on your own infrastructure

In OpenShift Container Platform version 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure.

Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process.

18.7.1. Prerequisites

18.7.2. About Kuryr SDN

Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services.

Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances.

Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace:

  • kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object.
  • kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object.

The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs.
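
After installation, you can confirm that both components are running by listing the pods in that namespace, for example:

$ oc get pods -n openshift-kuryr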

Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

  • The RHOSP version is less than 16.
  • The deployment uses UDP services, or a large number of TCP services on few hypervisors.

or

  • The ovn-octavia Octavia driver is disabled.
  • The deployment uses a large number of TCP services on few hypervisors.

18.7.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr

When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 18.27. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr

Resource                   Value
Floating IP addresses      3 - plus the expected number of Services of LoadBalancer type
Ports                      1500 - 1 needed per Pod
Routers                    1
Subnets                    250 - 1 needed per Namespace/Project
Networks                   250 - 1 needed per Namespace/Project
RAM                        112 GB
vCPUs                      28
Volume storage             275 GB
Instances                  7
Security groups            250 - 1 needed per Service and per NetworkPolicy
Security group rules       1000
Server groups              2 - plus 1 for each additional availability zone in each machine pool
Load balancers             100 - 1 needed per Service
Load balancer listeners    500 - 1 needed per Service-exposed port
Load balancer pools        500 - 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.
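
If Swift is available but your user account does not yet have the swiftoperator role, a cloud administrator can grant it with a command such as the following, where the user and project names are placeholders:

$ openstack role add --user <user> --project <project> swiftoperator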

Important

If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects.

Take the following notes into consideration when setting resources:

  • The number of ports that are required is greater than the number of pods. Kuryr uses port pools to keep pre-created ports ready for use by pods, which speeds up pod boot times.
  • Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group.
  • Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota.

    If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a security group in the user project.

  • The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment’s size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them.

    If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows.

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

  • Run RHOSP 13 or later.
  • Have an Overcloud with Octavia.
  • Use the Neutron trunk ports extension.
  • If the ML2/OVS Neutron driver is used, use the openvswitch firewall driver instead of ovs-hybrid.

18.7.3.1. Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies.

Procedure

  • Increase the quotas for a project by running the following command:

    $ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>

18.7.3.2. Configuring Neutron

Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to work properly.

In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies.
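
You can verify that the trunk extension is enabled by listing the Neutron extensions, for example:

$ openstack extension list --network -c Name -c Alias | grep -i trunk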

18.7.3.3. Configuring Octavia

Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update.

Note

The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure

  1. If you are using the local registry, create a template to upload the images to the registry. For example:

    (undercloud) $ openstack overcloud container image prepare \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
    --namespace=registry.access.redhat.com/rhosp13 \
    --push-destination=<local-ip-from-undercloud.conf>:8787 \
    --prefix=openstack- \
    --tag-from-label {version}-{product-version} \
    --output-env-file=/home/stack/templates/overcloud_images.yaml \
    --output-images-file /home/stack/local_registry_images.yaml
  2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

    ...
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
      push_destination: <local-ip-from-undercloud.conf>:8787
    Note

    The Octavia container versions vary depending upon the specific RHOSP release installed.

  3. Pull the container images from registry.redhat.io to the Undercloud node:

    (undercloud) $ sudo openstack overcloud container image upload \
      --config-file  /home/stack/local_registry_images.yaml \
      --verbose

    This may take some time depending on the speed of your network and Undercloud disk.

  4. Because an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase the default connection timeouts for its listeners. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command:

    (undercloud) $ cat octavia_timeouts.yaml
    parameter_defaults:
      OctaviaTimeoutClientData: 1200000
      OctaviaTimeoutMemberData: 1200000
    Note

    This is not needed for RHOSP 13.0.13+.

  5. Install or update your Overcloud environment with Octavia:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
      -e octavia_timeouts.yaml
    Note

    This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director.

    Note

    When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN.

  6. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project.

    • To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project.

      This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation.

      Note

      This task is unnecessary in RHOSP version 13.0.13 or later.

      Octavia implements a new ACL API that restricts access to the load balancers VIP.

      1. Get the project ID:

        $ openstack project show <project>

        Example output

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description |                                  |
        | domain_id   | default                          |
        | enabled     | True                             |
        | id          | PROJECT_ID                       |
        | is_domain   | False                            |
        | name        | *<project>*                      |
        | parent_id   | default                          |
        | tags        | []                               |
        +-------------+----------------------------------+

      2. Add the project ID to octavia.conf for the controllers.

        1. Source the stackrc file:

          $ source stackrc  # Undercloud credentials
        2. List the Overcloud controllers:

          $ openstack server list

          Example output

          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | ID                                   | Name         | Status | Networks              | Image          | Flavor     |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
          | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+

        3. SSH into the controller(s).

          $ ssh heat-admin@192.168.24.8
        4. Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user’s account.

          # List of project IDs that are allowed to have Load balancer security groups
          # belonging to them.
          amp_secgroup_allowed_projects = PROJECT_ID
      3. Restart the Octavia worker so the new configuration loads.

        controller-0$ sudo docker restart octavia_worker
Note

Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP versions 16 and later support UDP.

18.7.3.3.1. The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list

Example output

+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift Container Platform on RHOSP deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:

  • Decreased resource requirements. Kuryr does not require a load balancer VM for each service.
  • Reduced network latency.
  • Increased service creation speed by using OpenFlow rules instead of a VM for each service.
  • Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

18.7.3.4. Known limitations of installing with Kuryr

Using OpenShift Container Platform with Kuryr SDN has several known limitations.

RHOSP general limitations

Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments:

  • Service objects with the NodePort type are not supported.
  • Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods.
  • If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.
  • Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting.
RHOSP version limitations

Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version.

  • RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources.

    Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP.

  • Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported.
  • Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported.
  • Kuryr SDN does not support automatic unidling by a service.
RHOSP environment limitations

There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.

To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
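
For example, you might build a Go application with CGO enabled so that the glibc resolver, which honors the use-vc option, is used; the output binary name is a placeholder:

$ CGO_ENABLED=1 go build -o myapp .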

In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.

Note

musl-based containers, including Alpine-based containers, do not support the use-vc option.

RHOSP upgrade limitations

As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways:

  • Upgrade each VM by triggering a load balancer failover.
  • Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.

If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features.

Important

If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing.

The recreation causes approximately one minute of downtime for the DNS service.

18.7.3.5. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.7.3.6. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

18.7.3.7. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.7.4. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.7.5. Downloading playbook dependencies

The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them.

Note

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

  • Python 3 is installed on your machine.

Procedure

  1. On a command line, add the repositories:

    1. Register with Red Hat Subscription Manager:

      $ sudo subscription-manager register # If not done already
    2. Pull the latest subscription data:

      $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. Disable the current repositories:

      $ sudo subscription-manager repos --disable=* # If not done already
    4. Add the required repositories:

      $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  2. Install the modules:

    $ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr
  3. Ensure that the python command points to python3:

    $ sudo alternatives --set python /usr/bin/python3

18.7.6. Downloading the installation playbooks

Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure.

Prerequisites

  • The curl command-line tool is available on your machine.

Procedure

  • To download the playbooks to your working directory, run the following script from a command line:

    $ xargs -n 1 curl -O <<< '
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine.

Important

During the installation process, you can modify the playbooks to configure your deployment.

Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP.

Important

You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml, control-plane.yaml, network.yaml, and security-groups.yaml files to the corresponding playbooks that are prefixed with down-. For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail.

18.7.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.7.8. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
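
For example, after deployment you might connect to a node with a command such as the following, where the key path and node IP address are placeholders:

$ ssh -i <path>/<file_name> core@<node_IP_address>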

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.7.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image

The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

  • The RHOSP CLI is installed.

Procedure

  1. Log in to the Red Hat Customer Portal’s Product Downloads page.
  2. Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.

  3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
  4. Decompress the image.

    Note

    You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

    $ file <name_of_downloaded_file>
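
    If the file is gzip-compressed, for example, you can decompress it regardless of its suffix; the output file name shown here matches the one used in the next step:

    $ gzip -dc <name_of_downloaded_file> > rhcos-${RHCOS_VERSION}-openstack.qcow2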
  5. From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:

    $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos
    Important

    Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
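
    For example, if your environment requires the .raw format, you might convert the image before uploading it; this sketch assumes that qemu-img is installed. If you upload the .raw file, also change the --disk-format option of the openstack image create command to raw.

    $ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw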

    Warning

    If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

After you upload the image to RHOSP, it is usable in the installation process.

18.7.10. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.7.11. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.7.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

    $ openstack floating ip create --description "bootstrap machine" <external_network>
  4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access user applications by using the additional entries that point to <application_floating_ip>. This action makes the API and applications accessible only to you, which is not suitable for production deployment, but does allow installation for development and testing.

  5. Add the FIPs to the inventory.yaml file as the values of the following variables:

    • os_api_fip
    • os_bootstrap_fip
    • os_ingress_fip

If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file.

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.7.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

  • os_api_fip
  • os_bootstrap_fip
  • os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own.
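
One possible approach, assuming that a routable network is available in your environment, is to create and attach a router yourself after you create the network resources; the router name is a placeholder, and the subnet name shown assumes the naming that the supplied playbooks use:

$ openstack router create <router_name>
$ openstack router set --external-gateway <external_network> <router_name>
$ openstack router add subnet <router_name> "$INFRA_ID-nodes"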

If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.7.12. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
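
      For example, to point the installation program at a clouds.yaml file in a non-default location, you might export the variable before you run the installer; the path is a placeholder:

      $ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml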

18.7.13. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

18.7.14. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.7.14.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.28. Required parameters
Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
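
The following fragment is a minimal sketch of how the required parameters fit together in install-config.yaml. With these values, all cluster DNS records are subdomains of dev.example.com. The platform stanza is left as a placeholder here; the RHOSP-specific keys that it uses are described in the tables that follow:

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  openstack: {}
pullSecret: '{"auths": ...}'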

18.7.14.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.29. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
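
Taken together, the parameters in this table form the networking stanza of install-config.yaml. The following sketch shows the default values that are described in this table; treat it as an illustration rather than a recommendation:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16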

18.7.14.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.30. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys used to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
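
The following fragment sketches how several of the optional parameters in this table appear in install-config.yaml. The values shown are the documented defaults or simple illustrations, not recommendations:

compute:
- name: worker
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
fips: false
publish: External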

18.7.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.31. Additional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.
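
A platform stanza that uses the parameters in this table might look like the following sketch. The cloud, network, flavor, and volume values are examples only; the defaultMachinePlatform property shown here replaces the deprecated computeFlavor property:

platform:
  openstack:
    cloud: MyCloud
    externalNetwork: external
    defaultMachinePlatform:
      type: m1.xlarge
      rootVolume:
        size: 30
        type: performance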

18.7.14.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.32. Optional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
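
The following fragment sketches how some of these optional parameters appear in install-config.yaml. All addresses, UUIDs, and zone names are placeholders, and the image properties are quoted so that they are passed to the image service as strings:

platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: 128.0.0.1
    ingressFloatingIP: 128.0.0.2
    externalDNS:
    - "8.8.8.8"
    clusterOSImageProperties:
      hw_scsi_model: "virtio-scsi"
      hw_disk_bus: "scsi"
compute:
- name: worker
  platform:
    openstack:
      zones:
      - zone-1
      - zone-2
      additionalSecurityGroupIDs:
      - 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7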

18.7.14.6. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
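
For example, a configuration that deploys to a custom subnet and overrides both VIPs with addresses that are outside of the DHCP allocation pool might contain a fragment like the following sketch, in which all addresses and the UUID are placeholders:

platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    apiVIP: 192.0.2.200
    ingressVIP: 192.0.2.201
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24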

18.7.14.7. Sample customized install-config.yaml file for RHOSP with Kuryr

To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 1
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true 2
    octaviaSupport: true 3
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1
The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts.
2 3
Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not work properly. Trunks are needed to connect the pods to the RHOSP network, and Octavia is required to create the OpenShift Container Platform services.

18.7.14.8. Cluster deployment on RHOSP provider networks

You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.

In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network:

A diagram that depicts four OpenShift workloads on OpenStack. Each workload is connected by its NIC to an external data center by using a provider network.

OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

Note

A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the RHOSP documentation.

18.7.14.8.1. RHOSP provider network requirements for cluster installation

Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:

  • The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
  • The RHOSP networking service has the port security and allowed address pairs extensions enabled.
  • The provider network can be shared with other tenants.

    Tip

    Use the openstack network create command with the --share flag to create a network that can be shared.

  • The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

    Tip

    To create a network for a project that is named "openshift," enter the following command:

    $ openstack network create --project openshift

    To create a subnet for a project that is named "openshift," enter the following command:

    $ openstack subnet create --project openshift

    To learn more about creating networks on RHOSP, read the provider networks documentation.

    If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

    Important

    Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.

  • Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.

    Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

    $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
  • Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
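
    For example, a network RBAC rule similar to the following sketch grants a single project access to the provider network instead of sharing it with every project. The project and network names are placeholders:

    $ openstack network rbac create --target-project <project_id> --action access_as_shared --type network <provider_network_name>
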
18.7.14.8.2. Deploying a cluster that has a primary interface on a provider network

You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network.

Prerequisites

  • Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation".

Procedure

  1. In a text editor, open the install-config.yaml file.
  2. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP.
  3. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP.
  4. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
  5. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
Important

The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on a RHOSP provider network

        ...
        platform:
          openstack:
            apiVIP: 192.0.2.13
            ingressVIP: 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

Tip

You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list.

After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.

18.7.14.9. Kuryr ports pools

A Kuryr ports pool maintains a number of ports on standby for pod creation.

Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.

The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.

Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.

Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:

  • The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.
  • The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
  • The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting.

    If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.

  • The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.

18.7.14.10. Adjusting Kuryr ports pools during installation

During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation.

Prerequisites

  • Create and modify the install-config.yaml file.

Procedure

  1. From a command line, create the manifest files:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ touch <installation_directory>/manifests/cluster-network-03-config.yml 1
    1
    For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

    After you create the file, several network configuration files are in the manifests/ directory, as shown:

    $ ls <installation_directory>/manifests/cluster-network-*

    Example output

    cluster-network-01-crd.yml
    cluster-network-02-config.yml
    cluster-network-03-config.yml

  3. Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want:

    $ oc edit networks.operator.openshift.io cluster
  4. Edit the settings to meet your requirements. The following file is provided as an example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      defaultNetwork:
        type: Kuryr
        kuryrConfig:
          enablePortPoolsPrepopulation: false 1
          poolMinPorts: 1 2
          poolBatchPorts: 3 3
          poolMaxPorts: 5 4
          openstackServiceNetwork: 172.30.0.0/15 5
    1
    Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
    2
    Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
    3
    poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
    4
    If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
    5
    The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia’s LoadBalancers.

    If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork, and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork.

    The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter.

    If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1.

  5. Save the cluster-network-03-config.yml file, and exit the text editor.
  6. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster.

18.7.14.11. Setting a custom subnet for machines

The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
      1
      Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24.
    • To set the value manually, open the file and set the value of networking.machineNetwork.cidr to something that matches your intended Neutron subnet.

18.7.14.12. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["compute"][0]["replicas"] = 0;
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    • To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0.
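
      After either change, the compute entry in install-config.yaml resembles the following sketch:

      compute:
      - name: worker
        replicas: 0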

18.7.14.13. Modifying the network type

By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead, change the value in the installation configuration file that the program generated.

Prerequisites

  • You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["networking"]["networkType"] = "Kuryr";
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    • To set the value manually, open the file and set networking.networkType to "Kuryr".

18.7.15. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

Important
  • The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.
  3. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false.
    3. Save and exit the file.
  4. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
  5. Export the metadata file’s infraID key as an environment variable:

    $ export INFRA_ID=$(jq -r .infraID metadata.json)
Tip

Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project.

18.7.16. Preparing the bootstrap Ignition files

The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file.

Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites

  • You have the bootstrap Ignition file that the installation program generates, bootstrap.ign.
  • The infrastructure ID from the installer’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
  • You have an HTTP(S)-accessible way to store the bootstrap Ignition file.

    • The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server.

Procedure

  1. Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, the CA certificate file when it runs:

    import base64
    import json
    import os
    
    with open('bootstrap.ign', 'r') as f:
        ignition = json.load(f)
    
    files = ignition['storage'].get('files', [])
    
    infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
    hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
    files.append(
    {
        'path': '/etc/hostname',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
        }
    })
    
    ca_cert_path = os.environ.get('OS_CACERT', '')
    if ca_cert_path:
        with open(ca_cert_path, 'r') as f:
            ca_cert = f.read().encode()
            ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    
        files.append(
        {
            'path': '/opt/openshift/tls/cloud-ca-cert.pem',
            'mode': 420,
            'contents': {
                'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
            }
        })
    
    ignition['storage']['files'] = files
    
    with open('bootstrap.ign', 'w') as f:
        json.dump(ignition, f)
  2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

    $ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>
  3. Get the image’s details:

    $ openstack image show <image_name>

    Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

    Note

    Verify that the image you created is active.

  4. Retrieve the image service’s public address:

    $ openstack catalog show image
  5. Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file.
  6. Generate an auth token and save the token ID:

    $ openstack token issue -c id -f value
  7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:

    {
      "ignition": {
        "config": {
          "merge": [{
            "source": "<storage_url>", 1
            "httpHeaders": [{
              "name": "X-Auth-Token", 2
              "value": "<token_ID>" 3
            }]
          }]
        },
        "security": {
          "tls": {
            "certificateAuthorities": [{
              "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
            }]
          }
        },
        "version": "3.2.0"
      }
    }
    1
    Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL.
    2
    Set name in httpHeaders to "X-Auth-Token".
    3
    Set value in httpHeaders to your token’s ID.
    4
    If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate.
  8. Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.

Warning

The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process.

18.7.17. Creating control plane Ignition config files on RHOSP

Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files.

Note

As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine.

Prerequisites

  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files".

Procedure

  • On a command line, run the following shell script, which embeds a Python command:

    $ for index in $(seq 0 2); do
        MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
        python -c "import base64, json, sys;
    ignition = json.load(sys.stdin);
    storage = ignition.get('storage', {});
    files = storage.get('files', []);
    files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
    storage['files'] = files;
    ignition['storage'] = storage
    json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
    done

    You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json, <INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.

18.7.18. Creating network resources on RHOSP

Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports.

Prerequisites

  • Python 3 is installed on your machine.
  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".

Procedure

  1. Optional: Add an external network value to the inventory.yaml playbook:

    Example external network value in the inventory.yaml Ansible playbook

    ...
          # The public network providing connectivity to the cluster. If not
          # provided, the cluster external connectivity must be provided in another
          # way.
    
          # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
          os_external_network: 'external'
    ...

    Important

    If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself.

  2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook:

    Example FIP values in the inventory.yaml Ansible playbook

    ...
          # OpenShift API floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the Control Plane to
          # serve the OpenShift API.
          os_api_fip: '203.0.113.23'
    
          # OpenShift Ingress floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the worker nodes to serve
          # the applications.
          os_ingress_fip: '203.0.113.19'
    
          # If this value is non-empty, the corresponding floating IP will be
          # attached to the bootstrap machine. This is needed for collecting logs
          # in case of install failure.
          os_bootstrap_fip: '203.0.113.20'

    Important

    If you do not define values for os_api_fip and os_ingress_fip, you must perform post-installation network configuration.

    If you do not define a value for os_bootstrap_fip, the installer cannot download debugging information from failed installations.

    See "Enabling access to the environment" for more information.

  3. On a command line, create security groups by running the security-groups.yaml playbook:

    $ ansible-playbook -i inventory.yaml security-groups.yaml
  4. On a command line, create a network, subnet, and router by running the network.yaml playbook:

    $ ansible-playbook -i inventory.yaml network.yaml
  5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command:

    $ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

18.7.19. Creating the bootstrap machine on RHOSP

Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and bootstrap.yaml Ansible playbooks are in a common directory.
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml bootstrap.yaml
  3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:

    $ openstack console log show "$INFRA_ID-bootstrap"

18.7.20. Creating the control plane machines on RHOSP

Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).
  • The inventory.yaml, common.yaml, and control-plane.yaml Ansible playbooks are in a common directory.
  • You have the three Ignition files that were created in "Creating control plane Ignition config files".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. If the control plane Ignition config files aren’t already in your working directory, copy them into it.
  3. On a command line, run the control-plane.yaml playbook:

    $ ansible-playbook -i inventory.yaml control-plane.yaml
  4. Run the following command to monitor the bootstrapping process:

    $ openshift-install wait-for bootstrap-complete

    You will see messages that confirm that the control plane machines are running and have joined the cluster:

    INFO API v1.23.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    ...
    INFO It is now safe to remove the bootstrap resources

18.7.21. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

18.7.22. Deleting bootstrap resources from RHOSP

Delete the bootstrap resources that you no longer need.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a common directory.
  • The control plane machines are running.

    • If you do not know the status of the machines, see "Verifying cluster status".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the down-bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.

Warning

If you did not disable the bootstrap Ignition file URL earlier, do so now.

18.7.23. Creating compute machines on RHOSP

After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and compute-nodes.yaml Ansible playbooks are in a common directory.
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.
  • The control plane is active.

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the playbook:

    $ ansible-playbook -i inventory.yaml compute-nodes.yaml

Next steps

  • Approve the certificate signing requests for the machines.

18.7.24. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.23.0
    master-1  Ready     master  63m  v1.23.0
    master-2  Ready     master  64m  v1.23.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, then after all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.
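
      As a rough illustration of the automated approval method that the preceding note describes, the following loop repeatedly approves all pending CSRs. This is a minimal sketch only: it does not confirm the requester or the node identity, so add those checks before you rely on anything like it outside of testing:

      $ while true; do
          oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
          sleep 60
        done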

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.23.0
    master-1  Ready     master  73m  v1.23.0
    master-2  Ready     master  74m  v1.23.0
    worker-0  Ready     worker  11m  v1.23.0
    worker-1  Ready     worker  11m  v1.23.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

18.7.25. Verifying a successful installation

Verify that the OpenShift Container Platform installation is complete.

Prerequisites

  • You have the installation program (openshift-install).

Procedure

  • On a command line, enter:

    $ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator’s login information.

18.7.26. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.7.27. Next steps

18.8. Installing a cluster on OpenStack on your own SR-IOV infrastructure

In OpenShift Container Platform 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure and uses single-root input/output virtualization (SR-IOV) networks to run compute machines.

Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, such as Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process.

18.8.1. Prerequisites

18.8.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.8.3. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.33. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource                  Value

Floating IP addresses     3
Ports                     15
Routers                   1
Subnets                   1
RAM                       88 GB
vCPUs                     22
Volume storage            275 GB
Instances                 7
Security groups           3
Security group rules      60
Server groups             2, plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.8.3.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.8.3.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages.

Important

SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure.
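
As a minimal sketch, you can create such a flavor with the RHOSP CLI by setting the huge pages and dedicated CPU properties. The flavor name and sizing below are assumptions; adjust them for your environment and confirm that your RHOSP deployment supports these extra specs:

$ openstack flavor create \
    --ram 16384 --vcpus 4 --disk 100 \
    --property hw:mem_page_size=large \
    --property hw:cpu_policy=dedicated \
    <sriov_flavor_name>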

Additional resources

18.8.3.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.8.4. Downloading playbook dependencies

The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them.

Note

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

  • Python 3 is installed on your machine.

Procedure

  1. On a command line, add the repositories:

    1. Register with Red Hat Subscription Manager:

      $ sudo subscription-manager register # If not done already
    2. Pull the latest subscription data:

      $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. Disable the current repositories:

      $ sudo subscription-manager repos --disable=* # If not done already
    4. Add the required repositories:

      $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  2. Install the modules:

    $ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr
  3. Ensure that the python command points to python3:

    $ sudo alternatives --set python /usr/bin/python3

18.8.5. Downloading the installation playbooks

Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure.

Prerequisites

  • The curl command-line tool is available on your machine.

Procedure

  • To download the playbooks to your working directory, run the following script from a command line:

    $ xargs -n 1 curl -O <<< '
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/common.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/inventory.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-load-balancers.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine.

Important

During the installation process, you can modify the playbooks to configure your deployment.

Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP.

Important

You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml, control-plane.yaml, network.yaml, and security-groups.yaml files to the corresponding playbooks that are prefixed with down-. For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail.

18.8.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
  2. Select your infrastructure provider.
  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    Important

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Important

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

18.8.7. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

18.8.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image

The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

  • The RHOSP CLI is installed.

Procedure

  1. Log in to the Red Hat Customer Portal’s Product Downloads page.
  2. Under Version, select the most recent release of OpenShift Container Platform 4.10 for Red Hat Enterprise Linux (RHEL) 8.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.

  3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
  4. Decompress the image.

    Note

    You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

    $ file <name_of_downloaded_file>
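
    For example, if the file command reports gzip compression, a command like the following decompresses the image. The file name is a placeholder; use the name of the file that you downloaded:

    $ gzip -d rhcos-<version>-openstack.x86_64.qcow2.gz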
  5. From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:

    $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos
    Important

    Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.

    Warning

    If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.
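
    If your environment requires the .raw format, as described in the preceding note, one possible approach is to convert the image with qemu-img before you upload it. This is a sketch under that assumption, and the file names are placeholders:

    $ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw
    $ openstack image create --container-format=bare --disk-format=raw --file rhcos-${RHCOS_VERSION}-openstack.raw rhcos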

After you upload the image to RHOSP, it is usable in the installation process.

18.8.9. Verifying external network access

The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Procedure

  1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"

    Example output

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

Note

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

18.8.10. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.8.10.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

    $ openstack floating ip create --description "bootstrap machine" <external_network>
  4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible only to you, which is not suitable for production deployment, but does allow installation for development and testing.

  5. Add the FIPs to the inventory.yaml file as the values of the following variables:

    • os_api_fip
    • os_bootstrap_fip
    • os_ingress_fip

If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file.
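
For reference, the relevant part of the inventory.yaml file might look like the following sketch. It assumes the default layout of the downloaded inventory file, and the addresses and network name are placeholders for values from your environment:

all:
  hosts:
    localhost:
      os_external_network: 'external'
      os_api_fip: '203.0.113.23'
      os_ingress_fip: '203.0.113.19'
      os_bootstrap_fip: '203.0.113.20'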

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.8.10.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

  • os_api_fip
  • os_bootstrap_fip
  • os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own.

If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.8.11. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
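
      For example, to point the installation program at a clouds.yaml file in a non-default location, you can export the environment variable before you run the installer. The path shown is a placeholder:

      $ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml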

18.8.12. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

18.8.13. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.8.13.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.34. Required parameters
Parameter | Description | Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer long.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

18.8.13.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.35. Network parameters
Parameter | Description | Values

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.8.13.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.36. Optional parameters
Parameter | Description | Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys used to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

18.8.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.37. Additional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.

18.8.13.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.38. Optional RHOSP parameters
Parameter | Description | Values

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

18.8.13.6. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

18.8.13.7. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
Note

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
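
As a minimal sketch of that override, the relevant install-config.yaml entries might look like the following. The subnet UUID and addresses are placeholders; choose addresses that are inside the subnet that platform.openstack.machinesSubnet identifies but outside its DHCP allocation pool:

platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    apiVIP: 192.0.2.200
    ingressVIP: 192.0.2.201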

18.8.13.8. Setting a custom subnet for machines

The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
      1
      Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24.
    • To set the value manually, open the file and set the value of networking.machineNetwork.cidr to match your intended Neutron subnet, as shown in the sketch that follows.
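
      The manual edit might look like the following snippet, where the CIDR is a placeholder for your intended Neutron subnet:

      networking:
        machineNetwork:
        - cidr: 192.0.2.0/24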

18.8.13.9. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

  • You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program.

Procedure

  1. On a command line, browse to the directory that contains install-config.yaml.
  2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:

    • To set the value by using a script, run:

      $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["compute"][0]["replicas"] = 0;
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'
    • To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0.
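
      After either edit, the compute machine pool in the install-config.yaml file might look like this minimal sketch; the pool name and platform details depend on your configuration:

      compute:
      - name: worker
        platform: {}
        replicas: 0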

18.8.14. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

Important
  • The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites

  • You obtained the OpenShift Container Platform installation program.
  • You created the install-config.yaml installation configuration file.

Procedure

  1. Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> 1
    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment.
  3. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and ensure that it is set to false.
    3. Save and exit the file.
  4. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
  5. Export the metadata file’s infraID key as an environment variable:

    $ export INFRA_ID=$(jq -r .infraID metadata.json)
Tip

Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project.

18.8.15. Preparing the bootstrap Ignition files

The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file.

Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites

  • You have the bootstrap Ignition file, bootstrap.ign, that the installation program generates.
  • The infrastructure ID from the installer’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
  • You have an HTTP(S)-accessible way to store the bootstrap Ignition file.

    • The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server.

Procedure

  1. Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if one is available, the CA certificate file:

    import base64
    import json
    import os
    
    with open('bootstrap.ign', 'r') as f:
        ignition = json.load(f)
    
    files = ignition['storage'].get('files', [])
    
    infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
    hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
    files.append(
    {
        'path': '/etc/hostname',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
        }
    })
    
    ca_cert_path = os.environ.get('OS_CACERT', '')
    if ca_cert_path:
        with open(ca_cert_path, 'r') as f:
            ca_cert = f.read().encode()
            ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    
        files.append(
        {
            'path': '/opt/openshift/tls/cloud-ca-cert.pem',
            'mode': 420,
            'contents': {
                'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
            }
        })
    
    ignition['storage']['files'] = files;
    
    with open('bootstrap.ign', 'w') as f:
        json.dump(ignition, f)
  2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

    $ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>
  3. Get the image’s details:

    $ openstack image show <image_name>

    Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

    Note

    Verify that the image you created is active.

  4. Retrieve the image service’s public address:

    $ openstack catalog show image
  5. Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file.
  6. Generate an auth token and save the token ID:

    $ openstack token issue -c id -f value
  7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values (a shell sketch that combines steps 3 through 7 follows this procedure):

    {
      "ignition": {
        "config": {
          "merge": [{
            "source": "<storage_url>", 1
            "httpHeaders": [{
              "name": "X-Auth-Token", 2
              "value": "<token_ID>" 3
            }]
          }]
        },
        "security": {
          "tls": {
            "certificateAuthorities": [{
              "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
            }]
          }
        },
        "version": "3.2.0"
      }
    }
    1
    Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL.
    2
    Set name in httpHeaders to "X-Auth-Token".
    3
    Set value in httpHeaders to your token’s ID.
    4
    If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate.
  8. Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.
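
For convenience, the following shell sketch combines steps 3 through 7. It assumes that you substitute the image service public URL that you retrieved in step 4, and it omits the security.tls.certificateAuthorities section, which you must add if the image service uses a self-signed certificate:

$ IMAGE_FILE=$(openstack image show <image_name> -f value -c file)
$ TOKEN_ID=$(openstack token issue -c id -f value)
$ STORAGE_URL="<image_service_public_URL>${IMAGE_FILE}"
$ cat > "$INFRA_ID-bootstrap-ignition.json" <<EOF
{
  "ignition": {
    "config": {
      "merge": [{
        "source": "${STORAGE_URL}",
        "httpHeaders": [{
          "name": "X-Auth-Token",
          "value": "${TOKEN_ID}"
        }]
      }]
    },
    "version": "3.2.0"
  }
}
EOF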

Warning

The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process.

18.8.16. Creating control plane Ignition config files on RHOSP

Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files.

Note

As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine.

Prerequisites

  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).

    • If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files".

Procedure

  • On a command line, run the following script to generate an Ignition config file for each control plane machine:

    $ for index in $(seq 0 2); do
        MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
        python -c "import base64, json, sys;
    ignition = json.load(sys.stdin);
    storage = ignition.get('storage', {});
    files = storage.get('files', []);
    files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
    storage['files'] = files;
    ignition['storage'] = storage
    json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
    done

    You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json, <INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.
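
To spot-check a generated file, you can decode its /etc/hostname entry. For example, assuming jq is installed:

$ jq -r '.storage.files[] | select(.path=="/etc/hostname") | .contents.source' "$INFRA_ID-master-0-ignition.json" | cut -d, -f2 | base64 -d

The command prints the hostname that the machine will receive, such as <INFRA_ID>-master-0.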

18.8.17. Creating network resources on RHOSP

Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports.

Prerequisites

  • Python 3 is installed on your machine.
  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".

Procedure

  1. Optional: Add an external network value to the inventory.yaml playbook:

    Example external network value in the inventory.yaml Ansible playbook

    ...
          # The public network providing connectivity to the cluster. If not
          # provided, the cluster external connectivity must be provided in another
          # way.
    
          # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
          os_external_network: 'external'
    ...

    Important

    If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself.

  2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook:

    Example FIP values in the inventory.yaml Ansible playbook

    ...
          # OpenShift API floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the Control Plane to
          # serve the OpenShift API.
          os_api_fip: '203.0.113.23'
    
          # OpenShift Ingress floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the worker nodes to serve
          # the applications.
          os_ingress_fip: '203.0.113.19'
    
          # If this value is non-empty, the corresponding floating IP will be
          # attached to the bootstrap machine. This is needed for collecting logs
          # in case of install failure.
          os_bootstrap_fip: '203.0.113.20'

    Important

    If you do not define values for os_api_fip and os_ingress_fip, you must perform post-installation network configuration.

    If you do not define a value for os_bootstrap_fip, the installer cannot download debugging information from failed installations.

    See "Enabling access to the environment" for more information.

  3. On a command line, create security groups by running the security-groups.yaml playbook:

    $ ansible-playbook -i inventory.yaml security-groups.yaml
  4. On a command line, create a network, subnet, and router by running the network.yaml playbook:

    $ ansible-playbook -i inventory.yaml network.yaml
  5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command:

    $ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines.

18.8.17.1. Deploying a cluster with bare metal machines

If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.

Bare-metal compute machines are not supported on clusters that use Kuryr.

Note

Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.

Prerequisites

  • The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
  • Bare metal is available as a RHOSP flavor.
  • The RHOSP network supports both VM and bare metal server attachment.
  • Your network configuration does not rely on a provider network. Provider networks are not supported.
  • If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
  • If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
  • You created an inventory.yaml file as part of the OpenShift Container Platform installation process.

Procedure

  1. In the inventory.yaml file, edit the flavors for machines:

    1. If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor.
    2. Change the value of os_flavor_worker to a bare metal flavor.

      An example bare metal inventory.yaml file

      all:
        hosts:
          localhost:
            ansible_connection: local
            ansible_python_interpreter: "{{ansible_playbook_python}}"
      
            # User-provided values
            os_subnet_range: '10.0.0.0/16'
            os_flavor_master: 'my-bare-metal-flavor' 1
            os_flavor_worker: 'my-bare-metal-flavor' 2
            os_image_rhcos: 'rhcos'
            os_external_network: 'external'
      ...

      1
      If you want to have bare-metal control plane machines, change this value to a bare metal flavor.
      2
      Change this value to a bare metal flavor to use for compute machines.

Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file.

Note

The installer may time out while waiting for bare metal machines to boot.

If the installer times out, restart it and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

18.8.18. Creating the bootstrap machine on RHOSP

Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and bootstrap.yaml Ansible playbooks are in a common directory.
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml bootstrap.yaml
  3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:

    $ openstack console log show "$INFRA_ID-bootstrap"

18.8.19. Creating the control plane machines on RHOSP

Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The infrastructure ID from the installation program’s metadata file is set as an environment variable ($INFRA_ID).
  • The inventory.yaml, common.yaml, and control-plane.yaml Ansible playbooks are in a common directory.
  • You have the three Ignition files that were created in "Creating control plane Ignition config files".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. If the control plane Ignition config files aren’t already in your working directory, copy them into it.
  3. On a command line, run the control-plane.yaml playbook:

    $ ansible-playbook -i inventory.yaml control-plane.yaml
  4. Run the following command to monitor the bootstrapping process:

    $ openshift-install wait-for bootstrap-complete

    You will see messages that confirm that the control plane machines are running and have joined the cluster:

    INFO API v1.23.0 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    ...
    INFO It is now safe to remove the bootstrap resources

18.8.20. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

18.8.21. Deleting bootstrap resources from RHOSP

Delete the bootstrap resources that you no longer need.

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a common directory.
  • The control plane machines are running.

    • If you do not know the status of the machines, see "Verifying cluster status".

Procedure

  1. On a command line, change the working directory to the location of the playbooks.
  2. On a command line, run the down-bootstrap.yaml playbook:

    $ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.

Warning

If you did not disable the bootstrap Ignition file URL earlier, do so now.
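
For example, if you stored the file as an image in the RHOSP image service, as described in "Preparing the bootstrap Ignition files", you can delete that image and revoke the token that grants access to it:

$ openstack image delete <image_name>
$ openstack token revoke <token_ID>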

18.8.22. Creating SR-IOV networks for compute machines

If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV), you can provision SR-IOV networks that compute machines run on.

Note

The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required.

Prerequisites

  • Your cluster supports SR-IOV.

    Note

    If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation.

  • You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.

Procedure

  1. On a command line, create a radio RHOSP network:

    $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external
  2. Create an uplink RHOSP network:

    $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external
  3. Create a subnet for the radio network:

    $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio
  4. Create a subnet for the uplink network:

    $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink

18.8.23. Creating compute machines that run on SR-IOV networks

After standing up the control plane, create compute machines that run on the SR-IOV networks that you created in "Creating SR-IOV networks for compute machines".

Prerequisites

  • You downloaded the modules in "Downloading playbook dependencies".
  • You downloaded the playbooks in "Downloading the installation playbooks".
  • The metadata.json file that the installation program created is in the same directory as the Ansible playbooks.
  • The control plane is active.
  • You created radio and uplink SR-IOV networks as described in "Creating SR-IOV networks for compute machines".

Procedure

  1. On a command line, change the working directory to the location of the inventory.yaml and common.yaml files.
  2. Add the radio and uplink networks to the end of the inventory.yaml file by using the additionalNetworks parameter:

    ....
    # If this value is non-empty, the corresponding floating IP will be
    # attached to the bootstrap machine. This is needed for collecting logs
    # in case of install failure.
        os_bootstrap_fip: '203.0.113.20'
    
        additionalNetworks:
        - id: radio
          count: 4 1
          type: direct
          port_security_enabled: no
        - id: uplink
          count: 4 2
          type: direct
          port_security_enabled: no
    1 2
    The count parameter defines the number of SR-IOV virtual functions (VFs) to attach to each worker node. In this case, each network has four VFs.
  3. Replace the content of the compute-nodes.yaml file with the following text:

    Example 18.1. compute-nodes.yaml

    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      vars:
        worker_list: []
        port_name_list: []
        nic_list: []
    
      tasks:
      # Create the SDN/primary port for each worker node
      - name: 'Create the Compute ports'
        os_port:
          name: "{{ item.1 }}-{{ item.0 }}"
          network: "{{ os_network }}"
          security_groups:
          - "{{ os_sg_worker }}"
          allowed_address_pairs:
          - ip_address: "{{ os_ingressVIP }}"
        with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"
        register: ports
    
      # Tag each SDN/primary port with cluster name
      - name: 'Set Compute ports tag'
        command:
          cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}"
        with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"
    
      - name: 'List the Compute Trunks'
        command:
          cmd: "openstack network trunk list"
        when: os_networking_type == "Kuryr"
        register: compute_trunks
    
      - name: 'Create the Compute trunks'
        command:
          cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}"
        with_indexed_items: "{{ ports.results }}"
        when:
        - os_networking_type == "Kuryr"
        - "os_compute_trunk_name|string not in compute_trunks.stdout"
    
      - name: 'Call additional-port processing'
        include_tasks: additional-ports.yaml
    
      # Create additional ports in OpenStack
      - name: 'Create additionalNetworks ports'
        os_port:
          name:  "{{ item.0 }}-{{ item.1.name }}"
          vnic_type: "{{ item.1.type }}"
          network: "{{ item.1.uuid }}"
          port_security_enabled: "{{ item.1.port_security_enabled|default(omit) }}"
          no_security_groups: "{{ 'true' if item.1.security_groups is not defined else omit }}"
          security_groups: "{{ item.1.security_groups | default(omit) }}"
        with_nested:
          - "{{ worker_list }}"
          - "{{ port_name_list }}"
    
      # Tag the ports with the cluster info
      - name: 'Set additionalNetworks ports tag'
        command:
          cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}"
        with_nested:
          - "{{ worker_list }}"
          - "{{ port_name_list }}"
    
      # Build the nic list to use for server create
      - name: Build nic list
        set_fact:
          nic_list: "{{ nic_list | default([]) + [ item.name ] }}"
        with_items: "{{ port_name_list }}"
    
      # Create the servers
      - name: 'Create the Compute servers'
        vars:
          worker_nics: "{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\1') | list }}"
        os_server:
          name: "{{ item.1 }}"
          image: "{{ os_image_rhcos }}"
          flavor: "{{ os_flavor_worker }}"
          auto_ip: no
          userdata: "{{ lookup('file', 'worker.ign') | string }}"
          security_groups: []
          nics:  "{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}"
          config_drive: yes
        with_indexed_items: "{{ worker_list }}"
  4. Insert the following content into a local file that is called additional-ports.yaml:

    Example 18.2. additional-ports.yaml

    # Build a list of worker nodes with indexes
    - name: 'Build worker list'
      set_fact:
        worker_list: "{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}"
      with_indexed_items: "{{ [ os_compute_server_name ] * os_compute_nodes_number }}"
    
    # Ensure that each network specified in additionalNetworks exists
    - name: 'Verify additionalNetworks'
      os_networks_info:
        name: "{{ item.id }}"
      with_items: "{{ additionalNetworks }}"
      register: network_info
    
    # Expand additionalNetworks by the count parameter in each network definition
    - name: 'Build port and port index list for additionalNetworks'
      set_fact:
        port_list: "{{ port_list | default([]) + [ {
                        'net_name' : item.1.id,
                        'uuid' : network_info.results[item.0].openstack_networks[0].id,
                        'type' : item.1.type|default('normal'),
                        'security_groups' : item.1.security_groups|default(omit),
                        'port_security_enabled' : item.1.port_security_enabled|default(omit)
                        } ] * item.1.count|default(1) }}"
        index_list: "{{ index_list | default([]) + range(item.1.count|default(1)) | list }}"
      with_indexed_items: "{{ additionalNetworks }}"
    
    # Calculate and save the name of the port
    # The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count
    # i.e. fdp-nz995-worker-1-99bcd111-1
    - name: 'Calculate port name'
      set_fact:
        port_name_list: "{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}"
      with_indexed_items: "{{ port_list }}"
      when: port_list is defined
  5. On a command line, run the compute-nodes.yaml playbook:

    $ ansible-playbook -i inventory.yaml compute-nodes.yaml
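
After the playbook finishes, you can confirm that the compute servers were created. For example, assuming the default server naming in inventory.yaml, which prefixes names with the infrastructure ID:

$ openstack server list -f value -c Name -c Status | grep "$INFRA_ID-worker"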

18.8.24. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.23.0
    master-1  Ready     master  63m  v1.23.0
    master-2  Ready     master  64m  v1.23.0

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.23.0
    master-1  Ready     master  73m  v1.23.0
    master-2  Ready     master  74m  v1.23.0
    worker-0  Ready     worker  11m  v1.23.0
    worker-1  Ready     worker  11m  v1.23.0

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

18.8.25. Verifying a successful installation

Verify that the OpenShift Container Platform installation is complete.

Prerequisites

  • You have the installation program (openshift-install).

Procedure

  • On a command line, enter:

    $ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator’s login information.

The cluster is operational. However, before you can configure it for SR-IOV networks, you must perform additional tasks.

18.8.26. Preparing a cluster that runs on RHOSP for SR-IOV

Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver.

18.8.26.1. Enabling the RHOSP metadata service as a mountable drive

You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive.

The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources.

Procedure

  1. Create a machine config file from the following template:

    A mountable metadata service machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 20-mount-config 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
            - name: create-mountpoint-var-config.service
              enabled: true
              contents: |
                [Unit]
                Description=Create mountpoint /var/config
                Before=kubelet.service
    
                [Service]
                ExecStart=/bin/mkdir -p /var/config
    
                [Install]
                WantedBy=var-config.mount
    
            - name: var-config.mount
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
                [Mount]
                Where=/var/config
                What=/dev/disk/by-label/config-2
                [Install]
                WantedBy=local-fs.target

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml

18.8.26.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver

You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature.

Procedure

  1. Create a machine config file from the following template:

    A No-IOMMU VFIO machine config file

    kind: MachineConfig
    apiVersion: machineconfiguration.openshift.io/v1
    metadata:
      name: 99-vfio-noiommu 1
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - path: /etc/modprobe.d/vfio-noiommu.conf
            mode: 0644
            contents:
              source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK

    1
    You can substitute a name of your choice.
  2. From a command line, apply the machine config:

    $ oc apply -f <machine_config_file_name>.yaml
Note

After you apply the machine config to the machine pool, you can watch the machine config pool status to see when the machines are available.
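
For example, you can check the worker machine config pool until its UPDATED column reports True:

$ oc get machineconfigpool worker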

The cluster is installed and prepared for SR-IOV configuration. You must now perform the SR-IOV configuration tasks in "Next steps".

18.8.27. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.8.28. Additional resources

18.8.29. Next steps

18.9. Installing a cluster on OpenStack in a restricted network

In OpenShift Container Platform 4.10, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content.

18.9.1. Prerequisites

18.9.2. About installations in restricted networks

In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service’s Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.

18.9.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

  • The ClusterVersion status includes an Unable to retrieve available updates error.
  • By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

18.9.3. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 18.39. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

  Resource                  Value
  Floating IP addresses     3
  Ports                     15
  Routers                   1
  Subnets                   1
  RAM                       88 GB
  vCPUs                     22
  Volume storage            275 GB
  Instances                 7
  Security groups           3
  Security group rules      60
  Server groups             2 - plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

Important

If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

Note

By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.

An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.

18.9.3.1. Control plane machines

By default, the OpenShift Container Platform installation process creates three control plane machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.9.3.2. Compute machines

By default, the OpenShift Container Platform installation process creates three compute machines.

Each machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 8 GB memory and 2 vCPUs
  • At least 100 GB storage space from the RHOSP quota
Tip

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.

18.9.3.3. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the RHOSP quota
  • A port from the RHOSP quota
  • A flavor with at least 16 GB memory and 4 vCPUs
  • At least 100 GB storage space from the RHOSP quota

18.9.4. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.
Important

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

18.9.5. Enabling Swift on RHOSP

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

Important

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

Prerequisites

  • You have a RHOSP administrator account on the target environment.
  • The Swift service is installed.
  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on RHOSP:

  1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.
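
For example, you can verify the role assignment afterward:

$ openstack role assignment list --user <user> --project <project> --names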

18.9.6. Defining parameters for the installation program

The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.

Procedure

  1. Create the clouds.yaml file:

    • If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Important

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.
    2. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable
    2. The current directory
    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
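
For example, to point the installation program at a clouds.yaml file outside of these locations, set the environment variable before you run the program:

$ export OS_CLIENT_CONFIG_FILE=<path_to_clouds_yaml_file>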

18.9.7. Setting cloud provider options

Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).

For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation.

Procedure

  1. If you have not already generated manifest files for your cluster, generate them by running the following command:

    $ openshift-install --dir <destination_directory> create manifests
  2. In a text editor, open the cloud-provider configuration manifest file. For example:

    $ vi openshift/manifests/cloud-provider-config.yaml
  3. Modify the options based on the cloud configuration specification.

    Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

    #...
    [LoadBalancer]
    use-octavia=true 1
    lb-provider = "amphora" 2
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
    create-monitor = True 4
    monitor-delay = 10s 5
    monitor-timeout = 10s 6
    monitor-max-retries = 1 7
    #...
    1
    This property enables Octavia integration.
    2
    This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
    3
    This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
    4
    This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider.
    5
    This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    6
    This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    7
    This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.
    Important

    Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.

    Important

    You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn".

    Important

    For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.

  4. Save the changes to the file and proceed with installation.

    Tip

    You can update your cloud provider configuration after you run the installer. On a command line, run:

    $ oc edit configmap -n openshift-config cloud-provider-config

    After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.

18.9.7.1. External load balancers that use pre-defined floating IP addresses

Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers.

If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method.

Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses.
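
For example, an account with sufficient privileges can pre-create the address on the external network:

$ openstack floating ip create --floating-ip-address 203.0.113.30 <external_network>

You can then reference the pre-created address from the spec.loadBalancerIP field of your LoadBalancer-type service, which the external cloud provider reads when it assigns a floating IP address. The address shown here is a documentation example; substitute one from your own external network.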

Additional resources

18.9.8. Creating the RHCOS image for restricted network installations

Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment.

Prerequisites

  • Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host.

Procedure

  1. Log in to the Red Hat Customer Portal’s Product Downloads page.
  2. Under Version, select the most recent release of OpenShift Container Platform 4.10 for RHEL 8.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.

  3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image.
  4. Decompress the image.

    Note

    You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. To find out if or how the file is compressed, in a command line, enter:

    $ file <name_of_downloaded_file>
  5. Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example:

    $ openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-${RHCOS_VERSION}
    Important

    Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format. A conversion sketch follows this procedure.

    Warning

    If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.

The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment.
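
If you need the .raw format, for example because Glance is backed by Ceph, you can convert the decompressed image before you upload it. The following sketch assumes that qemu-img is installed on the machine that you upload from:

$ qemu-img convert -f qcow2 -O raw <downloaded_image>.qcow2 <downloaded_image>.raw
$ openstack image create --file <downloaded_image>.raw --disk-format raw rhcos-${RHCOS_VERSION}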

18.9.9. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
  • Have the imageContentSources values that were generated during mirror registry creation.
  • Obtain the contents of the certificate for your mirror registry.
  • Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location.
  • Obtain service principal permissions at the subscription level.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.
      3. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
      4. Specify the floating IP address to use for external access to the OpenShift API.
      5. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
      7. Enter a name for your cluster. The name must be 14 or fewer characters long.
      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
  2. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example:

    platform:
      openstack:
          clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
  3. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network.

    1. Update the pullSecret value to contain the authentication information for your registry:

      pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

      For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.

    2. Add the additionalTrustBundle parameter and value.

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----

      The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.

    3. Add the image content resources, which resemble the following YAML excerpt:

      imageContentSources:
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: registry.redhat.io/ocp/release

      For these values, use the imageContentSources that you recorded during mirror registry creation.

  4. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
  5. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
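A minimal sketch of preparing two of the values that this procedure uses follows: the base64-encoded credentials for the pullSecret value in step 3.1 and the checksum for the platform.openstack.clusterOSImage URL in step 2. The registry account, file name, and image URL are placeholders, not values from your environment:

$ echo -n '<registry_user>:<registry_password>' | base64 -w0   # value for "auth" in the pullSecret
$ curl -L -o rhcos-openstack.x86_64.qcow2.gz <rhcos_image_url> # stage the RHCOS image on the mirror host
$ sha256sum rhcos-openstack.x86_64.qcow2.gz                    # append as ?sha256=<value> to clusterOSImage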

18.9.9.1. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Note

Kuryr installations default to HTTP proxies.

Prerequisites

  • For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:

    $ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
  • The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates.
  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: example.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.
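After the cluster is running, you can confirm which proxy settings were applied by inspecting that object. For example:

$ oc get proxy cluster -o yaml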

18.9.9.2. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

Note

After installation, you cannot modify these parameters in the install-config.yaml file.

18.9.9.2.1. Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 18.40. Required parameters
ParameterDescriptionValues

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
18.9.9.2.2. Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 18.41. Network parameters
ParameterDescriptionValues

networking

The configuration for the cluster network.

Object

Note

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Note

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

18.9.9.2.3. Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 18.42. Optional parameters
ParameterDescriptionValues

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Note

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

Note

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

Note

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

Important

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
18.9.9.2.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters

Additional RHOSP configuration parameters are described in the following table:

Table 18.43. Additional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The RHOSP external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The RHOSP flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.
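Because computeFlavor is deprecated, a sketch of the recommended alternative in install-config.yaml follows; the flavor name is a placeholder:

platform:
  openstack:
    defaultMachinePlatform:
      type: m1.xlarge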

18.9.9.2.5. Optional RHOSP configuration parameters

Optional RHOSP configuration parameters are described in the following table:

Table 18.44. Optional RHOSP parameters
ParameterDescriptionValues

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.

On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installer downloads the RHCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet

The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
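To find the UUID to use for platform.openstack.machinesSubnet, you can query the RHOSP Networking service. A brief sketch; the subnet name is an assumption:

$ openstack subnet list
$ openstack subnet show <subnet_name> -f value -c id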

18.9.9.3. Sample customized install-config.yaml file for restricted OpenStack installations

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

Important

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

18.9.10. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
    Note

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1
    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.
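If an installation later needs debugging, the same key lets you reach the nodes and collect bootstrap logs. A sketch; the node address and installation directory are placeholders:

$ ssh core@<node_ip>
$ ./openshift-install gather bootstrap --dir <installation_directory>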

18.9.11. Enabling access to the environment

At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.

You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

18.9.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.

Procedure

  1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>
    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also reach the cluster API by using the kubectl or oc CLI tools. You can access the user applications by using the additional entries that point to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP
    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
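For example, a fragment of the resulting install-config.yaml might look like the following; the network name and addresses are illustrative placeholders:

platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: 203.0.113.23
    ingressFloatingIP: 203.0.113.19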

Tip

You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

18.9.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

  • platform.openstack.ingressFloatingIP
  • platform.openstack.apiFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

Note

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.

18.9.12. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ 1
        --log-level=info 2
    1
    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2
    To view different installation details, specify warn, debug, or error instead of info.
    Note

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    Note

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    Important
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
    Important

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

18.9.13. Verifying cluster status

You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

18.9.14. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

Additional resources

  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

18.9.15. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.

18.9.16. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

18.9.17. Next steps

18.10. OpenStack cloud configuration reference guide

A cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). Use the following parameters in a cloud-provider configuration manifest file to configure your cluster.

18.10.1. OpenStack cloud provider options

The cloud provider configuration, typically stored as a file named cloud.conf, controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).

You can create a valid cloud.conf file by specifying the following options in it.

18.10.1.1. Global options

The following options are used for RHOSP CCM authentication with the RHOSP Identity service, also known as Keystone. They are similar to the global options that you can set by using the openstack CLI.

OptionDescription

auth-url

The RHOSP Identity service URL. For example, http://128.110.154.166/identity.

ca-file

Optional. The CA certificate bundle file for communication with the RHOSP Identity service. If you use the HTTPS protocol with the Identity service URL, this option is required.

domain-id

The Identity service user domain ID.

Leave this option unset if you are using Identity service application credentials.

domain-name

The Identity service user domain name.

This option is not required if you set domain-id.

tenant-id

The Identity service project ID. Leave this option unset if you are using Identity service application credentials.

In version 3 of the Identity API, which changed the identifier tenant to project, the value of tenant-id is automatically mapped to the project construct in the API.

tenant-name

The Identity service project name.

username

The Identity service user name.

Leave this option unset if you are using Identity service application credentials.

password

The Identity service user password.

Leave this option unset if you are using Identity service application credentials.

region

The Identity service region name.

trust-id

The Identity service trust ID. A trust represents the authorization of a user, or trustor, to delegate roles to another user, or trustee. Optionally, a trust authorizes the trustee to impersonate the trustor. You can find available trusts by querying the /v3/OS-TRUST/trusts endpoint of the Identity service API.
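A minimal sketch of the [Global] section of a cloud.conf file that uses the options above; all values are placeholders for your environment:

[Global]
auth-url = https://<identity_service_url>/v3
username = <username>
password = <password>
tenant-name = <project_name>
domain-name = <domain_name>
region = <region_name>
# Required if auth-url uses HTTPS with a custom certificate authority
ca-file = <path_to_ca_bundle>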

18.10.1.2. Load balancer options

The cloud provider supports several load balancer options for deployments that use Octavia.

OptionDescription

use-octavia

Whether or not to use Octavia for the LoadBalancer type of the service implementation rather than Neutron-LBaaS. The default value is true.

floating-network-id

Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation.

lb-method

The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN, LEAST_CONNECTIONS, or SOURCE_IP. The default value is ROUND_ROBIN.

For the OVN provider, only the SOURCE_IP_PORT algorithm is supported.

For the Amphora provider, if you use the LEAST_CONNECTIONS or SOURCE_IP methods, set the create-monitor option to true in the cloud-provider-config config map in the openshift-config namespace, and set externalTrafficPolicy: Local on the load-balancer type service to allow enforcement of the balancing algorithm on client-to-service-endpoint connections.

lb-provider

Optional. Used to specify the provider of the load balancer, for example, amphora or octavia. Only the Amphora and Octavia providers are supported.

lb-version

Optional. The load balancer API version. Only "v2" is supported.

subnet-id

The ID of the Networking service subnet on which load balancer VIPs are created.

create-monitor

Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local. The default value is false.

This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider.

monitor-delay

The interval in seconds by which probes are sent to members of the load balancer. The default value is 5.

monitor-max-retries

The number of successful checks that are required to change the operating status of a load balancer member to ONLINE. The valid range is 1 to 10, and the default value is 1.

monitor-timeout

The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3.
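A sketch of a [LoadBalancer] section that combines these options; the UUIDs are placeholders and the other values are examples, not recommendations:

[LoadBalancer]
use-octavia = true
lb-provider = amphora
lb-method = ROUND_ROBIN
floating-network-id = <external_network_uuid>
subnet-id = <vip_subnet_uuid>
create-monitor = true
monitor-delay = 5
monitor-timeout = 3
monitor-max-retries = 1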

18.10.1.3. Metadata options

OptionDescription

search-order

This configuration key affects the way that the provider retrieves metadata that relates to the instances in which it runs. The default value of configDrive,metadataService results in the provider retrieving instance metadata from the configuration drive first if available, and then the metadata service. Alternative values are:

  • configDrive: Only retrieve instance metadata from the configuration drive.
  • metadataService: Only retrieve instance metadata from the metadata service.
  • metadataService,configDrive: Retrieve instance metadata from the metadata service first if available, and then retrieve instance metadata from the configuration drive.
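For example, a [Metadata] section that states the default search order explicitly:

[Metadata]
search-order = configDrive,metadataService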

18.11. Uninstalling a cluster on OpenStack

You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP).

18.11.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Note

After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.

Prerequisites

  • Have a copy of the installation program that you used to deploy the cluster.
  • Have the files that the installation program generated when you created your cluster.

Procedure

  1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

    $ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info 1 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different details, specify warn, debug, or error instead of info.
    Note

    You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

  1. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

18.12. Uninstalling a cluster on RHOSP from your own infrastructure

You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure.

18.12.1. Downloading playbook dependencies

The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them.

Note

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

  • Python 3 is installed on your machine.

Procedure

  1. On a command line, add the repositories:

    1. Register with Red Hat Subscription Manager:

      $ sudo subscription-manager register # If not done already
    2. Pull the latest subscription data:

      $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. Disable the current repositories:

      $ sudo subscription-manager repos --disable=* # If not done already
    4. Add the required repositories:

      $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  2. Install the modules:

    $ sudo yum install python3-openstackclient ansible python3-openstacksdk
  3. Ensure that the python command points to python3:

    $ sudo alternatives --set python /usr/bin/python3

18.12.2. Removing a cluster from RHOSP that uses your own infrastructure

You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks.

Prerequisites

  • Python 3 is installed on your machine.
  • You downloaded the modules in "Downloading playbook dependencies."
  • You have the playbooks that you used to install the cluster.
  • You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file.
  • All of the playbooks are in a common directory.

Procedure

  1. On a command line, run the playbooks that you downloaded:

    $ ansible-playbook -i inventory.yaml  \
    	down-bootstrap.yaml      \
    	down-control-plane.yaml  \
    	down-compute-nodes.yaml  \
    	down-load-balancers.yaml \
    	down-network.yaml        \
    	down-security-groups.yaml
  2. Remove any DNS record changes you made for the OpenShift Container Platform installation.

OpenShift Container Platform is removed from your infrastructure.
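As an optional check, you can confirm that no cluster resources remain in the RHOSP project. A sketch; the grep pattern assumes that your resources are named after the cluster:

$ openstack server list | grep <cluster_name>
$ openstack port list | grep <cluster_name>
$ openstack security group list | grep <cluster_name>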
