Chapter 6. Installing a cluster on AWS in a restricted network


In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC).

6.1. Prerequisites

  • You reviewed details about the OpenShift Container Platform installation and update processes.
  • You read the documentation on selecting a cluster installation method and preparing it for users.
  • You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform.

    Important

    Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

  • You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:

    • Contains the mirror registry
    • Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
  • You configured an AWS account to host the cluster.

    Important

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You can check which identity and credential type your stored profile resolves to, as shown in the example after this list.

  • You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
  • If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

    Note

    If you are configuring a proxy, be sure to also review this site list.
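
One way to confirm that your stored profile uses long-term credentials is to check which identity it resolves to and whether a session token is present. A minimal sketch, assuming the default profile and an installed AWS CLI; these are standard AWS CLI calls, not part of the installation program:

    $ aws sts get-caller-identity
    $ aws configure get aws_session_token

If the second command returns a value, the profile stores temporary credentials and is not suitable for the installation.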

6.2. About installations in restricted networks

In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
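
For reference, a minimal sketch of mirroring the release content with oc adm release mirror, assuming a mirror host that can reach both the internet and your registry; the placeholder names match the examples later in this chapter, and the full procedure is described in the mirroring documentation:

    $ oc adm release mirror -a <pull_secret_file> \
        --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
        --to=<mirror_host_name>:5000/<repo_name>/release \
        --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<version>-x86_64

When it completes, the command prints an imageContentSources stanza, which you later add to the install-config.yaml file.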

6.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

  • The ClusterVersion status includes an Unable to retrieve available updates error.
  • By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

6.3. About using a custom VPC

In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and similar settings on your behalf. You must configure networking for the subnets that you install your cluster into yourself.

6.3.1. Requirements for using your VPC

When you use an existing VPC, the installation program does not create the following components:

  • Internet gateways
  • NAT gateways
  • Subnets
  • Route tables
  • VPCs
  • VPC DHCP options
  • VPC endpoints
Note

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.

The installation program cannot:

  • Subdivide network ranges for the cluster to use.
  • Set route tables for the subnets.
  • Set VPC options like DHCP.

You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
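
For example, a minimal sketch of associating an existing route table with a subnet and adding a default route to an internet gateway by using the AWS CLI; the resource IDs are placeholders and the exact routes depend on your topology:

    $ aws ec2 associate-route-table --route-table-id <route_table_id> --subnet-id <subnet_id>
    $ aws ec2 create-route --route-table-id <route_table_id> \
        --destination-cidr-block 0.0.0.0/0 --gateway-id <internet_gateway_id>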

Your VPC must meet the following characteristics:

  • The VPC must not use the kubernetes.io/cluster/.*: owned, Name, and openshift.io/cluster tags.

    The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails.

  • You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

    If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode.
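
You can check and, if necessary, set these attributes with the AWS CLI. A minimal sketch with a placeholder VPC ID; note that each modify-vpc-attribute call sets only one attribute:

    $ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
    $ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames "{\"Value\":true}"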

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:

6.3.1.1. Option 1: Create VPC endpoints

Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:

  • ec2.<aws_region>.amazonaws.com
  • elasticloadbalancing.<aws_region>.amazonaws.com
  • s3.<aws_region>.amazonaws.com

With this option, network traffic remains private between your VPC and the required AWS services.
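
A minimal sketch of creating these endpoints with the AWS CLI, assuming a gateway endpoint for S3 and interface endpoints for the other services; the IDs are placeholders:

    $ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> \
        --vpc-endpoint-type Gateway \
        --service-name com.amazonaws.<aws_region>.s3 \
        --route-table-ids <route_table_id>
    $ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> \
        --vpc-endpoint-type Interface \
        --service-name com.amazonaws.<aws_region>.ec2 \
        --subnet-ids <subnet_id_1> <subnet_id_2> \
        --security-group-ids <security_group_id> \
        --private-dns-enabled

Repeat the interface endpoint command with the elasticloadbalancing service name to cover Elastic Load Balancing.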

6.3.1.2. Option 2: Create a proxy without VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.

6.3.1.3. Option 3: Create a proxy with VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:

  • ec2.<aws_region>.amazonaws.com
  • elasticloadbalancing.<aws_region>.amazonaws.com
  • s3.<aws_region>.amazonaws.com

When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services.

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

The following list describes each required component, its AWS resource types, and its purpose.

VPC

  • AWS::EC2::VPC
  • AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets

  • AWS::EC2::Subnet
  • AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones, and they must be associated with appropriate Ingress rules.

Internet gateway

  • AWS::EC2::InternetGateway
  • AWS::EC2::VPCGatewayAttachment
  • AWS::EC2::RouteTable
  • AWS::EC2::Route
  • AWS::EC2::SubnetRouteTableAssociation
  • AWS::EC2::NatGateway
  • AWS::EC2::EIP

You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Network access control

  • AWS::EC2::NetworkAcl
  • AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

  • Port 80: Inbound HTTP traffic
  • Port 443: Inbound HTTPS traffic
  • Port 22: Inbound SSH traffic
  • Ports 1024-65535: Inbound ephemeral traffic
  • Ports 0-65535: Outbound ephemeral traffic

Private subnets

  • AWS::EC2::Subnet
  • AWS::EC2::RouteTable
  • AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

6.3.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the subnets that you specify exist.
  • You provide private subnets.
  • The subnet CIDRs belong to the machine CIDR that you specified.
  • You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
  • You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
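
Before you run the installation program, you can review these properties yourself. A minimal sketch using the AWS CLI, with placeholder subnet IDs:

    $ aws ec2 describe-subnets --subnet-ids <subnet_id_1> <subnet_id_2> \
        --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' --output table

Whether a subnet is public or private is determined by its routing, so also confirm the route table associations for each subnet.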

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

6.3.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

6.3.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

  • You can install multiple OpenShift Container Platform clusters in the same VPC.
  • ICMP ingress is allowed from the entire network.
  • TCP 22 ingress (SSH) is allowed to the entire network.
  • Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
  • Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

6.4. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
  • Access Quay.io to obtain the packages that are required to install your cluster.
  • Obtain the packages that are required to perform cluster updates.

6.5. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

Note

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

    1
    Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    Note

    If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    Note

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"

      Example output

      Agent pid 31874

      Note

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> 1

    1
    Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

    Example output

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

6.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

  • You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
  • You have the imageContentSources values that were generated during mirror registry creation.
  • You have obtained the contents of the certificate for your mirror registry.

Procedure

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1

      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

        Note

        Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:

        $ rm -rf ~/.powervs
    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select AWS as the platform to target.
      3. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
      4. Select the AWS region to deploy the cluster to.
      5. Select the base domain for the Route 53 service that you configured for your cluster.
      6. Enter a descriptive name for your cluster.
  2. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network.

    1. Update the pullSecret value to contain the authentication information for your registry:

      pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

      For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.

    2. Add the additionalTrustBundle parameter and value.

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----

      The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. A sketch for retrieving a self-signed registry certificate appears after this procedure.

    3. Define the subnets for the VPC to install the cluster in:

      subnets:
      - subnet-1
      - subnet-2
      - subnet-3
    4. Add the image content resources, which resemble the following YAML excerpt:

      imageContentSources:
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: registry.redhat.io/ocp/release

      For these values, use the imageContentSources that you recorded during mirror registry creation.

    5. Optional: Set the publishing strategy to Internal:

      publish: Internal

      By setting this option, you create an internal Ingress Controller and a private load balancer.

  3. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
  4. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
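
If you do not have the mirror registry certificate at hand, one way to retrieve a self-signed registry certificate for the additionalTrustBundle value is with openssl; a minimal sketch, assuming the registry listens on port 5000:

    $ openssl s_client -connect <mirror_host_name>:5000 -showcerts </dev/null 2>/dev/null \
        | openssl x509 -outform PEM > registry-ca.pem

For a registry that uses a certificate signed by your own certificate authority, provide the CA certificate itself rather than the server certificate.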

6.6.1. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 6.1. Minimum resource requirements

Machine        Operating System               vCPU [1]   Virtual RAM   Storage   IOPS [2]
Bootstrap      RHCOS                          4          16 GB         100 GB    300
Control plane  RHCOS                          4          16 GB         100 GB    300
Compute        RHCOS, RHEL 8.6 and later [3]  2          8 GB          100 GB    300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, an instance with two threads per core, two cores, and one socket provides (2 × 2) × 1 = 4 vCPUs.
  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
  3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note

As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:

  • x86-64 architecture requires x86-64-v2 ISA
  • ARM64 architecture requires ARMv8.0-A ISA
  • IBM Power architecture requires Power 9 ISA
  • s390x architecture requires z14 ISA

For more information, see RHEL Architectures.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.

6.6.2. Sample customized install-config.yaml file for AWS

You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      metadataService:
        authentication: Optional 7
      type: m6i.xlarge
  replicas: 3
compute: 8
- hyperthreading: Enabled 9
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 10
      metadataService:
        authentication: Optional 11
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 12
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 13
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 14
    propagateUserTags: true 15
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 16
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-0c5d3e03c0ab9b19a 17
    serviceEndpoints: 18
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
    hostedZone: Z3URY6TWQ91KVV 19
fips: false 20
sshKey: ssh-ed25519 AAAA... 21
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' 22
additionalTrustBundle: | 23
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
imageContentSources: 24
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1 12 14
Required. The installation program prompts you for this value.
2
Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide.
3 8 15
If you do not provide these parameters and values, the installation program provides the default value.
4
The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
5 9
Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 10
To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
7 11
Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
Note

The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets.

13
The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
16
If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
17
The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
18
The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
19
The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
20
Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

21
You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

22
For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
23
Provide the contents of the certificate file that you used for your mirror registry.
24
Provide the imageContentSources section from the output of the command to mirror the repository.

6.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

  • You have an existing install-config.yaml file.
  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> 1
      httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
      noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3
    additionalTrustBundle: | 4
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
    1
    A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2
    A proxy URL to use for creating HTTPS connections outside the cluster.
    3
    A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
    4
    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    5
    Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

6.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc.

6.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the architecture from the Product Variant drop-down list.
  3. Select the appropriate version from the Version drop-down list.
  4. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file.
  5. Unpack the archive:

    $ tar xvf <file>
  6. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

6.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file.
  4. Unzip the archive with a ZIP program.
  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

Verification

  • After you install the OpenShift CLI, it is available using the oc command:

    C:\> oc <command>

6.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
  2. Select the appropriate version from the Version drop-down list.
  3. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file.

    Note

    For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry.

  4. Unpack and unzip the archive.
  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

Verification

  • Verify your installation by using an oc command:

    $ oc <command>

6.8. Alternatives to storing administrator-level secrets in the kube-system project

By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:

6.8.1. Manually creating long-term credentials

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.

Procedure

  1. If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

    Sample configuration file snippet

    apiVersion: v1
    baseDomain: example.com
    credentialsMode: Manual
    # ...

  2. If you have not previously created installation manifest files, do so by running the following command:

    $ openshift-install create manifests --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  3. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  4. Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --included \ 1
      --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
      --to=<path_to_directory_for_credentials_requests> 3
    1
    The --included parameter includes only the manifests that your specific cluster configuration requires.
    2
    Specify the location of the install-config.yaml file.
    3
    Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

    This command creates a YAML file for each CredentialsRequest object.

    Sample CredentialsRequest object

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - effect: Allow
          action:
          - iam:GetUser
          - iam:GetUserPolicy
          - iam:ListAccessKeys
          resource: "*"
      ...

  5. Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

    Sample CredentialsRequest object with secrets

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - effect: Allow
          action:
          - s3:CreateBucket
          - s3:DeleteBucket
          resource: "*"
          ...
      secretRef:
        name: <component_secret>
        namespace: <component_namespace>
      ...

    Sample Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: <component_secret>
      namespace: <component_namespace>
    data:
      aws_access_key_id: <base64_encoded_aws_access_key_id>
      aws_secret_access_key: <base64_encoded_aws_secret_access_key>
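
    The data values must be base64 encoded. For example, a minimal sketch of encoding a credential value on Linux (the value shown is a placeholder):

    $ echo -n "<aws_access_key_id>" | base64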

Important

Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.

6.8.2. Configuring an AWS cluster to use short-term credentials

To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.

6.8.2.1. Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.

Note

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).
  • You have created an AWS account for the ccoctl utility to use with the following permissions:

    Example 6.1. Required AWS permissions

    Required iam permissions

    • iam:CreateOpenIDConnectProvider
    • iam:CreateRole
    • iam:DeleteOpenIDConnectProvider
    • iam:DeleteRole
    • iam:DeleteRolePolicy
    • iam:GetOpenIDConnectProvider
    • iam:GetRole
    • iam:GetUser
    • iam:ListOpenIDConnectProviders
    • iam:ListRolePolicies
    • iam:ListRoles
    • iam:PutRolePolicy
    • iam:TagOpenIDConnectProvider
    • iam:TagRole

    Required s3 permissions

    • s3:CreateBucket
    • s3:DeleteBucket
    • s3:DeleteObject
    • s3:GetBucketAcl
    • s3:GetBucketTagging
    • s3:GetObject
    • s3:GetObjectAcl
    • s3:GetObjectTagging
    • s3:ListBucket
    • s3:PutBucketAcl
    • s3:PutBucketPolicy
    • s3:PutBucketPublicAccessBlock
    • s3:PutBucketTagging
    • s3:PutObject
    • s3:PutObjectAcl
    • s3:PutObjectTagging

    Required cloudfront permissions

    • cloudfront:ListCloudFrontOriginAccessIdentities
    • cloudfront:ListDistributions
    • cloudfront:ListTagsForResource

    If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions:

    Example 6.2. Additional permissions for a private S3 bucket with CloudFront

    • cloudfront:CreateCloudFrontOriginAccessIdentity
    • cloudfront:CreateDistribution
    • cloudfront:DeleteCloudFrontOriginAccessIdentity
    • cloudfront:DeleteDistribution
    • cloudfront:GetCloudFrontOriginAccessIdentity
    • cloudfront:GetCloudFrontOriginAccessIdentityConfig
    • cloudfront:GetDistribution
    • cloudfront:TagResource
    • cloudfront:UpdateDistribution
    Note

    These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command.

Procedure

  1. Set a variable for the OpenShift Container Platform release image by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
    Note

    Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

  3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

    $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
  4. Change the permissions to make ccoctl executable by running the following command:

    $ chmod 775 ccoctl

Verification

  • To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

    $ ./ccoctl.rhel9

    Example output

    OpenShift credentials provisioning tool
    
    Usage:
      ccoctl [command]
    
    Available Commands:
      alibabacloud Manage credentials objects for alibaba cloud
      aws          Manage credentials objects for AWS cloud
      azure        Manage credentials objects for Azure
      gcp          Manage credentials objects for Google cloud
      help         Help about any command
      ibmcloud     Manage credentials objects for IBM Cloud
      nutanix      Manage credentials objects for Nutanix
    
    Flags:
      -h, --help   help for ccoctl
    
    Use "ccoctl [command] --help" for more information about a command.

6.8.2.2. Creating AWS resources with ccoctl

You have the following options when creating AWS resources:

  • You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command.
  • If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually.
6.8.2.2.1. Creating AWS resources with a single command

If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources.

Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually".

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

  • Extracted and prepared the ccoctl binary.

Procedure

  1. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --included \ 1
      --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
      --to=<path_to_directory_for_credentials_requests> 3
    1
    The --included parameter includes only the manifests that your specific cluster configuration requires.
    2
    Specify the location of the install-config.yaml file.
    3
    Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
    Note

    This command might take a few moments to run.

  3. Use the ccoctl tool to process all CredentialsRequest objects by running the following command:

    $ ccoctl aws create-all \
      --name=<name> \ 1
      --region=<aws_region> \ 2
      --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3
      --output-dir=<path_to_ccoctl_output_dir> \ 4
      --create-private-s3-bucket 5
    1
    Specify the name used to tag any cloud resources that are created for tracking.
    2
    Specify the AWS region in which cloud resources will be created.
    3
    Specify the directory containing the files for the component CredentialsRequest objects.
    4
    Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    5
    Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.
    Note

    If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

Verification

  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ls <path_to_ccoctl_output_dir>/manifests

    Example output

    cluster-authentication-02-config.yaml
    openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
    openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
    openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
    openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
    openshift-image-registry-installer-cloud-credentials-credentials.yaml
    openshift-ingress-operator-cloud-credentials-credentials.yaml
    openshift-machine-api-aws-cloud-credentials-credentials.yaml

    You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
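
    For example, a minimal sketch that lists roles whose names contain the <name> value that you passed to ccoctl, assuming the AWS CLI is configured for the same account:

    $ aws iam list-roles --query "Roles[?contains(RoleName, '<name>')].[RoleName,Arn]" --output table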

6.8.2.2.2. Creating AWS resources individually

You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments.

Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command".

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.

Prerequisites

  • Extract and prepare the ccoctl binary.

Procedure

  1. Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command:

    $ ccoctl aws create-key-pair

    Example output

    2021/04/13 11:01:02 Generating RSA keypair
    2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private
    2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
    2021/04/13 11:01:03 Copying signing key for use by installer

    where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files.

    This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key.

  2. Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command:

    $ ccoctl aws create-identity-provider \
      --name=<name> \ 1
      --region=<aws_region> \ 2
      --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3

    1
    <name> is the name used to tag any cloud resources that are created for tracking.
    2
    <aws_region> is the AWS region in which cloud resources will be created.
    3
    <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated.

    Example output

    2021/04/13 11:16:09 Bucket <name>-oidc created
    2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated
    2021/04/13 11:16:10 Reading public key
    2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated
    2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com

    where openid-configuration is a discovery document and keys.json is a JSON web key set file.

    This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml. This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.

  3. Create IAM roles for each component in the cluster:

    1. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

      $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
    2. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image:

      $ oc adm release extract \
        --from=$RELEASE_IMAGE \
        --credentials-requests \
        --included \ 1
        --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
        --to=<path_to_directory_for_credentials_requests> 3
      1
      The --included parameter includes only the manifests that your specific cluster configuration requires.
      2
      Specify the location of the install-config.yaml file.
      3
      Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
    3. Use the ccoctl tool to process all CredentialsRequest objects by running the following command:

      $ ccoctl aws create-iam-roles \
        --name=<name> \
        --region=<aws_region> \
        --credentials-requests-dir=<path_to_credentials_requests_directory> \
        --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
      Note

      For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter.

      If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

      For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.
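
      If you want to inspect the trust policy that ccoctl attached to one of these roles, you can query it with the AWS CLI. This is a sketch only: the role name below is illustrative, because ccoctl derives the actual role names from the --name value and the component credentials:

      # The role name below is illustrative; substitute a role name reported by ccoctl.
      $ aws iam get-role \
          --role-name <name>-openshift-machine-api-aws-cloud-credentials \
          --query 'Role.AssumeRolePolicyDocument'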

Verification

  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ls <path_to_ccoctl_output_dir>/manifests

    Example output

    cluster-authentication-02-config.yaml
    openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
    openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
    openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
    openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
    openshift-image-registry-installer-cloud-credentials-credentials.yaml
    openshift-ingress-operator-cloud-credentials-credentials.yaml
    openshift-machine-api-aws-cloud-credentials-credentials.yaml

    You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
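
    For example, the following AWS CLI query is one way to list the roles whose names contain the --name value that you passed to ccoctl. This is a sketch; adjust the filter to match your naming:

    $ aws iam list-roles \
        --query "Roles[?contains(RoleName, '<name>')].RoleName" \
        --output text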

6.8.2.3. Incorporating the Cloud Credential Operator utility manifests

To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.

Prerequisites

  • You have configured an account with the cloud platform that hosts your cluster.
  • You have configured the Cloud Credential Operator utility (ccoctl).
  • You have created the cloud provider resources that are required for your cluster with the ccoctl utility.

Procedure

  1. If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

    Sample configuration file snippet

    apiVersion: v1
    baseDomain: example.com
    credentialsMode: Manual
    # ...

  2. If you have not previously created installation manifest files, do so by running the following command:

    $ openshift-install create manifests --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  3. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:

    $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
  4. Copy the tls directory that contains the private key to the installation directory:

    $ cp -a /<path_to_ccoctl_output_dir>/tls .
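
    Optionally, confirm that both sets of files are in place before you deploy the cluster. This check is a sketch and assumes you run it from the same directory where you ran the previous cp commands:

    $ ls ./manifests ./tls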

6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

  • You have configured an account with the cloud platform that hosts your cluster.
  • You have the OpenShift Container Platform installation program and the pull secret for your cluster.
  • You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure

  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \
        --log-level=info

    For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    To view different installation details, specify warn, debug, or error instead of info.
  2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

    Note

    The elevated permissions provided by the AdministratorAccess policy are required only during installation.
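
    For example, if the AdministratorAccess policy is attached directly to an IAM user, one way to detach it is with the AWS CLI. The user name is a placeholder; adjust the command if the policy is attached through a group or role instead:

    # <iam_user_name> is a placeholder for the user that performed the installation.
    $ aws iam detach-user-policy \
        --user-name <iam_user_name> \
        --policy-arn arn:aws:iam::aws:policy/AdministratorAccess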

Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
  • Credential information also outputs to <installation_directory>/.openshift_install.log.
Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

6.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • You deployed an OpenShift Container Platform cluster.
  • You installed the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

6.11. Disabling the default OperatorHub catalog sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
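
    As an optional check, you can confirm that the default catalog sources are no longer listed. This is a sketch; the output depends on any custom catalog sources that you have added:

    $ oc get catalogsources -n openshift-marketplace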
Tip

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.

6.12. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

6.13. Next steps
