1.9. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure

In OpenShift Container Platform version 4.5, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content.

Important

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs.

The steps for performing a user-provisioned infrastructure installation are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.9.1. Prerequisites

1.9.2. Configuring your GCP project

Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it.

1.9.2.1. Creating a GCP project

To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster.

Procedure

  • Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation.

    Important

    Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing.

1.9.2.2. Enabling API services in GCP

Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation.

Prerequisites

  • You created a project to host your cluster.

Procedure

  • Enable the following required API services in the project that hosts your cluster. See Enabling services in the GCP documentation, or use the gcloud command shown after the table.

    Table 1.37. Required API services

    API service                                Console service name
    Compute Engine API                         compute.googleapis.com
    Google Cloud APIs                          cloudapis.googleapis.com
    Cloud Resource Manager API                 cloudresourcemanager.googleapis.com
    Google DNS API                             dns.googleapis.com
    IAM Service Account Credentials API        iamcredentials.googleapis.com
    Identity and Access Management (IAM) API   iam.googleapis.com
    Service Management API                     servicemanagement.googleapis.com
    Service Usage API                          serviceusage.googleapis.com
    Google Cloud Storage JSON API              storage-api.googleapis.com
    Cloud Storage                              storage-component.googleapis.com
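
    For example, you can enable all of these services with a single gcloud command. This is a sketch rather than the official procedure; it assumes that the Cloud SDK is installed and that <project_id> is the project that hosts your cluster:

    $ gcloud services enable \
        compute.googleapis.com cloudapis.googleapis.com cloudresourcemanager.googleapis.com \
        dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com \
        servicemanagement.googleapis.com serviceusage.googleapis.com \
        storage-api.googleapis.com storage-component.googleapis.com \
        --project <project_id>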

1.9.2.3. Configuring DNS for GCP

To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that hosts the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

  1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source.

    Note

    If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains.

  2. Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation.

    Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

  3. Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation, or use the gcloud command shown after this procedure.

    You typically have four name servers.

  4. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers.
  5. If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation.
  6. If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain. This process might include a request to your company’s IT department or the division that controls the root domain and DNS services for your company.
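
For step 3, as an alternative to the console lookup, you can list the name servers with the gcloud CLI. This is a sketch; <zone_name> is a placeholder for the name of the public zone that you created:

    $ gcloud dns managed-zones describe <zone_name> --format="value(nameServers)"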

1.9.2.4. GCP account limits

The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default quotas do not affect your ability to install a default OpenShift Container Platform cluster.

A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys.

Table 1.38. GCP resources used in a default cluster

Service            Component    Location   Total resources required   Resources removed after bootstrap
Service account    IAM          Global     5                          0
Firewall rules     Networking   Global     11                         1
Forwarding rules   Compute      Global     2                          0
Health checks      Compute      Global     2                          0
Images             Compute      Global     1                          0
Networks           Networking   Global     1                          0
Routers            Networking   Global     1                          0
Routes             Networking   Global     2                          0
Subnetworks        Compute      Global     2                          0
Target pools       Networking   Global     2                          0

Note

If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region.

Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient.

If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit:

  • asia-east2
  • asia-northeast2
  • asia-south1
  • australia-southeast1
  • europe-north1
  • europe-west2
  • europe-west3
  • europe-west6
  • northamerica-northeast1
  • southamerica-east1
  • us-west2

You can increase resource quotas from the GCP console, but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster.
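
As a quick check before you plan quota increases, you can review the current quotas and usage for a region with the gcloud CLI. This is a sketch that assumes the Cloud SDK is configured for your project; <region> is a placeholder:

    $ gcloud compute regions describe <region> --format="json(quotas)"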

1.9.2.5. Creating a service account in GCP

OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one.

Prerequisites

  • You created a project to host your cluster.

Procedure

  1. Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation.
  2. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources.

    Note

    While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable.

  3. Create the service account key in JSON format. See Creating service account keys in the GCP documentation.

    The service account key is required to create a cluster.
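
The following gcloud commands are a sketch of the same procedure performed from the command line; the console steps in the linked GCP documentation are equivalent. <account_name>, <project_id>, and <key_file_name> are placeholders, and the Owner role binding is only one of the options described above:

    $ gcloud iam service-accounts create <account_name> --display-name="<account_name>"
    $ gcloud projects add-iam-policy-binding <project_id> \
        --member="serviceAccount:<account_name>@<project_id>.iam.gserviceaccount.com" \
        --role="roles/owner"
    $ gcloud iam service-accounts keys create <key_file_name>.json \
        --iam-account=<account_name>@<project_id>.iam.gserviceaccount.com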

1.9.2.5.1. Required GCP permissions

When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. To deploy an OpenShift Container Platform cluster, the service account requires the following permissions. If you deploy your cluster into an existing VPC, the service account does not require certain networking permissions, which are noted in the following lists:

Required roles for the installation program

  • Compute Admin
  • Security Admin
  • Service Account Admin
  • Service Account User
  • Storage Admin

Required roles for creating network resources during installation

  • DNS Administrator

Required roles for user-provisioned GCP infrastructure

  • Deployment Manager Editor
  • Service Account Key Admin

Optional roles

For the cluster to create new limited credentials for its Operators, add the following role:

  • Service Account Key Admin

The roles are applied to the service accounts that the control plane and compute machines use:

Table 1.39. GCP service account permissions

Account         Roles
Control Plane   roles/compute.instanceAdmin
                roles/compute.networkAdmin
                roles/compute.securityAdmin
                roles/storage.admin
                roles/iam.serviceAccountUser
Compute         roles/compute.viewer
                roles/storage.admin

1.9.2.6. Supported GCP regions

You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions:

  • asia-east1 (Changhua County, Taiwan)
  • asia-east2 (Hong Kong)
  • asia-northeast1 (Tokyo, Japan)
  • asia-northeast2 (Osaka, Japan)
  • asia-south1 (Mumbai, India)
  • asia-southeast1 (Jurong West, Singapore)
  • australia-southeast1 (Sydney, Australia)
  • europe-north1 (Hamina, Finland)
  • europe-west1 (St. Ghislain, Belgium)
  • europe-west2 (London, England, UK)
  • europe-west3 (Frankfurt, Germany)
  • europe-west4 (Eemshaven, Netherlands)
  • europe-west6 (Zürich, Switzerland)
  • northamerica-northeast1 (Montréal, Québec, Canada)
  • southamerica-east1 (São Paulo, Brazil)
  • us-central1 (Council Bluffs, Iowa, USA)
  • us-east1 (Moncks Corner, South Carolina, USA)
  • us-east4 (Ashburn, Northern Virginia, USA)
  • us-west1 (The Dalles, Oregon, USA)
  • us-west2 (Los Angeles, California, USA)

1.9.2.7. Installing and configuring CLI tools for GCP

To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP.

Prerequisites

  • You created a project to host your cluster.
  • You created a service account and granted it the required permissions.

Procedure

  1. Install the following binaries in $PATH:

    • gcloud
    • gsutil

    See Install the latest Cloud SDK version in the GCP documentation.

  2. Authenticate using the gcloud tool with your configured service account.

    See Authorizing with a service account in the GCP documentation.
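
For example, a minimal sketch of the authentication step, assuming that your service account key is stored in <service_account_key_file> and that <project_id> is the project that hosts your cluster:

    $ gcloud auth activate-service-account --key-file=<service_account_key_file>
    $ gcloud config set project <project_id>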

1.9.3. Creating the installation files for GCP

To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files.

1.9.3.1. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP).

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

  1. Create the install-config.yaml file.

    1. Run the following command:

      $ ./openshift-install create install-config --dir=<installation_directory> 1
      1
      For <installation_directory>, specify the directory name to store the files that the installation program creates.
      Important

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select gcp as the platform to target.
      3. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
      4. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
      5. Select the region to deploy the cluster to.
      6. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
      7. Enter a descriptive name for your cluster.
      8. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.9.3.2. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

Important

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

  • Obtain the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.
  • Create the install-config.yaml installation configuration file.

Procedure

  1. Generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir=<installation_directory> 1

    Example output

    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings

    1
    For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

    Because you create your own compute machines later in the installation process, you can safely ignore this warning.

  2. Remove the Kubernetes manifest files that define the control plane machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

    By removing these files, you prevent the cluster from automatically generating control plane machines.

  3. Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage the worker machines yourself, you do not need to initialize these machines.

  4. Modify the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
    2. Locate the mastersSchedulable parameter and set its value to false.
    3. Save and exit the file.
  5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      baseDomain: example.openshift.com
      privateZone: 1
        id: mycluster-100419-private-zone
      publicZone: 2
        id: example.openshift.com
    status: {}
    1 2
    Remove this section completely.

    If you do so, you must add ingress DNS records manually in a later step.

  6. Obtain the Ignition config files:

    $ ./openshift-install create ignition-configs --dir=<installation_directory> 1
    1
    For <installation_directory>, specify the same installation directory.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

1.9.4. Exporting common variables

1.9.4.1. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Generate the Ignition config files for your cluster.
  • Install the jq package.

Procedure

  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

    Example output

    openshift-vw9j6 1

    1
    The output of this command is your cluster name and a random string.

1.9.4.2. Exporting common variables for Deployment Manager templates

You must export a common set of variables that are used with the provided Deployment Manager templates to assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP).

Note

Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures.

Prerequisites

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
  • Generate the Ignition config files for your cluster.
  • Install the jq package.

Procedure

  1. Export the following common variables to be used by the provided Deployment Manager templates:

    $ export BASE_DOMAIN='<base_domain>'
    $ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>'
    $ export NETWORK_CIDR='10.0.0.0/16'
    $ export MASTER_SUBNET_CIDR='10.0.0.0/19'
    $ export WORKER_SUBNET_CIDR='10.0.32.0/19'
    
    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    $ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json`
    $ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
    $ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`
    $ export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.

1.9.5. Creating a VPC in GCP

You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.

Procedure

  1. Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires.
  2. Create a 01_vpc.yaml resource definition file:

    $ cat <<EOF >01_vpc.yaml
    imports:
    - path: 01_vpc.py
    
    resources:
    - name: cluster-vpc
      type: 01_vpc.py
      properties:
        infra_id: '${INFRA_ID}' 1
        region: '${REGION}' 2
        master_subnet_cidr: '${MASTER_SUBNET_CIDR}' 3
        worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' 4
    EOF
    1
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    2
    region is the region to deploy the cluster into, for example us-central1.
    3
    master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/19.
    4
    worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.32.0/19.
  3. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml
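
Optionally, you can verify that the deployment and the subnets were created. This check is a sketch and is not part of the required procedure:

    $ gcloud deployment-manager deployments describe ${INFRA_ID}-vpc
    $ gcloud compute networks subnets list --filter="name~${INFRA_ID}" --regions=${REGION}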

1.9.5.1. Deployment Manager template for the VPC

You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster:

Example 1.19. 01_vpc.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-network',
        'type': 'compute.v1.network',
        'properties': {
            'region': context.properties['region'],
            'autoCreateSubnetworks': False
        }
    }, {
        'name': context.properties['infra_id'] + '-master-subnet',
        'type': 'compute.v1.subnetwork',
        'properties': {
            'region': context.properties['region'],
            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
            'ipCidrRange': context.properties['master_subnet_cidr']
        }
    }, {
        'name': context.properties['infra_id'] + '-worker-subnet',
        'type': 'compute.v1.subnetwork',
        'properties': {
            'region': context.properties['region'],
            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
            'ipCidrRange': context.properties['worker_subnet_cidr']
        }
    }, {
        'name': context.properties['infra_id'] + '-router',
        'type': 'compute.v1.router',
        'properties': {
            'region': context.properties['region'],
            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
            'nats': [{
                'name': context.properties['infra_id'] + '-nat-master',
                'natIpAllocateOption': 'AUTO_ONLY',
                'minPortsPerVm': 7168,
                'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
                'subnetworks': [{
                    'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)',
                    'sourceIpRangesToNat': ['ALL_IP_RANGES']
                }]
            }, {
                'name': context.properties['infra_id'] + '-nat-worker',
                'natIpAllocateOption': 'AUTO_ONLY',
                'minPortsPerVm': 512,
                'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
                'subnetworks': [{
                    'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)',
                    'sourceIpRangesToNat': ['ALL_IP_RANGES']
                }]
            }]
        }
    }]

    return {'resources': resources}

1.9.6. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files from the machine config server.

During the initial boot, the machines require either a DHCP server or static IP addresses to be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.

It is recommended that you use a DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.

The Kubernetes API server, which runs on each master node after a successful cluster installation, must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 1.40. All machines to all machines

Protocol   Port          Description
ICMP       N/A           Network reachability tests
TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn
UDP        4789          VXLAN and Geneve
           6081          VXLAN and Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
TCP/UDP    30000-32767   Kubernetes node port

Table 1.41. All machines to control plane

Protocol   Port   Description
TCP        6443   Kubernetes API

Table 1.42. Control plane machines to control plane machines

Protocol   Port        Description
TCP        2379-2380   etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.
    Note

    Session persistence is not required for the API load balancer to function properly.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.43. API load balancer

    Port: 6443
      Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.
      Internal: X
      External: X
      Description: Kubernetes API server

    Port: 22623
      Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.
      Internal: X
      External: -
      Description: Machine config server

    Note

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within that time frame after /readyz returns an error or becomes healthy, the endpoint must be removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration.

  2. Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    Configure the following ports on both the front and back of the load balancers:

    Table 1.44. Application Ingress load balancer

    Port: 443
      Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.
      Internal: X
      External: X
      Description: HTTPS traffic

    Port: 80
      Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.
      Internal: X
      External: X
      Description: HTTP traffic

Tip

If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.

NTP configuration

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

1.9.7. Creating load balancers in GCP

You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.

Procedure

  1. Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires.
  2. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires.
  3. Export the variables that the deployment template uses:

    1. Export the cluster network location:

      $ export CLUSTER_NETWORK=(`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`)
    2. Export the control plane subnet location:

      $ export CONTROL_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`)
    3. Export the three zones that the cluster uses:

      $ export ZONE_0=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`)
      $ export ZONE_1=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`)
      $ export ZONE_2=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`)
  4. Create a 02_infra.yaml resource definition file:

    $ cat <<EOF >02_infra.yaml
    imports:
    - path: 02_lb_ext.py
    - path: 02_lb_int.py 1
    resources:
    - name: cluster-lb-ext 2
      type: 02_lb_ext.py
      properties:
        infra_id: '${INFRA_ID}' 3
        region: '${REGION}' 4
    - name: cluster-lb-int
      type: 02_lb_int.py
      properties:
        cluster_network: '${CLUSTER_NETWORK}'
        control_subnet: '${CONTROL_SUBNET}' 5
        infra_id: '${INFRA_ID}'
        region: '${REGION}'
        zones: 6
        - '${ZONE_0}'
        - '${ZONE_1}'
        - '${ZONE_2}'
    EOF
    1 2
    Required only when deploying an external cluster.
    3
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    4
    region is the region to deploy the cluster into, for example us-central1.
    5
    control_subnet is the URI to the control subnet.
    6
    zones are the zones to deploy the control plane instances into, like us-east1-b, us-east1-c, and us-east1-d.
  5. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml
  6. Export the cluster IP address:

    $ export CLUSTER_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address`)
  7. For an external cluster, also export the cluster public IP address:

    $ export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`)
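
Optionally, you can confirm that the deployment completed and that the addresses were reserved. This check is a sketch and is not part of the required procedure:

    $ gcloud deployment-manager deployments describe ${INFRA_ID}-infra
    $ gcloud compute addresses list --filter="name~^${INFRA_ID}" --regions=${REGION}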

1.9.7.1. Deployment Manager template for the external load balancer

You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster:

Example 1.20. 02_lb_ext.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-cluster-public-ip',
        'type': 'compute.v1.address',
        'properties': {
            'region': context.properties['region']
        }
    }, {
        # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver
        'name': context.properties['infra_id'] + '-api-http-health-check',
        'type': 'compute.v1.httpHealthCheck',
        'properties': {
            'port': 6080,
            'requestPath': '/readyz'
        }
    }, {
        'name': context.properties['infra_id'] + '-api-target-pool',
        'type': 'compute.v1.targetPool',
        'properties': {
            'region': context.properties['region'],
            'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'],
            'instances': []
        }
    }, {
        'name': context.properties['infra_id'] + '-api-forwarding-rule',
        'type': 'compute.v1.forwardingRule',
        'properties': {
            'region': context.properties['region'],
            'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)',
            'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)',
            'portRange': '6443'
        }
    }]

    return {'resources': resources}

1.9.7.2. Deployment Manager template for the internal load balancer

You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster:

Example 1.21. 02_lb_int.py Deployment Manager template

def GenerateConfig(context):

    backends = []
    for zone in context.properties['zones']:
        backends.append({
            'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)'
        })

    resources = [{
        'name': context.properties['infra_id'] + '-cluster-ip',
        'type': 'compute.v1.address',
        'properties': {
            'addressType': 'INTERNAL',
            'region': context.properties['region'],
            'subnetwork': context.properties['control_subnet']
        }
    }, {
        # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver
        'name': context.properties['infra_id'] + '-api-internal-health-check',
        'type': 'compute.v1.healthCheck',
        'properties': {
            'httpsHealthCheck': {
                'port': 6443,
                'requestPath': '/readyz'
            },
            'type': "HTTPS"
        }
    }, {
        'name': context.properties['infra_id'] + '-api-internal-backend-service',
        'type': 'compute.v1.regionBackendService',
        'properties': {
            'backends': backends,
            'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'],
            'loadBalancingScheme': 'INTERNAL',
            'region': context.properties['region'],
            'protocol': 'TCP',
            'timeoutSec': 120
        }
    }, {
        'name': context.properties['infra_id'] + '-api-internal-forwarding-rule',
        'type': 'compute.v1.forwardingRule',
        'properties': {
            'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)',
            'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)',
            'loadBalancingScheme': 'INTERNAL',
            'ports': ['6443','22623'],
            'region': context.properties['region'],
            'subnetwork': context.properties['control_subnet']
        }
    }]

    for zone in context.properties['zones']:
        resources.append({
            'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group',
            'type': 'compute.v1.instanceGroup',
            'properties': {
                'namedPorts': [
                    {
                        'name': 'ignition',
                        'port': 22623
                    }, {
                        'name': 'https',
                        'port': 6443
                    }
                ],
                'network': context.properties['cluster_network'],
                'zone': zone
            }
        })

    return {'resources': resources}

You will need this template in addition to the 02_lb_ext.py template when you create an external cluster.

1.9.8. Creating a private DNS zone in GCP

You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.

Procedure

  1. Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires.
  2. Create a 02_dns.yaml resource definition file:

    $ cat <<EOF >02_dns.yaml
    imports:
    - path: 02_dns.py
    
    resources:
    - name: cluster-dns
      type: 02_dns.py
      properties:
        infra_id: '${INFRA_ID}' 1
        cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' 2
        cluster_network: '${CLUSTER_NETWORK}' 3
    EOF
    1
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    2
    cluster_domain is the domain for the cluster, for example openshift.example.com.
    3
    cluster_network is the selfLink URL to the cluster network.
  3. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml
  4. The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually:

    1. Add the internal DNS entries:

      $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
      $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
      $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
      $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
      $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
    2. For an external cluster, also add the external DNS entries:

      $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
      $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
      $ gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
      $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
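
Optionally, you can verify the records that you added. This check is a sketch; the second command applies only to an external cluster:

    $ gcloud dns record-sets list --zone ${INFRA_ID}-private-zone
    $ gcloud dns record-sets list --zone ${BASE_DOMAIN_ZONE_NAME} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}.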

1.9.8.1. Deployment Manager template for the private DNS

You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster:

Example 1.22. 02_dns.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-private-zone',
        'type': 'dns.v1.managedZone',
        'properties': {
            'description': '',
            'dnsName': context.properties['cluster_domain'] + '.',
            'visibility': 'private',
            'privateVisibilityConfig': {
                'networks': [{
                    'networkUrl': context.properties['cluster_network']
                }]
            }
        }
    }]

    return {'resources': resources}

1.9.9. Creating firewall rules in GCP

You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.

Procedure

  1. Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires.
  2. Create a 03_firewall.yaml resource definition file:

    $ cat <<EOF >03_firewall.yaml
    imports:
    - path: 03_firewall.py
    
    resources:
    - name: cluster-firewall
      type: 03_firewall.py
      properties:
        allowed_external_cidr: '0.0.0.0/0' 1
        infra_id: '${INFRA_ID}' 2
        cluster_network: '${CLUSTER_NETWORK}' 3
        network_cidr: '${NETWORK_CIDR}' 4
    EOF
    1
    allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to ${NETWORK_CIDR}.
    2
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    3
    cluster_network is the selfLink URL to the cluster network.
    4
    network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16.
  3. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml
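
Optionally, you can list the firewall rules that the deployment created. This check is a sketch and is not part of the required procedure:

    $ gcloud compute firewall-rules list --filter="name~^${INFRA_ID}"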

1.9.9.1. Deployment Manager template for firewall rules

You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster:

Example 1.23. 03_firewall.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-bootstrap-in-ssh',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'tcp',
                'ports': ['22']
            }],
            'sourceRanges': [context.properties['allowed_external_cidr']],
            'targetTags': [context.properties['infra_id'] + '-bootstrap']
        }
    }, {
        'name': context.properties['infra_id'] + '-api',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'tcp',
                'ports': ['6443']
            }],
            'sourceRanges': [context.properties['allowed_external_cidr']],
            'targetTags': [context.properties['infra_id'] + '-master']
        }
    }, {
        'name': context.properties['infra_id'] + '-health-checks',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'tcp',
                'ports': ['6080', '6443', '22624']
            }],
            'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'],
            'targetTags': [context.properties['infra_id'] + '-master']
        }
    }, {
        'name': context.properties['infra_id'] + '-etcd',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'tcp',
                'ports': ['2379-2380']
            }],
            'sourceTags': [context.properties['infra_id'] + '-master'],
            'targetTags': [context.properties['infra_id'] + '-master']
        }
    }, {
        'name': context.properties['infra_id'] + '-control-plane',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'tcp',
                'ports': ['10257']
            },{
                'IPProtocol': 'tcp',
                'ports': ['10259']
            },{
                'IPProtocol': 'tcp',
                'ports': ['22623']
            }],
            'sourceTags': [
                context.properties['infra_id'] + '-master',
                context.properties['infra_id'] + '-worker'
            ],
            'targetTags': [context.properties['infra_id'] + '-master']
        }
    }, {
        'name': context.properties['infra_id'] + '-internal-network',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'icmp'
            },{
                'IPProtocol': 'tcp',
                'ports': ['22']
            }],
            'sourceRanges': [context.properties['network_cidr']],
            'targetTags': [
                context.properties['infra_id'] + '-master',
                context.properties['infra_id'] + '-worker'
            ]
        }
    }, {
        'name': context.properties['infra_id'] + '-internal-cluster',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [{
                'IPProtocol': 'udp',
                'ports': ['4789', '6081']
            },{
                'IPProtocol': 'tcp',
                'ports': ['9000-9999']
            },{
                'IPProtocol': 'udp',
                'ports': ['9000-9999']
            },{
                'IPProtocol': 'tcp',
                'ports': ['10250']
            },{
                'IPProtocol': 'tcp',
                'ports': ['30000-32767']
            },{
                'IPProtocol': 'udp',
                'ports': ['30000-32767']
            }],
            'sourceTags': [
                context.properties['infra_id'] + '-master',
                context.properties['infra_id'] + '-worker'
            ],
            'targetTags': [
                context.properties['infra_id'] + '-master',
                context.properties['infra_id'] + '-worker'
            ]
        }
    }]

    return {'resources': resources}

1.9.10. Creating IAM roles in GCP

You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.

Procedure

  1. Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires.
  2. Create a 03_iam.yaml resource definition file:

    $ cat <<EOF >03_iam.yaml
    imports:
    - path: 03_iam.py
    resources:
    - name: cluster-iam
      type: 03_iam.py
      properties:
        infra_id: '${INFRA_ID}' 1
    EOF
    1
    infra_id is the INFRA_ID infrastructure name from the extraction step.
  3. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml
  4. Export the variable for the master service account:

    $ export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-m@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
  5. Export the variable for the worker service account:

    $ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
  6. Export the variable for the subnet that hosts the compute machines:

    $ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)
  7. The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually:

    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin"
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin"
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin"
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser"
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
    
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer"
    $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
  8. Create a service account key and store it locally for later use:

    $ gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}
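
Optionally, you can confirm that the service accounts exist and that the policy bindings were applied. This check is a sketch and is not part of the required procedure:

    $ gcloud iam service-accounts list --filter="email~^${INFRA_ID}"
    $ gcloud projects get-iam-policy ${PROJECT_NAME} \
        --flatten="bindings[].members" \
        --filter="bindings.members~${INFRA_ID}" \
        --format="table(bindings.role)"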

1.9.10.1. Deployment Manager template for IAM roles

You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster:

Example 1.24. 03_iam.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-master-node-sa',
        'type': 'iam.v1.serviceAccount',
        'properties': {
            'accountId': context.properties['infra_id'] + '-m',
            'displayName': context.properties['infra_id'] + '-master-node'
        }
    }, {
        'name': context.properties['infra_id'] + '-worker-node-sa',
        'type': 'iam.v1.serviceAccount',
        'properties': {
            'accountId': context.properties['infra_id'] + '-w',
            'displayName': context.properties['infra_id'] + '-worker-node'
        }
    }]

    return {'resources': resources}

1.9.11. Creating the RHCOS cluster image for the GCP infrastructure

You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes.

Procedure

  1. Obtain the RHCOS image from the RHCOS image mirror page.

    Important

    The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

    The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz.

  2. Create the Google storage bucket:

    $ gsutil mb gs://<bucket_name>
  3. Upload the RHCOS image to the Google storage bucket:

    $ gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz  gs://<bucket_name>
  4. Export the uploaded RHCOS image location as a variable:

    $ export IMAGE_SOURCE="gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz"
  5. Create the cluster image:

    $ gcloud compute images create "${INFRA_ID}-rhcos-image" \
        --source-uri="${IMAGE_SOURCE}"
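
Image creation can take several minutes. As a sketch, you can confirm that the image is ready before you continue:

    $ gcloud compute images describe ${INFRA_ID}-rhcos-image --format="value(status)"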

1.9.12. Creating the bootstrap machine in GCP

You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template.

Note

If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.
  • Create and configure networking and load balancers in GCP.
  • Create control plane and compute roles.
  • Ensure pyOpenSSL is installed.

Procedure

  1. Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires.
  2. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires:

    $ export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)
  3. Create a bucket and upload the bootstrap.ign file:

    $ gsutil mb gs://${INFRA_ID}-bootstrap-ignition
    $ gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
  4. Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable:

    $ export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'`
  5. Create a 04_bootstrap.yaml resource definition file:

    $ cat <<EOF >04_bootstrap.yaml
    imports:
    - path: 04_bootstrap.py
    
    resources:
    - name: cluster-bootstrap
      type: 04_bootstrap.py
      properties:
        infra_id: '${INFRA_ID}' 1
        region: '${REGION}' 2
        zone: '${ZONE_0}' 3
    
        cluster_network: '${CLUSTER_NETWORK}' 4
        control_subnet: '${CONTROL_SUBNET}' 5
        image: '${CLUSTER_IMAGE}' 6
        machine_type: 'n1-standard-4' 7
        root_volume_size: '128' 8
    
        bootstrap_ign: '${BOOTSTRAP_IGN}' 9
    EOF
    1
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    2
    region is the region to deploy the cluster into, for example us-central1.
    3
    zone is the zone to deploy the bootstrap instance into, for example us-central1-b.
    4
    cluster_network is the selfLink URL to the cluster network.
    5
    control_subnet is the selfLink URL to the control subnet.
    6
    image is the selfLink URL to the RHCOS image.
    7
    machine_type is the machine type of the instance, for example n1-standard-4.
    8
    root_volume_size is the boot disk size for the bootstrap machine.
    9
    bootstrap_ign is the URL output when creating a signed URL.
  6. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
  7. The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually.

    1. Add the bootstrap instance to the internal load balancer instance group:

      $ gcloud compute instance-groups unmanaged add-instances \
          ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
    2. Add the bootstrap instance group to the internal load balancer backend service:

      $ gcloud compute backend-services add-backend \
          ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
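Before you proceed to the control plane machines, you can optionally confirm that the bootstrap instance is running and that its instance group is registered with the internal backend service. These checks are not part of the documented procedure:

    $ gcloud compute instances describe ${INFRA_ID}-bootstrap --zone=${ZONE_0} --format="value(status)"
    $ gcloud compute backend-services describe ${INFRA_ID}-api-internal-backend-service --region=${REGION}

The first command prints RUNNING when the instance is up; in the second command's output, the backends section should include the ${INFRA_ID}-bootstrap-instance-group.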

1.9.12.1. Deployment Manager template for the bootstrap machine

You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster:

Example 1.25. 04_bootstrap.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-bootstrap-public-ip',
        'type': 'compute.v1.address',
        'properties': {
            'region': context.properties['region']
        }
    }, {
        'name': context.properties['infra_id'] + '-bootstrap',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '","verification":{}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}',
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet'],
                'accessConfigs': [{
                    'natIP': '$(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)'
                }]
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                    context.properties['infra_id'] + '-bootstrap'
                ]
            },
            'zone': context.properties['zone']
        }
    }, {
        'name': context.properties['infra_id'] + '-bootstrap-instance-group',
        'type': 'compute.v1.instanceGroup',
        'properties': {
            'namedPorts': [
                {
                    'name': 'ignition',
                    'port': 22623
                }, {
                    'name': 'https',
                    'port': 6443
                }
            ],
            'network': context.properties['cluster_network'],
            'zone': context.properties['zone']
        }
    }]

    return {'resources': resources}

1.9.13. Creating the control plane machines in GCP

You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template.

NOTE

If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.
  • Create and configure networking and load balancers in GCP.
  • Create control plane and compute roles.
  • Create the bootstrap machine.

Procedure

  1. Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires.
  2. Export the following variable required by the resource definition:

    $ export MASTER_IGNITION=`cat <installation_directory>/master.ign`
  3. Create a 05_control_plane.yaml resource definition file:

    $ cat <<EOF >05_control_plane.yaml
    imports:
    - path: 05_control_plane.py
    
    resources:
    - name: cluster-control-plane
      type: 05_control_plane.py
      properties:
        infra_id: '${INFRA_ID}' 1
        zones: 2
        - '${ZONE_0}'
        - '${ZONE_1}'
        - '${ZONE_2}'
    
        control_subnet: '${CONTROL_SUBNET}' 3
        image: '${CLUSTER_IMAGE}' 4
        machine_type: 'n1-standard-4' 5
        root_volume_size: '128'
        service_account_email: '${MASTER_SERVICE_ACCOUNT}' 6
    
        ignition: '${MASTER_IGNITION}' 7
    EOF
    1
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    2
    zones are the zones to deploy the control plane instances into, for example us-central1-a, us-central1-b, and us-central1-c.
    3
    control_subnet is the selfLink URL to the control subnet.
    4
    image is the selfLink URL to the RHCOS image.
    5
    machine_type is the machine type of the instance, for example n1-standard-4.
    6
    service_account_email is the email address for the master service account that you created.
    7
    ignition is the contents of the master.ign file.
  4. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml
  5. The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually.

    • Run the following commands to add the control plane machines to the appropriate instance groups:

      $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
      $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
      $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
    • For an external cluster, you must also run the following commands to add the control plane machines to the target pools:

      $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
      $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
      $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
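You can optionally confirm that each control plane machine was added to its instance group. These checks are not part of the documented procedure:

    $ gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0}
    $ gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1}
    $ gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2}

Each command should list the corresponding ${INFRA_ID}-m-0, ${INFRA_ID}-m-1, or ${INFRA_ID}-m-2 instance.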

1.9.13.1. Deployment Manager template for control plane machines

You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster:

Example 1.26. 05_control_plane.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-m-0',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][0]
        }
    }, {
        'name': context.properties['infra_id'] + '-m-1',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][1]
        }
    }, {
        'name': context.properties['infra_id'] + '-m-2',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][2]
        }
    }]

    return {'resources': resources}

1.9.14. Wait for bootstrap completion and remove bootstrap resources in GCP

After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.
  • Create and configure networking and load balancers in GCP.
  • Create control plane and compute roles.
  • Create the bootstrap machine.
  • Create the control plane machines.

Procedure

  1. Change to the directory that contains the installation program and run the following command:

    $ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
        --log-level info 2
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2
    To view different installation details, specify warn, debug, or error instead of info.

    If the command exits without a FATAL warning, your production control plane has initialized.

  2. Delete the bootstrap resources:

    $ gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
    $ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
    $ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
    $ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
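After you remove the bootstrap resources, you can optionally confirm that only the control plane instance groups remain behind the internal API backend service. This check is not part of the documented procedure:

    $ gcloud compute backend-services describe ${INFRA_ID}-api-internal-backend-service --region=${REGION}

The backends section of the output should no longer list ${INFRA_ID}-bootstrap-instance-group.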

1.9.15. Creating additional worker machines in GCP

You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform.

In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file.

NOTE

If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

  • Configure a GCP account.
  • Generate the Ignition config files for your cluster.
  • Create and configure a VPC and associated subnets in GCP.
  • Create and configure networking and load balancers in GCP.
  • Create control plane and compute roles.
  • Create the bootstrap machine.
  • Create the control plane machines.

Procedure

  1. Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires.
  2. Export the variables that the resource definition uses.

    1. Export the subnet that hosts the compute machines:

      $ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)
    2. Export the email address for your service account:

      $ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
    3. Export the location of the compute machine Ignition config file:

      $ export WORKER_IGNITION=`cat <installation_directory>/worker.ign`
  3. Create a 06_worker.yaml resource definition file:

    $ cat <<EOF >06_worker.yaml
    imports:
    - path: 06_worker.py
    
    resources:
    - name: 'worker-0' 1
      type: 06_worker.py
      properties:
        infra_id: '${INFRA_ID}' 2
        zone: '${ZONE_0}' 3
        compute_subnet: '${COMPUTE_SUBNET}' 4
        image: '${CLUSTER_IMAGE}' 5
        machine_type: 'n1-standard-4' 6
        root_volume_size: '128'
        service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7
        ignition: '${WORKER_IGNITION}' 8
    - name: 'worker-1'
      type: 06_worker.py
      properties:
        infra_id: '${INFRA_ID}' 9
        zone: '${ZONE_1}' 10
        compute_subnet: '${COMPUTE_SUBNET}' 11
        image: '${CLUSTER_IMAGE}' 12
        machine_type: 'n1-standard-4' 13
        root_volume_size: '128'
        service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14
        ignition: '${WORKER_IGNITION}' 15
    EOF
    1
    name is the name of the worker machine, for example worker-0.
    2 9
    infra_id is the INFRA_ID infrastructure name from the extraction step.
    3 10
    zone is the zone to deploy the worker machine into, for example us-central1-a.
    4 11
    compute_subnet is the selfLink URL to the compute subnet.
    5 12
    image is the selfLink URL to the RHCOS image.
    6 13
    machine_type is the machine type of the instance, for example n1-standard-4.
    7 14
    service_account_email is the email address for the worker service account that you created.
    8 15
    ignition is the contents of the worker.ign file.
  4. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file, for example by using the shell loop sketched after this procedure.
  5. Create the deployment by using the gcloud CLI:

    $ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
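If you need many compute machines, you can generate the resource entries with a shell loop instead of writing each one by hand. The following is a minimal sketch, not part of the documented procedure; it assumes a bash shell, reuses the variables that you exported in the previous steps, and introduces a hypothetical WORKER_COUNT variable. Instances are spread across the three exported zones in round-robin order:

    $ export WORKER_COUNT=3
    $ ZONES=("${ZONE_0}" "${ZONE_1}" "${ZONE_2}")
    $ cat <<EOF >06_worker.yaml
    imports:
    - path: 06_worker.py

    resources:
    EOF
    $ for i in $(seq 0 $((WORKER_COUNT - 1))); do
    cat <<EOF >>06_worker.yaml
    - name: 'worker-${i}'
      type: 06_worker.py
      properties:
        infra_id: '${INFRA_ID}'
        zone: '${ZONES[$((i % ${#ZONES[@]}))]}'
        compute_subnet: '${COMPUTE_SUBNET}'
        image: '${CLUSTER_IMAGE}'
        machine_type: 'n1-standard-4'
        root_volume_size: '128'
        service_account_email: '${WORKER_SERVICE_ACCOUNT}'
        ignition: '${WORKER_IGNITION}'
    EOF
    done

After you generate the file, create the deployment with the same gcloud deployment-manager command shown in the last step of the procedure.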

1.9.15.1. Deployment Manager template for worker machines

You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster:

Example 1.27. 06_worker.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-' + context.env['name'],
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['compute_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-worker',
                ]
            },
            'zone': context.properties['zone']
        }
    }]

    return {'resources': resources}

1.9.16. Logging in to the cluster

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

  • Deploy an OpenShift Container Platform cluster.
  • Install the oc CLI.

Procedure

  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami

    Example output

    system:admin

1.9.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.18.3
    master-1  Ready     master  63m  v1.18.3
    master-2  Ready     master  64m  v1.18.3
    worker-0  NotReady  worker  76s  v1.18.3
    worker-1  NotReady  worker  70s  v1.18.3

    The output lists all of the machines that you created.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, and all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:

    NOTE

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Subsequent serving certificate renewal requests are then approved automatically by the machine-approver if the kubelet requests a new certificate with identical parameters.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.18.3
    master-1  Ready     master  73m  v1.18.3
    master-2  Ready     master  74m  v1.18.3
    worker-0  Ready     worker  11m  v1.18.3
    worker-1  Ready     worker  11m  v1.18.3

    NOTE

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
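When you add several machines at once, repeating the approve-and-check cycle by hand can be tedious. The following is a minimal sketch, not part of the documented procedure, that re-runs the approval command from the previous steps every 30 seconds for a hypothetical 20 iterations; verify the result with oc get nodes afterward:

    $ for i in $(seq 1 20); do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
    sleep 30
    done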


1.9.18. Optional: Adding the ingress DNS records

If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements.

Prerequisites

  • Configure a GCP account.
  • Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs.
  • Create and configure a VPC and associated subnets in GCP.
  • Create and configure networking and load balancers in GCP.
  • Create control plane and compute roles.
  • Create the bootstrap machine.
  • Create the control plane machines.
  • Create the worker machines.

Procedure

  1. Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field:

    $ oc -n openshift-ingress get service router-default

    Example output

    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
    router-default   LoadBalancer   172.30.18.154   35.233.157.184   80:32288/TCP,443:31215/TCP   98

  2. Add the A record to your zones:

    • To use A records:

      1. Export the variable for the router IP address:

        $ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
      2. Add the A record to the private zones:

        $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
        $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
        $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
        $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
      3. For an external cluster, also add the A record to the public zones:

        $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
        $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
        $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
        $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
    • To add explicit domains instead of using a wildcard, create entries for each of the cluster’s current routes:

      $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

      Example output

      oauth-openshift.apps.your.cluster.domain.example.com
      console-openshift-console.apps.your.cluster.domain.example.com
      downloads-openshift-console.apps.your.cluster.domain.example.com
      alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
      grafana-openshift-monitoring.apps.your.cluster.domain.example.com
      prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
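      If you choose explicit records instead of the wildcard, you can apply the same transaction workflow to each route host. The following is a minimal sketch, not part of the documented procedure; it assumes the ROUTER_IP variable from the wildcard steps and adds one A record per route to the private zone. For an external cluster, repeat the loop against the ${BASE_DOMAIN_ZONE_NAME} zone:

      $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
      $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
      $ for host in $(oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes); do
      gcloud dns record-sets transaction add ${ROUTER_IP} --name ${host}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
      done
      $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone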

1.9.19. Completing a GCP installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.

Prerequisites

  • Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure.
  • Install the oc CLI and log in.

Procedure

  1. Complete the cluster installation:

    $ ./openshift-install --dir=<installation_directory> wait-for install-complete 1

    Example output

    INFO Waiting up to 30m0s for the cluster to initialize...

    1
    For <installation_directory>, specify the path to the directory that you stored the installation files in.
    IMPORTANT

    The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  2. Observe the running state of your cluster.

    1. Run the following command to view the current cluster version and status:

      $ oc get clusterversion

      Example output

      NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      version             False       True          24m     Working towards 4.5.4: 99% complete

    2. Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO):

      $ oc get clusteroperators

      Example output

      NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
      authentication                             4.5.4     True        False         False      7m56s
      cloud-credential                           4.5.4     True        False         False      31m
      cluster-autoscaler                         4.5.4     True        False         False      16m
      console                                    4.5.4     True        False         False      10m
      csi-snapshot-controller                    4.5.4     True        False         False      16m
      dns                                        4.5.4     True        False         False      22m
      etcd                                       4.5.4     False       False         False      25s
      image-registry                             4.5.4     True        False         False      16m
      ingress                                    4.5.4     True        False         False      16m
      insights                                   4.5.4     True        False         False      17m
      kube-apiserver                             4.5.4     True        False         False      19m
      kube-controller-manager                    4.5.4     True        False         False      20m
      kube-scheduler                             4.5.4     True        False         False      20m
      kube-storage-version-migrator              4.5.4     True        False         False      16m
      machine-api                                4.5.4     True        False         False      22m
      machine-config                             4.5.4     True        False         False      22m
      marketplace                                4.5.4     True        False         False      16m
      monitoring                                 4.5.4     True        False         False      10m
      network                                    4.5.4     True        False         False      23m
      node-tuning                                4.5.4     True        False         False      23m
      openshift-apiserver                        4.5.4     True        False         False      17m
      openshift-controller-manager               4.5.4     True        False         False      15m
      openshift-samples                          4.5.4     True        False         False      16m
      operator-lifecycle-manager                 4.5.4     True        False         False      22m
      operator-lifecycle-manager-catalog         4.5.4     True        False         False      22m
      operator-lifecycle-manager-packageserver   4.5.4     True        False         False      18m
      service-ca                                 4.5.4     True        False         False      23m
      service-catalog-apiserver                  4.5.4     True        False         False      23m
      service-catalog-controller-manager         4.5.4     True        False         False      23m
      storage                                    4.5.4     True        False         False      17m

    3. Run the following command to view your cluster pods:

      $ oc get pods --all-namespaces

      Example output

      NAMESPACE                                               NAME                                                                READY     STATUS      RESTARTS   AGE
      kube-system                                             etcd-member-ip-10-0-3-111.us-east-2.compute.internal                1/1       Running     0          35m
      kube-system                                             etcd-member-ip-10-0-3-239.us-east-2.compute.internal                1/1       Running     0          37m
      kube-system                                             etcd-member-ip-10-0-3-24.us-east-2.compute.internal                 1/1       Running     0          35m
      openshift-apiserver-operator                            openshift-apiserver-operator-6d6674f4f4-h7t2t                       1/1       Running     1          37m
      openshift-apiserver                                     apiserver-fm48r                                                     1/1       Running     0          30m
      openshift-apiserver                                     apiserver-fxkvv                                                     1/1       Running     0          29m
      openshift-apiserver                                     apiserver-q85nm                                                     1/1       Running     0          29m
      ...
      openshift-service-ca-operator                           openshift-service-ca-operator-66ff6dc6cd-9r257                      1/1       Running     0          37m
      openshift-service-ca                                    apiservice-cabundle-injector-695b6bcbc-cl5hm                        1/1       Running     0          35m
      openshift-service-ca                                    configmap-cabundle-injector-8498544d7-25qn6                         1/1       Running     0          35m
      openshift-service-ca                                    service-serving-cert-signer-6445fc9c6-wqdqn                         1/1       Running     0          35m
      openshift-service-catalog-apiserver-operator            openshift-service-catalog-apiserver-operator-549f44668b-b5q2w       1/1       Running     0          32m
      openshift-service-catalog-controller-manager-operator   openshift-service-catalog-controller-manager-operator-b78cr2lnm     1/1       Running     0          31m

    When the current cluster version is AVAILABLE, the installation is complete.
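    Rather than re-running these commands by hand while the installation completes, you can poll them. This is a minimal sketch, not part of the documented procedure, and it assumes that the watch utility is available on your workstation:

      $ watch -n 30 "oc get clusterversion; oc get clusteroperators"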

1.9.20. Next steps
