Chapter 5. Installing a cluster on RHV in a restricted network
In OpenShift Container Platform version 4.12, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) in a restricted network by creating an internal mirror of the installation release content.
5.1. Prerequisites
The following items are required to install an OpenShift Container Platform cluster in an RHV environment.
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on RHV.
- You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform (an example imageContentSources stanza appears after this list).

  Important: Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
- You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
- If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
  Note: Be sure to also review this site list if you are configuring a proxy.
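For reference, the imageContentSources data that the mirroring procedure produces, and that you later add to install-config.yaml, looks roughly like the following stanza. This is a sketch; mirror.example.com:5000/ocp4/openshift4 is a placeholder for your own mirror repository:

imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev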
5.2. About installations in restricted networks
In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
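For example, a common way to populate such a registry is the oc adm release mirror command. The following is only a minimal sketch, not the full mirroring procedure; the registry host, repository, release tag, and pull secret path are placeholders:

$ oc adm release mirror \
  -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64 \
  --to=mirror.example.com:5000/ocp4/openshift4 \
  --to-release-image=mirror.example.com:5000/ocp4/openshift4:4.12.0-x86_64

When it completes, the command prints the imageContentSources entries to add to install-config.yaml.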
5.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
- The ClusterVersion status includes an Unable to retrieve available updates error.
- By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
5.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.4. Requirements for the RHV environment
To install and run an OpenShift Container Platform version 4.12 cluster, the RHV environment must meet the following requirements.
Not meeting these requirements can cause the installation process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation.
The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations.
By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources.
If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly.
Requirements
- The RHV version is 4.4.
- The RHV environment has one data center whose state is Up.
- The RHV data center contains an RHV cluster.
- The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster:
- Minimum 28 vCPUs: four for each of the seven virtual machines created during installation.
- 112 GiB RAM or more, including:
- 16 GiB or more for the bootstrap machine, which provides the temporary control plane.
- 16 GiB or more for each of the three control plane machines which provide the control plane.
- 16 GiB or more for each of the three compute machines, which run the application workloads.
- The RHV storage domain must meet these etcd backend performance requirements.
- In production environments, each virtual machine must have 120 GiB or more of storage. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster.
- To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process.
- The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP.
- A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster:
  - DiskOperator
  - DiskCreator
  - UserTemplateBasedVm
  - TemplateOwner
  - TemplateCreator
  - ClusterAdmin on the target cluster

Warning: Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised.
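For example, you can create such a dedicated user in the internal domain on the Manager machine with the ovirt-aaa-jdbc-tool utility, and then assign the privileges listed above through the Administration Portal. This is a sketch under those assumptions; ocpadmin and the expiry date are placeholders:

# ovirt-aaa-jdbc-tool user add ocpadmin
# ovirt-aaa-jdbc-tool user password-reset ocpadmin --password-valid-to="2030-01-01 00:00:00Z"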
5.5. Verifying the requirements for the RHV environment
Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures.
These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly.
Procedure
Check that the RHV version supports installation of OpenShift Container Platform version 4.12.
- In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About.
- In the window that opens, make a note of the RHV Software Version.
- Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV.
Inspect the data center, cluster, and storage.
- In the RHV Administration Portal, click Compute > Data Centers.
- Confirm that the data center where you plan to install OpenShift Container Platform is accessible.
- Click the name of that data center.
- In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active.
- Record the Domain Name for use later on.
- Confirm Free Space has at least 230 GiB.
- Confirm that the storage domain meets these etcd backend performance requirements, which you can measure by using the fio performance benchmarking tool.
- In the data center details, click the Clusters tab.
- Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on.
Inspect the RHV host resources.
- In the RHV Administration Portal, click Compute > Clusters.
- Click the cluster where you plan to install OpenShift Container Platform.
- In the cluster details, click the Hosts tab.
- Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster.
- Record the number of available Logical CPU Cores for use later on.
- Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores.
- Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines:
- 16 GiB required for the bootstrap machine
- 16 GiB required for each of the three control plane machines
- 16 GiB for each of the three compute machines
- Record the amount of Max free Memory for scheduling new virtual machines for use later on.
Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager’s REST API. From a virtual machine on this network, use curl to reach the RHV Manager’s REST API:
$ curl -k -u <username>@<profile>:<password> \
  https://<engine-fqdn>/ovirt-engine/api

- For <username>, specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile>, specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password>, specify the password for that user name.
- For <engine-fqdn>, specify the fully qualified domain name of the RHV environment.

For example:

$ curl -k -u ocpadmin@internal:pw123 \
  https://rhv-env.virtlab.example.com/ovirt-engine/api
5.6. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options.
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
Firewall
Configure your firewall so your cluster has access to required sites.
See also: Red Hat Virtualization Manager firewall requirements and Host firewall requirements in the RHV documentation.
DNS
Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address.
- Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines.
- Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines.
5.6.1. Setting the cluster node hostnames through DHCP
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
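For example, with the ISC DHCP server, a per-host declaration can pin both the IP address and the hostname for a node. This is a sketch; the MAC address is a placeholder and the names match the sample records used later in this chapter:

host control-plane0 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.1.97;
    option host-name "control-plane0.ocp4.example.com";
}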
5.6.2. Network connectivity requirements
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
This section provides details about the ports that are required.
In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 5.1. Ports used for all-machine to all-machine communications

| Protocol | Port | Description |
|---|---|---|
| ICMP | N/A | Network reachability tests |
| TCP | 1936 | Metrics |
| | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099. |
| | 10250-10259 | The default ports that Kubernetes reserves |
| | 10256 | openshift-sdn |
| UDP | 4789 | VXLAN |
| | 6081 | Geneve |
| | 9000-9999 | Host level services, including the node exporter on ports 9100-9101. |
| | 500 | IPsec IKE packets |
| | 4500 | IPsec NAT-T packets |
| | 123 | Network Time Protocol (NTP) on UDP port 123. If an external NTP time server is configured, you must open UDP port 123. |
| TCP/UDP | 30000-32767 | Kubernetes node port |
| ESP | N/A | IPsec Encapsulating Security Payload (ESP) |

Table 5.2. Ports used for all-machine to control plane communications

| Protocol | Port | Description |
|---|---|---|
| TCP | 6443 | Kubernetes API |

Table 5.3. Ports used for control plane machine to control plane machine communications

| Protocol | Port | Description |
|---|---|---|
| TCP | 2379-2380 | etcd server and peer ports |
NTP configuration for user-provisioned infrastructure
OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.
If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.
5.7. User-provisioned DNS requirements
In OpenShift Container Platform deployments, DNS name resolution is required for the following components:
- The Kubernetes API
- The OpenShift Container Platform application wildcard
- The bootstrap, control plane, and compute machines
Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate.
It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information.
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Table 5.4. Required DNS records

| Component | Record | Description |
|---|---|---|
| Kubernetes API | api.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
| | api-int.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important: The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. |
| Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. |
| Bootstrap machine | bootstrap.<cluster_name>.<base_domain>. | A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. |
| Control plane machines | <control_plane><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. |
| Compute machines | <compute><n>.<cluster_name>.<base_domain>. | DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. |
In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.
Tip: You can use the dig command to verify name and reverse name resolution.
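For example, assuming the sample values used in the next section, you can check forward and reverse resolution as follows (<nameserver_ip> is a placeholder for your DNS server):

$ dig +noall +answer @<nameserver_ip> api.ocp4.example.com
$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5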
5.7.1. Example DNS configuration for user-provisioned clusters
This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com.
Example DNS A record configuration for a user-provisioned cluster
The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster.
Example 5.1. Sample DNS zone database
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5
api-int.ocp4.example.com. IN A 192.168.1.5
;
*.apps.ocp4.example.com. IN A 192.168.1.5
;
bootstrap.ocp4.example.com. IN A 192.168.1.96
;
control-plane0.ocp4.example.com. IN A 192.168.1.97
control-plane1.ocp4.example.com. IN A 192.168.1.98
control-plane2.ocp4.example.com. IN A 192.168.1.99
;
compute0.ocp4.example.com. IN A 192.168.1.11
compute1.ocp4.example.com. IN A 192.168.1.7
;
;EOF
- 1 (api.ocp4.example.com): Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
- 2 (api-int.ocp4.example.com): Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
- 3 (*.apps.ocp4.example.com): Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

  Note: In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

- 4 (bootstrap.ocp4.example.com): Provides name resolution for the bootstrap machine.
- 5, 6, 7 (control-plane0/1/2.ocp4.example.com): Provides name resolution for the control plane machines.
- 8, 9 (compute0/1.ocp4.example.com): Provides name resolution for the compute machines.
Example DNS PTR record configuration for a user-provisioned cluster
The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster.
Example 5.2. Sample DNS zone database for reverse records
$TTL 1W
@ IN SOA ns1.example.com. root (
2019070700 ; serial
3H ; refresh (3 hours)
30M ; retry (30 minutes)
2W ; expiry (2 weeks)
1W ) ; minimum (1 week)
IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com.
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com.
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com.
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com.
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com.
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com.
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com.
7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com.
;
;EOF
- 1 (api.ocp4.example.com): Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
- 2 (api-int.ocp4.example.com): Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
- 3 (bootstrap.ocp4.example.com): Provides reverse DNS resolution for the bootstrap machine.
- 4, 5, 6 (control-plane0/1/2.ocp4.example.com): Provides reverse DNS resolution for the control plane machines.
- 7, 8 (compute0/1.ocp4.example.com): Provides reverse DNS resolution for the compute machines.
A PTR record is not required for the OpenShift Container Platform application wildcard.
5.7.2. Load balancing requirements for user-provisioned infrastructure
Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
The load balancing infrastructure must meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
- A stateless load balancing algorithm. The options vary based on the load balancer implementation.
ImportantDo not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster.
Configure the following ports on both the front and back of the load balancers:
Table 5.5. API load balancer

| Port | Back-end machines (pool members) | Internal | External | Description |
|---|---|---|---|---|
| 6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
| 22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine config server |

Note: The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster.
Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
- A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
TipIf the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
Table 5.6. Application Ingress load balancer

| Port | Back-end machines (pool members) | Internal | External | Description |
|---|---|---|---|---|
| 443 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTPS traffic |
| 80 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTP traffic |

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
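Once an API server instance is up, you can spot-check the health endpoint that the load balancer probe should target. A quick sketch, using a control plane host name from the sample DNS records:

$ curl -k https://control-plane0.ocp4.example.com:6443/readyz

The endpoint returns HTTP 200 when that API server instance is ready to serve traffic.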
5.7.2.1. Example load balancer configuration for user-provisioned clusters
This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.
Example 5.3. Sample API and application Ingress load balancer configuration
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443
bind *:6443
mode tcp
option httpchk GET /readyz HTTP/1.0
option log-health-checks
balance roundrobin
server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup
server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623
bind *:22623
mode tcp
server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443
bind *:443
mode tcp
balance source
server worker0 worker0.ocp4.example.com:443 check inter 1s
server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80
bind *:80
mode tcp
balance source
server worker0 worker0.ocp4.example.com:80 check inter 1s
server worker1 worker1.ocp4.example.com:80 check inter 1s
- 1: Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
- 2, 4: The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete.
- 3: Port 22623 handles the machine config server traffic and points to the control plane machines.
- 5: Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
- 6: Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
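For example, run the following on the HAProxy node (a quick check; the grep pattern just filters the four ports):

$ sudo netstat -nltupe | grep -E ':(6443|22623|443|80) '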
5.8. Setting up the installation machine
To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or a Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager.
Procedure
- Update or install Python3 and Ansible. For example:

  # dnf update python3 ansible

- Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit.
- Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter:

  # dnf install ovirt-ansible-image-template

- Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package.

  # dnf install ovirt-ansible-vm-infra

- Create an environment variable and assign an absolute or relative path to it. For example, enter:

  $ export ASSETS_DIR=./wrk

  Note: The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster.
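The RPM-packaged roles are typically installed under /usr/share/ansible/roles. Assuming that default path, you can confirm that both roles are present:

$ ls /usr/share/ansible/roles | grep '^ovirt\.'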
5.9. Setting up the CA certificate for RHV
Download the CA certificate from the Red Hat Virtualization (RHV) Manager and set it up on the installation machine.
You can download the certificate from a webpage on the RHV Manager or by using a curl command. Later, you provide the certificate to the installation program.
Procedure
Use either of these two methods to download the CA certificate:

- Go to the Manager's webpage, https://<engine-fqdn>/ovirt-engine/. Then, under Downloads, click the CA Certificate link.
- Run the following command:

  $ curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem

  For <engine-fqdn>, specify the fully qualified domain name of the RHV Manager, such as rhv-env.virtlab.example.com.

Configure the CA file to grant rootless user access to the Manager. Set the CA file permissions to have an octal value of 0644 (symbolic value: -rw-r--r--):

$ sudo chmod 0644 /tmp/ca.pem

For Linux, copy the CA certificate to the directory for server certificates. Use -p to preserve the permissions:

$ sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem

Add the certificate to the certificate manager for your operating system:

- For macOS, double-click the certificate file and use the Keychain Access utility to add the file to the System keychain.
- For Linux, update the CA trust:

  $ sudo update-ca-trust

  Note: If you use your own certificate authority, make sure the system trusts it.
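On Linux, you can confirm that the system now trusts the Manager certificate by making a request without the -k flag, using the example FQDN from earlier:

$ curl -sS -o /dev/null https://rhv-env.virtlab.example.com/ovirt-engine/ && echo "CA trusted"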
5.10. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
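Later, after the cluster nodes are running, you can use this key pair to log in to a node as the core user. A usage sketch; <node_address> is a placeholder for one of your node addresses:

$ ssh -i <path>/<file_name> core@<node_address>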
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.11. Downloading the Ansible playbooks
Download the Ansible playbooks for installing OpenShift Container Platform version 4.12 on RHV.
Procedure
On your installation machine, run the following commands:
$ mkdir playbooks
$ cd playbooks
$ xargs -n 1 curl -O <<< '
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/bootstrap.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/common-auth.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/create-templates-and-vms.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/inventory.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/masters.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-bootstrap.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-masters.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-workers.yml
https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/workers.yml'
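You can confirm that all nine playbooks downloaded correctly:

$ ls -1 *.yml

Example output

bootstrap.yml
common-auth.yml
create-templates-and-vms.yml
inventory.yml
masters.yml
retire-bootstrap.yml
retire-masters.yml
retire-workers.yml
workers.yml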
Next steps
- After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program.
5.12. The inventory.yml file
You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing, such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster.

The following inventory.yml example shows the parameters and their default values.
Example inventory.yml file
---
all:
vars:
ovirt_cluster: "Default"
ocp:
assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}"
ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml"
# ---
# RHCOS section
# ---
rhcos:
image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz"
local_cmp_image_path: "/tmp/rhcos.qcow2.gz"
local_image_path: "/tmp/rhcos.qcow2"
# ---
# Profiles section
# ---
control_plane:
cluster: "{{ ovirt_cluster }}"
memory: 16GiB
sockets: 4
cores: 1
template: rhcos_tpl
operating_system: "rhcos_x64"
type: high_performance
graphical_console:
headless_mode: false
protocol:
- spice
- vnc
disks:
- size: 120GiB
name: os
interface: virtio_scsi
storage_domain: depot_nvme
nics:
- name: nic1
network: lab
profile: lab
compute:
cluster: "{{ ovirt_cluster }}"
memory: 16GiB
sockets: 4
cores: 1
template: worker_rhcos_tpl
operating_system: "rhcos_x64"
type: high_performance
graphical_console:
headless_mode: false
protocol:
- spice
- vnc
disks:
- size: 120GiB
name: os
interface: virtio_scsi
storage_domain: depot_nvme
nics:
- name: nic1
network: lab
profile: lab
# ---
# Virtual machines section
# ---
vms:
- name: "{{ metadata.infraID }}-bootstrap"
ocp_type: bootstrap
profile: "{{ control_plane }}"
type: server
- name: "{{ metadata.infraID }}-master0"
ocp_type: master
profile: "{{ control_plane }}"
- name: "{{ metadata.infraID }}-master1"
ocp_type: master
profile: "{{ control_plane }}"
- name: "{{ metadata.infraID }}-master2"
ocp_type: master
profile: "{{ control_plane }}"
- name: "{{ metadata.infraID }}-worker0"
ocp_type: worker
profile: "{{ compute }}"
- name: "{{ metadata.infraID }}-worker1"
ocp_type: worker
profile: "{{ compute }}"
- name: "{{ metadata.infraID }}-worker2"
ocp_type: worker
profile: "{{ compute }}"
Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value.
General section
- ovirt_cluster: Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster.
- ocp.assets_dir: The path of a directory the openshift-install installation program creates to store the files that it generates.
- ocp.ovirt_config_path: The path of the ovirt-config.yaml file the installation program generates, for example, ./wrk/install-config.yaml. This file contains the credentials required to interact with the REST API of the Manager.
Red Hat Enterprise Linux CoreOS (RHCOS) section
- image_url: Enter the URL of the RHCOS image you specified for download.
- local_cmp_image_path: The path of a local download directory for the compressed RHCOS image.
- local_image_path: The path of a local directory for the extracted RHCOS image.
Profiles section
This section consists of two profiles:
- control_plane: The profile of the bootstrap and control plane nodes.
- compute: The profile of worker nodes in the compute plane.
These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements.
- cluster: The value gets the cluster name from ovirt_cluster in the General section.
- memory: The amount of memory, in GB, for the virtual machine.
- sockets: The number of sockets for the virtual machine.
- cores: The number of cores for the virtual machine.
- template: The name of the virtual machine template. If you plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster.
- operating_system: The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the value of Ignition script can be passed to the VM.
- type: Enter server as the type of the virtual machine.

  Important: You must change the value of the type parameter from high_performance to server.

- disks: The disk specifications. The control_plane and compute nodes can have different storage domains.
- size: The minimum disk size.
- name: Enter the name of a disk connected to the target cluster in RHV.
- interface: Enter the interface type of the disk you specified.
- storage_domain: Enter the storage domain of the disk you specified.
- nics: Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool.
Virtual machines section
This final section, vms, defines the virtual machines you plan to create and deploy in the cluster. vms contains three required elements:

- name: The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.yml file.
- ocp_type: The role of the virtual machine in the OpenShift Container Platform cluster. Possible values are bootstrap, master, worker.
- profile: The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute.

  You can override the value a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server, overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile.
Metadata variables
For virtual machines, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.json file you create when you build the Ignition files.

The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir:
---
- name: include metadata.json vars
include_vars:
file: "{{ ocp.assets_dir }}/metadata.json"
name: metadata
...
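You can read the same value from the shell, for example with jq. A sketch that assumes metadata.json already exists in the assets directory:

$ jq -r .infraID "$ASSETS_DIR/metadata.json"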
5.13. Specifying the RHCOS image settings
Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. Later, when you run this file as one of the Ansible playbooks, it downloads a copy of the RHCOS image from the image_url URL and saves it to the local_cmp_image_path directory. The playbook then extracts the image to the local_image_path directory and uses it to create oVirt/RHV templates.
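Optionally, you can pre-fetch and extract the image yourself with the same default paths the playbooks use. This is only a sketch; the playbooks normally download and extract the image for you:

$ curl -L -o /tmp/rhcos.qcow2.gz "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz"
$ gunzip -k /tmp/rhcos.qcow2.gz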
Procedure
- Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest.
- From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz.
- Edit the inventory.yml playbook you downloaded earlier. In it, paste the URL as the value for image_url. For example:

  rhcos:
    image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz"
5.14. Creating the install config file
You create an installation configuration file by running the installation program, openshift-install, and responding to its prompts.

When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml.

The installation program also creates a file, $HOME/.ovirt/ovirt-config.yaml, that contains all the connection parameters that are required to reach the Manager and use its REST API.

Note: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP, because you have already configured them in your infrastructure DNS.

It also uses the values you supply for parameters in inventory.yml, like the ones for oVirt cluster, oVirt storage, and oVirt network, and uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs.
Procedure
Run the installation program:
$ openshift-install create install-config --dir $ASSETS_DIR

Respond to the installation program's prompts with information about your system.
Example output
? SSH Public Key /home/user/.ssh/id_dsa.pub
? Platform <ovirt>
? Engine FQDN[:PORT] [? for help] <engine.fqdn>
? Enter ovirt-engine username <ocpadmin@internal>
? Enter password <******>
? oVirt cluster <cluster>
? oVirt storage <storage>
? oVirt network <net>
? Internal API virtual IP <172.16.0.252>
? Ingress virtual IP <172.16.0.251>
? Base Domain <example.org>
? Cluster Name <ocp4>
? Pull Secret [? for help] <********>
For Internal API virtual IP and Ingress virtual IP, supply the IP addresses you specified when you configured the DNS service.

Together, the values you enter for the Cluster Name and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org.
You can get the pull secret from the Red Hat OpenShift Cluster Manager.
5.15. Sample install-config.yaml file for RHV
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
replicas: 0
controlPlane:
hyperthreading: Enabled
name: master
replicas: 3
metadata:
name: test
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
- 1: The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
- 2, 5: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 3, 6: Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled. If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines.

  Note: Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect.

  Important: If you disable hyperthreading, whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance.

- 4: You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster.

  Note: If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.

- 7: The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
- 8: The cluster name that you specified in your DNS records.
- 9: A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic.

  Note: Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range.

- 10: The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.
- 11: The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
- 12: The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic.
- 13: You must set the platform to none. You cannot provide additional platform configuration variables for RHV infrastructure.

  Important: Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation.

- 14: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

  Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

- 15: The pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
- 16: The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).

  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.15.1. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

  For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>

- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- 5: Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.

Note: The installation program does not support the proxy readinessEndpoints field.

Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note: Only the Proxy object named cluster is supported, and no additional proxies can be created.
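After the installation completes, you can inspect the resulting proxy object:

$ oc get proxy/cluster -o yaml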
5.16. Customizing install-config.yaml
Here, you use three Python scripts to override some of the installation program’s default behaviors:
- By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes.
- By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure.
- By default, the installation program sets the platform to ovirt. However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none. Instead, you use inventory.yml to specify all of the required settings.
These snippets work with Python 3 and Python 2.
Procedure
Set the number of compute nodes to zero replicas:
$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
conf["compute"][0]["replicas"] = 0
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'

Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16, enter:

$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16"
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'

Remove the ovirt section and change the platform to none:

$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
platform = conf["platform"]
del platform["ovirt"]
platform["none"] = {}
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'

Warning: Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none, allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform, and has the following limitations:

- There will be no cluster provider, so you must manually add each machine, and there will be no node scaling capabilities.
- The oVirt CSI driver will not be installed and there will be no CSI capabilities.
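Optional: you can confirm that all three overrides took effect before generating manifests; a minimal read-back check in the same style as the snippets above:

$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
# Expect 0 replicas, your machine network CIDR, and ["none"] as the only platform key.
print(conf["compute"][0]["replicas"])
print(conf["networking"]["machineNetwork"][0]["cidr"])
print(list(conf["platform"].keys()))'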
5.17. Generate manifest files
Use the installation program to generate a set of manifest files in the assets directory.
The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you generate the manifest files.
Procedure
Optional: Create a backup copy of the install-config.yaml file:

$ cp install-config.yaml install-config.yaml.backup

Generate a set of manifests in your assets directory:
$ openshift-install create manifests --dir $ASSETS_DIR

This command displays the following messages.
Example output
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings

The command generates the following manifest files:
Example output
$ tree
.
└── wrk
    ├── manifests
    │   ├── 04-openshift-machine-config-operator.yaml
    │   ├── cluster-config.yaml
    │   ├── cluster-dns-02-config.yml
    │   ├── cluster-infrastructure-02-config.yml
    │   ├── cluster-ingress-02-config.yml
    │   ├── cluster-network-01-crd.yml
    │   ├── cluster-network-02-config.yml
    │   ├── cluster-proxy-01-config.yaml
    │   ├── cluster-scheduler-02-config.yml
    │   ├── cvo-overrides.yaml
    │   ├── etcd-ca-bundle-configmap.yaml
    │   ├── etcd-client-secret.yaml
    │   ├── etcd-host-service-endpoints.yaml
    │   ├── etcd-host-service.yaml
    │   ├── etcd-metric-client-secret.yaml
    │   ├── etcd-metric-serving-ca-configmap.yaml
    │   ├── etcd-metric-signer-secret.yaml
    │   ├── etcd-namespace.yaml
    │   ├── etcd-service.yaml
    │   ├── etcd-serving-ca-configmap.yaml
    │   ├── etcd-signer-secret.yaml
    │   ├── kube-cloud-config.yaml
    │   ├── kube-system-configmap-root-ca.yaml
    │   ├── machine-config-server-tls-secret.yaml
    │   └── openshift-config-secret-pull-secret.yaml
    └── openshift
        ├── 99_kubeadmin-password-secret.yaml
        ├── 99_openshift-cluster-api_master-user-data-secret.yaml
        ├── 99_openshift-cluster-api_worker-user-data-secret.yaml
        ├── 99_openshift-machineconfig_99-master-ssh.yaml
        ├── 99_openshift-machineconfig_99-worker-ssh.yaml
        └── openshift-install-manifests.yaml
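Optional: if you configured a cluster-wide proxy earlier, you can spot-check that those settings were carried into the generated manifest; a simple inspection, assuming the assets directory layout shown above:

$ cat $ASSETS_DIR/manifests/cluster-proxy-01-config.yaml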
Next steps
- Make control plane nodes non-schedulable.
5.18. Making control-plane nodes non-schedulable
Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable.
Procedure
To make the control plane nodes non-schedulable, enter:
$ python3 -c 'import os, yaml
path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"]
data = yaml.safe_load(open(path))
data["spec"]["mastersSchedulable"] = False
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
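Optional: you can read the value back to confirm that the change took effect; a minimal check in the same style:

$ python3 -c 'import os, yaml
path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"]
# Prints False when the edit succeeded.
print(yaml.safe_load(open(path))["spec"]["mastersSchedulable"])'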
5.19. Building the Ignition files
To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs, which fetches the Ignition files and performs the processes needed to create a node.
In addition to the Ignition files, the installation program generates the following:
- An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities.
- A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation.

The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster.
Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish.
Procedure
To build the Ignition files, enter:
$ openshift-install create ignition-configs --dir $ASSETS_DIR

Example output
$ tree
.
└── wrk
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
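The Ansible playbooks read infraID from the metadata.json file shown above. For example, you can print the value that will prefix the virtual machine names; a minimal sketch in the same style as the other snippets:

$ python3 -c 'import json, os
meta = json.load(open("%s/metadata.json" % os.environ["ASSETS_DIR"]))
print(meta["infraID"])'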
5.20. Creating templates and virtual machines
After confirming the variables in the inventory.yml file, you run the first Ansible provisioning playbook, create-templates-and-vms.yml.

This playbook uses the connection parameters for the RHV Manager from $HOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory.

If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml. It extracts the image and uploads it to RHV to create templates.

The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If the profiles have different names, it creates two templates.
When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines.
Procedure
In inventory.yml, under the control_plane and compute variables, change both instances of type: high_performance to type: server.

Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OpenShift Container Platform installation. In the inventory.yml file, prepend the value of template with infraID. For example:

control_plane:
  cluster: "{{ ovirt_cluster }}"
  memory: 16GiB
  sockets: 4
  cores: 1
  template: "{{ metadata.infraID }}-rhcos_tpl"
  operating_system: "rhcos_x64"
  ...

Create the templates and virtual machines:
$ ansible-playbook -i inventory.yml create-templates-and-vms.yml
5.21. Creating the bootstrap machine
You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve the Ignition files to the control plane nodes.
To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH.
Procedure
Create the bootstrap machine:
$ ansible-playbook -i inventory.yml bootstrap.yml

Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. To use SSH, enter:

$ ssh core@<bootstrap_ip>

Collect bootkube.service journald unit logs for the release image service from the bootstrap node:

[core@ocp4-lk6b4-bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service

Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
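For example, after the control plane nodes join, you can confirm that these errors have stopped; a simple filter, assuming you are still logged in to the bootstrap node:

[core@ocp4-lk6b4-bootstrap ~]$ journalctl -b -u bootkube.service | grep -i 'connection refused' | tail -n 5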
5.22. Creating the control plane nodes
You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane nodes to get the Ignition configuration from a URL such as https://api-int.ocp4.example.org:22623/config/master. The port number in this URL is managed by the load balancer, and is accessible only inside the cluster.
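If you need to confirm that this endpoint is reachable from inside the cluster network, a hypothetical spot check from a host on that network might look like the following; the -k flag is needed because the certificate is signed by a cluster-internal CA, and recent OpenShift versions require an Ignition Accept header:

$ curl -k -H "Accept: application/vnd.coreos.ignition+json;version=3.2.0" \
    https://api-int.ocp4.example.org:22623/config/master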
Procedure
Create the control plane nodes:
$ ansible-playbook -i inventory.yml masters.yml

While the playbook creates your control plane, monitor the bootstrapping process:

$ openshift-install wait-for bootstrap-complete --dir $ASSETS_DIR

Example output

INFO API v1.25.0 up
INFO Waiting up to 40m0s for bootstrapping to complete...

When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output.
Example output
INFO It is now safe to remove the bootstrap resources
5.23. Verifying cluster status
You can verify your OpenShift Container Platform cluster’s status during or after installation.
Procedure
In the cluster environment, export the administrator’s kubeconfig file:
$ export KUBECONFIG=$ASSETS_DIR/auth/kubeconfig

The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

View the control plane and compute machines created after a deployment:
$ oc get nodes

View your cluster's version:

$ oc get clusterversion

View your Operators' status:

$ oc get clusteroperator

View all running pods in the cluster:
$ oc get pods -A
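You can also confirm which identity the exported kubeconfig authenticates as; the installer-generated kubeconfig should report system:admin:

$ oc whoami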
5.24. Removing the bootstrap machine
After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up its resources and remove its settings from the load balancer.
Procedure
To remove the bootstrap machine from the cluster, enter:
$ ansible-playbook -i inventory.yml retire-bootstrap.yml

- Remove settings for the bootstrap machine from the load balancer directives.
5.25. Creating the worker nodes and completing the installation
Creating worker nodes is similar to creating control plane nodes. However, worker nodes do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests).
After approving the first requests, you continue approving CSRs until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can run pods.
Finally, monitor the command line to see when the installation process completes.
Procedure
Create the worker nodes:
$ ansible-playbook -i inventory.yml workers.yml

To list all of the CSRs, enter:

$ oc get csr -A

Eventually, this command displays one CSR per node. For example:
Example output
NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2lnxd   63m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master0.ocp4.example.org                             Approved,Issued
csr-hff4q   64m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-hsn96   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master2.ocp4.example.org                             Approved,Issued
csr-m724n   6m2s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-p4dz2   60m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-t9vfj   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master1.ocp4.example.org                             Approved,Issued
csr-tggtr   61m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-wcbrf   7m6s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

To filter the list and see only pending CSRs, enter:
$ watch "oc get csr -A | grep pending -i"

This command refreshes the output every two seconds and displays only pending CSRs. For example:
Example output
Every 2.0s: oc get csr -A | grep pending -i

csr-m724n   10m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-wcbrf   11m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

Inspect each pending request. For example:
$ oc describe csr csr-m724n

Example output
Name:               csr-m724n
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Sun, 19 Jul 2020 15:59:37 +0200
Requesting User:    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Pending
Subject:
  Common Name:    system:node:ocp4-lk6b4-worker1.ocp4.example.org
  Serial Number:
  Organization:   system:nodes
Events:             <none>

If the CSR information is correct, approve the request:
$ oc adm certificate approve csr-m724n
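If many worker CSRs are pending and you have verified them, you can approve them all in one pass; a convenience one-liner that uses only standard oc templating and xargs:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

When all of the worker node CSRs are approved, wait for the installation process to finish: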
$ openshift-install wait-for install-complete --dir $ASSETS_DIR --log-level debug

When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password.
5.26. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.27. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
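To verify the change, you can check the OperatorHub object and confirm that the default catalog sources are no longer listed; for example:

$ oc get operatorhub cluster -o yaml
$ oc get catalogsource -n openshift-marketplace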
Alternatively, you can use the web console to manage catalog sources. From the Administration