Chapter 3. Installing a cluster on OpenStack with customizations
In OpenShift Container Platform version 4.13, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.
3.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You verified that OpenShift Container Platform 4.13 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix.
- You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage.
- You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster.
- You have the metadata service enabled in RHOSP.
3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP
To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:
Resource | Value |
---|---|
Floating IP addresses | 3 |
Ports | 15 |
Routers | 1 |
Subnets | 1 |
RAM | 88 GB |
vCPUs | 22 |
Volume storage | 275 GB |
Instances | 7 |
Security groups | 3 |
Security group rules | 60 |
Server groups | 2 - plus 1 for each additional availability zone in each machine pool |
A cluster might function with fewer than recommended resources, but its performance is not guaranteed.
If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.
By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
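Before installation, you can compare your project's current quota and usage against these values. A minimal check, assuming the project is named openshift (substitute your own project name):
$ openstack quota show openshift
$ openstack limits show --absolute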
An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.
3.2.1. Control plane machines
By default, the OpenShift Container Platform installation process creates three control plane machines.
Each machine requires:
- An instance from the RHOSP quota
- A port from the RHOSP quota
- A flavor with at least 16 GB memory and 4 vCPUs
- At least 100 GB storage space from the RHOSP quota
3.2.2. Compute machines
By default, the OpenShift Container Platform installation process creates three compute machines.
Each machine requires:
- An instance from the RHOSP quota
- A port from the RHOSP quota
- A flavor with at least 8 GB memory and 2 vCPUs
- At least 100 GB storage space from the RHOSP quota
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.
3.2.3. Bootstrap machine
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.
The bootstrap machine requires:
- An instance from the RHOSP quota
- A port from the RHOSP quota
- A flavor with at least 16 GB memory and 4 vCPUs
- At least 100 GB storage space from the RHOSP quota
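If no existing flavor meets these requirements, an administrator can create suitable ones. The following sketch assumes the flavor names ocp.master and ocp.worker are free choices in your environment; RAM is specified in megabytes:
$ openstack flavor create --ram 16384 --vcpus 4 --disk 100 ocp.master
$ openstack flavor create --ram 8192 --vcpus 2 --disk 100 ocp.worker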
3.2.4. Load balancing requirements for user-provisioned infrastructure
Deployment with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
The load balancing infrastructure must meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
- A stateless load balancing algorithm. The options vary based on the load balancer implementation.
Important: Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster.
Configure the following ports on both the front and back of the load balancers:
Table 3.2. API load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine config server |

Note: The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration.

Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster.
Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
- A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Tip: If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
Configure the following ports on both the front and back of the load balancers:
Table 3.3. Application Ingress load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTP traffic |

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers
This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.
Example 3.1. Sample API and application Ingress load balancer configuration
global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option  httpchk GET /readyz HTTP/1.0
  option  log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s
- 1: Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
- 2 and 4: The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete.
- 3: Port 22623 handles the machine config server traffic and points to the control plane machines.
- 5: Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
- 6: Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

Note: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.
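You can also probe the API health-check endpoint through the load balancer. A minimal sketch, assuming that api.ocp4.example.com resolves to the load balancer and that the API server certificate is not yet trusted on the local machine (hence -k):
$ curl -k https://api.ocp4.example.com:6443/readyz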
3.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.13, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
3.4. Enabling Swift on RHOSP
Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.
If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder.
If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.
RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters.
Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW.
Prerequisites
- You have a RHOSP administrator account on the target environment.
- The Swift service is installed.
- On Ceph RGW, the account in url option is enabled.
Procedure
To enable Swift on RHOSP:
As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:
$ openstack role add --user <user> --project <project> swiftoperator
Your RHOSP deployment can now use Swift for the image registry.
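To confirm that the role was added, you can list the role assignments for the account. For example:
$ openstack role assignment list --user <user> --project <project> --names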
3.5. Configuring an image registry with custom storage on clusters that run on RHOSP
After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage.
Procedure
Create a YAML file that specifies the storage class and availability zone to use. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-csi-storageclass
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  availability: <availability_zone_name>
Note: OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration.
From a command line, apply the configuration:
$ oc apply -f <storage_class_file_name>
Example output
storageclass.storage.k8s.io/custom-csi-storageclass created
Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-imageregistry
  namespace: openshift-image-registry
  annotations:
    imageregistry.openshift.io: "true"
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: <your_custom_storage_class>
From a command line, apply the configuration:
$ oc apply -f <pvc_file_name>
Example output
persistentvolumeclaim/csi-pvc-imageregistry created
Replace the original persistent volume claim in the image registry configuration with the new claim:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'
Example output
config.imageregistry.operator.openshift.io/cluster patched
Over the next several minutes, the configuration is updated.
Verification
To confirm that the registry is using the resources that you defined:
Verify that the PVC claim value is identical to the name that you provided in your PVC definition:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
Example output
...
status:
  ...
  managementState: Managed
  pvc:
    claim: csi-pvc-imageregistry
...
Verify that the status of the PVC is Bound:
$ oc get pvc -n openshift-image-registry csi-pvc-imageregistry
Example output
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
csi-pvc-imageregistry   Bound    pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5   100Gi      RWO            custom-csi-storageclass   11m
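As an additional, optional check, you can confirm that the image registry pods are running in the openshift-image-registry namespace:
$ oc get pods -n openshift-image-registry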
3.6. Verifying external network access
The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP).
Prerequisites
Procedure
Using the RHOSP CLI, verify the name and ID of the 'External' network:
$ openstack network list --long -c ID -c Name -c "Router Type"
Example output
+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.
If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.
The default network ranges are:
Network | Range |
---|---|
machineNetwork | 10.0.0.0/16 |
serviceNetwork | 172.30.0.0/16 |
clusterNetwork | 10.128.0.0/14 |
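To check whether the external network's CIDR overlaps these ranges, you can list its subnets. A minimal sketch, assuming the external network is named public_network as in the earlier example output:
$ openstack subnet list --network public_network -c Name -c Subnet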
If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP.
If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.
3.7. Defining parameters for the installation program
The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.
Procedure
Create the clouds.yaml file:
- If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
  Important: Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.
- If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:
- Copy the certificate authority file to your machine.
- Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
Tip: After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
Place the clouds.yaml file in one of the following locations:
- The value of the OS_CLIENT_CONFIG_FILE environment variable
- The current directory
- A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
- A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml
The installation program searches for clouds.yaml in that order.
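For example, you can point the tooling at a clouds.yaml file in a non-default location and confirm that authentication works before you run the installation program. The cloud name shiftstack matches the sample file above; adjust both values for your environment:
$ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml
$ openstack --os-cloud shiftstack token issue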
3.8. Setting OpenStack Cloud Controller Manager options
Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP).
For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation.
Procedure
If you have not already generated manifest files for your cluster, generate them by running the following command:
$ openshift-install --dir <destination_directory> create manifests
In a text editor, open the cloud-provider configuration manifest file. For example:
$ vi openshift/manifests/cloud-provider-config.yaml
Modify the options according to the CCM reference guide.
Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:
#...
[LoadBalancer]
use-octavia=true 1
lb-provider = "amphora" 2
floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
create-monitor = True 4
monitor-delay = 10s 5
monitor-timeout = 10s 6
monitor-max-retries = 1 7
#...
- 1: This property enables Octavia integration.
- 2: This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
- 3: This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
- 4: This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider.
- 5: This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
- 6: This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
- 7: This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.
Important: Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.
Important: You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn".
Important: For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.
Save the changes to the file and proceed with installation.
Tip: You can update your cloud provider configuration after you run the installer. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.
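For example, you can check the node status from a command line and repeat the command until no node reports SchedulingDisabled:
$ oc get nodes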
3.9. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
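Optionally, you can confirm that the extracted binary runs and reports the expected release before you continue. For example:
$ ./openshift-install version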
3.10. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Note: Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:
$ rm -rf ~/.powervs
At the prompts, provide the configuration details for your cloud:
- Optional: Select an SSH key to use to access your cluster machines.
  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select openstack as the platform to target.
- Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
- Specify the floating IP address to use for external access to the OpenShift API.
- Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
- Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
- Enter a name for your cluster. The name must be 14 or fewer characters long.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
  Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
Additional resources
See the "Installation configuration parameters" section for more information about the available parameters.
3.10.1. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
- 5: Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note: The installation program does not support the proxy readinessEndpoints field.
Note: If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
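After installation, you can inspect the resulting Proxy object from a command line. For example:
$ oc get proxy/cluster -o yaml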
3.11. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
3.11.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
The API version for the | String |
|
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
Kubernetes resource | Object |
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
The configuration for the specific platform upon which to perform the installation: | Object |
| Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. |
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } |
3.11.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.
Parameter | Description | Values |
---|---|---|
| The configuration for the cluster network. | Object Note
You cannot modify parameters specified by the |
| The Red Hat OpenShift Networking network plugin to install. |
Either |
| The IP address blocks for pods.
The default value is If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between |
|
The subnet prefix length to assign to each individual node. For example, if | A subnet prefix.
The default value is |
|
The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
| The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
|
Required if you use | An IP network block in CIDR notation.
For example, Note
Set the |
3.11.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
| A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. | String array |
|
Selects an initial set of optional capabilities to enable. Valid values are | String |
|
Extends the set of optional capabilities beyond what you specify in | String array |
| Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. |
|
| The configuration for the machines that comprise the compute nodes. |
Array of |
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String |
compute: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
Required if you use |
|
|
Required if you use |
|
| The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
| Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". |
String. The name of the feature set to enable, such as |
| The configuration for the machines that comprise the control plane. |
Array of |
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String |
controlPlane: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
Required if you use |
|
|
Required if you use |
|
| The number of control plane machines to provision. |
The only supported value is |
| The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note
If your AWS account has service control policies (SCP) enabled, you must configure the |
|
| Sources and repositories for the release-image content. |
Array of objects. Includes a |
|
Required if you use | String |
| Specify one or more repositories that may also contain the same images. | Array of strings |
| How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
Setting this field to Important
If the value of the field is set to |
| The SSH key to authenticate access to your cluster machines. Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your |
For example, |
- Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters
Additional RHOSP configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
| For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
| For compute machines, the root volume’s type. |
String, for example |
| For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
| For control plane machines, the root volume’s type. |
String, for example |
|
The name of the RHOSP cloud to use from the list of clouds in the |
String, for example |
| The RHOSP external network name to be used for installation. |
String, for example |
| The RHOSP flavor to use for control plane and compute machines.
This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the |
String, for example |
3.11.5. Optional RHOSP configuration parameters
Optional RHOSP configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
| Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. |
A list of one or more UUIDs as strings. For example, |
| Additional security groups that are associated with compute machines. |
A list of one or more UUIDs as strings. For example, |
| RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
| For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. |
A list of strings, for example |
|
Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include
An
If you use a strict |
A server group policy to apply to the machine pool. For example, |
| Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. |
A list of one or more UUIDs as strings. For example, |
| Additional security groups that are associated with control plane machines. |
A list of one or more UUIDs as strings. For example, |
| RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
| For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. |
A list of strings, for example |
|
Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include
An
If you use a strict |
A server group policy to apply to the machine pool. For example, |
| The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. | An HTTP or HTTPS URL, optionally with an SHA-256 checksum.
For example, |
|
Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if
You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the
You can also use this property to enable the QEMU guest agent by including the |
A list of key-value string pairs. For example, |
| The default machine pool platform configuration. |
{ "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } |
|
An existing floating IP address to associate with the Ingress port. To use this property, you must also define the |
An IP address, for example |
|
An existing floating IP address to associate with the API load balancer. To use this property, you must also define the |
An IP address, for example |
| IP addresses for external DNS servers that cluster instances use for DNS resolution. |
A list of IP addresses as strings. For example, |
|
Whether or not to use the default, internal load balancer. If the value is set to |
|
| The UUID of a RHOSP subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.
The first item in If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP. |
A UUID as a string. For example, |
3.11.6. RHOSP parameters for failure domains
RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Red Hat OpenStack Platform (RHOSP) deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder.
Beginning with OpenShift Container Platform 4.13, there is a unified definition of failure domains for RHOSP deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place.
In RHOSP, a port describes a network connection and maps to an interface inside a compute machine. A port also:
- Is defined by a network or by one or more subnets
- Connects a machine to one or more subnets
Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to:
- The portTarget object with the ID control-plane while that object exists.
- All non-control-plane portTarget objects within its own failure domain.
- All networks in the machine pool's additionalNetworkIDs list.
To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains.
Parameter | Description | Values |
---|---|---|
| An availability zone for the server. If not specified, the cluster default is used. |
The name of the availability zone. For example, |
| An availability zone for the root volume. If not specified, the cluster default is used. |
The name of the availability zone. For example, |
|
A list of |
A list of |
|
The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to |
|
| Required. The name or ID of the network to attach to machines in the failure domain. |
A network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 or: network: name: my-network-1 |
| Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port. |
A list of |
You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset.
3.11.7. Custom subnets in RHOSP deployments
Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.
This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements:
- The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.
- The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.
- The installation program user has permission to create ports on this network, including ports with fixed IP addresses.
Clusters that use custom subnets have the following limitations:
- If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.
- If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines.
- You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network.
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool.
The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace.
3.11.8. Deploying a cluster with bare metal machines
If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.
Bare-metal compute machines are not supported on clusters that use Kuryr.
Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not.
Prerequisites
- The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API.
- Bare metal is available as a RHOSP flavor.
- If your cluster runs on a RHOSP version that is later than 16.1.6 and earlier than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes.
- The RHOSP network supports both VM and bare metal server attachment.
- If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned.
- If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
- You created an install-config.yaml file as part of the OpenShift Container Platform installation process.
Procedure
In the install-config.yaml file, edit the flavors for machines:
- If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor.
- Change the value of compute.platform.openstack.type to a bare metal flavor.
- If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet.
An example bare metal install-config.yaml file
controlPlane:
  platform:
    openstack:
      type: <bare_metal_control_plane_flavor>
...
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    openstack:
      type: <bare_metal_compute_flavor>
  replicas: 3
...
platform:
  openstack:
    machinesSubnet: <subnet_UUID>
...
Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file.
The installer may time out while waiting for bare metal machines to boot.
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
3.11.9. Cluster deployment on RHOSP provider networks
You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.
RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them.
In this configuration, OpenShift Container Platform workloads are connected to a data center by using a provider network.
OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.
Example provider network types include flat (untagged) and VLAN (802.1Q tagged).
A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.
You can learn more about provider and tenant networks in the RHOSP documentation.
3.11.9.1. RHOSP provider network requirements for cluster installation
Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions:
- The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API.
- The RHOSP networking service has the port security and allowed address pairs extensions enabled.
The provider network can be shared with other tenants.
Tip: Use the openstack network create command with the --share flag to create a network that can be shared.
The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet.
Tip:
- To create a network for a project that is named "openshift," enter the following command:
$ openstack network create --project openshift
- To create a subnet for a project that is named "openshift," enter the following command:
$ openstack subnet create --project openshift
To learn more about creating networks on RHOSP, read the provider networks documentation.
If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.
Important: Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network.
Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default.
Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:
$ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
- Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
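A minimal sketch of such an RBAC rule, assuming the provider network is named provider_net and the consuming project is named openshift (both names are placeholders for your own values):
$ openstack network rbac create --action access_as_shared --type network --target-project openshift provider_net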
3.11.9.2. Deploying a cluster that has a primary interface on a provider network
You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network.
Prerequisites
- Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation".
Procedure
- In a text editor, open the `install-config.yaml` file.
- Set the value of the `platform.openstack.apiVIPs` property to the IP address for the API VIP.
- Set the value of the `platform.openstack.ingressVIPs` property to the IP address for the Ingress VIP.
- Set the value of the `platform.openstack.machinesSubnet` property to the UUID of the provider network subnet.
- Set the value of the `networking.machineNetwork.cidr` property to the CIDR block of the provider network subnet.
The `platform.openstack.apiVIPs` and `platform.openstack.ingressVIPs` properties must both be unassigned IP addresses from the `networking.machineNetwork.cidr` block.
Section of an installation configuration file for a cluster that relies on a RHOSP provider network
...
platform:
  openstack:
    apiVIPs:
    - 192.0.2.13
    ingressVIPs:
    - 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
# ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
You cannot set the `platform.openstack.externalNetwork` or `platform.openstack.externalDNS` parameters while using a provider network for the primary network interface.
When you deploy the cluster, the installer uses the `install-config.yaml` file to deploy the cluster on the provider network.
You can add additional networks, including provider networks, to the `platform.openstack.additionalNetworkIDs` list, as in the sketch that follows.
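For example, a hypothetical excerpt that attaches one extra network might look like the following; the UUID is a placeholder:

platform:
  openstack:
    additionalNetworkIDs:
    - <additional_network_uuid>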
After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.
3.11.10. Sample customized `install-config.yaml` file for RHOSP
This sample `install-config.yaml` demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.

This sample file is provided for reference only. You must obtain your `install-config.yaml` file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
3.11.11. Example installation configuration section that uses failure domains
RHOSP failure domains are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following section of an `install-config.yaml` file demonstrates the use of failure domains in a cluster that you deploy on Red Hat OpenStack Platform (RHOSP):
# ...
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.large
      failureDomains:
      - computeAvailabilityZone: 'nova-1'
        storageAvailabilityZone: 'cinder-1'
        portTargets:
        - id: control-plane
          network:
            id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6
      - computeAvailabilityZone: 'nova-2'
        storageAvailabilityZone: 'cinder-2'
        portTargets:
        - id: control-plane
          network:
            id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1
      - computeAvailabilityZone: 'nova-3'
        storageAvailabilityZone: 'cinder-3'
        portTargets:
        - id: control-plane
          network:
            id: 8e4b4e0d-3865-4a9b-a769-559270271242
featureSet: TechPreviewNoUpgrade
# ...
3.11.12. Installation configuration for a cluster on OpenStack with a user-managed load balancer
Deployment on OpenStack with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following example `install-config.yaml` file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer.
apiVersion: v1
baseDomain: mydomain.test
compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
metadata:
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.10.0/24
platform:
  openstack:
    cloud: mycloud
    machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1
    apiVIPs:
    - 192.168.10.5
    ingressVIPs:
    - 192.168.10.7
    loadBalancer:
      type: UserManaged 2
featureSet: TechPreviewNoUpgrade 3
1. Regardless of which load balancer you use, the load balancer is deployed to this subnet.
2. The `UserManaged` value indicates that you are using a user-managed load balancer.
3. Because user-managed load balancers are in Technology Preview, you must include the `TechPreviewNoUpgrade` value to deploy a cluster that uses a user-managed load balancer.
3.12. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the `~/.ssh/authorized_keys` list for the `core` user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user `core`. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The `./openshift-install gather` command also requires the SSH public key to be in place on the cluster nodes.
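For reference, a typical invocation of that command, assuming you keep the default installation directory layout, looks like this; the directory path is a placeholder:

$ ./openshift-install gather bootstrap --dir <installation_directory>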
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1. Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.
View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.

Note: On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.

If the `ssh-agent` process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874
Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name> 1

1. Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
3.13. Enabling access to the environment
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP) tenant network. Therefore, they are not directly accessible in most RHOSP deployments.
You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
3.13.1. Enabling access with floating IP addresses
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications.
Procedure
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>.     IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>.  IN A <apps_FIP>
Note: If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your `/etc/hosts` file:

- <api_floating_ip> api.<cluster_name>.<base_domain>
- <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
- <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
- <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
- <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
- <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the `/etc/hosts` file grant access to the web console and the monitoring interface of your cluster locally. You can also use the `kubectl` or `oc` CLI tools. You can access the user applications by using the additional entries that point to the `<application_floating_ip>`. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the `install-config.yaml` file as the values of the following parameters:

- `platform.openstack.ingressFloatingIP`
- `platform.openstack.apiFloatingIP`

If you use these values, you must also enter an external network as the value of the `platform.openstack.externalNetwork` parameter in the `install-config.yaml` file.
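As a sketch only, the relevant portion of the `install-config.yaml` file might then look like the following; the IP addresses and network name are example values:

platform:
  openstack:
    apiFloatingIP: 203.0.113.10
    ingressFloatingIP: 203.0.113.11
    externalNetwork: external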
You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
3.13.2. Completing installation without floating IP addresses
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses.
In the `install-config.yaml` file, do not define the following parameters:

- `platform.openstack.ingressFloatingIP`
- `platform.openstack.apiFloatingIP`
If you cannot provide an external network, you can also leave `platform.openstack.externalNetwork` blank. If you do not provide a value for `platform.openstack.externalNetwork`, a router is not created for you, and, without additional action, the installer fails to retrieve an image from Glance. You must configure external connectivity on your own.
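As an orientation sketch, a `platform` stanza for an installation without floating IP addresses might look like the following; the cloud name is a placeholder, and the floating IP and external network parameters are intentionally absent:

platform:
  openstack:
    cloud: <cloud_name>
    # apiFloatingIP, ingressFloatingIP, and externalNetwork are omitted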
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>.     IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>.  IN A <ingress_port_IP>
If you do not control the DNS server, you can add the records to your `/etc/hosts` file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
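For example, assuming you know the API and Ingress port IP addresses, the `/etc/hosts` entries could look like the following; the console and OAuth entries stand in for the wildcard `*.apps` record, which `/etc/hosts` cannot express:

<api_port_IP> api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>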
3.14. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the `create cluster` command of the installation program only once, during initial installation.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the `kubeadmin` user.
- Credential information also outputs to `<installation_directory>/.openshift_install.log`.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
3.15. Verifying cluster status
You can verify your OpenShift Container Platform cluster’s status during or after installation.
Procedure
In the cluster environment, export the administrator’s kubeconfig file:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1. For `<installation_directory>`, specify the path to the directory that you stored the installation files in.

The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster’s version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
3.16. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster `kubeconfig` file. The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the `oc` CLI.
Procedure
Export the `kubeadmin` credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1. For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
Verify you can run `oc` commands successfully using the exported configuration:

$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
3.17. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
3.18. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If you need to enable external access to node ports, configure ingress cluster traffic by using a node port.
- If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses.