Installing on Oracle Edge Cloud
Installing OpenShift Container Platform on Oracle Cloud Edge
Chapter 1. Installing a cluster on Oracle Edge Cloud by using the Assisted Installer
With Oracle® Edge Cloud, you can run applications and middleware by using Oracle® Cloud Infrastructure (OCI) services on high-performance cloud infrastructure in your data center.
The following procedures describe a cluster installation on Oracle® Compute Cloud@Customer as an example.
1.1. Supported Oracle Edge Cloud infrastructures
The following table describes the support status of each Oracle® Edge Cloud infrastructure offering:
| Infrastructure type | Support status |
|---|---|
| Private Cloud Appliance | General Availability |
| Oracle Compute Cloud@Customer | General Availability |
| Roving Edge | Technology Preview |
1.2. Overview
You can install OpenShift Container Platform on Oracle Edge Cloud by using the Assisted Installer.
For an alternative installation method, see "Installing a cluster on Oracle® Edge Cloud by using the Agent-based Installer".
Preinstallation considerations
- Ensure that your installation meets the prerequisites specified for Oracle. For details, see the "Access and Considerations" section in the Oracle documentation.
- Ensure that your infrastructure is certified and uses a compatible cloud instance type. For details, see Oracle Cloud Infrastructure.
- Ensure that you are performing the installation on a virtual machine.
Installation process
The installation process builds a bastion host within the designated compartment of the OpenShift Container Platform cluster. The bastion host is used to run two Terraform scripts:
- The first script builds IAM Resources in the OCI Home region of the Oracle® Edge Cloud system (two Dynamic Groups and one Policy).
- The second script builds the infrastructure resources on the Oracle® Edge Cloud system to support the OpenShift Container Platform cluster, including the OpenShift Container Platform VCN, public and private subnets, load balancers, Internet GW, NAT GW, and DNS server. The script includes all the resources needed to activate the control plane nodes and compute nodes that form a cluster.
The bastion host is installed in the designated OpenShift Container Platform Compartment and configured to communicate through a designated Oracle® Edge Cloud DRG Subnet or Internet GW Subnet within the Oracle® Edge Cloud parent tenancy.
The installation process subsequently provisions three control plane (master) nodes and three compute (worker) nodes, together with the external and internal Load Balancers that form the cluster. This is the standard implementation for Oracle Edge Cloud.
Main steps
The main steps of the procedure are as follows:
- Preparing the Oracle® Edge Cloud bastion server.
- Running the Terraform script via the Home region.
- Preparing the OpenShift Container Platform image for Oracle Edge Cloud.
- Running the Terraform script via the Oracle® Edge Cloud region.
- Installing the cluster by using the Assisted Installer web console.
1.3. Preparing the OCI bastion server
By implementing a bastion host, you can securely and efficiently manage access to your Oracle Cloud Infrastructure (OCI) resources, ensuring that your private instances remain protected and accessible only through a secure, controlled entry point.
Prerequisites
- See the "Bastion server - prerequisites" section in the Oracle documentation.
Procedure
- Install the bastion server. For details, see the "Bastion Installation" section in the Oracle documentation.
- Install the Terraform application which is used to run the Terraform script. For details, see the "Terraform Installation" section in the Oracle documentation.
- Install and configure the OCI command-line interface (CLI). For details, see the "Installing and Configuring the OCI CLI" section in the Oracle documentation.
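After the installs, a quick check that the tooling is on the bastion's PATH can save debugging later. This is a generic sketch; `ssh`, `terraform`, and `oci` are the default binary names, so adjust if you installed to a custom location.

```shell
# Verify that the bastion tooling is available on the PATH.
missing=0
for tool in ssh terraform oci; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

If any tool reports `missing`, revisit the corresponding installation step before continuing.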
1.4. Running the Terraform script via the Home region
Copy the Terraform scripts createInfraResources.tf and terraform.tfvars onto the bastion server. Then run the createInfraResources.tf script to create the Dynamic Group Identity resources on your Oracle Cloud Infrastructure (OCI) Home Region. These resources include dynamic groups, policies, and tags.
Prerequisites
- You have tenancy privileges to create Dynamic Groups and Policies. If not, you can manually provision them during this procedure.
Procedure
- Connect to the bastion server via SSH.
- Create the `OpenShift\createResourceOnHomeRegion` folders.
- Copy the `createInfraResources.tf` and `terraform.tfvars` files from the C3_PCA GitHub repository into the `createResourceOnHomeRegion` folder.
- Ensure that you have access to the source environment, and that your C3 certificate has been exported.
- Run the `createInfraResources.tf` Terraform script.
For the full procedure, see the "Terraform Script Execution Part-1 (Run Script via Home Region)" section in the Oracle documentation.
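Assuming the folder layout from the steps above, the Part-1 run might look like the following sketch. The path is an example, and the `command -v` guard simply skips the Terraform calls on machines where Terraform is not installed yet:

```shell
# Build the Part-1 working directory and run the Home-region script.
workdir="$HOME/OpenShift/createResourceOnHomeRegion"
mkdir -p "$workdir"
# createInfraResources.tf and terraform.tfvars must be copied here first
# (from the C3_PCA GitHub repository, as described above).
if command -v terraform >/dev/null 2>&1; then
  cd "$workdir"
  terraform init    # downloads the OCI provider plugins
  terraform plan    # preview the Dynamic Groups and Policy to be created
  terraform apply   # create the IAM resources in the Home region
fi
```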
1.5. Preparing the OCI image
Generate the OpenShift Container Platform ISO image in the Assisted Installer on the Red Hat portal. Then, convert the image to an Oracle Edge Cloud compatible image and upload it to the Custom Images page of your Oracle Edge Cloud environment.
You can generate, convert, and upload the image on your laptop; you do not need to use the bastion server or environments such as the Oracle Solution Center.
1.5.1. Generating the image in the Assisted Installer
Create a cluster and download the discovery ISO image.
Procedure
- Log in to the Assisted Installer web console with your credentials.
- In the Red Hat OpenShift tile, select OpenShift.
- In the Red Hat OpenShift Container Platform tile, select Create Cluster.
- On the Cluster Type page, scroll to the end of the Cloud tab, and select Oracle Cloud Infrastructure (virtual machines).
- On the Create an OpenShift Cluster page, select the Interactive tile.
On the Cluster Details page, complete the following fields:

| Field | Action required |
|---|---|
| Cluster name | Specify the name of your OpenShift Container Platform cluster. This is the same name that you used to create the resources by using the Terraform scripts. The name must be between 1-54 characters. It can use lowercase alphanumeric characters or hyphens (-), but must start and end with a lowercase letter or a number. |
| Base domain | Specify the base domain of the cluster. This is the value used for the `zone_dns` variables in the Terraform scripts that run on Oracle® Edge Cloud. Make a note of the value. |
| OpenShift version | Select OpenShift 4.16.20. If it is not immediately visible, scroll to the end of the dropdown menu, select Show all available versions, and type the version in the search box. |
| Integrate with external partner platforms | Select Oracle Cloud Infrastructure. After you specify this value, the Include custom manifests checkbox is selected by default and the Custom manifests page is added to the wizard. |
- Leave the default settings for the remaining fields, and click Next.
- On the Operators page, click Next.
On the Host Discovery page, click Add hosts and complete the following steps:
Note: The minimal ISO image is the mandatory provisioning type for Oracle Edge Cloud, and cannot be changed.
In the SSH public key field, add the SSH public key by copying the output of the following command:

    $ cat ~/.ssh/id_rsa.pub

The SSH public key will be installed on all OpenShift Container Platform control plane and compute nodes.
- Click the Show proxy settings checkbox.
Add the proxy variables from the `/etc/environment` file of the bastion server that you configured earlier:

    http_proxy=http://www-proxy.<your_domain>.com:80
    https_proxy=http://www-proxy.<your_domain>.com:80
    no_proxy=localhost,127.0.0.1,1,2,3,4,5,6,7,8,9,0,.<your_domain>.com  # for example, .oracle.com,.oraclecorp.com

- Click Generate Discovery ISO to generate the discovery ISO image file.
- Click Download Discovery ISO to save the file to your local system. After you download the ISO file, you can rename it as required, for example `discovery_image_<your_cluster_name>.iso`.
1.5.2. Converting and uploading the image to Oracle Edge Cloud
Convert the ISO image to an Oracle Cloud Infrastructure (OCI) image and upload it to your Oracle Edge Cloud system from your OCI Home Region Object Store.
Procedure
- Convert the image from ISO to OCI.
- Upload the OCI image to an OCI bucket, and generate a Pre-Authenticated Request (PAR) URL.
- Import the OCI image to the Oracle® Edge Cloud portal.
- Copy the Oracle Cloud Identifier (OCID) of the image for use in the next procedure.
For the full procedure, see steps 6 to 8 in the "OpenShift Image Preparation" section of the Oracle documentation.
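As a rough sketch of those four steps from the CLI, the flow might look like this. All names (the image files, bucket, PAR name, and expiry) are hypothetical placeholders; follow the Oracle documentation for the authoritative commands:

```shell
# Hypothetical file and bucket names; adjust everything to your environment.
iso="discovery_image_mycluster.iso"
img="discovery_image_mycluster.qcow2"

# 1. Convert the ISO to a QCOW2 image that OCI can import as a custom image.
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img convert -f raw -O qcow2 "$iso" "$img"
else
  echo "qemu-img not installed; skipping conversion"
fi

# 2. Upload the image to an Object Storage bucket and create a
#    Pre-Authenticated Request (PAR) URL for the import.
if command -v oci >/dev/null 2>&1; then
  oci os object put --bucket-name my-openshift-bucket --file "$img"
  oci os preauth-request create --bucket-name my-openshift-bucket \
    --name mycluster-par --access-type ObjectRead \
    --object-name "$img" --time-expires "2030-01-01T00:00:00Z"
else
  echo "oci CLI not installed; skipping upload"
fi
```

The PAR URL returned by the last command is what you use to import the image into the Oracle® Edge Cloud portal.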
1.6. Running the Terraform script via the Oracle® Edge Cloud region
Run the terraform.tfvars Terraform script to create all infrastructure resources on Oracle® Edge Cloud. These resources include the OpenShift Container Platform VCN, public and private subnets, load balancers, internet GW, NAT GW, and DNS server.
This procedure deploys a cluster consisting of three control plane (master) and three compute (worker) nodes. After deployment, you must rename and reboot the nodes. This process temporarily duplicates nodes, requiring manual cleanup in the next procedure.
Procedure
- Connect to the bastion server via SSH.
- Set the C3 Certificate location and export the certificate.
- Run the `terraform.tfvars` script to create three control plane nodes and three compute nodes.
- Update the labels for the control plane and compute nodes.
- Stop and restart the instances one by one on the Oracle® Edge Cloud portal.
For the full procedure, see the "Terraform Script Execution - Part 2" section in the Oracle documentation.
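The stop/restart step can also be performed with the OCI CLI instead of the portal. The instance OCID below is a placeholder; repeat the commands for each node, one at a time:

```shell
# Placeholder OCID; substitute the OCID of each instance in turn.
instance_ocid="ocid1.instance.oc1..example"
if command -v oci >/dev/null 2>&1; then
  # Stop the instance, wait until it reaches STOPPED, then start it again.
  oci compute instance action --instance-id "$instance_ocid" \
    --action STOP --wait-for-state STOPPED
  oci compute instance action --instance-id "$instance_ocid" \
    --action START --wait-for-state RUNNING
fi
```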
1.7. Completing the installation by using the Assisted Installer web console
After you configure the infrastructure, the instances are running and ready to be registered with Red Hat.
1.7.1. Assigning node roles
If the Terraform scripts completed successfully, twelve hosts are now listed for the cluster. Three control plane hosts and three compute hosts have the status "Disconnected". Three control plane hosts and three compute hosts have the status "Insufficient".
Delete the disconnected hosts and assign roles to the remaining hosts.
Procedure
- From the Assisted Installer web console, select the cluster and navigate to the Host discovery page.
- Delete the six hosts with a "Disconnected" status by clicking the option button for each host and selecting Remove host. The status of the remaining hosts changes from "Insufficient" to "Ready". This process can take up to three minutes.
- From the Role column, assign the Control plane role to the three nodes with a boot size of 1.10 TB. Assign the Worker role to the three nodes with a boot size of 100 GB.
- Rename any hosts with a name longer than 63 characters by clicking the option button for the host and selecting Change hostname. Otherwise the cluster installation will fail.
- Click Next.
- On the Storage page, click Next.
1.7.2. Configuring networking
On the Networking page, add the NTP sources for any hosts that display the Some validations failed status.
Procedure
- In the Host inventory table, click the Some validations failed link for each host displaying this status.
- Click Add NTP sources, and then add the IP address `169.254.169.254` for one of the nodes.
- Wait for 2 - 3 minutes until all the Some validations failed indicators disappear.
- Click Next.
1.7.3. Adding custom manifests
Create, modify, and upload the four mandatory custom manifests provided by Oracle.
- In the `C3/custom_manifests_C3/manifests` folder, the following manifests are mandatory:
  - `oci-ccm.yml`
  - `oci-csi.yml`
- In the `C3/custom_manifests_C3/openshift` folder, the following manifests are mandatory:
  - `machineconfig-ccm.yml`
  - `machineconfig-csi.yml`
Prerequisites
- Prepare the custom manifests. For details, see step 8 in the "Install the Cluster using the RH Assisted Installer UI" section of the Oracle documentation.
Procedure
- Navigate to the Custom manifests page.
- Upload and save the `oci-ccm.yml` and `oci-csi.yml` manifest files:
  - In the Folder field, select manifests.
  - In the File name field, enter `oci-ccm.yml`.
  - In the Content section, click Browse.
  - Select the `oci-ccm.yml` file from the `C3/custom_manifests_C3/manifests` folder.
  - Click Add another manifest and repeat the previous substeps for the `oci-csi.yml` file.
- Upload and save the `machineconfig-ccm.yml` and `machineconfig-csi.yml` manifest files:
  - Click Add another manifest.
  - In the Folder field, select openshift.
  - In the File name field, enter `machineconfig-ccm.yml`.
  - In the Content section, click Browse.
  - Select the `machineconfig-ccm.yml` file from the `C3/custom_manifests_C3/openshift` folder.
  - Click Add another manifest and repeat the previous substeps for the `machineconfig-csi.yml` file.
- Click Next to save the custom manifests.
- From the Review and create page, click Install cluster to create your OpenShift Container Platform cluster. This process takes approximately thirty minutes.
1.8. Opening OpenShift Container Platform from the Oracle Edge Cloud web console
For instructions to access the OpenShift Container Platform console from Oracle Edge Cloud, see steps 15 - 17 in the "Install the Cluster using the RH Assisted Installer UI" section of the Oracle documentation.
Chapter 2. Installing a cluster on Oracle Edge Cloud by using the Agent-based Installer
You can use the Agent-based Installer to install a cluster on Oracle® Edge Cloud, so that you can run cluster workloads on on-premise infrastructure while still using Oracle® Cloud Infrastructure (OCI) services.
The following procedures describe a cluster installation on Oracle® Compute Cloud@Customer as an example.
2.1. Supported Oracle Edge Cloud infrastructures
The following table describes the support status of each Oracle® Edge Cloud infrastructure offering:
| Infrastructure type | Support status |
|---|---|
| Private Cloud Appliance | General Availability |
| Oracle Compute Cloud@Customer | General Availability |
| Roving Edge | Technology Preview |
2.2. Installation process workflow
The following workflow describes a high-level outline for the process of installing an OpenShift Container Platform cluster on Oracle Edge Cloud using the Agent-based Installer:
- Create Oracle Cloud Infrastructure (OCI) resources and services (Oracle).
- Prepare configuration files for the Agent-based Installer (Red Hat).
- Generate the agent ISO image (Red Hat).
- Convert the ISO image to an OCI image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Oracle Edge Cloud system (Oracle).
- Disconnected environments: Prepare a web server that is accessible by Oracle Edge Cloud instances (Red Hat).
- Disconnected environments: Upload the rootfs image to the web server (Red Hat).
- Configure your firewall for OpenShift Container Platform (Red Hat).
- Create control plane nodes and configure load balancers (Oracle).
- Create compute nodes and configure load balancers (Oracle).
- Verify that your cluster runs on Oracle Edge Cloud (Oracle).
2.3. Creating OCI infrastructure resources and services
You must create an Oracle Edge Cloud environment on your virtual machine (VM) shape. By creating this environment, you can install OpenShift Container Platform and deploy a cluster on an infrastructure that supports a wide range of cloud options and strong security policies. Having prior knowledge of Oracle Cloud Infrastructure (OCI) components can help you with understanding the concept of OCI resources and how you can configure them to meet your organizational needs.
To ensure compatibility with OpenShift Container Platform, you must set A as the record type for each DNS record and name the records as follows:

- `api.<cluster_name>.<base_domain>`, which targets the `apiVIP` parameter of the API load balancer
- `api-int.<cluster_name>.<base_domain>`, which targets the `apiVIP` parameter of the API load balancer
- `*.apps.<cluster_name>.<base_domain>`, which targets the `ingressVIP` parameter of the Ingress load balancer

The `api.*` and `api-int.*` DNS records relate to control plane machines, so you must ensure that all nodes in your installed OpenShift Container Platform cluster can access these DNS records.
Prerequisites
- You configured an OCI account to host the OpenShift Container Platform cluster. See "Access and Considerations" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation).
Procedure
Create the required OCI resources and services.
For more information, see "Terraform Script Execution" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation).
2.4. Creating configuration files for installing a cluster on Oracle Edge Cloud
You must create the install-config.yaml and the agent-config.yaml configuration files so that you can use the Agent-based Installer to generate a bootable ISO image. The Agent-based installation comprises a bootable ISO that has the Assisted discovery agent and the Assisted Service. Both of these components are required to perform the cluster installation, but the latter component runs on only one of the hosts.
You can also use the Agent-based Installer to generate or accept Zero Touch Provisioning (ZTP) custom resources.
Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing the method for users.
- You have read the "Preparing to install with the Agent-based Installer" documentation.
- You downloaded the Agent-Based Installer and the command-line interface (CLI) from the Red Hat Hybrid Cloud Console.
- If you are installing in a disconnected environment, you have prepared a mirror registry in your environment and mirrored release images to the registry.

  Important: Check that your `openshift-install` binary version relates to your local image container registry and not a shared registry, such as Red Hat Quay, by running the following command:

      $ ./openshift-install version

  Example output for a shared registry binary:

      ./openshift-install 4.20.0
      built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca
      release image registry.ci.openshift.org/origin/release:4.20ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363
      release architecture amd64

- You have logged in to the OpenShift Container Platform with administrator privileges.
Procedure
- Create an installation directory to store configuration files in by running the following command:

      $ mkdir ~/<directory_name>

- Configure the `install-config.yaml` configuration file to meet the needs of your organization, and save the file in the directory that you created. The `install-config.yaml` file sets an external platform. When you edit the file, note the following:
  - Set `baseDomain` to the base domain of your cloud provider.
  - Set the machine network CIDR to the IP address range from the virtual cloud network (VCN) that the CIDR allocates to resources and components that operate on your network.
  - Depending on your infrastructure, you can set the compute and control plane `architecture` to either `arm64` or `amd64`.
  - Set `OCI` as the external platform, so that OpenShift Container Platform can integrate with OCI.
  - Set `sshKey` to your SSH public key.
  - Set `pullSecret` to the pull secret that you need for authentication purposes when downloading container images for OpenShift Container Platform components and services, such as Quay.io. See Install OpenShift Container Platform 4 from the Red Hat Hybrid Cloud Console.
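For orientation, a minimal `install-config.yaml` covering the fields described above might look as follows. This is a sketch only: angle-bracket values are placeholders, and the exact schema for your OpenShift Container Platform version is authoritative.

```yaml
apiVersion: v1
baseDomain: <base_domain>          # base domain of your cloud provider
metadata:
  name: <cluster_name>
networking:
  machineNetwork:
  - cidr: <vcn_cidr>               # CIDR range allocated from the VCN
compute:
- name: worker
  architecture: amd64              # or arm64, depending on your infrastructure
  replicas: 0
controlPlane:
  name: master
  architecture: amd64              # or arm64
  replicas: 3
platform:
  external:
    platformName: oci              # integrate with OCI as the external platform
    cloudControllerManager: External
sshKey: <ssh_public_key>
pullSecret: '<pull_secret>'
```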
- Create a directory on your local system named `openshift`. This must be a subdirectory of the installation directory.

  Important: Do not move the `install-config.yaml` or `agent-config.yaml` configuration files to the `openshift` directory.

- Configure the Oracle custom manifest files:
  - Go to "Prepare the OpenShift Master Images" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation).
  - Copy and paste the `oci-ccm.yml`, `oci-csi.yml`, and `machineconfig-ccm.yml` files into your `openshift` directory.
  - Edit the `oci-ccm.yml` and `oci-csi.yml` files to specify the compartment Oracle® Cloud Identifier (OCID), VCN OCID, subnet OCID from the load balancer, the security lists OCID, and the `c3-cert.pem` section.
- Configure the `agent-config.yaml` configuration file, for an IPv4 network, to meet your organization's requirements. When you edit the file, note the following:
  - Set the cluster name to the name that you specified in your DNS record.
  - Set the namespace of your cluster on OpenShift Container Platform.
  - If you use IPv4 as the network IP address format, ensure that you set the `rendezvousIP` parameter to an IPv4 address that the VCN's Classless Inter-Domain Routing (CIDR) method allocates on your network. Also ensure that at least one instance from the pool of instances that you booted with the ISO matches the IP address value that you set for the `rendezvousIP` parameter.
  - Set `bootArtifactsBaseURL` to the URL of the server where you want to upload the rootfs image. This parameter is required only for disconnected environments.
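A minimal `agent-config.yaml` covering the fields described above might look like this sketch. Angle-bracket values are placeholders, and the `apiVersion` value can vary by release; verify it against your installer's documentation.

```yaml
apiVersion: v1beta1                      # check the exact apiVersion for your release
kind: AgentConfig
metadata:
  name: <cluster_name>                   # the cluster name from your DNS record
  namespace: <cluster_namespace>
rendezvousIP: <ipv4_address>             # an IPv4 address from the VCN CIDR that
                                         # matches one of the booted instances
bootArtifactsBaseURL: <http_server_url>  # disconnected environments only
```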
- Generate a minimal ISO image, which excludes the rootfs image, by entering the following command in your installation directory:

      $ ./openshift-install agent create image --log-level debug

  The command also completes the following actions:
  - Creates a subdirectory, `./<installation_directory>/auth`, and places the `kubeadmin-password` and `kubeconfig` files in the subdirectory.
  - Creates a `rendezvousIP` file based on the IP address that you specified in the `agent-config.yaml` configuration file.
  - Optional: Imports any modifications that you made to the `agent-config.yaml` and `install-config.yaml` configuration files into the Zero Touch Provisioning (ZTP) custom resources.

  Important: The Agent-based Installer uses Red Hat Enterprise Linux CoreOS (RHCOS). The rootfs image, which is mentioned in a later step, is required for booting, recovering, and repairing your operating system.
- Disconnected environments only: Upload the rootfs image to a web server.
  - Go to the `./<installation_directory>/boot-artifacts` directory that was generated when you created the minimal ISO image.
  - Use your preferred web server, such as any Hypertext Transfer Protocol daemon (`httpd`), to upload the rootfs image to the location specified in the `bootArtifactsBaseURL` parameter of the `agent-config.yaml` file.

    For example, if the `bootArtifactsBaseURL` parameter states `http://192.168.122.20`, upload the generated rootfs image to this location so that the Agent-based Installer can access the image from `http://192.168.122.20/agent.x86_64-rootfs.img`. After the Agent-based Installer boots the minimal ISO for the external platform, it downloads the rootfs image from that location into the system memory.

    Note: The Agent-based Installer also adds the value of the `bootArtifactsBaseURL` parameter to the minimal ISO image's configuration, so that when the Operator boots a cluster's node, the Agent-based Installer downloads the rootfs image into system memory.

    Important: The full ISO image, which is in excess of 1 GB, includes the rootfs image and is larger than the minimal ISO image, which is typically less than 150 MB.
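For example, staging the rootfs image into an `httpd` document root can be sketched as follows. The paths are examples; any web server reachable by the instances works:

```shell
# Stage the generated rootfs image so that it is served at the
# bootArtifactsBaseURL value from agent-config.yaml (example paths).
rootfs="./boot-artifacts/agent.x86_64-rootfs.img"
docroot="/var/www/html"   # default httpd DocumentRoot
if [ -f "$rootfs" ] && [ -d "$docroot" ]; then
  install -m 0644 "$rootfs" "$docroot/"
fi
# From a host on the instance network, confirm the image is reachable, e.g.:
#   curl -I http://192.168.122.20/agent.x86_64-rootfs.img
```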
2.5. Configuring your firewall for OpenShift Container Platform
Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires to function.

There are no special configuration considerations for services running only on control plane nodes compared to compute nodes.
If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster.
Procedure
Set the following registry URLs for your firewall’s allowlist:
| URL | Port | Function |
|---|---|---|
| `registry.redhat.io` | 443 | Provides core container images |
| `access.redhat.com` | 443 | Hosts a signature store that a container client requires for verifying images pulled from `registry.access.redhat.com`. In a firewall environment, ensure that this resource is on the allowlist. |
| `registry.access.redhat.com` | 443 | Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. |
| `quay.io` | 443 | Provides core container images |
| `cdn.quay.io` | 443 | Provides core container images |
| `cdn01.quay.io` | 443 | Provides core container images |
| `cdn02.quay.io` | 443 | Provides core container images |
| `cdn03.quay.io` | 443 | Provides core container images |
| `cdn04.quay.io` | 443 | Provides core container images |
| `cdn05.quay.io` | 443 | Provides core container images |
| `cdn06.quay.io` | 443 | Provides core container images |
| `sso.redhat.com` | 443 | The https://console.redhat.com site uses authentication from `sso.redhat.com`. |
| `icr.io` | 443 | Provides IBM Cloud Pak container images. This domain is only required if you use IBM Cloud Paks. |
| `cp.icr.io` | 443 | Provides IBM Cloud Pak container images. This domain is only required if you use IBM Cloud Paks. |

Note the following:

- You can use the wildcard `*.quay.io` instead of `cdn.quay.io` and `cdn0[1-6].quay.io` in your allowlist.
- You can use the wildcard `*.access.redhat.com` to simplify the configuration and ensure that all subdomains, including `registry.access.redhat.com`, are allowed.
- When you add a site, such as `quay.io`, to your allowlist, do not add a wildcard entry, such as `*.quay.io`, to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as `cdn01.quay.io`.
- Set your firewall’s allowlist to include any site that provides resources for a language or framework that your builds require.
If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Lightspeed:

| URL | Port | Function |
|---|---|---|
| `cert-api.access.redhat.com` | 443 | Required for Telemetry |
| `api.access.redhat.com` | 443 | Required for Telemetry |
| `infogw.api.openshift.com` | 443 | Required for Telemetry |
| `console.redhat.com` | 443 | Required for Telemetry and for `insights-operator` |

If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud:
Expand Cloud URL Port Function Alibaba
*.aliyuncs.com443
Required to access Alibaba Cloud services and resources. Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use.
AWS
aws.amazon.com443
Used to install and manage clusters in an AWS environment.
*.amazonaws.comAlternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist:
443
Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use.
ec2.amazonaws.com443
Used to install and manage clusters in an AWS environment.
events.amazonaws.com443
Used to install and manage clusters in an AWS environment.
iam.amazonaws.com443
Used to install and manage clusters in an AWS environment.
route53.amazonaws.com443
Used to install and manage clusters in an AWS environment.
*.s3.amazonaws.com443
Used to install and manage clusters in an AWS environment.
*.s3.<aws_region>.amazonaws.com443
Used to install and manage clusters in an AWS environment.
*.s3.dualstack.<aws_region>.amazonaws.com443
Used to install and manage clusters in an AWS environment.
sts.amazonaws.com443
Used to install and manage clusters in an AWS environment.
sts.<aws_region>.amazonaws.com443
Used to install and manage clusters in an AWS environment.
tagging.us-east-1.amazonaws.com443
Used to install and manage clusters in an AWS environment. This endpoint is always
us-east-1, regardless of the region the cluster is deployed in.ec2.<aws_region>.amazonaws.com443
Used to install and manage clusters in an AWS environment.
elasticloadbalancing.<aws_region>.amazonaws.com443
Used to install and manage clusters in an AWS environment.
servicequotas.<aws_region>.amazonaws.com443
Required. Used to confirm quotas for deploying the service.
tagging.<aws_region>.amazonaws.com443
Allows the assignment of metadata about AWS resources in the form of tags.
*.cloudfront.net443
Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront.
GCP
*.googleapis.com443
Required to access Google Cloud services and resources. Review Cloud Endpoints in the Google Cloud documentation to find the endpoints to allow for your APIs.
accounts.google.com443
Required to access your Google Cloud account.
Microsoft Azure
management.azure.com443
Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs.
*.blob.core.windows.net443
Required to download Ignition files.
login.microsoftonline.com443
Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs.
Allowlist the following URLs:
| URL | Port | Function |
|---|---|---|
| `*.apps.<cluster_name>.<base_domain>` | 443 | Required to access the default cluster routes unless you set an ingress wildcard during installation. |
| `api.openshift.com` | 443 | Required both for your cluster token and to check if updates are available for the cluster. |
| `console.redhat.com` | 443 | Required for your cluster token. |
| `mirror.openshift.com` | 443 | Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. |
| `quayio-production-s3.s3.amazonaws.com` | 443 | Required to access Quay image content in AWS. |
| `rhcos.mirror.openshift.com` | 443 | Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. |
| `sso.redhat.com` | 443 | The https://console.redhat.com site uses authentication from `sso.redhat.com`. |
| `storage.googleapis.com/openshift-release` | 443 | A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. |
Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain>, then allow these routes:

- oauth-openshift.apps.<cluster_name>.<base_domain>
- canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain>
- console-openshift-console.apps.<cluster_name>.<base_domain>, or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty.
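If you script your firewall configuration, the three per-cluster route hostnames can be derived from the cluster name and base domain. The following is a minimal sketch; the cluster name and base domain shown are placeholders, not values from this document:

```shell
#!/usr/bin/env bash
# Derive the per-cluster route hostnames to allow instead of the
# *.apps.<cluster_name>.<base_domain> wildcard.
# CLUSTER and DOMAIN are placeholder values; substitute your own.
CLUSTER=mycluster
DOMAIN=example.com

ROUTES=()
for route in oauth-openshift canary-openshift-ingress-canary console-openshift-console; do
  host="${route}.apps.${CLUSTER}.${DOMAIN}"
  ROUTES+=("$host")
  echo "$host"
done
```

If the spec.route.hostname field of the consoles.operator/cluster object is set, allow that hostname for the console route instead of the derived one.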
Allowlist the following URL for optional third-party content:

| URL | Port | Function |
|---|---|---|
| registry.connect.redhat.com | 443 | Required for all third-party images and certified operators. |
If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs:

- 1.rhel.pool.ntp.org
- 2.rhel.pool.ntp.org
- 3.rhel.pool.ntp.org
If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall.
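For example, on RHEL-based hosts that use chrony for time synchronization, pointing at your own NTP server is a one-line change. A sketch of the relevant /etc/chrony.conf entry, with ntp.example.com standing in for your server:

```
# /etc/chrony.conf (sketch): replace ntp.example.com with your NTP server
server ntp.example.com iburst
```

After editing the file, restart the service with `systemctl restart chronyd` for the change to take effect.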
2.6. Running a cluster on Oracle Edge Cloud
To run a cluster on Oracle® Edge Cloud, you must first convert your generated Agent ISO image into an OCI image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Oracle Edge Cloud system.
Oracle Edge Cloud supports the following OpenShift Container Platform cluster topologies:
- A single-node OpenShift Container Platform cluster.
- A highly available cluster that has a minimum of three control plane instances and two compute instances.
- A compact three-node cluster that has a minimum of three control plane instances.
Prerequisites
- You generated an Agent ISO image. See the "Creating configuration files for installing a cluster on Oracle Edge Cloud" section.
Procedure
- Convert the agent ISO image to an OCI image, upload it to an OCI Home Region Bucket, and then import the uploaded image to the Oracle Edge Cloud system. See "Prepare the OpenShift Master Images" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation) for instructions.
- Create control plane instances on Oracle Edge Cloud. See "Create control plane instances on C3 and Master Node LB Backend Sets" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation) for instructions.
- Create a compute instance from the supplied base image for your cluster topology. See "Add worker nodes" in OpenShift Cluster Setup with Agent Based Installer on Compute Cloud@Customer (Oracle documentation) for instructions.

Important: Before you create the compute instance, check that you have enough memory and disk resources for your cluster. Additionally, ensure that at least one compute instance has the same IP address as the address stated under rendezvousIP in the agent-config.yaml file.
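Before creating instances, it can help to confirm that the rendezvousIP in agent-config.yaml is among the IP addresses you plan to assign. The following is a minimal sketch; the file contents, cluster name, and IP addresses are hypothetical examples, not values from this document:

```shell
#!/usr/bin/env bash
# Create a sample agent-config.yaml for illustration; in practice,
# point at your real file. All names and IPs below are hypothetical.
cat > agent-config.yaml <<'EOF'
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: example-cluster
rendezvousIP: 10.0.0.10
EOF

# IP addresses you plan to assign to your instances (placeholders)
instance_ips="10.0.0.10 10.0.0.11 10.0.0.12"

# Extract the rendezvousIP value from the configuration file
rendezvous=$(awk '$1 == "rendezvousIP:" {print $2}' agent-config.yaml)

# Check that the rendezvousIP matches one planned instance IP exactly
if echo "$instance_ips" | tr ' ' '\n' | grep -qx "$rendezvous"; then
  result="match"
  echo "rendezvousIP $rendezvous matches a planned instance IP"
else
  result="missing"
  echo "rendezvousIP $rendezvous is not among the planned instance IPs" >&2
fi
```

The same check works against your real agent-config.yaml if you remove the heredoc and replace the placeholder IP list with your planned addresses.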
2.7. Verifying that your Agent-based cluster installation runs on Oracle Edge Cloud
Verify that your cluster was installed and is running effectively on Oracle Edge Cloud.
Prerequisites
- You created all the required Oracle Cloud Infrastructure (OCI) resources and services. See the "Creating OCI infrastructure resources and services" section.
- You created the install-config.yaml and agent-config.yaml configuration files. See the "Creating configuration files for installing a cluster on Oracle Edge Cloud" section.
- You uploaded the agent ISO image to a default Oracle Object Storage bucket, and you created a compute instance on Oracle Edge Cloud. For more information, see "Running a cluster on Oracle Edge Cloud".
Procedure
After you deploy the compute instance on a self-managed node in your OpenShift Container Platform cluster, you can monitor the cluster’s status by choosing one of the following options:
- From the OpenShift Container Platform CLI, enter the following command:

  $ ./openshift-install agent wait-for install-complete --log-level debug

- Check the status of the rendezvous host node that runs the bootstrap node. After the host reboots, the host forms part of the cluster.
- Use the kubeconfig API to check the status of various OpenShift Container Platform components. For the KUBECONFIG environment variable, set the relative path of the cluster's kubeconfig configuration file:

  $ export KUBECONFIG=~/auth/kubeconfig

- Check the status of each of the cluster's self-managed nodes. The Cloud Controller Manager (CCM) applies a label to each node to designate the node as running in a cluster on OCI.

  $ oc get nodes -A

  Example output:

  NAME                                     STATUS   ROLES                   AGE   VERSION
  main-0.private.agenttest.oraclevcn.com   Ready    control-plane, master   7m    v1.27.4+6eeca63
  main-1.private.agenttest.oraclevcn.com   Ready    control-plane, master   15m   v1.27.4+d7fa83f
  main-2.private.agenttest.oraclevcn.com   Ready    control-plane, master   15m   v1.27.4+d7fa83f

- Check the status of each of the cluster's Operators. The CCM Operator status is a good indicator that your cluster is running.

  $ oc get co

  Truncated example output:

  NAME             VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
  authentication   4.20.0-0   True        False         False      6m18s
  baremetal        4.20.0-0   True        False         False      2m42s
  network          4.20.0-0   True        True          False      5m58s   Progressing: …
  …
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.