Installing on IBM PowerVC
Installing OpenShift Container Platform on IBM PowerVC
Chapter 1. Preparing to install on IBM PowerVC
You can install OpenShift Container Platform on IBM® Power® Virtualization Center (IBM PowerVC) using installer-provisioned infrastructure.
1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
1.1.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on IBM PowerVC infrastructure that is provisioned by the OpenShift Container Platform installation program by using the following method:
Installing a cluster on IBM PowerVC with customizations: You can install a customized cluster on IBM PowerVC. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
Chapter 2. Installing a cluster on IBM PowerVC with customizations
In OpenShift Container Platform version 4.21, you can install a customized cluster on IBM PowerVC. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.
2.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You have a load balancing service you can use with the IBM PowerVC network you intend to use.
- You have a DHCP server backing the IBM PowerVC network you intend to use.
2.2. Resource guidelines for installing OpenShift Container Platform on IBM PowerVC
To support an OpenShift Container Platform installation, it is recommended that your IBM PowerVC environment has the following resources available:
| Resource | Value |
|---|---|
| Subnets | 1 |
| RAM | 88 GB |
| vCPUs | 22 |
| Volume storage | 275 GB |
| Instances | 7 |
A cluster might function with fewer than recommended resources.
2.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Important:
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz

Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
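Before proceeding, you can confirm that the extracted binary runs on your host by printing its version; the exact output varies by release:

$ ./openshift-install version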
Tip: Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
2.4. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on IBM PowerVC.
Prerequisites
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory>

For <installation_directory>, specify the directory name to store the files that the installation program creates. When specifying the directory:
- Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
- Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Enter a descriptive name for your cluster.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
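For reference, a trimmed install-config.yaml for IBM PowerVC might look like the following sketch. It combines parameters that are described in the "Installation configuration parameters" chapter; every value, including the cloud name and the VIP addresses, is a placeholder for illustration:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
platform:
  powervc:
    cloud: MyCloud
    apiVIPs:
    - 10.0.0.5
    ingressVIPs:
    - 10.0.0.7
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...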
2.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

where:

<installation_directory>: Specify the location of your customized ./install-config.yaml file.
--log-level=info: To view different installation details, specify warn, debug, or error instead of info.
Verification
When the cluster deployment completes successfully:
- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
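If you must approve pending node-bootstrapper CSRs manually, the standard oc commands apply; <csr_name> is a placeholder for a CSR name taken from the first command's output:

$ oc get csr
$ oc adm certificate approve <csr_name>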
2.6. Installing the OpenShift CLI on Linux
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Linux.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
Procedure
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant list.
- Select the appropriate version from the Version list.
- Click Download Now next to the OpenShift v4.21 Linux Clients entry and save the file.
Unpack the archive:
$ tar xvf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
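If the target directory is not listed, you can append it for the current shell session; /path/to/dir is a placeholder:

$ export PATH="$PATH:/path/to/dir"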
Verification
After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
2.7. Installing the OpenShift CLI on Windows
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Windows.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
Procedure
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the appropriate version from the Version list.
- Click Download Now next to the OpenShift v4.21 Windows Client entry and save the file.
- Extract the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH. To check your PATH variable, open the command prompt and execute the following command:

C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>
2.8. Installing the OpenShift CLI on macOS
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
Procedure
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant list.
- Select the appropriate version from the Version list.
Click Download Now next to the OpenShift v4.21 macOS Clients entry and save the file.
Note: For macOS arm64, choose the OpenShift v4.21 macOS arm64 Client entry.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH variable, open a terminal and execute the following command:

$ echo $PATH
Verification
Verify your installation by using an oc command:

$ oc <command>
2.9. Verifying cluster status
You can verify your OpenShift Container Platform cluster’s status during or after installation.
Procedure
In the cluster environment, export the administrator’s kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

View the control plane and compute machines created after a deployment:

$ oc get nodes

View your cluster's version:

$ oc get clusterversion

View your Operators' status:

$ oc get clusteroperator

View all running pods in the cluster:

$ oc get pods -A
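Beyond these spot checks, you can block until every cluster Operator reports Available. The following oc wait one-liner is a convenience sketch, not part of the documented procedure:

$ oc wait clusteroperators --all --for=condition=Available --timeout=30m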
2.10. Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the OpenShift CLI (oc).
Procedure
Export the kubeadmin credentials by running the following command:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

where:

<installation_directory>: Specifies the path to the directory that stores the installation files.

Verify you can run oc commands successfully using the exported configuration by running the following command:

$ oc whoami

Example output

system:admin
2.11. Telemetry access for OpenShift Container Platform
To provide metrics about cluster health and the success of updates, the Telemetry service requires internet access. When connected, this service runs automatically by default and registers your cluster to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. For more information about subscription watch, see "Data Gathered and Used by Red Hat's subscription services" in the Additional resources section.
Chapter 3. Installation configuration parameters for IBM PowerVC
Before you deploy an OpenShift Container Platform cluster on IBM® Power® Virtualization Center (IBM PowerVC), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further.
3.1. Available installation configuration parameters for IBM PowerVC
The following tables specify the required, optional, and IBM PowerVC-specific installation configuration parameters that you can set as part of the installation process.
After installation, you cannot change these parameters in the install-config.yaml file.
3.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description |
|---|---|
| apiVersion: | The API version for the install-config.yaml content. The current version is v1. Value: String |
| baseDomain: | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values in the <metadata.name>.<baseDomain> format. Value: A fully-qualified domain or subdomain name, such as example.com. |
| metadata: | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. Value: Object |
| metadata: name: | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. Value: String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform: | The configuration for the specific platform upon which to perform the installation: powervc. Value: Object |
| pullSecret: | Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. Value: String |
3.1.2. Additional IBM PowerVC configuration parameters
Additional configuration parameters are described in the following table:
| Parameter | Description |
|---|---|
| platform: powervc: cloud: | The name of the cloud to use from the list of clouds in the clouds.yaml file. The installation program reads the connection details for your IBM PowerVC environment from the cloud configuration in the clouds.yaml file. Value: String, for example MyCloud. |
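Because the cloud parameter refers to an entry in clouds.yaml, a matching entry might look like the following sketch. The layout follows the standard clouds.yaml schema; the endpoint URL, project, and credential values are placeholders that depend on your IBM PowerVC environment:

clouds:
  MyCloud:
    auth:
      auth_url: https://powervc.example.com:5000/v3
      project_name: myproject
      username: myuser
      password: <password>
      user_domain_name: Default
      project_domain_name: Default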
3.1.3. Optional IBM PowerVC configuration parameters
Optional configuration parameters are described in the following table:
| Parameter | Description |
|---|---|
| compute: platform: powervc: zones: | Availability zones to install compute machines on. If this parameter is not set, the installation program relies on the default settings that the administrator configured. Value: A list of strings. For example, ["zone-1", "zone-2"]. |
| controlPlane: platform: powervc: zones: | Availability zones to install control plane machines on. If this parameter is not set, the installation program relies on the default settings that the administrator configured. Value: A list of strings. For example, ["zone-1", "zone-2"]. |
| platform: powervc: clusterOSImage: | The name of the existing image to use for cluster machines. Value: The name of an existing image, for example rhcos. |
| platform: powervc: controlPlanePort: fixedIPs: | Subnets for the machines to use. Value: A list of subnet names or UUIDs to use in cluster installation. |
| platform: powervc: controlPlanePort: network: | A network for the machines to use. Value: The UUID or name of a network to use in cluster installation. |
| platform: powervc: defaultMachinePlatform: | The default machine pool platform configuration. Value: An object, for example {"type": "my-compute-template"}. |
| platform: powervc: externalDNS: | IP addresses for external DNS servers that cluster instances use for DNS resolution. Value: A list of IP addresses as strings. For example, ["192.168.1.12"]. |
| platform: powervc: loadbalancer: | Whether or not to use the default, internal load balancer. If the value is set to UserManaged, the default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. Value: UserManaged or OpenShiftManagedDefault. |
| platform: powervc: apiVIPs: | Virtual IP (VIP) addresses that you configured for control plane API access. Value: A list of IP addresses as strings. For example, ["10.0.0.5"]. |
| platform: powervc: ingressVIPs: | Virtual IP (VIP) addresses that you configured for cluster ingress. Value: A list of IP addresses as strings. For example, ["10.0.0.7"]. |
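As an illustration, several of these optional settings combined into one platform stanza might look like the following sketch; every value is a placeholder:

platform:
  powervc:
    cloud: MyCloud
    externalDNS:
    - 192.168.1.12
    apiVIPs:
    - 10.0.0.5
    ingressVIPs:
    - 10.0.0.7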
3.1.4. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or configure different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description |
|---|---|
| networking: | The configuration for the cluster network. Value: Object. Note: You cannot change parameters specified by the networking object after installation. |
| networking: networkType: | The Red Hat OpenShift Networking network plugin to install. Value: OVNKubernetes, which is also the default value. |
| networking: clusterNetwork: | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. Value: An array of objects. For example: clusterNetwork: [{cidr: 10.128.0.0/14, hostPrefix: 23}] |
| networking: clusterNetwork: cidr: | Required if you use networking.clusterNetwork. An IPv4 network. Value: An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking: clusterNetwork: hostPrefix: | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr. Value: A subnet prefix. The default value is 23. |
| networking: serviceNetwork: | The IP address block for services. The default value is 172.30.0.0/16. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. Value: An array with an IP address block in CIDR format. For example: serviceNetwork: [172.30.0.0/16] |
| networking: machineNetwork: | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. Value: An array of objects. For example: machineNetwork: [{cidr: 10.0.0.0/16}] |
| networking: machineNetwork: cidr: | Required if you use networking.machineNetwork. An IP address block. Value: An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
| networking: ovnKubernetesConfig: ipv4: internalJoinSubnet: | Configures the IPv4 join subnet that is used internally by OVN-Kubernetes. Value: An IP network block in CIDR notation. The default value is 100.64.0.0/16. |
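Combined, a networking stanza that uses the default and example values from this table looks like the following sketch:

networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16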
3.1.5. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description |
|---|---|
| additionalTrustBundle: | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle might also be used when a proxy has been configured. Value: String |
| capabilities: | Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. Value: String array |
| capabilities: baselineCapabilitySet: | Selects an initial set of optional capabilities to enable. Valid values are None, vCurrent, and versioned sets such as v4.x. The default value is vCurrent. Value: String |
| capabilities: additionalEnabledCapabilities: | Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You can specify multiple capabilities in this parameter. Value: String array |
| cpuPartitioningMode: | Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. You can only enable workload partitioning during installation. You cannot disable it after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. Value: None or AllNodes. None is the default value. |
| compute: | The configuration for the machines that comprise the compute nodes. Value: Array of MachinePool objects |
| compute: architecture: | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. The valid value is ppc64le, which is the default. Value: String |
| compute: hyperthreading: | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Value: Enabled or Disabled |
| compute: name: | Required if you use compute. The name of the machine pool. Value: worker |
| compute: platform: | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. Value: powervc or {} |
| compute: replicas: | The number of compute machines, which are also known as worker machines, to provision. Value: A positive integer greater than or equal to 2. The default value is 3. |
| featureSet: | Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". Value: String. The name of the feature set to enable, such as TechPreviewNoUpgrade. |
| controlPlane: | The configuration for the machines that form the control plane. Value: Array of MachinePool objects |
| controlPlane: architecture: | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. The valid value is ppc64le, which is the default. Value: String |
| controlPlane: hyperthreading: | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Value: Enabled or Disabled |
| controlPlane: name: | Required if you use controlPlane. The name of the machine pool. Value: master |
| controlPlane: platform: | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. Value: powervc or {} |
| controlPlane: replicas: | The number of control plane machines to provision. Value: The supported value is 3, which is the default. |
| arbiter: name: arbiter | The OpenShift Container Platform cluster requires a name for arbiter nodes, for example arbiter. |
| arbiter: replicas: 1 | The number of arbiter machines to provision. |
| credentialsMode: | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Value: Mint, Passthrough, Manual, or an empty string (""). |
| fips: | Enable or disable FIPS mode. The default is false (disabled). Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Important: If you are using Azure File storage, you cannot enable FIPS mode. Value: false or true |
| endpoint: name: <endpoint_name> clusterUseOnly: true or false | Overrides the default API endpoints that the installation program and cluster operators use. Important: When clusterUseOnly is set to false, both the installation program and cluster operators use the API endpoint overrides. When you want the installation program to use the public API endpoints and cluster operators to use the API endpoint overrides, set clusterUseOnly to true. Value: String or boolean |
| imageContentSources: | Sources and repositories for the release-image content. Value: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources: source: | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. Value: String |
| imageContentSources: mirrors: | Specify one or more repositories that might also contain the same images. Value: Array of strings |
| publish: | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. Value: Internal or External. The default value is External. Setting this field to Internal is not supported on non-cloud platforms. Important: If the value of the field is set to Internal, the cluster will become non-functional. |
| sshKey: | The SSH key to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Value: For example, sshKey: ssh-ed25519 AAAA.. |
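If you do not already have an SSH key for the sshKey field, you can generate one with ssh-keygen; the file path is a placeholder:

$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

Specify the contents of the corresponding public key, such as ~/.ssh/id_ed25519.pub, in the sshKey field.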
Chapter 4. Uninstalling a cluster on IBM PowerVC
You can remove a cluster that you deployed to IBM PowerVC.
4.1. Removing a cluster that uses installer-provisioned infrastructure
You can remove a cluster that uses installer-provisioned infrastructure that you provisioned from your cloud platform.
After uninstallation, check your cloud provider for any resources that were not removed properly, especially with user-provisioned infrastructure clusters. Some resources might exist because either the installation program did not create the resource or could not access the resource.
Prerequisites
- You have a copy of the installation program that you used to deploy the cluster.
- You have the files that the installation program generated when you created your cluster.
Procedure
From the directory that has the installation program on the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info

where:
<installation_directory>: Specify the path to the directory that you stored the installation files in.
--log-level info: To view different details, specify warn, debug, or error instead of info.

Note: You must specify the directory that includes the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
- Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.