Clusters
Cluster management
Abstract
Chapter 1. Cluster lifecycle with multicluster engine operator overview
The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. From the hub cluster, you can create and manage clusters, as well as destroy any clusters that you created. You can also hibernate, resume, and detach clusters. Learn more about the cluster lifecycle capabilities from the following documentation.
Access the Support matrix to learn about hub cluster and managed cluster requirements and support.
Information:
- Your cluster is created by using the OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the process of installing OpenShift Container Platform clusters at Installing and configuring OpenShift Container Platform clusters in the OpenShift Container Platform documentation.
- With your OpenShift Container Platform cluster, you can use multicluster engine operator as a standalone cluster manager for cluster lifecycle function, or you can use it as part of a Red Hat Advanced Cluster Management hub cluster.
- If you are using OpenShift Container Platform only, the operator is included with subscription. Visit About multicluster engine for Kubernetes operator from the OpenShift Container Platform documentation.
- If you subscribe to Red Hat Advanced Cluster Management, you also receive the operator with installation. You can create, manage, and monitor other Kubernetes clusters with the Red Hat Advanced Cluster Management hub cluster. See the Red Hat Advanced Cluster Management Installing and upgrading documentation.
Release images are the version of OpenShift Container Platform that you use when you create a cluster. For clusters that are created using Red Hat Advanced Cluster Management, you can enable automatic upgrading of your release images. For more information about release images in Red Hat Advanced Cluster Management, see Release images.
The components of the cluster lifecycle management architecture are included in the Cluster lifecycle architecture.
1.1. Release notes
Learn about the current release.
Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.
Best practice: Upgrade to the most recent version.
If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials.
You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ.
The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.
For full support information, see the Support matrix. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.
1.1.1. What’s new in cluster lifecycle with the multicluster engine operator
Important: Some features and components are identified and released as Technology Preview.
Learn more about what is new this release:
- The open source Open Cluster Management repository is ready for interaction, growth, and contributions from the open community. To get involved, see open-cluster-management.io. You can access the GitHub repository for more information, as well.
- Cluster lifecycle
- Hosted control planes
1.1.1.1. Cluster lifecycle
Learn about what’s new relating to Cluster lifecycle with multicluster engine operator.
Use the integrated console to create a credential and cluster that runs in the Nutanix environment. See Managing credentials and Cluster creation introduction for more information.
1.1.1.2. Hosted control planes
Hosted control planes for OpenShift Container Platform is now generally available on bare metal and OpenShift Virtualization. For more information, see the following topics:
Technology Preview: As a Technology Preview feature, you can provision a hosted control plane cluster on the following platforms:
- Amazon Web Services: For more information, see Configuring hosted control plane clusters on AWS (Technology Preview).
- IBM Z: For more information, see Configuring the hosting cluster on x86 bare metal for IBM Z compute nodes (Technology Preview).
- IBM Power: For more information, see Configuring the hosting cluster on a 64-bit x86 OpenShift Container Platform cluster to create hosted control planes for IBM Power compute nodes (Technology Preview).
- The hosted control planes feature is now enabled by default. If you want to disable the hosted control planes feature, or if you disabled it and want to manually enable it, see Enabling or disabling the hosted control planes feature.
- Hosted clusters are now automatically imported to your multicluster engine operator cluster. If you want to disable this feature, see Disabling the automatic import of hosted clusters into multicluster engine operator.
1.1.2. Cluster lifecycle known issues
Review the known issues for cluster lifecycle with multicluster engine operator. The following list contains known issues for this release, or known issues that continued from the previous release. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.
1.1.2.1. Cluster management
Cluster lifecycle known issues and limitations are part of the Cluster lifecycle with multicluster engine operator documentation.
1.1.2.1.1. Limitation with nmstate
Develop quicker by configuring copy and paste features. To configure the copy-from-mac feature in the assisted-installer, you must add the mac-address to the nmstate definition interface and the mac-mapping interface. The mac-mapping interface is provided outside the nmstate definition interface. As a result, you must provide the same mac-address twice.
1.1.2.1.2. A StorageVersionMigration error
When you upgrade multicluster engine operator from 2.4.0 to 2.4.1, you might find a StorageVersionMigration error in the log of the cluster-manager pods that run in the multicluster-engine namespace. These error messages have no impact on multicluster engine operator. Therefore, you do not need to take any action to resolve these errors.
See the following sample of the error message:
"CRDMigrationController" controller failed to sync "cluster-manager", err: ClusterManager.operator.open-cluster-management.io "cluster-manager" is invalid: status.conditions[5].reason: Invalid value: "StorageVersionMigration Failed. ": status.conditions[5].reason in body should match '^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$'
You might also find an error in the condition in the status of the StorageVersionMigration resources managedclustersetbindings.v1beta1.cluster.open-cluster-management.io and managedclustersets.v1beta1.cluster.open-cluster-management.io. See the following sample of the error message:
status:
  conditions:
  - lastUpdateTime: "2023-11-29T20:28:14Z"
    message: 'failed to list resources: the server could not find the requested resource'
    status: "True"
    type: Failed
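If you want to confirm that the errors are benign, you can inspect the resources and logs directly. The following commands are a minimal sketch that reuses the resource name from the previous sample and assumes the cluster-manager pods are backed by a cluster-manager deployment; no action is required regardless of the output:
oc get storageversionmigration managedclustersets.v1beta1.cluster.open-cluster-management.io -o yaml
oc logs -n multicluster-engine deployment/cluster-manager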
1.1.2.1.3. Lack of support for cluster proxy add-on
The cluster proxy add-on does not support proxy settings between the multicluster engine operator hub cluster and the managed cluster. If there is a proxy server between the multicluster engine operator hub cluster and the managed cluster, you can disable the cluster proxy add-on.
1.1.2.1.4. Prehook failure does not fail the hosted cluster creation
If you use the automation template for the hosted cluster creation and the prehook job fails, then it looks like the hosted cluster creation is still progressing. This is normal because the hosted cluster was designed with no complete failure state, and therefore, it keeps trying to create the cluster.
1.1.2.1.5. Manual removal of the VolSync CSV required on managed cluster when removing the add-on
When you remove the VolSync ManagedClusterAddOn from the hub cluster, it removes the VolSync operator subscription on the managed cluster but does not remove the cluster service version (CSV). To remove the CSV from the managed clusters, run the following command on each managed cluster from which you are removing VolSync:
oc delete csv -n openshift-operators volsync-product.v0.6.0
If you have a different version of VolSync installed, replace v0.6.0 with your installed version.
1.1.2.1.6. Deleting a managed cluster set does not automatically remove its label
After you delete a ManagedClusterSet, the label that is added to each managed cluster that associates the cluster to the cluster set is not automatically removed. Manually remove the label from each of the managed clusters that were included in the deleted managed cluster set. The label resembles the following example: cluster.open-cluster-management.io/clusterset:<ManagedClusterSet Name>.
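For example, you can remove the label from the command line. The following command is a sketch that assumes a managed cluster named <cluster_name>; the trailing hyphen removes the label:
oc label managedcluster <cluster_name> cluster.open-cluster-management.io/clusterset-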
1.1.2.1.7. ClusterClaim error
If you create a Hive ClusterClaim against a ClusterPool and manually set the ClusterClaim spec lifetime field to an invalid golang time value, the product stops fulfilling and reconciling all ClusterClaims, not just the malformed claim.
If this error occurs, you see the following content in the clusterclaim-controller pod logs, which is a specific example with the pool name and invalid lifetime included:
E0203 07:10:38.266841 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to watch *v1.ClusterClaim: failed to list *v1.ClusterClaim: v1.ClusterClaimList.Items: []v1.ClusterClaim: v1.ClusterClaim.v1.ClusterClaim.Spec: v1.ClusterClaimSpec.Lifetime: unmarshalerDecoder: time: unknown unit "w" in duration "1w", error found in #10 byte of ...|time":"1w"}},{"apiVe|..., bigger context ...|clusterPoolName":"policy-aas-hubs","lifetime":"1w"}},{"apiVersion":"hive.openshift.io/v1","kind":"Cl|...
You can delete the invalid claim. After the malformed claim is deleted, the remaining claims begin reconciling successfully again without any further interaction.
1.1.2.1.8. The product channel out of sync with provisioned cluster
The clusterimageset is in the fast channel, but the provisioned cluster is in the stable channel. Currently, the product does not sync the channel to the provisioned OpenShift Container Platform cluster.
Change to the right channel in the OpenShift Container Platform console. Click Administration > Cluster Settings > Details > Channel.
1.1.2.1.9. Restoring the connection of a managed cluster with custom CA certificates to its restored hub cluster might fail
After you restore the backup of a hub cluster that managed a cluster with custom CA certificates, the connection between the managed cluster and the hub cluster might fail. This is because the CA certificate was not backed up on the restored hub cluster. To restore the connection, copy the custom CA certificate information that is in the namespace of your managed cluster to the <managed_cluster>-admin-kubeconfig secret on the restored hub cluster.
Tip: If you copy this CA certificate to the hub cluster before creating the backup copy, the backup copy includes the secret information. When the backup copy is used to restore in the future, the connection between the hub and managed clusters will automatically complete.
1.1.2.1.10. The local-cluster might not be automatically recreated
If the local-cluster is deleted while disableHubSelfManagement is set to false, the local-cluster is recreated by the MultiClusterHub operator. After you detach a local-cluster, the local-cluster might not be automatically recreated.
- To resolve this issue, modify a resource that is watched by the MultiClusterHub operator. See the following example:
oc delete deployment multiclusterhub-repo -n <namespace>
- To properly detach the local-cluster, set disableHubSelfManagement to true in the MultiClusterHub resource.
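If you prefer not to edit the resource interactively, the following patch is one possible way to set the value. It is a sketch that assumes the MultiClusterHub resource is named multiclusterhub in the open-cluster-management namespace, as in the commands used elsewhere in these release notes:
oc patch mch multiclusterhub -n open-cluster-management --type=merge -p '{"spec":{"disableHubSelfManagement":true}}'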
1.1.2.1.11. Selecting a subnet is required when creating an on-premises cluster
When you create an on-premises cluster using the console, you must select an available subnet for your cluster. It is not marked as a required field.
1.1.2.1.12. Cluster provisioning with Infrastructure Operator fails
When creating OpenShift Container Platform clusters using the Infrastructure Operator, the file name of the ISO image might be too long. The long image name causes the image provisioning and the cluster provisioning to fail. To determine if this is the problem, complete the following steps:
View the bare metal host information for the cluster that you are provisioning by running the following command:
oc get bmh -n <cluster_provisioning_namespace>
Run the describe command to view the error information:
oc describe bmh -n <cluster_provisioning_namespace> <bmh_name>
An error similar to the following example indicates that the length of the filename is the problem:
Status:
  Error Count: 1
  Error Message: Image provisioning failed: ... [Errno 36] File name too long ...
If this problem occurs, it is typically on the following versions of OpenShift Container Platform, because the infrastructure operator was not using the image service:
- 4.8.17 and earlier
- 4.9.6 and earlier
To avoid this error, upgrade your OpenShift Container Platform to version 4.8.18 or later, or 4.9.7 or later.
1.1.2.1.13. Cannot use host inventory to boot with the discovery image and add hosts automatically
You cannot use a host inventory, or InfraEnv custom resource, to both boot with the discovery image and add hosts automatically. If you used your previous InfraEnv resource for the BareMetalHost resource, and you want to boot the image yourself, you can work around the issue by creating a new InfraEnv resource.
1.1.2.1.14. Local-cluster status offline after reimporting with a different name
When you accidentally try to reimport the cluster named local-cluster as a cluster with a different name, the status for local-cluster and for the reimported cluster display offline.
To recover from this case, complete the following steps:
Run the following command on the hub cluster to edit the setting for self-management of the hub cluster temporarily:
oc edit mch -n open-cluster-management multiclusterhub
Add the setting spec.disableSelfManagement=true.
Run the following command on the hub cluster to delete and redeploy the local-cluster:
oc delete managedcluster local-cluster
Enter the following command to remove the local-cluster management setting:
oc edit mch -n open-cluster-management multiclusterhub
Remove spec.disableSelfManagement=true that you previously added.
1.1.2.1.15. Cluster provision with Ansible automation fails in proxy environment
An Automation template that is configured to automatically provision a managed cluster might fail when both of the following conditions are met:
- The hub cluster has cluster-wide proxy enabled.
- The Ansible Automation Platform can only be reached through the proxy.
1.1.2.1.16. Version of the klusterlet operator must be the same as the hub cluster
If you import a managed cluster by installing the klusterlet operator, the version of the klusterlet operator must be the same as the version of the hub cluster or the klusterlet operator will not work.
1.1.2.1.17. Cannot delete managed cluster namespace manually
You cannot delete the namespace of a managed cluster manually. The managed cluster namespace is automatically deleted after the managed cluster is detached. If you delete the managed cluster namespace manually before the managed cluster is detached, the managed cluster shows a continuous terminating status after you delete the managed cluster. To delete this terminating managed cluster, manually remove the finalizers from the managed cluster that you detached.
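As an example, you can clear the finalizers with commands similar to the following sketch, which assumes a stuck managed cluster named <cluster_name>; review the remaining finalizers before removing them:
oc get managedcluster <cluster_name> -o jsonpath='{.metadata.finalizers}'
oc patch managedcluster <cluster_name> --type=merge -p '{"metadata":{"finalizers":null}}'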
1.1.2.1.18. Hub cluster and managed clusters clock not synced
Hub cluster and managed cluster time might become out of sync, which displays in the console as unknown and eventually as available within a few minutes. Ensure that the OpenShift Container Platform hub cluster time is configured correctly. See Customizing nodes.
1.1.2.1.19. Importing certain versions of IBM OpenShift Kubernetes Service clusters is not supported
You cannot import IBM OpenShift Kubernetes Service version 3.11 clusters. Later versions of IBM OpenShift Kubernetes Service are supported.
1.1.2.1.20. Automatic secret updates for provisioned clusters is not supported
When you change your cloud provider access key on the cloud provider side, you also need to update the corresponding credential for this cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.
1.1.2.1.21. Node information from the managed cluster cannot be viewed in search
Search maps RBAC for resources in the hub cluster. Depending on user RBAC settings, users might not see node data from the managed cluster. Results from search might be different from what is displayed on the Nodes page for a cluster.
1.1.2.1.22. Process to destroy a cluster does not complete
When you destroy a managed cluster, the status continues to display Destroying after one hour, and the cluster is not destroyed. To resolve this issue, complete the following steps:
- Manually ensure that there are no orphaned resources on your cloud, and that all of the provider resources that are associated with the managed cluster are cleaned up.
Open the ClusterDeployment information for the managed cluster that is being removed by entering the following command:
oc edit clusterdeployment/<mycluster> -n <namespace>
Replace mycluster with the name of the managed cluster that you are destroying. Replace namespace with the namespace of the managed cluster.
Remove the hive.openshift.io/deprovision finalizer to forcefully stop the process that is trying to clean up the cluster resources in the cloud.
Save your changes and verify that ClusterDeployment is gone.
Manually remove the namespace of the managed cluster by running the following command:
oc delete ns <namespace>
Replace namespace with the namespace of the managed cluster.
1.1.2.1.23. Cannot upgrade OpenShift Container Platform managed clusters on OpenShift Container Platform Dedicated with the console
You cannot use the Red Hat Advanced Cluster Management console to upgrade OpenShift Container Platform managed clusters that are in the OpenShift Container Platform Dedicated environment.
1.1.2.1.24. Work manager add-on search details
The search details page for a certain resource on a certain managed cluster might fail. You must ensure that the work-manager add-on in the managed cluster is in Available status before you can search.
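For example, you can check the add-on status from the hub cluster. The following commands are a sketch that assumes <managed_cluster_name> is the namespace of the managed cluster on the hub cluster:
oc get managedclusteraddon work-manager -n <managed_cluster_name>
oc get managedclusteraddon work-manager -n <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'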
1.1.2.1.25. Non-Red Hat OpenShift Container Platform managed clusters must have LoadBalancer enabled
Both Red Hat OpenShift Container Platform and non-OpenShift Container Platform clusters support the pod log feature; however, non-OpenShift Container Platform clusters require LoadBalancer to be enabled to use the feature. Complete the following steps to enable LoadBalancer:
- Cloud providers have different LoadBalancer configurations. Visit your cloud provider documentation for more information.
- Verify if LoadBalancer is enabled on your Red Hat Advanced Cluster Management by checking the loggingEndpoint in the status of managedClusterInfo.
- Run the following command to check if the loggingEndpoint.IP or loggingEndpoint.Host has a valid IP address or host name:
oc get managedclusterinfo <clusterName> -n <clusterNamespace> -o json | jq -r '.status.loggingEndpoint'
For more information about the LoadBalancer types, see the Service page in the Kubernetes documentation.
1.1.2.1.26. OpenShift Container Platform 4.10.z does not support hosted control plane clusters with proxy configuration
When you create a hosting service cluster with a cluster-wide proxy configuration on OpenShift Container Platform 4.10.z, the nodeip-configuration.service service does not start on the worker nodes.
1.1.2.1.27. Cannot provision OpenShift Container Platform 4.11 cluster on Azure
Provisioning an OpenShift Container Platform 4.11 cluster on Azure fails due to an authentication operator timeout error. To work around the issue, use a different worker node type in the install-config.yaml file or set the vmNetworkingType parameter to Basic. See the following install-config.yaml example:
compute:
- hyperthreading: Enabled
  name: 'worker'
  replicas: 3
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 128
      vmNetworkingType: 'Basic'
1.1.2.1.28. Client cannot reach iPXE script
iPXE is an open source network boot firmware. See iPXE for more details.
When booting a node, the URL length limitation in some DHCP servers cuts off the ipxeScript URL in the InfraEnv custom resource definition, resulting in the following error message in the console:
no bootable devices
To work around the issue, complete the following steps:
Apply the InfraEnv custom resource definition when using an assisted installation to expose the bootArtifacts, which might resemble the following file:
status:
  agentLabelSelector:
    matchLabels:
      infraenvs.agent-install.openshift.io: qe2
  bootArtifacts:
    initrd: https://assisted-image-service-multicluster-engine.redhat.com/images/0000/pxe-initrd?api_key=0000000&arch=x86_64&version=4.11
    ipxeScript: https://assisted-service-multicluster-engine.redhat.com/api/assisted-install/v2/infra-envs/00000/downloads/files?api_key=000000000&file_name=ipxe-script
    kernel: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-kernel-x86_64
    rootfs: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-rootfs.x86_64.img
Create a proxy server to expose the bootArtifacts with short URLs.
Copy the bootArtifacts and add them to the proxy by running the following commands:
for artifact in $(oc get infraenv qe2 -ojsonpath="{.status.bootArtifacts}" | jq ". | keys[]" | sed "s/\"//g")
do
    curl -k $(oc get infraenv qe2 -ojsonpath="{.status.bootArtifacts.${artifact}}") -o $artifact
done
Add the ipxeScript artifact proxy URL to the bootp parameter in libvirt.xml.
1.1.2.1.29. Cannot delete ClusterDeployment after upgrading Red Hat Advanced Cluster Management
If you are using the removed BareMetalAssets API in Red Hat Advanced Cluster Management 2.6, the ClusterDeployment cannot be deleted after upgrading to Red Hat Advanced Cluster Management 2.7 because the BareMetalAssets API is bound to the ClusterDeployment.
To work around the issue, run the following command to remove the finalizers before upgrading to Red Hat Advanced Cluster Management 2.7:
oc patch clusterdeployment <clusterdeployment-name> -p '{"metadata":{"finalizers":null}}' --type=merge
1.1.2.1.30. A cluster deployed in a disconnected environment by using the central infrastructure management service might not install
When you deploy a cluster in a disconnected environment by using the central infrastructure management service, the cluster nodes might not start installing.
This issue occurs because the cluster uses a discovery ISO image that is created from the Red Hat Enterprise Linux CoreOS live ISO image that is shipped with OpenShift Container Platform versions 4.12.0 through 4.12.2. The image contains a restrictive /etc/containers/policy.json file that requires signatures for images sourcing from registry.redhat.io and registry.access.redhat.com. In a disconnected environment, the images that are mirrored might not have the signatures mirrored, which results in the image pull failing for cluster nodes at discovery. The Agent image fails to connect with the cluster nodes, which causes communication with the assisted service to fail.
To work around this issue, apply an ignition override to the cluster that sets the /etc/containers/policy.json file to unrestrictive. The ignition override can be set in the InfraEnv custom resource definition. The following example shows an InfraEnv custom resource definition with the override:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: cluster
  namespace: cluster
spec:
  ignitionConfigOverride: '{"ignition":{"version":"3.2.0"},"storage":{"files":[{"path":"/etc/containers/policy.json","mode":420,"overwrite":true,"contents":{"source":"data:text/plain;charset=utf-8;base64,ewogICAgImRlZmF1bHQiOiBbCiAgICAgICAgewogICAgICAgICAgICAidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIgogICAgICAgIH0KICAgIF0sCiAgICAidHJhbnNwb3J0cyI6CiAgICAgICAgewogICAgICAgICAgICAiZG9ja2VyLWRhZW1vbiI6CiAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgIiI6IFt7InR5cGUiOiJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dCiAgICAgICAgICAgICAgICB9CiAgICAgICAgfQp9"}}]}}'
The following example shows the unrestrictive file that is created:
{ "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } }
After this setting is changed, the clusters install.
1.1.2.1.31. Managed cluster stuck in Pending status after deployment
If the Assisted Installer agent starts slowly and you deploy a managed cluster, the managed cluster might become stuck in the Pending status and not have any agent resources. You can work around the issue by disabling converged flow. Complete the following steps:
Create the following ConfigMap on the hub cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-assisted-service-config
  namespace: multicluster-engine
data:
  ALLOW_CONVERGED_FLOW: "false"
Apply the ConfigMap by running the following command:
oc annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config
1.1.2.1.32. ManagedClusterSet API specification limitation
The selectorType: LabelSelector setting is not supported when using the Clustersets API. The selectorType: ExclusiveClusterSetLabel setting is supported.
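The following ManagedClusterSet example is a minimal sketch of the supported setting; the resource name and the cluster.open-cluster-management.io/v1beta2 API version shown here are assumptions for illustration:
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: example-clusterset
spec:
  clusterSelector:
    selectorType: ExclusiveClusterSetLabel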
1.1.2.1.33. Hub cluster communication limitations
The following limitations occur if the hub cluster is not able to reach or communicate with the managed cluster:
- You cannot create a new managed cluster by using the console. You are still able to import a managed cluster manually by using the command line interface or by using the Run import commands manually option in the console.
- If you deploy an Application or ApplicationSet by using the console, or if you import a managed cluster into Argo CD, the hub cluster Argo CD controller calls the managed cluster API server. You can use AppSub or the Argo CD pull model to work around the issue.
The console page for pod logs does not work, and an error message that resembles the following appears:
Error querying resource logs: Service unavailable
1.1.2.1.34. Managed Service Account add-on limitations
The following are known issues and limitations for the managed-serviceaccount add-on:
1.1.2.1.34.1. The managed-serviceaccount add-on does not support OpenShift Container Platform 3.11
The managed-serviceaccount add-on crashes if you run it on OpenShift Container Platform 3.11.
1.1.2.1.34.2. installNamespace field can only have one value
When enabling the managed-serviceaccount add-on, the installNamespace field in the ManagedClusterAddOn resource must have open-cluster-management-agent-addon as the value. Other values are ignored. The managed-serviceaccount add-on agent is always deployed in the open-cluster-management-agent-addon namespace on the managed cluster.
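A minimal ManagedClusterAddOn example with the required value might resemble the following sketch, assuming <managed_cluster_name> is the managed cluster namespace on the hub cluster:
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: <managed_cluster_name>
spec:
  installNamespace: open-cluster-management-agent-addon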
1.1.2.1.34.3. tolerations and nodeSelector settings do not affect the managed-serviceaccount agent
The tolerations and nodeSelector settings configured on the MultiClusterEngine and MultiClusterHub resources do not affect the managed-serviceaccount agent deployed on the local cluster. The managed-serviceaccount add-on is not always required on the local cluster.
If the managed-serviceaccount add-on is required, you can work around the issue by completing the following steps:
- Create the addonDeploymentConfig custom resource.
- Set the tolerations and nodeSelector values for the local cluster and managed-serviceaccount agent.
- Update the managed-serviceaccount ManagedClusterAddon in the local cluster namespace to use the addonDeploymentConfig custom resource you created.
See Configuring nodeSelectors and tolerations for klusterlet add-ons to learn more about how to use the addonDeploymentConfig custom resource to configure tolerations and nodeSelector for add-ons.
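The following AddOnDeploymentConfig example is a sketch of the first two steps; the resource name, the <namespace> value, and the infrastructure node selector and toleration are placeholders that you replace with the values for your local cluster:
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: managed-serviceaccount-placement
  namespace: <namespace>
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule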
1.1.2.1.35. Cluster proxy add-on limitations
The cluster-proxy-addon does not work when there is another proxy configuration on your hub cluster or managed clusters. The following scenarios cause this issue:
- There is a cluster-wide proxy configuration on your hub cluster and managed cluster.
- There is a proxy configuration in the agent AddOnDeploymentConfig resource.
1.1.2.1.36. Custom ingress domain is not applied correctly
You can specify a custom ingress domain by using the ClusterDeployment resource while installing a managed cluster, but the change is only applied after the installation by using the SyncSet resource. As a result, the spec field in the clusterdeployment.yaml file displays the custom ingress domain you specified, but the status still displays the default domain.
1.1.2.2. Hosted control planes
1.1.2.2.1. Console displays hosted cluster as Pending import
If the annotation and ManagedCluster name do not match, the console displays the cluster as Pending import. The cluster cannot be used by the multicluster engine operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.
1.1.2.2.2. Console might list the same version multiple times when adding a node pool to a hosted cluster
When you use the console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want.
1.1.2.2.3. The web console lists nodes even after they are removed from the cluster and returned to the infrastructure environment
When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:
- In the console, go to the node pool and verify that it has 0 nodes.
- On the command line interface, run the following commands:
Verify that 0 nodes are in the node pool by running the following command:
oc get nodepool -A
Verify that 0 nodes are in the cluster by running the following command:
oc get nodes --kubeconfig
Verify that 0 agents are reported as bound to the cluster by running the following command:
oc get agents -A
1.1.2.2.4. Potential DNS issues in hosted clusters configured for a dual-stack network
When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:
- CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
- Pods stuck in ContainerCreating state: This issue occurs because the openshift-service-ca-operator cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
To resolve those issues, configure the DNS server settings by following the guidelines in Configuring DNS for a dual stack network.
1.1.2.2.5. On bare metal platforms, Agent resources might fail to pull ignition
On the bare metal (Agent) platform, the hosted control planes feature periodically rotates the token that the Agent uses to pull ignition. A bug causes the new token to not be propagated. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition.
As a workaround, in the Agent specification, delete the secret that the IgnitionEndpointTokenReference property refers to, and then add or modify any label on the Agent resource. The system can then detect that the Agent resource was modified and re-create the secret with the new token.
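As an example, the workaround might resemble the following commands. This is a sketch only; <agent_name>, <namespace>, <token_secret_name>, the label key, and the spec.ignitionEndpointTokenReference.name field path are assumptions that you adapt to your environment:
oc get agent <agent_name> -n <namespace> -o jsonpath='{.spec.ignitionEndpointTokenReference.name}'
oc delete secret <token_secret_name> -n <namespace>
oc label agent <agent_name> -n <namespace> refresh=now --overwrite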
1.1.3. Errata updates
For multicluster engine operator, the Errata updates are automatically applied when released.
Important: For reference, Errata links and GitHub numbers might be added to the content and used internally. Links that require access might not be available for the user.
1.1.3.1. Errata 2.4.6
- Delivers updates to one or more of the product container images.
1.1.3.2. Errata 2.4.5
- Fixes the CA bundle comparison logic of the import controller that resolves an issue with the controller refreshing the bootstrap hub cluster kubeconfig when a custom API server certificate is used with a blank line between certificates. (ACM-10999)
- Fixes agent components that were using open-cluster-management-image-pull-credentials as the default image pull secret without checking whether the secret exists. Without a secret, the pods reported a warning message. This update delivers an empty secret when open-cluster-management-image-pull-credentials is not set. (ACM-9982)
1.1.3.3. Errata 2.4.4
- Delivers updates to one or more of the product container images.
1.1.3.4. Errata 2.4.3
- Delivers updates to one or more of the product container images.
- Fixes an issue where the klusterlet agent did not update when the external-managed-kubeconfig secret changed in the hosted mode. (ACM-8799)
- Fixes an issue where the node regions and zones were incorrectly displayed from the Topology view. (ACM-8996)
- Fixes an issue where deprecated node labels were used. (ACM-9002)
1.1.3.5. Errata 2.4.2
- Delivers updates to one or more of the product container images.
- Fixes an issue where the AppliedManifestWork resource was deleted. (ACM-8926)
1.1.3.6. Errata 2.4.1
- Delivers updates to one or more of the product container images.
1.1.4. Deprecations and removals for Cluster lifecycle
Learn when parts of the product are deprecated or removed from multicluster engine operator. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases.
Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.
1.1.4.1. API deprecations and removals
multicluster engine operator follows the Kubernetes deprecation guidelines for APIs. See the Kubernetes Deprecation Policy for more details about that policy. multicluster engine operator APIs are only deprecated or removed outside of the following timelines:
- All V1 APIs are generally available and supported for 12 months or three releases, whichever is greater. V1 APIs are not removed, but can be deprecated outside of that time limit.
- All beta APIs are generally available for nine months or three releases, whichever is greater. Beta APIs are not removed outside of that time limit.
- All alpha APIs are not required to be supported, but might be listed as deprecated or removed if it benefits users.
1.1.4.1.1. API deprecations
Product or category | Affected item | Version | Recommended action | More details and links |
---|---|---|---|---|
ManagedServiceAccount | The v1alpha1 API | 2.9 | Use v1beta1 | None |
1.1.4.1.2. API removals
Product or category | Affected item | Version | Recommended action | More details and links |
1.1.4.2. Deprecations
A deprecated component, feature, or service is supported, but no longer recommended for use and might become obsolete in future releases. Consider the alternative actions in the Recommended action and details that are provided in the following table:
Product or category | Affected item | Version | Recommended action | More details and links |
---|---|---|---|---|
Cluster lifecycle | Create cluster on Red Hat Virtualization | 2.9 | None | None |
Cluster lifecycle | Klusterlet OLM Operator | 2.4 | None | None |
1.1.4.3. Removals
A removed item is typically a function that was deprecated in previous releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table:
Product or category | Affected item | Version | Recommended action | More details and links |
1.2. About cluster lifecycle with multicluster engine operator
The multicluster engine for Kubernetes operator is the cluster lifecycle operator that provides cluster management capabilities for Red Hat OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. If you installed Red Hat Advanced Cluster Management, you do not need to install multicluster engine operator, as it is automatically installed.
See the Support matrix to learn about hub cluster and managed cluster requirements and support information, as well as the following documentation:
To continue, see the remaining cluster lifecycle documentation at Cluster lifecycle with multicluster engine operator overview.
1.2.1. Console overview
OpenShift Container Platform console plug-ins are available with the OpenShift Container Platform web console and can be integrated. To use this feature, the console plug-ins must remain enabled. The multicluster engine operator displays certain console features from Infrastructure and Credentials navigation items. If you install Red Hat Advanced Cluster Management, you see more console capability.
Note: With the plug-ins enabled, you can access Red Hat Advanced Cluster Management within the OpenShift Container Platform console from the cluster switcher by selecting All Clusters from the drop-down menu.
- To disable the plug-in, be sure you are in the Administrator perspective in the OpenShift Container Platform console.
- Find Administration in the navigation and click Cluster Settings, then click the Configuration tab.
- From the list of Configuration resources, click the Console resource with the operator.openshift.io API group, which contains cluster-wide configuration for the web console.
- Click the Console plug-ins tab. The mce plug-in is listed. Note: If Red Hat Advanced Cluster Management is installed, it is also listed as acm.
- Modify the plug-in status from the table. In a few moments, you are prompted to refresh the console.
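If you want to check which console plug-ins are currently enabled before modifying them in the table, the following read-only command is a sketch that inspects the same Console resource from the command line:
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'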
1.2.2. multicluster engine operator role-based access control
RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product:
1.2.2.1. Overview of roles
Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the table list of the following role definitions that are supported:
1.2.2.1.1. Table of role definition
Role | Definition |
---|---|
|
This is an OpenShift Container Platform default role. A user with cluster binding to the |
|
A user with cluster binding to the |
|
A user with cluster binding to the |
|
A user with cluster binding to the |
|
A user with cluster binding to the |
|
A user with cluster binding to the |
|
Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to |
Important:
- Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace.
- If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -.
1.2.2.2. Cluster lifecycle RBAC
View the following cluster lifecycle RBAC operations:
Create and administer cluster role bindings for all managed clusters. For example, create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin by entering the following command:
oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>
This role is a super user, which has access to all resources and actions. You can create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in the namespace with this role. You might need to add the username of the ID that requires the role association to avoid permission errors.
Run the following command to administer a cluster role binding for a managed cluster named cluster-name:
oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>
This role has read and write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.
Create a namespace role binding to the cluster role admin by entering the following command:
oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin --user=<username>
This role has read and write access to the resources in the namespace of the managed cluster.
Create a cluster role binding for the open-cluster-management:view:<cluster-name> cluster role to view a managed cluster named cluster-name by entering the following command:
oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>
This role has read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource.
Create a namespace role binding to the cluster role view by entering the following command:
oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view --user=<username>
This role has read-only access to the resources in the namespace of the managed cluster.
View a list of the managed clusters that you can access by entering the following command:
oc get managedclusters.clusterview.open-cluster-management.io
This command is used by administrators and users without cluster administrator privileges.
View a list of the managed cluster sets that you can access by entering the following command:
oc get managedclustersets.clusterview.open-cluster-management.io
This command is used by administrators and users without cluster administrator privileges.
1.2.2.2.1. Cluster pools RBAC
View the following cluster pool RBAC operations:
As a cluster administrator, use a cluster pool to provision clusters by creating a managed cluster set and grant administrator permission to roles by adding the role to the group. View the following examples:
Grant admin permission to the server-foundation-clusterset managed cluster set with the following command:
oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-admin:server-foundation-clusterset server-foundation-team-admin
Grant view permission to the server-foundation-clusterset managed cluster set with the following command:
oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-view:server-foundation-clusterset server-foundation-team-user
Create a namespace for the cluster pool, server-foundation-clusterpool. View the following examples to grant role permissions:
Grant admin permission to server-foundation-clusterpool for the server-foundation-team-admin by running the following commands:
oc adm new-project server-foundation-clusterpool
oc adm policy add-role-to-group admin server-foundation-team-admin --namespace server-foundation-clusterpool
As a team administrator, create a cluster pool named ocp46-aws-clusterpool with a cluster set label, cluster.open-cluster-management.io/clusterset=server-foundation-clusterset, in the cluster pool namespace:
- The server-foundation-webhook checks if the cluster pool has the cluster set label, and if the user has permission to create cluster pools in the cluster set.
- The server-foundation-controller grants view permission to the server-foundation-clusterpool namespace for server-foundation-team-user.
When a cluster pool is created, the cluster pool creates a clusterdeployment. Continue reading for more details:
- The server-foundation-controller grants admin permission to the clusterdeployment namespace for server-foundation-team-admin.
- The server-foundation-controller grants view permission to the clusterdeployment namespace for server-foundation-team-user.
Note: As a team-admin and team-user, you have admin permission to the clusterpool, clusterdeployment, and clusterclaim.
1.2.2.2.2. Console and API RBAC table for cluster lifecycle
View the following console and API RBAC tables for cluster lifecycle:
Resource | Admin | Edit | View |
---|---|---|---|
Clusters | read, update, delete | - | read |
Cluster sets | get, update, bind, join | edit role not mentioned | get |
Managed clusters | read, update, delete | no edit role mentioned | get |
Provider connections | create, read, update, and delete | - | read |
API | Admin | Edit | View |
---|---|---|---|
You can use | create, read, update, delete | read, update | read |
You can use | read | read | read |
| update | update | |
You can use | create, read, update, delete | read, update | read |
| read | read | read |
You can use | create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
| create, read, update, delete | read, update | read |
1.2.2.2.3. Credentials role-based access control
The access to credentials is controlled by Kubernetes. Credentials are stored and secured as Kubernetes secrets. The following permissions apply to accessing secrets in Red Hat Advanced Cluster Management for Kubernetes:
- Users with access to create secrets in a namespace can create credentials.
- Users with access to read secrets in a namespace can also view credentials.
-
Users with the Kubernetes cluster roles of
admin
andedit
can create and edit secrets. -
Users with the Kubernetes cluster role of
view
cannot view secrets because reading the contents of secrets enables access to service account credentials.
1.2.3. Network configuration
Configure your network settings to allow the connections.
Important: The trusted CA bundle is available in the multicluster engine operator namespace, but that enhancement requires changes to your network. The trusted CA bundle ConfigMap uses the default name of trusted-ca-bundle. You can change this name by providing it to the operator in an environment variable named TRUSTED_CA_BUNDLE. See Configuring the cluster-wide proxy in the Networking section of Red Hat OpenShift Container Platform for more information.
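One possible way to provide the environment variable is through the Operator Lifecycle Manager Subscription, as in the following sketch; the Subscription name, the namespace, the use of the Subscription config.env field, and the <configmap_name> value are assumptions for illustration:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  config:
    env:
    - name: TRUSTED_CA_BUNDLE
      value: <configmap_name>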
Note: The Registration Agent and Work Agent on the managed cluster do not support proxy settings because they communicate with the apiserver on the hub cluster by establishing an mTLS connection, which cannot pass through the proxy.
For the multicluster engine operator cluster networking requirements, see the following table:
Direction | Protocol | Connection | Port (if specified) |
---|---|---|---|
Outbound | | Kubernetes API server of the provisioned managed cluster | 6443 |
Outbound from the OpenShift Container Platform managed cluster to the hub cluster | TCP | Communication between the ironic agent and the bare metal operator on the hub cluster | 6180, 6183, 6385, and 5050 |
Outbound from the hub cluster to the Ironic Python Agent (IPA) on the managed cluster | TCP | Communication between the bare metal node where IPA is running and the Ironic conductor service | 9999 |
Outbound and inbound | | The | 443 |
Inbound | | The Kubernetes API server of the multicluster engine for Kubernetes operator cluster from the managed cluster | 6443 |
Note: The managed cluster must be able to reach the hub cluster control plane node IP addresses.
1.3. Installing and upgrading multicluster engine operator
The multicluster engine operator is a software operator that enhances cluster fleet management. The multicluster engine operator supports Red Hat OpenShift Container Platform and Kubernetes cluster lifecycle management across clouds and data centers.
The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.
For full support information, see the Support matrix. For life cycle information, see Red Hat OpenShift Container Platform Life Cycle policy.
Important: If you are using Red Hat Advanced Cluster Management version 2.5 or later, then multicluster engine for Kubernetes operator is already installed on the cluster.
See the following documentation:
1.3.1. Installing while connected online
The multicluster engine operator is installed with Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that encompass the multicluster engine operator.
Required access: Cluster administrator
Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.
Best practice: Upgrade to the most recent version.
Important:
- For OpenShift Container Platform Dedicated environments, you must have cluster-admin permissions. By default, the dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment.
role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment. - By default, the multicluster engine operator components are installed on worker nodes of your OpenShift Container Platform cluster without any additional configuration. You can install multicluster engine operator onto worker nodes by using the OpenShift Container Platform OperatorHub web console interface, or by using the OpenShift Container Platform CLI.
- If you have configured your OpenShift Container Platform cluster with infrastructure nodes, you can install multicluster engine operator onto those infrastructure nodes by using the OpenShift Container Platform CLI with additional resource parameters. See the Installing multicluster engine on infrastructure nodes section for those details.
If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or multicluster engine for Kubernetes operator, you will need to configure an image pull secret. For information on how to configure an image pull secret and other advanced configurations, see options in the Advanced configuration section of this documentation.
1.3.1.1. Prerequisites
Before you install multicluster engine for Kubernetes operator, see the following requirements:
- Your Red Hat OpenShift Container Platform cluster must have access to the multicluster engine operator in the OperatorHub catalog from the OpenShift Container Platform console.
- You need access to catalog.redhat.com.
OpenShift Container Platform 4.12 or later must be deployed in your environment, and you must be logged in with the OpenShift Container Platform CLI. See the following install documentation for OpenShift Container Platform:
- Your OpenShift Container Platform command line interface (CLI) must be configured to run oc commands. See Getting started with the CLI for information about installing and configuring the OpenShift Container Platform CLI.
- Your OpenShift Container Platform permissions must allow you to create a namespace.
- You must have an Internet connection to access the dependencies for the operator.
To install in an OpenShift Container Platform Dedicated environment, see the following:
- You must have the OpenShift Container Platform Dedicated environment configured and running.
- You must have cluster-admin authority to the OpenShift Container Platform Dedicated environment where you are installing the engine.
- If you plan to create managed clusters by using the Assisted Installer that is provided with Red Hat OpenShift Container Platform, see Preparing to install with the Assisted Installer topic in the OpenShift Container Platform documentation for the requirements.
1.3.1.2. Confirm your OpenShift Container Platform installation
You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working. For more information about installing OpenShift Container Platform, see the OpenShift Container Platform documentation.
- Verify that multicluster engine operator is not already installed on your OpenShift Container Platform cluster. The multicluster engine operator allows only one single installation on each OpenShift Container Platform cluster. Continue with the following steps if there is no installation.
To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console with the following command:
kubectl -n openshift-console get route console
See the following example output:
console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None
- Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. See the following example of a URL: https://console-openshift-console.apps.new-coral.purple-chesterfield.com.
You can proceed to install multicluster engine operator.
1.3.1.3. Installing from the OperatorHub web console interface
Best practice: From the Administrator view in your OpenShift Container Platform navigation, install the OperatorHub web console interface that is provided with OpenShift Container Platform.
- Select Operators > OperatorHub to access the list of available operators, and select multicluster engine for Kubernetes operator.
- Click Install.
- On the Operator Installation page, select the options for your installation:
Namespace:
- The multicluster engine operator must be installed in its own namespace, or project.
- By default, the OperatorHub console installation process creates a namespace titled multicluster-engine. Best practice: Continue to use the multicluster-engine namespace if it is available.
- If there is already a namespace named multicluster-engine, select a different namespace.
- Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release, and establishes that the future errata updates within that release are obtained.
Approval strategy: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed.
- Select Automatic, which is selected by default, to ensure any updates within that release are automatically applied.
- Select Manual to receive a notification when an update is available. If you have concerns about when the updates are applied, this might be best practice for you.
Note: To upgrade to the next minor release, you must return to the OperatorHub page and select a new channel for the more current release.
- Select Install to apply your changes and create the operator.
See the following process to create the MultiClusterEngine custom resource.
- In the OpenShift Container Platform console navigation, select Installed Operators > multicluster engine for Kubernetes.
- Select the MultiCluster Engine tab.
- Select Create MultiClusterEngine.
Update the default values in the YAML file. See options in the MultiClusterEngine advanced configuration section of the documentation.
- The following example shows the default template that you can copy into the editor:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}
Select Create to initialize the custom resource. It can take up to 10 minutes for the multicluster engine operator engine to build and start.
After the MultiClusterEngine resource is created, the status for the resource is Available on the MultiCluster Engine tab.
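If you prefer to verify from the command line, the following check is a minimal sketch that assumes the oc CLI is logged in to the hub cluster; it returns Available when the engine is ready:
oc get mce -o=jsonpath='{.items[0].status.phase}'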
1.3.1.4. Installing from the OpenShift Container Platform CLI
Create a multicluster engine operator namespace where the operator requirements are contained. Run the following command, where namespace is the name for your multicluster engine for Kubernetes operator namespace. The value for namespace might be referred to as Project in the OpenShift Container Platform environment:
oc create namespace <namespace>
Switch your project namespace to the one that you created. Replace namespace with the name of the multicluster engine for Kubernetes operator namespace that you created in step 1:
oc project <namespace>
Create a YAML file to configure an OperatorGroup resource. Each namespace can have only one operator group. Replace default with the name of your operator group. Replace namespace with the name of your project namespace. See the following example:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <default>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
Run the following command to create the OperatorGroup resource. Replace operator-group with the name of the operator group YAML file that you created:
oc apply -f <path-to-file>/<operator-group>.yaml
Create a YAML file to configure an OpenShift Container Platform Subscription. Your file should look similar to the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: stable-2.4
  installPlanApproval: Automatic
  name: multicluster-engine
Note: For installing the multicluster engine for Kubernetes operator on infrastructure nodes, see the Operator Lifecycle Manager Subscription additional configuration section.
Run the following command to create the OpenShift Container Platform Subscription. Replace subscription with the name of the subscription file that you created:
oc apply -f <path-to-file>/<subscription>.yaml
Create a YAML file to configure the MultiClusterEngine custom resource. Your default template should look similar to the following example:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec: {}
Note: For installing the multicluster engine operator on infrastructure nodes, see the MultiClusterEngine custom resource additional configuration section.
Run the following command to create the MultiClusterEngine custom resource. Replace custom-resource with the name of your custom resource file:
oc apply -f <path-to-file>/<custom-resource>.yaml
If this step fails with the following error, the resources are still being created and applied. Run the command again in a few minutes when the resources are created:
error: unable to recognize "./mce.yaml": no matches for kind "MultiClusterEngine" in version "operator.multicluster-engine.io/v1"
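If you prefer not to rerun the command manually, the following loop is a minimal sketch that retries the apply until the MultiClusterEngine custom resource definition is registered; the file name mce.yaml is an assumption for illustration:
until oc apply -f ./mce.yaml; do
  echo "Waiting for the MultiClusterEngine CRD to be registered..."
  sleep 30
done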
Run the following command to get the custom resource. It can take up to 10 minutes for the MultiClusterEngine custom resource status to display as Available in the status.phase field after you run the following command:
oc get mce -o=jsonpath='{.items[0].status.phase}'
If you are reinstalling the multicluster engine operator and the pods do not start, see Troubleshooting reinstallation failure for steps to work around this problem.
Notes:
- A ServiceAccount with a ClusterRoleBinding automatically gives cluster administrator privileges to multicluster engine operator and to any user credentials with access to the namespace where you install multicluster engine operator.
1.3.1.5. Installing on infrastructure nodes
An OpenShift Container Platform cluster can be configured to contain infrastructure nodes for running approved management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running those management components.
After adding infrastructure nodes to your OpenShift Container Platform cluster, follow the Installing from the OpenShift Container Platform CLI instructions and add the following configurations to the Operator Lifecycle Manager Subscription and MultiClusterEngine custom resource.
1.3.1.5.1. Add infrastructure nodes to the OpenShift Container Platform cluster
Follow the procedures that are described in Creating infrastructure machine sets in the OpenShift Container Platform documentation. Infrastructure nodes are configured with a Kubernetes taint and label to keep non-management workloads from running on them.
To be compatible with the infrastructure node enablement provided by multicluster engine operator, ensure your infrastructure nodes have the following taint and label applied:
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
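If your infrastructure nodes already exist and you only need to add the label and taint, the following commands are a minimal sketch; the node name infra-node-1 is an assumption for illustration:
oc label node infra-node-1 node-role.kubernetes.io/infra=""
oc adm taint nodes infra-node-1 node-role.kubernetes.io/infra:NoSchedule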
1.3.1.5.2. Operator Lifecycle Manager Subscription additional configuration
Add the following additional configuration before applying the Operator Lifecycle Manager Subscription:
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists
1.3.1.5.3. MultiClusterEngine custom resource additional configuration
Add the following additional configuration before applying the MultiClusterEngine custom resource:
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
1.3.2. Install on disconnected networks
You might need to install the multicluster engine operator on Red Hat OpenShift Container Platform clusters that are not connected to the Internet. The procedure to install in a disconnected environment requires some of the same steps as the connected installation.
Important: You must install multicluster engine operator on a cluster that does not have Red Hat Advanced Cluster Management for Kubernetes earlier than version 2.5 installed. The multicluster engine operator cannot co-exist with Red Hat Advanced Cluster Management for Kubernetes on versions earlier than 2.5 because they provide some of the same management components. It is recommended that you install multicluster engine operator on a cluster that has never had Red Hat Advanced Cluster Management installed. If you are using Red Hat Advanced Cluster Management for Kubernetes at version 2.5.0 or later, multicluster engine operator is already installed on the cluster with it.
You must download copies of the packages to access them during the installation, rather than accessing them directly from the network during the installation.
1.3.2.1. Prerequisites
You must meet the following requirements before you install the multicluster engine operator:
- Red Hat OpenShift Container Platform version 4.12 or later must be deployed in your environment, and you must be logged in with the command line interface (CLI).
You need access to catalog.redhat.com.
Note: For managing bare metal clusters, you must have OpenShift Container Platform version 4.12 or later.
- Your Red Hat OpenShift Container Platform CLI must be version 4.12 or later, and configured to run oc commands.
- Your Red Hat OpenShift Container Platform permissions must allow you to create a namespace.
- You must have a workstation with an Internet connection to download the dependencies for the operator.
1.3.2.2. Confirm your OpenShift Container Platform installation
- You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working in your cluster. For information about OpenShift Container Platform version 4.12, see OpenShift Container Platform documentation.
If you are connected, you can ensure that the OpenShift Container Platform cluster is set up correctly by accessing the OpenShift Container Platform web console with the following command:
kubectl -n openshift-console get route console
See the following example output:
console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None
The console URL in this example is: https://console-openshift-console.apps.new-coral.purple-chesterfield.com. Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform.
1.3.2.3. Installing in a disconnected environment
Important: You need to download the required images to a mirroring registry to install the operators in a disconnected environment. Without the download, you might receive ImagePullBackOff errors during your deployment.
Follow these steps to install the multicluster engine operator in a disconnected environment:
Create a mirror registry. If you do not already have a mirror registry, create one by completing the procedure in the Disconnected installation mirroring topic of the Red Hat OpenShift Container Platform documentation.
If you already have a mirror registry, you can configure and use your existing one.
Note: For bare metal only, you need to provide the certificate information for the disconnected registry in your install-config.yaml file. To access the image in a protected disconnected registry, you must provide the certificate information so the multicluster engine operator can access the registry.
- Copy the certificate information from the registry.
- Open the install-config.yaml file in an editor.
- Find the entry for additionalTrustBundle: |.
- Add the certificate information after the additionalTrustBundle line. The resulting content should look similar to the following example:
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  certificate_content
  -----END CERTIFICATE-----
sshKey: >-
Important: Additional mirrors for disconnected image registries are needed if the following Governance policies are required:
- Container Security Operator policy: Locate the images in the registry.redhat.io/quay source.
- Compliance Operator policy: Locate the images in the registry.redhat.io/compliance source.
- Gatekeeper Operator policy: Locate the images in the registry.redhat.io/gatekeeper source.
See the following example of mirrors lists for all three operators:
- mirrors:
  - <your_registry>/rhacm2
  source: registry.redhat.io/rhacm2
- mirrors:
  - <your_registry>/quay
  source: registry.redhat.io/quay
- mirrors:
  - <your_registry>/compliance
  source: registry.redhat.io/compliance
- Save the install-config.yaml file.
Create a YAML file that contains the ImageContentSourcePolicy with the name mce-policy.yaml. Note: If you modify this on a running cluster, it causes a rolling restart of all nodes.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mce-repo
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.registry.com:5000/multicluster-engine
    source: registry.redhat.io/multicluster-engine
Apply the ImageContentSourcePolicy file by entering the following command:
oc apply -f mce-policy.yaml
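To confirm that the policy was created, the following check is a minimal sketch:
oc get imagecontentsourcepolicy mce-repo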
Enable the disconnected Operator Lifecycle Manager Red Hat Operators and Community Operators.
The multicluster engine operator is included in the Operator Lifecycle Manager Red Hat Operator catalog.
- Configure the disconnected Operator Lifecycle Manager for the Red Hat Operator catalog. Follow the steps in the Using Operator Lifecycle Manager on restricted networks topic of the Red Hat OpenShift Container Platform documentation.
- Now that you have the image in the disconnected Operator Lifecycle Manager, continue to install the multicluster engine operator for Kubernetes from the Operator Lifecycle Manager catalog.
See Installing while connected online for the required steps.
1.3.3. Advanced configuration
The multicluster engine operator is installed using an operator that deploys all of the required components. The multicluster engine operator can be further configured during or after installation. Learn more about the advanced configuration options.
1.3.3.1. Deployed components
Add one or more of the following attributes to the MultiClusterEngine custom resource:
Name | Description | Enabled |
assisted-service | Installs OpenShift Container Platform with minimal infrastructure prerequisites and comprehensive pre-flight validations | True |
cluster-lifecycle | Provides cluster management capabilities for OpenShift Container Platform and Kubernetes hub clusters | True |
cluster-manager | Manages various cluster-related operations within the cluster environment | True |
cluster-proxy-addon | Automates the installation of | True |
console-mce | Enables the multicluster engine operator console plug-in | True |
discovery | Discovers and identifies new clusters within the OpenShift Cluster Manager | True |
hive | Provisions and performs initial configuration of OpenShift Container Platform clusters | True |
hypershift | Hosts OpenShift Container Platform control planes at scale with cost and time efficiency, and cross-cloud portability | True |
hypershift-local-hosting | Enables local hosting capabilities within the local cluster environment | True |
local-cluster | Enables the import and self-management of the local hub cluster where the multicluster engine operator is deployed | True |
managedserviceaccount | Synchronizes service accounts to managed clusters and collects tokens as secret resources back to the hub cluster | False |
server-foundation | Provides foundational services for server-side operations within the multicluster environment | True |
When you install multicluster engine operator onto the cluster, not all of the listed components are enabled by default.
You can further configure multicluster engine operator during or after installation by adding one or more attributes to the MultiClusterEngine custom resource. Continue reading for information about the attributes that you can add.
1.3.3.2. Console and component configuration
The following example displays the spec.overrides default template that you can use to enable or disable the component:
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterEngine
metadata:
name: multiclusterengine
spec:
overrides:
components:
- name: <name> 1
enabled: true
1: Replace name with the name of the component.
Alternatively, you can run the following command. Replace namespace with the name of your project and name with the name of the component:
oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"<name>","enabled":true}}]'
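To review which components are currently enabled or disabled, the following query is a minimal sketch that assumes the default resource name multiclusterengine:
oc get mce multiclusterengine -o jsonpath='{.spec.overrides.components}'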
1.3.3.3. Local-cluster enablement
By default, the cluster that is running multicluster engine operator manages itself. To install multicluster engine operator without the cluster managing itself, specify the following values in the spec.overrides.components settings in the MultiClusterEngine section:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: local-cluster
      enabled: false
- The name value identifies the hub cluster as a local-cluster.
- The enabled setting specifies whether the feature is enabled or disabled. When the value is true, the hub cluster manages itself. When the value is false, the hub cluster does not manage itself.
A hub cluster that is managed by itself is designated as the local-cluster in the list of clusters.
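To confirm whether self-management is enabled, the following check is a minimal sketch; it returns the local-cluster entry only when the hub cluster manages itself:
oc get managedcluster local-cluster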
1.3.3.4. Custom image pull secret
If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or the multicluster engine operator, generate a secret that contains your OpenShift Container Platform pull secret information to access the entitled content from the distribution registry.
The secret requirements for OpenShift Container Platform clusters are automatically resolved by OpenShift Container Platform and multicluster engine for Kubernetes operator, so you do not have to create the secret if you are not importing other types of Kubernetes clusters to be managed.
Important: These secrets are namespace-specific, so make sure that you are in the namespace that you use for your engine.
- Download your OpenShift Container Platform pull secret file from cloud.redhat.com/openshift/install/pull-secret by selecting Download pull secret. Your OpenShift Container Platform pull secret is associated with your Red Hat Customer Portal ID, and is the same across all Kubernetes providers.
Run the following command to create your secret:
oc create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson
- Replace secret with the name of the secret that you want to create.
- Replace namespace with your project namespace, as the secrets are namespace-specific.
- Replace path-to-pull-secret with the path to your OpenShift Container Platform pull secret that you downloaded.
The following example displays the spec.imagePullSecret template to use if you want to use a custom pull secret. Replace secret with the name of your pull secret:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  imagePullSecret: <secret>
1.3.3.5. Target namespace
The operands can be installed in a designated namespace by specifying a location in the MultiClusterEngine custom resource. This namespace is created upon application of the MultiClusterEngine custom resource.
Important: If no target namespace is specified, the operator will install to the multicluster-engine namespace and will set it in the MultiClusterEngine custom resource specification.
The following example displays the spec.targetNamespace template that you can use to specify a target namespace. Replace target with the name of your destination namespace. Note: The target namespace cannot be the default namespace:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  targetNamespace: <target>
1.3.3.6. availabilityConfig
The hub cluster has two availabilities: High and Basic. By default, the hub cluster has an availability of High, which gives hub cluster components a replicaCount of 2. This provides better support in cases of failover but consumes more resources than the Basic availability, which gives components a replicaCount of 1.
Important: Set spec.availabilityConfig to Basic if you are using multicluster engine operator on a single-node OpenShift cluster.
The following example shows the spec.availabilityConfig template with Basic availability:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  availabilityConfig: "Basic"
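If the MultiClusterEngine resource already exists, the following patch is a minimal sketch for switching to Basic availability; it assumes the default resource name multiclusterengine:
oc patch mce multiclusterengine --type=merge -p '{"spec":{"availabilityConfig":"Basic"}}'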
1.3.3.7. nodeSelector
You can define a set of node selectors in the MultiClusterEngine to install to specific nodes on your cluster. The following example shows spec.nodeSelector to assign pods to nodes with the label node-role.kubernetes.io/infra:
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
1.3.3.8. tolerations
You can define a list of tolerations to allow the MultiClusterEngine to tolerate specific taints defined on the cluster. The following example shows a spec.tolerations that matches a node-role.kubernetes.io/infra taint:
spec:
  tolerations:
  - key: node-role.kubernetes.io/infra
    effect: NoSchedule
    operator: Exists
The previous infra-node toleration is set on pods by default, even if you do not specify any tolerations in the configuration. Customizing tolerations in the configuration replaces this default behavior.
1.3.3.9. ManagedServiceAccount add-on
By default, the Managed-ServiceAccount add-on is disabled. When this component is enabled, you can create or delete a service account on a managed cluster. To install with this add-on enabled, include the following in the MultiClusterEngine specification in spec.overrides:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: managedserviceaccount
      enabled: true
The ManagedServiceAccount add-on can be enabled after creating MultiClusterEngine by editing the resource on the command line and setting the managedserviceaccount component to enabled: true. Alternatively, you can run the following command and replace <multiclusterengine-name> with the name of your MultiClusterEngine resource:
oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"managedserviceaccount","enabled":true}}]'
1.3.4. Uninstalling
When you uninstall multicluster engine for Kubernetes operator, you see two different levels of the process: A custom resource removal and a complete operator uninstall. It might take up to five minutes to complete the uninstall process.
- The custom resource removal is the most basic type of uninstall that removes the custom resource of the MultiClusterEngine instance but leaves other required operator resources. This level of uninstall is helpful if you plan to reinstall using the same settings and components.
- The second level is a more complete uninstall that removes most operator components, excluding components such as custom resource definitions. When you continue with this step, it removes all of the components and subscriptions that were not removed with the custom resource removal. After this uninstall, you must reinstall the operator before reinstalling the custom resource.
1.3.4.1. Prerequisite: Detach enabled services
Before you uninstall the multicluster engine for Kubernetes operator, you must detach all of the clusters that are managed by that engine. To avoid errors, detach all clusters that are still managed by the engine, then try to uninstall again.
If you have managed clusters attached, you might see the following message.
Cannot delete MultiClusterEngine resource because ManagedCluster resource(s) exist
For more information about detaching clusters, see the Removing a cluster from management section by selecting the information for your provider in Cluster creation introduction.
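Before you uninstall, the following check is a minimal sketch for confirming that no managed clusters remain attached; only the hub cluster itself, listed as local-cluster, should remain if self-management is enabled:
oc get managedcluster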
1.3.4.2. Removing resources by using commands
- If you have not already, ensure that your OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands.
Change to your project namespace by entering the following command. Replace namespace with the name of your project namespace:
oc project <namespace>
Enter the following command to remove the MultiClusterEngine custom resource:
oc delete multiclusterengine --all
You can view the progress by entering the following command:
oc get multiclusterengine -o yaml
- Enter the following commands to delete the multicluster-engine ClusterServiceVersion in the namespace it is installed in:
❯ oc get csv
NAME                         DISPLAY                              VERSION   REPLACES   PHASE
multicluster-engine.v2.0.0   multicluster engine for Kubernetes   2.0.0                Succeeded
❯ oc delete clusterserviceversion multicluster-engine.v2.0.0
❯ oc delete sub multicluster-engine
The CSV version shown here might be different.
1.3.4.3. Deleting the components by using the console
When you use the Red Hat OpenShift Container Platform console to uninstall, you remove the operator. Complete the following steps to uninstall by using the console:
- In the OpenShift Container Platform console navigation, select Operators > Installed Operators > multicluster engine for Kubernetes.
Remove the MultiClusterEngine custom resource.
- Select the tab for Multiclusterengine.
- Select the Options menu for the MultiClusterEngine custom resource.
- Select Delete MultiClusterEngine.
Run the clean-up script according to the procedure in the following section.
Tip: If you plan to reinstall the same multicluster engine for Kubernetes operator version, you can skip the rest of the steps in this procedure and reinstall the custom resource.
- Navigate to Installed Operators.
- Remove the multicluster engine for Kubernetes operator by selecting the Options menu and selecting Uninstall operator.
1.3.4.4. Troubleshooting Uninstall
If the multicluster engine custom resource is not being removed, remove any potential remaining artifacts by running the clean-up script.
Copy the following script into a file:
#!/bin/bash
oc delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io
oc delete validatingwebhookconfiguration multiclusterengines.multicluster.openshift.io
oc delete mce --all
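The following commands are a minimal sketch for running the script; the file name cleanup.sh is an assumption for illustration:
chmod +x cleanup.sh
./cleanup.sh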
See Disconnected installation mirroring for more information.
1.4. Managing credentials
A credential is required to create and manage a Red Hat OpenShift Container Platform cluster on a cloud service provider with multicluster engine operator. The credential stores the access information for a cloud provider. Each provider account requires its own credential, as does each domain on a single provider.
You can create and manage your cluster credentials. Credentials are stored as Kubernetes secrets. Secrets are copied to the namespace of a managed cluster so that the controllers for the managed cluster can access the secrets. When a credential is updated, the copies of the secret are automatically updated in the managed cluster namespaces.
Note: Changes to the pull secret, SSH keys, or base domain of the cloud provider credentials are not reflected for existing managed clusters, as they have already been provisioned using the original credentials.
Required access: Edit
- Creating a credential for Amazon Web Services
- Creating a credential for Microsoft Azure
- Creating a credential for Google Cloud Platform
- Creating a credential for VMware vSphere
- Creating a credential for Red Hat OpenStack Platform
- Creating a credential for Red Hat Virtualization
- Creating a credential for Red Hat OpenShift Cluster Manager
- Creating a credential for Ansible Automation Platform
- Creating a credential for an on-premises environment
1.4.1. Creating a credential for Amazon Web Services
You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).
Required access: Edit
Note: This procedure must be done before you can create a cluster with multicluster engine operator.
1.4.1.1. Prerequisites
You must have the following prerequisites before creating a credential:
- A deployed multicluster engine operator hub cluster
- Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on Amazon Web Services (AWS)
- AWS login credentials, which include access key ID and secret access key. See Understanding and getting your security credentials.
- Account permissions that allow installing clusters on AWS. See Configuring an AWS account for instructions on how to configure an AWS account.
1.4.1.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:
- Add your AWS access key ID for your AWS account. See Log in to AWS to find your ID.
- Provide the contents for your new AWS Secret Access Key.
If you want to enable a proxy, enter the proxy information:
- HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
- Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
- Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
- Add your SSH private key and SSH public key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
You can create a cluster that uses this credential by completing the steps in Creating a cluster on Amazon Web Services or Creating a cluster on Amazon Web Services GovCloud.
You can edit your credential in the console. If the cluster was created by using this provider connection, then the <cluster-name>-aws-creds secret from <cluster-namespace> will get updated with the new credentials.
Note: Updating credentials does not work for cluster pool claimed clusters.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.1.2.1. Creating an S3 secret
To create an Amazon Simple Storage Service (S3) secret, complete the following task from the console:
- Click Add credential > AWS > S3 Bucket. If you click For Hosted Control Plane, the name and namespace are provided.
Enter information for the following fields that are provided:
- bucket name: Add the name of the S3 bucket.
- aws_access_key_id: Add your AWS access key ID for your AWS account. Log in to AWS to find your ID.
- aws_secret_access_key: Provide the contents for your new AWS Secret Access Key.
- Region: Enter your AWS region.
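If you prefer to create the same secret from the command line, the following command is a minimal sketch; the secret name and the local-cluster namespace match the labeling example later in this topic, and the bucket, region, and credentials file values are placeholders:
oc create secret generic hypershift-operator-oidc-provider-s3-credentials \
  --from-literal=bucket=<s3-bucket-name> \
  --from-literal=region=<aws-region> \
  --from-file=credentials=<path-to-aws-credentials-file> \
  -n local-cluster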
1.4.1.3. Creating an opaque secret by using the API
To create an opaque secret for Amazon Web Services by using the API, apply YAML content in the YAML preview window that is similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-aws-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0)
  aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0)
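The shell substitutions in this example only expand when the content is applied through a shell. The following heredoc is a minimal sketch of that pattern, assuming the AWS_KEY and AWS_SECRET environment variables are already exported:
oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-aws-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0)
  aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0)
EOF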
Notes:
- Opaque secrets are not visible in the console.
- Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
- Add labels to your credentials to view your secret in the console. For example, the following AWS S3 Bucket oc label secret is appended with type=awss3 and credentials --from-file=…:
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/type=awss3"
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/credentials=credentials="
1.4.1.4. Additional resources
- See Understanding and getting your security credentials.
- See Configuring an AWS account.
- Log in to AWS.
- Download your Red Hat OpenShift pull secret.
- See Generating a key pair for cluster node SSH access for more information about how to generate a key.
- See Creating a cluster on Amazon Web Services.
- See Creating a cluster on Amazon Web Services GovCloud.
- Return to Creating a credential for Amazon Web Services.
1.4.2. Creating a credential for Microsoft Azure
You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government.
Required access: Edit
Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.
1.4.2.1. Prerequisites
You must have the following prerequisites before creating a credential:
- A deployed multicluster engine operator hub cluster.
- Internet access for your multicluster engine operator hub cluster so that it can create the Kubernetes cluster on Azure.
- Azure login credentials, which include your Base Domain Resource Group and Azure Service Principal JSON. See Microsoft Azure portal to get your login credentials.
- Account permissions that allow installing clusters on Azure. See How to configure Cloud Services and Configuring an Azure account for more information.
1.4.2.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
- Optional: Add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential.
- Select whether the environment for your cluster is AzurePublicCloud or AzureUSGovernmentCloud. The settings are different for the Azure Government environment, so ensure that this is set correctly.
- Add your Base domain resource group name for your Azure account. This entry is the resource name that you created with your Azure account. You can find your Base Domain Resource Group Name by selecting Home > DNS Zones in the Azure interface. See Create an Azure service principal with the Azure CLI to find your base domain resource group name.
Provide the contents for your Client ID. This value is generated as the appId property when you create a service principal with the following command:
az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>
Replace service_principal with the name of your service principal.
Add your Client Secret. This value is generated as the password property when you create a service principal with the following command:
az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>
Replace service_principal with the name of your service principal.
Add your Subscription ID. This value is the id property in the output of the following command:
az account show
Add your Tenant ID. This value is the tenantId property in the output of the following command:
az account show
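If you prefer to capture these values directly from the command line, the following queries are a minimal sketch that uses the Azure CLI:
az account show --query id -o tsv
az account show --query tenantId -o tsv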
If you want to enable a proxy, enter the proxy information:
- HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
- Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
- Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
- Add your SSH private key and SSH public key to use to connect to the cluster. You can use an existing key pair, or create a new pair using a key generation program.
You can create a cluster that uses this credential by completing the steps in Creating a cluster on Microsoft Azure.
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.2.3. Creating an opaque secret by using the API
To create an opaque secret for Microsoft Azure by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-azure-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  baseDomainResourceGroupName: $(echo -n "${azure_resource_group_name}" | base64 -w0)
  osServicePrincipal.json: $(base64 -w0 "${AZURE_CRED_JSON}")
Notes:
- Opaque secrets are not visible in the console.
- Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
1.4.2.4. Additional resources
- See Microsoft Azure portal.
- See How to configure Cloud Services.
- See Configuring an Azure account.
- See Create an Azure service principal with the Azure CLI to find your base domain resource group name.
- Download your Red Hat OpenShift pull secret.
- See Generating a key pair for cluster node SSH access for more information about how to generate a key.
- See Creating a cluster on Microsoft Azure.
- Return to Creating a credential for Microsoft Azure.
1.4.3. Creating a credential for Google Cloud Platform
You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP).
Required access: Edit
Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.
1.4.3.1. Prerequisites
You must have the following prerequisites before creating a credential:
- A deployed multicluster engine operator hub cluster
- Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on GCP
- GCP login credentials, which include user Google Cloud Platform Project ID and Google Cloud Platform service account JSON key. See Creating and managing projects.
- Account permissions that allow installing clusters on GCP. See Configuring a GCP project for instructions on how to configure an account.
1.4.3.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, for both convenience and security.
You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:
- Add your Google Cloud Platform project ID for your GCP account. See Log in to GCP to retrieve your settings.
- Add your Google Cloud Platform service account JSON key. See the Create service accounts documentation to create your service account JSON key. Follow the steps for the GCP console.
- Provide the contents for your new Google Cloud Platform service account JSON key.
If you want to enable a proxy, enter the proxy information:
- HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
- Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
- Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
- Add your SSH private key and SSH public key so you can access the cluster. You can use an existing key pair, or create a new pair using a key generation program.
You can use this connection when you create a cluster by completing the steps in Creating a cluster on Google Cloud Platform.
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.3.3. Creating an opaque secret by using the API
To create an opaque secret for Google Cloud Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-gcp-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  osServiceAccount.json: $(base64 -w0 "${GCP_CRED_JSON}")
Notes:
- Opaque secrets are not visible in the console.
- Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
1.4.3.4. Additional resources
- See Creating and managing projects.
- See Configuring a GCP project.
- Log in to GCP.
- See the Create service accounts documentation to create your service account JSON key.
- Download your Red Hat OpenShift pull secret.
- See Generating a key pair for cluster node SSH access for more information about how to generate a key.
- See Creating a cluster on Google Cloud Platform.
1.4.4. Creating a credential for VMware vSphere
You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on VMware vSphere.
Required access: Edit
Notes:
- You must create a credential for VMware vSphere before you can create a cluster with multicluster engine operator.
- OpenShift Container Platform versions 4.12 and later are supported.
1.4.4.1. Prerequisites
You must have the following prerequisites before you create a credential:
- A deployed hub cluster on OpenShift Container Platform version 4.12 or later.
- Internet access for your hub cluster so it can create the Kubernetes cluster on VMware vSphere.
VMware vSphere login credentials and vCenter requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on vSphere with customizations. These credentials include the following information:
- vCenter account privileges.
- Cluster resources.
- DHCP available.
- ESXi hosts have time synchronized (for example, NTP).
1.4.4.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:
- Add your VMware vCenter server fully-qualified host name or IP address. The value must be defined in the vCenter server root CA certificate. If possible, use the fully-qualified host name.
- Add your VMware vCenter username.
- Add your VMware vCenter password.
Add your VMware vCenter root CA certificate.
- You can download your certificate in the download.zip package with the certificate from your VMware vCenter server at: https://<vCenter_address>/certs/download.zip. Replace vCenter_address with the address to your vCenter server.
- Unpackage the download.zip.
- Use the certificates from the certs/<platform> directory that have a .0 extension. Tip: You can use the ls certs/<platform> command to list all of the available certificates for your platform. Replace <platform> with the abbreviation for your platform: lin, mac, or win. For example: certs/lin/3a343545.0
Best practice: Link together multiple certificates with a .0 extension by running the cat certs/lin/*.0 > ca.crt command.
- Add your VMware vSphere cluster name.
- Add your VMware vSphere datacenter.
- Add your VMware vSphere default datastore.
- Add your VMware vSphere disk type.
- Add your VMware vSphere folder.
- Add your VMware vSphere resource pool.
For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:
Image content source: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.
The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content:
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.
Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example:
- mirrors:
  - registry.example.com:5000/rhacm2
  source: registry.redhat.io/rhacm2
If you want to enable a proxy, enter the proxy information:
- HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
- Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
- Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
Add your SSH private key and SSH public key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
You can create a cluster that uses this credential by completing the steps in Creating a cluster on VMware vSphere.
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.4.3. Creating an opaque secret by using the API
To create an opaque secret for VMware vSphere by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-vsphere-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  username: $(echo -n "${VMW_USERNAME}" | base64 -w0)
  password.json: $(base64 -w0 "${VMW_PASSWORD}")
Notes:
- Opaque secrets are not visible in the console.
- Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
1.4.4.4. Additional resources
1.4.5. Creating a credential for Red Hat OpenStack
You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform.
Notes:
- You must create a credential for Red Hat OpenStack Platform before you can create a cluster with multicluster engine operator.
- Only OpenShift Container Platform versions 4.12 and later are supported.
1.4.5.1. Prerequisites
You must have the following prerequisites before you create a credential:
- A deployed hub cluster on OpenShift Container Platform version 4.12 or later.
- Internet access for your hub cluster so it can create the Kubernetes cluster on Red Hat OpenStack Platform.
- Red Hat OpenStack Platform login credentials and Red Hat OpenStack Platform requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on OpenStack with customizations.
Download or create a clouds.yaml file for accessing the CloudStack API. Within the clouds.yaml file:
- Determine the cloud auth section name to use.
- Add a line for the password, immediately following the username line.
1.4.5.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. To enhance security and convenience, you can create a namespace specifically to host your credentials.
- Optional: You can add a Base DNS domain for your credential. If you add the base DNS domain, it is automatically populated in the correct field when you create a cluster with this credential.
- Add your Red Hat OpenStack Platform clouds.yaml file contents. The contents of the clouds.yaml file, including the password, provide the required information for connecting to the Red Hat OpenStack Platform server. The file contents must include the password, which you add to a new line immediately after the username.
- Add your Red Hat OpenStack Platform cloud name. This entry is the name specified in the cloud section of the clouds.yaml to use for establishing communication to the Red Hat OpenStack Platform server.
- Optional: For configurations that use an internal certificate authority, enter your certificate in the Internal CA certificate field to automatically update your clouds.yaml
with the certificate information. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:
- Cluster OS image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
Image content sources: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.
The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content:
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.
Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example:
- mirrors:
  - registry.example.com:5000/rhacm2
  source: registry.redhat.io/rhacm2
If you want to enable a proxy, enter the proxy information:
- HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
- Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
- Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
- Add your SSH Private Key and SSH Public Key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
- Click Create.
- Review the new credential information, then click Add. When you add the credential, it is added to the list of credentials.
You can create a cluster that uses this credential by completing the steps in Creating a cluster on Red Hat OpenStack Platform.
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.5.3. Creating an opaque secret by using the API
To create an opaque secret for Red Hat OpenStack Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-osp-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  clouds.yaml: $(base64 -w0 "${OSP_CRED_YAML}")
  cloud: $(echo -n "openstack" | base64 -w0)
Notes:
- Opaque secrets are not visible in the console.
- Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
1.4.5.4. Additional resources
1.4.6. Creating a credential for Red Hat Virtualization
Deprecated: The Red Hat Virtualization credential and cluster create feature is deprecated and no longer supported.
You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on Red Hat Virtualization.
Note: This procedure must be done before you can create a cluster with multicluster engine operator.
1.4.6.1. Prerequisites
You must have the following prerequisites before you create a credential:
- A deployed hub cluster on OpenShift Container Platform version 4.12 or later.
- Internet access for your hub cluster so it can create the Kubernetes cluster on Red Hat Virtualization.
Red Hat Virtualization login credentials for a configured Red Hat Virtualization environment. See Installation Guide in the Red Hat Virtualization documentation. The following list shows the required information:
- oVirt URL
- oVirt fully-qualified domain name (FQDN)
- oVirt username
- oVirt password
- oVirt CA/Certificate
- Optional: Proxy information, if you are enabling a proxy.
- Red Hat OpenShift Container Platform pull secret information. You can download your pull secret from Pull secret.
- SSH private and public keys for transferring information for the final cluster.
- Account permissions that allow installing clusters on oVirt.
1.4.6.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, for both convenience and added security.
- Add the basic information for your new credential. You can optionally add a Base DNS domain, which is automatically populated in the correct field when you create a cluster with this credential. If you do not add it to the credential, you can add it when you create the cluster.
- Add the required information for your Red Hat Virtualization environment.
If you want to enable a proxy, enter the proxy information:
- HTTP Proxy URL: The URL that should be used as a proxy for HTTP traffic.
- HTTPS Proxy URL: The secure proxy URL that should be used when using HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
- No Proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period (.) to include all of the subdomains that are in that domain. Add an asterisk (*) to bypass the proxy for all destinations.
- Enter your Red Hat OpenShift Container Platform pull secret. You can download your pull secret from Pull secret.
- Add your SSH Private Key and SSH Public Key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program. See Generating a key pair for cluster node SSH access for more information.
- Review the new credential information, then click Add. When you add the credential, it is added to the list of credentials.
You can create a cluster that uses this credential by completing the steps in Creating a cluster on Red Hat Virtualization (deprecated).
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.7. Creating a credential for Red Hat OpenShift Cluster Manager
Add an OpenShift Cluster Manager credential so that you can discover clusters.
Required access: Administrator
1.4.7.1. Prerequisites
You need access to a console.redhat.com account. Later you will need the value that can be obtained from console.redhat.com/openshift/token.
1.4.7.2. Managing a credential by using the console
You need to add your credential to discover clusters. To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
Your OpenShift Cluster Manager API token can be obtained from console.redhat.com/openshift/token.
You can edit your credential in the console.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
If your credential is removed, or your OpenShift Cluster Manager API token expires or is revoked, then the associated discovered clusters are removed.
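Before you delete the credential, you might want to see which discovered clusters are associated with it. The following command is a sketch rather than part of the documented procedure, and it assumes that the discovery feature represents discovered clusters as DiscoveredCluster resources on the hub cluster:
oc get discoveredclusters --all-namespaces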
1.4.8. Creating a credential for Ansible Automation Platform
You need a credential to use the multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster that is using Red Hat Ansible Automation Platform.
Required access: Edit
Note: This procedure must be done before you can create an Automation template to enable automation on a cluster.
1.4.8.1. Prerequisites
You must have the following prerequisites before creating a credential:
- A deployed multicluster engine operator hub cluster
- Internet access for your multicluster engine operator hub cluster
- Ansible login credentials, which include the Ansible Automation Platform hostname and OAuth token; see Credentials for Ansible Automation Platform.
- Account permissions that allow you to install hub clusters and work with Ansible. Learn more about Ansible users.
1.4.8.2. Managing a credential by using the console
To create a credential from the multicluster engine operator console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
The Ansible Token and host URL that you provide when you create your Ansible credential are automatically updated for the automations that use that credential when you edit the credential. The updates are copied to any automations that use that Ansible credential, including those related to cluster lifecycle, governance, and application management automations. This ensures that the automations continue to run after the credential is updated.
You can edit your credential in the console. Ansible credentials are automatically updated in the automations that use that credential when you update the credential.
You can create an Ansible Job that uses this credential by completing the steps in Configuring Ansible Automation Platform tasks to run on managed clusters.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.4.9. Creating a credential for an on-premises environment
You need a credential to use the console to deploy and manage a Red Hat OpenShift Container Platform cluster in an on-premises environment. The credential specifies the connections that are used for the cluster.
Required access: Edit
1.4.9.1. Prerequisites
You need the following prerequisites before creating a credential:
- A hub cluster that is deployed.
- Internet access for your hub cluster so it can create the Kubernetes cluster on your infrastructure environment.
- For a disconnected environment, you must have a configured mirror registry where you can copy the release images for your cluster creation. See Disconnected installation mirroring in the OpenShift Container Platform documentation for more information.
- Account permissions that support installing clusters on the on-premises environment.
1.4.9.2. Managing a credential by using the console
To create a credential from the console, complete the steps in the console.
Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.
- Select Host inventory for your credential type.
- You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. If you do not add the DNS domain, you can add it when you create your cluster.
- Enter your Red Hat OpenShift pull secret. This pull secret is automatically entered when you create a cluster and specify this credential. You can download your pull secret from Pull secret. See Using image pull secrets for more information about pull secrets.
- Enter your SSH public key. This SSH public key is also automatically entered when you create a cluster and specify this credential.
- Select Add to create your credential.
You can create a cluster that uses this credential by completing the steps in Creating a cluster in an on-premises environment.
When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.
1.5. Cluster lifecycle introduction
The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. The multicluster engine operator is a software operator that enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. You can use multicluster engine operator with or without Red Hat Advanced Cluster Management. Red Hat Advanced Cluster Management also installs multicluster engine operator automatically and offers further multicluster capabilities.
See the following documentation:
- Cluster lifecycle architecture
- Managing credentials overview
- Host inventory introduction
- Creating a cluster with the CLI
- Configuring additional manifests during cluster creation
- Creating a cluster on Amazon Web Services
- Creating a cluster on Amazon Web Services GovCloud
- Creating a cluster on Microsoft Azure
- Creating a cluster on Google Cloud Platform
- Creating a cluster on VMware vSphere
- Creating a cluster on Red Hat OpenStack Platform
- Creating a cluster on Red Hat Virtualization (deprecated)
- Creating a cluster in an on-premises environment
- Creating a cluster in a proxy environment
- Accessing your cluster
- Scaling managed clusters
- Hibernating a created cluster
- Enabling cluster proxy add-ons
- Configuring Ansible Automation Platform tasks to run on managed clusters
- Placement
- Enabling ManagedServiceAccount
- Cluster lifecycle advanced configuration
- Removing a cluster from management
1.5.1. Cluster lifecycle architecture
Cluster lifecycle requires two types of clusters: hub clusters and managed clusters.
The hub cluster is the OpenShift Container Platform (or Red Hat Advanced Cluster Management) main cluster with the multicluster engine operator automatically installed. You can create, manage, and monitor other Kubernetes clusters with the hub cluster. You can create clusters by using the hub cluster, while you can also import existing clusters to be managed by the hub cluster.
When you create a managed cluster, the cluster is created using the Red Hat OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the process of installing clusters with the OpenShift Container Platform installer by reading Installing and configuring OpenShift Container Platform clusters in the OpenShift Container Platform documentation.
The following diagram shows the components that are installed with the multicluster engine for Kubernetes operator for cluster management:
The components of the cluster lifecycle management architecture include the following items:
1.5.1.1. Hub cluster
- The managed cluster import controller deploys the klusterlet operator to the managed clusters.
- The Hive controller provisions the clusters that you create by using the multicluster engine for Kubernetes operator. The Hive Controller also destroys managed clusters that were created by the multicluster engine for Kubernetes operator.
- The cluster curator controller creates the Ansible jobs as the pre-hook or post-hook to configure the cluster infrastructure environment when creating or upgrading managed clusters.
- When a managed cluster add-on is enabled on the hub cluster, its add-on hub controller is deployed on the hub cluster. The add-on hub controller deploys the add-on agent to the managed clusters.
1.5.1.2. Managed cluster
- The klusterlet operator deploys the registration and work controllers on the managed cluster.
The Registration Agent registers the managed cluster and the managed cluster add-ons with the hub cluster. The Registration Agent also maintains the status of the managed cluster and the managed cluster add-ons. The following permissions are automatically created within the Clusterrole to allow the managed cluster to access the hub cluster:
- Allows the agent to get or update its owned cluster that the hub cluster manages
- Allows the agent to update the status of its owned cluster that the hub cluster manages
- Allows the agent to rotate its certificate
- Allows the agent to get or update the coordination.k8s.io lease (an illustrative rule follows this list)
- Allows the agent to get its managed cluster add-ons
- Allows the agent to update the status of its managed cluster add-ons
- The work agent applies the Add-on Agent to the managed cluster. The permission to allow the managed cluster to access the hub cluster is automatically created within the Clusterrole and allows the agent to send events to the hub cluster.
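The following snippet is only an illustration of what the lease permission in the previous list could look like as a standard Kubernetes RBAC rule; it is not the actual ClusterRole content that the product generates:
# Hypothetical RBAC rule granting get and update on coordination.k8s.io leases
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "update"]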
To continue adding and managing clusters, see the Cluster lifecycle introduction.
1.5.2. Release images
When you build your cluster, use the version of Red Hat OpenShift Container Platform that the release image specifies. By default, OpenShift Container Platform uses the clusterImageSets resources to get the list of supported release images.
Continue reading to learn more about release images:
1.5.2.1. Specifying release images
When you create a cluster on a provider by using multicluster engine for Kubernetes operator, specify a release image to use for your new cluster. To specify a release image, see the following topics:
1.5.2.1.1. Locating ClusterImageSets
The YAML files referencing the release images are maintained in the acm-hive-openshift-releases GitHub repository. The files are used to create the list of the available release images in the console. This includes the latest fast channel images from OpenShift Container Platform.
The console only displays the latest release images for the three latest versions of OpenShift Container Platform. For example, you might see the following release image displayed in the console options:
quay.io/openshift-release-dev/ocp-release:4.12.1-x86_64
The console displays the latest versions to help you create a cluster with the latest release images. If you need to create a cluster that is a specific version, older release image versions are also available.
Note: You can only select images with the visible: 'true' label when creating clusters in the console. An example of this label in a ClusterImageSet resource is provided in the following content. Replace 4.x.1 with the current version of the product:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: 'true'
  name: img4.x.1-x86-64-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64
Additional release images are stored, but are not visible in the console. To view all of the available release images, run the following command:
oc get clusterimageset
The repository has the clusterImageSets directory, which is the directory that you use when working with the release images. The clusterImageSets directory has the following directories:
- Fast: Contains files that reference the latest versions of the release images for each supported OpenShift Container Platform version. The release images in this folder are tested, verified, and supported.
- Releases: Contains files that reference all of the release images for each OpenShift Container Platform version (stable, fast, and candidate channels). Note: These releases have not all been tested and determined to be stable.
- Stable: Contains files that reference the latest two stable versions of the release images for each supported OpenShift Container Platform version.
Note: By default, the current list of release images updates one time every hour. After upgrading the product, it might take up to one hour for the list to reflect the recommended release image versions for the new version of the product.
1.5.2.1.2. Configuring ClusterImageSets
You can configure your ClusterImageSets with the following options:
- Option 1: To create a cluster in the console, specify the image reference for the specific ClusterImageSet that you want to use. Each new entry that you specify persists and is available for all future cluster provisions. See the following example entry: quay.io/openshift-release-dev/ocp-release:4.6.8-x86_64
- Option 2: Manually create and apply a ClusterImageSets YAML file from the acm-hive-openshift-releases GitHub repository (see the sketch after this list).
- Option 3: To enable automatic updates of ClusterImageSets from a forked GitHub repository, follow the README.md in the cluster-image-set-controller GitHub repository.
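A minimal sketch of Option 2, assuming that you forked or cloned the acm-hive-openshift-releases repository; the repository URL and file name are placeholders:
git clone https://github.com/<your-org>/acm-hive-openshift-releases.git
cd acm-hive-openshift-releases
oc apply -f clusterImageSets/fast/img4.x.1-x86-64-appsub.yaml
oc get clusterimageset img4.x.1-x86-64-appsub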
1.5.2.1.3. Creating a release image to deploy a cluster on a different architecture
You can create a cluster on an architecture that is different from the architecture of the hub cluster by manually creating a release image that has the files for both architectures.
For example, you might need to create an x86_64 cluster from a hub cluster that is running on the ppc64le, aarch64, or s390x architecture. If you create the release image with both sets of files, the cluster creation succeeds because the new release image enables the OpenShift Container Platform release registry to provide a multi-architecture image manifest.
OpenShift Container Platform 4.12 and later supports multiple architectures by default. You can use the following clusterImageSet to provision a cluster. Replace 4.x.0 with the current version:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: 'true'
  name: img4.x.0-multi-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.0-multi
To create the release image for OpenShift Container Platform images that do not support multiple architectures, complete steps similar to the following example for your architecture type:
From the OpenShift Container Platform release registry, create a manifest list that includes x86_64, s390x, aarch64, and ppc64le release images.
Pull the manifest lists for both architectures in your environment from the Quay repository by running the following example commands. Replace 4.x.1 with the current version of the product:
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-s390x
podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64
Log in to your private repository where you maintain your images by running the following command. Replace <private-repo> with the path to your repository:
podman login <private-repo>
Add the release image manifest to your private repository by running the following commands that apply to your environment. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository:
podman push quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 <private-repo>/ocp-release:4.x.1-x86_64
podman push quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le <private-repo>/ocp-release:4.x.1-ppc64le
podman push quay.io/openshift-release-dev/ocp-release:4.x.1-s390x <private-repo>/ocp-release:4.x.1-s390x
podman push quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64 <private-repo>/ocp-release:4.x.1-aarch64
Create a manifest for the new information by running the following command:
podman manifest create mymanifest
Add references to the release images to the manifest list by running the following commands. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository:
podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-x86_64
podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-ppc64le
podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-s390x
podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-aarch64
Merge the list in your manifest list with the existing manifest by running the following command. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version:
podman manifest push mymanifest docker://<private-repo>/ocp-release:4.x.1
On the hub cluster, create a release image that references the manifest in your repository.
Create a YAML file that contains information that is similar to the following example. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: "true"
  name: img4.x.1-appsub
spec:
  releaseImage: <private-repo>/ocp-release:4.x.1
Run the following command on your hub cluster to apply the changes. Replace <file-name> with the name of the YAML file that you created in the previous step:
oc apply -f <file-name>.yaml
- Select the new release image when you create your OpenShift Container Platform cluster.
- If you deploy the managed cluster by using the Red Hat Advanced Cluster Management console, specify the architecture for the managed cluster in the Architecture field during the cluster creation process.
The creation process uses the merged release images to create the cluster.
1.5.2.1.4. Additional resources
- See the acm-hive-openshift-releases GitHub repository for the YAML files that reference the release images.
- See the cluster-image-set-controller GitHub repository to learn how to enable automatic updates of ClusterImageSets from a forked GitHub repository.
1.5.2.2. Maintaining a custom list of release images when connected
You might want to use the same release image for all of your clusters. To simplify, you can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images:
- Fork the acm-hive-openshift-releases GitHub repository.
- Add the YAML files for the images that you want available when you create a cluster. Add the images to the ./clusterImageSets/stable/ or ./clusterImageSets/fast/ directory by using the Git console or the terminal.
- Create a ConfigMap in the multicluster-engine namespace named cluster-image-set-git-repo. See the following example, but replace 2.x with 2.4:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-image-set-git-repo
  namespace: multicluster-engine
data:
  gitRepoUrl: <forked acm-hive-openshift-releases repository URL>
  gitRepoBranch: backplane-<2.x>
  gitRepoPath: clusterImageSets
  channel: <fast or stable>
You can retrieve the available YAML files from the main repository by merging its changes into your forked repository with the following procedure:
- Commit and merge your changes to your forked repository.
- To synchronize your list of fast release images after you clone the acm-hive-openshift-releases repository, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to fast.
- To synchronize and display the stable release images, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to stable.
After updating the ConfigMap, the list of available stable release images updates with the currently available images in about one minute.
You can use the following commands to list what is available and remove the defaults. Replace <clusterImageSet_NAME> with the correct name:
oc get clusterImageSets
oc delete clusterImageSet <clusterImageSet_NAME>
View the list of currently available release images in the console when you are creating a cluster.
For information regarding other fields available through the ConfigMap, view the cluster-image-set-controller GitHub repository README.
1.5.2.3. Maintaining a custom list of release images while disconnected
In some cases, you need to maintain a custom list of release images when the hub cluster has no Internet connection. You can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images while disconnected:
- While you are on a connected system, navigate to the acm-hive-openshift-releases GitHub repository to access the cluster image sets that are available.
- Copy the clusterImageSets directory to a system that can access the disconnected multicluster engine operator cluster. Add the mapping between the managed cluster and the disconnected repository with your cluster image sets by completing the following steps that fit your managed cluster:
  - For an OpenShift Container Platform managed cluster, see Configuring image registry repository mirroring for information about using your ImageContentSourcePolicy object to complete the mapping.
  - For a managed cluster that is not an OpenShift Container Platform cluster, use the ManagedClusterImageRegistry custom resource definition to override the location of the image sets. See Specifying registry images on managed clusters for import for information about how to override the cluster for the mapping.
- Add the YAML files for the images that you want available when you create a cluster by using the console or CLI to manually add the clusterImageSet YAML content.
- Modify the clusterImageSet YAML files for the remaining OpenShift Container Platform release images to reference the correct offline repository where you store the images. Your updates resemble the following example, where spec.releaseImage uses your offline image registry of the release image, and the release image is referenced by digest:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
  name: img<4.x.x>-x86-64-appsub
spec:
  releaseImage: IMAGE_REGISTRY_IPADDRESS_or_DNSNAME/REPO_PATH/ocp-release@sha256:073a4e46289be25e2a05f5264c8f1d697410db66b960c9ceeddebd1c61e58717
- Ensure that the images are loaded in the offline image registry that is referenced in the YAML file.
Obtain the image digest by running the following command:
oc adm release info <tagged_openshift_release_image> | grep "Pull From"
Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version. See the following example output:
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe
To learn more about the image tag and digest, see Referencing images in imagestreams.
Create each of the clusterImageSets by entering the following command for each YAML file:
oc create -f <clusterImageSet_FILE>
Replace clusterImageSet_FILE with the name of the cluster image set file. For example:
oc create -f img4.11.9-x86_64.yaml
After running this command for each resource that you want to add, the list of available release images is available.
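If you copied many cluster image set files, a short loop such as the following hypothetical sketch applies them in one pass; it assumes the clusterImageSets directory that you copied earlier is in your current working directory:
for file in clusterImageSets/fast/*.yaml; do
  oc create -f "$file"
done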
- Alternatively, you can paste the image URL directly in the create cluster console. Adding the image URL creates new clusterImageSets if they do not exist.
- View the list of currently available release images in the console when you are creating a cluster.
1.5.3. Host inventory introduction
The host inventory management and on-premises cluster installation are available using the multicluster engine operator central infrastructure management feature. Central infrastructure management runs the Assisted Installer (also called infrastructure operator) as an operator on the hub cluster.
You can use the console to create a host inventory, which is a pool of bare metal or virtual machines that you can use to create on-premises OpenShift Container Platform clusters. These clusters can be standalone, with dedicated machines for the control plane, or hosted control planes, where the control plane runs as pods on a hub cluster.
You can install standalone clusters by using the console, API, or GitOps by using Zero Touch Provisioning (ZTP). See Installing GitOps ZTP in a disconnected environment in the Red Hat OpenShift Container Platform documentation for more information on ZTP.
A machine joins the host inventory after booting with a Discovery Image. The Discovery Image is a Red Hat CoreOS live image that contains the following:
- An agent that performs discovery, validation, and installation tasks.
- The necessary configuration for reaching the service on the hub cluster, including the endpoint, token, and static network configuration, if applicable.
You generally have a single Discovery Image for each infrastructure environment, which is a set of hosts sharing a common set of properties. The InfraEnv custom resource definition represents this infrastructure environment and associated Discovery Image. The image used is based on your OpenShift Container Platform version, which determines the operating system version that is selected.
After the host boots and the agent contacts the service, the service creates a new Agent custom resource on the hub cluster representing that host. The Agent resources make up the host inventory.
You can install hosts in the inventory as OpenShift nodes later. The agent writes the operating system to the disk, along with the necessary configuration, and reboots the host.
Continue reading to learn more about host inventories and central infrastructure management:
- Enabling the central infrastructure management service
- Enabling central infrastructure management on Amazon Web Services
- Creating a host inventory by using the console
- Creating a host inventory by using the command line interface
- Configuring advanced networking for an infrastructure environment
- Adding hosts to the host inventory by using the Discovery Image
- Automatically adding bare metal hosts to the host inventory
- Managing your host inventory
- Creating a cluster in an on-premises environment
1.5.3.1. Enabling the central infrastructure management service
The central infrastructure management service is provided with the multicluster engine operator and deploys OpenShift Container Platform clusters. Central infrastructure management is deployed automatically when you enable the MultiClusterHub Operator on the hub cluster, but you have to enable the service manually.
1.5.3.1.1. Prerequisites
See the following prerequisites before enabling the central infrastructure management service:
- You must have a deployed hub cluster on OpenShift Container Platform 4.12 or later with the supported Red Hat Advanced Cluster Management for Kubernetes version.
- You need internet access for your hub cluster (connected), or a connection to an internal or mirror registry that has a connection to the internet (disconnected) to retrieve the required images for creating the environment.
- You must open the required ports for bare metal provisioning. See Ensuring required ports are open in the OpenShift Container Platform documentation.
- You need a bare metal host custom resource definition.
- You need an OpenShift Container Platform pull secret. See Using image pull secrets for more information.
- You need a configured default storage class.
- For disconnected environments only, complete the procedure for Clusters at the network far edge in the OpenShift Container Platform documentation.
1.5.3.1.2. Creating a bare metal host custom resource definition
You need a bare metal host custom resource definition before enabling the central infrastructure management service.
Check if you already have a bare metal host custom resource definition by running the following command:
oc get crd baremetalhosts.metal3.io
- If you have a bare metal host custom resource definition, the output shows the date when the resource was created.
If you do not have the resource, you receive an error that resembles the following:
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "baremetalhosts.metal3.io" not found
If you do not have a bare metal host custom resource definition, download the metal3.io_baremetalhosts.yaml file and apply the content by running the following command to create the resource:
oc apply -f metal3.io_baremetalhosts.yaml
1.5.3.1.3. Creating or modifying the Provisioning resource
You need a Provisioning resource before enabling the central infrastructure management service.
Check if you have the Provisioning resource by running the following command:
oc get provisioning
- If you already have a Provisioning resource, continue by Modifying the Provisioning resource.
- If you do not have a Provisioning resource, you receive a No resources found error. Continue by Creating the Provisioning resource.
1.5.3.1.3.1. Modifying the Provisioning resource
If you already have a Provisioning resource, you must modify the resource if your hub cluster is installed on one of the following platforms:
- Bare metal
- Red Hat OpenStack Platform
- VMware vSphere
- User-provisioned infrastructure (UPI) method and the platform is None
If your hub cluster is installed on a different platform, continue at Enabling central infrastructure management in disconnected environments or Enabling central infrastructure management in connected environments.
Modify the Provisioning resource to allow the Bare Metal Operator to watch all namespaces by running the following command:
oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
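To confirm that the patch was applied, a quick check such as the following can help; this is a sketch and not part of the documented procedure:
oc get provisioning provisioning-configuration -o jsonpath='{.spec.watchAllNamespaces}'
# Expected output: true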
1.5.3.1.3.2. Creating the Provisioning resource
If you do not have a Provisioning resource, complete the following steps:
Create the Provisioning resource by adding the following YAML content:
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: "Disabled"
  watchAllNamespaces: true
Apply the content by running the following command, where <file-name> is the file that contains the YAML content:
oc apply -f <file-name>.yaml
1.5.3.1.4. Enabling central infrastructure management in disconnected environments
To enable central infrastructure management in disconnected environments, complete the following steps:
Create a ConfigMap in the same namespace as your infrastructure operator to specify the values for ca-bundle.crt and registries.conf for your mirror registry. Your ConfigMap file might resemble the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <mirror-config>
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: |
    <certificate-content>
  registries.conf: |
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
    [[registry]]
      prefix = ""
      location = "registry.redhat.io/multicluster-engine"
      mirror-by-digest-only = true
      [[registry.mirror]]
        location = "mirror.registry.com:5000/multicluster-engine"
Note: You must set mirror-by-digest-only to true because release images are specified by using a digest.
Registries in the list of unqualified-search-registries are automatically added to an authentication ignore list in the PUBLIC_CONTAINER_REGISTRIES environment variable. The specified registries do not require authentication when the pull secret of the managed cluster is validated.
Create the AgentServiceConfig custom resource by saving the following YAML content in the agent_service_config.yaml file:
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <db_volume_size>
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <fs_volume_size>
  mirrorRegistryRef:
    name: <mirror_config> 1
  unauthenticatedRegistries:
  - <unauthenticated_registry> 2
  imageStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <img_volume_size> 3
  osImages:
  - openshiftVersion: "<ocp_version>" 4
    version: "<ocp_release_version>" 5
    url: "<iso_url>" 6
    cpuArchitecture: "x86_64"
1. Replace mirror_config with the name of the ConfigMap that contains your mirror registry configuration details.
2. Include the optional unauthenticated_registry parameter if you are using a mirror registry that does not require authentication. Entries on this list are not validated or required to have an entry in the pull secret.
3. Replace img_volume_size with the size of the volume for the imageStorage field, for example 10Gi per operating system image. The minimum value is 10Gi, but the recommended value is at least 50Gi. This value specifies how much storage is allocated for the images of the clusters. You need to allow 1 GB of image storage for each instance of Red Hat Enterprise Linux CoreOS that is running. You might need to use a higher value if there are many clusters and instances of Red Hat Enterprise Linux CoreOS.
4. Replace ocp_version with the OpenShift Container Platform version to install, for example, 4.13.
5. Replace ocp_release_version with the specific install version, for example, 49.83.202103251640-0.
6. Replace iso_url with the ISO URL, for example, https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/4.12.3/rhcos-4.12.3-x86_64-live.x86_64.iso. You can find other values at the 4.12.3 dependencies.
Important: If you are using the late binding feature and the spec.osImages releases in the AgentServiceConfig custom resource are version 4.13 or later, the OpenShift Container Platform release images that you use when creating your clusters must be version 4.13 or later. The Red Hat Enterprise Linux CoreOS images for version 4.13 and later are not compatible with images earlier than version 4.13.
You can verify that your central infrastructure management service is healthy by checking the assisted-service and assisted-image-service deployments and ensuring that their pods are ready and running.
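One way to perform that check, assuming the operator runs in the default multicluster-engine namespace; adjust the namespace if you installed the operator elsewhere:
oc get deployments assisted-service assisted-image-service -n multicluster-engine
oc get pods -n multicluster-engine | grep assisted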
1.5.3.1.5. Enabling central infrastructure management in connected environments
To enable central infrastructure management in connected environments, create the AgentServiceConfig custom resource by saving the following YAML content in the agent_service_config.yaml file:
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <db_volume_size> 1
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <fs_volume_size> 2
  imageStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <img_volume_size> 3
1. Replace db_volume_size with the volume size for the databaseStorage field, for example 10Gi. This value specifies how much storage is allocated for storing files such as database tables and database views for the clusters. The minimum value that is required is 1Gi. You might need to use a higher value if there are many clusters.
2. Replace fs_volume_size with the size of the volume for the filesystemStorage field, for example 200M per cluster and 2-3Gi per supported OpenShift Container Platform version. The minimum value that is required is 1Gi, but the recommended value is at least 100Gi. This value specifies how much storage is allocated for storing logs, manifests, and kubeconfig files for the clusters. You might need to use a higher value if there are many clusters.
3. Replace img_volume_size with the size of the volume for the imageStorage field, for example 10Gi per operating system image. The minimum value is 10Gi, but the recommended value is at least 50Gi. This value specifies how much storage is allocated for the images of the clusters. You need to allow 1 GB of image storage for each instance of Red Hat Enterprise Linux CoreOS that is running. You might need to use a higher value if there are many clusters and instances of Red Hat Enterprise Linux CoreOS.
Your central infrastructure management service is configured. You can verify that it is healthy by checking the assisted-service and assisted-image-service deployments and ensuring that their pods are ready and running.
1.5.3.1.6. Additional resources
- For additional information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation.
- See Using image pull secrets
1.5.3.2. Enabling central infrastructure management on Amazon Web Services
If you are running your hub cluster on Amazon Web Services and want to enable the central infrastructure management service, complete the following steps after Enabling the central infrastructure management service:
Make sure that you are logged in to the hub cluster and find the unique domain configured on the assisted-image-service by running the following command:
oc get routes --all-namespaces | grep assisted-image-service
Your domain might resemble the following example:
assisted-image-service-multicluster-engine.apps.<yourdomain>.com
Make sure that you are logged in to the hub cluster and create a new IngressController with a unique domain by using the NLB type parameter. See the following example:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: ingress-controller-with-nlb
  namespace: openshift-ingress-operator
spec:
  domain: nlb-apps.<domain>.com
  routeSelector:
    matchLabels:
      router-type: nlb
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
- Add <yourdomain> to the domain parameter in IngressController by replacing <domain> in nlb-apps.<domain>.com with <yourdomain>.
- Apply the new IngressController by running the following command:
oc apply -f ingresscontroller.yaml
Make sure that the value of the spec.domain parameter of the new IngressController is not in conflict with an existing IngressController by completing the following steps:
List all IngressControllers by running the following command:
oc get ingresscontroller -n openshift-ingress-operator
Run the following command on each of the IngressControllers, except the ingress-controller-with-nlb that you just created:
oc edit ingresscontroller <name> -n openshift-ingress-operator
If the spec.domain report is missing, add a default domain that matches all of the routes that are exposed in the cluster except nlb-apps.<domain>.com.
If the spec.domain report is provided, make sure that the nlb-apps.<domain>.com route is excluded from the specified range.
Run the following command to edit the assisted-image-service route to use the nlb-apps location:
oc edit route assisted-image-service -n <namespace>
The default namespace is where you installed the multicluster engine operator.
Add the following lines to the assisted-image-service route:
metadata:
  labels:
    router-type: nlb
  name: assisted-image-service
In the assisted-image-service route, find the URL value of spec.host. The URL might resemble the following example:
assisted-image-service-multicluster-engine.apps.<yourdomain>.com
- Replace apps in the URL with nlb-apps to match the domain configured in the new IngressController.
- To verify that the central infrastructure management service is enabled on Amazon Web Services, run the following command to verify that the pods are healthy:
oc get pods -n multicluster-engine | grep assist
- Create a new host inventory and ensure that the download URL uses the new nlb-apps URL.
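A quick way to confirm that the download URL of a new inventory uses the nlb-apps domain; this sketch assumes an InfraEnv named myinfraenv in the namespace that you created for it:
oc get infraenv myinfraenv -n <your_namespace> -o jsonpath='{.status.isoDownloadURL}'
# The host name in the returned URL should contain nlb-apps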
1.5.3.3. Creating a host inventory by using the console
You can create a host inventory (infrastructure environment) to discover physical or virtual machines that you can install your OpenShift Container Platform clusters on.
1.5.3.3.1. Prerequisites
- You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information.
1.5.3.3.2. Creating a host inventory
Complete the following steps to create a host inventory by using the console:
- From the console, navigate to Infrastructure > Host inventory and click Create infrastructure environment.
Add the following information to your host inventory settings:
- Name: A unique name for your infrastructure environment. Creating an infrastructure environment by using the console also creates a new namespace for the InfraEnv resource with the name you chose. If you create InfraEnv resources by using the command line interface and want to monitor the resources in the console, use the same name for your namespace and the InfraEnv.
- Network type: Specifies if the hosts you add to your infrastructure environment use DHCP or static networking. Static networking configuration requires additional steps.
- Location: Specifies the geographic location of the hosts. The geographic location can be used to define which data center the hosts are located in.
- Labels: Optional field where you can add labels to the hosts that are discovered with this infrastructure environment. The specified location is automatically added to the list of labels.
- Infrastructure provider credentials: Selecting an infrastructure provider credential automatically populates the pull secret and SSH public key fields with information in the credential. For more information, see Creating a credential for an on-premises environment.
- Pull secret: Your OpenShift Container Platform pull secret that enables you to access the OpenShift Container Platform resources. This field is automatically populated if you selected an infrastructure provider credential.
- SSH public key: The SSH key that enables the secure communication with the hosts. You can use it to connect to the host for troubleshooting. After installing a cluster, you can no longer connect to the host with the SSH key. The key is generally in your id_rsa.pub file. The default file path is ~/.ssh/id_rsa.pub. This field is automatically populated if you selected an infrastructure provider credential that contains the value of an SSH public key.
- If you want to enable proxy settings for your hosts, select the setting to enable it and enter the following information:
- HTTP Proxy URL: The URL of the proxy for HTTP requests.
- HTTPS Proxy URL: The URL of the proxy for HTTPS requests. The URL must start with HTTP. HTTPS is not supported. If you do not provide a value, your HTTP proxy URL is used by default for both HTTP and HTTPS connections.
- No Proxy domains: A list of domains separated by commas that you do not want to use the proxy with. Start a domain name with a period (.) to include all of the subdomains that are in that domain. Add an asterisk (*) to bypass the proxy for all destinations.
- Optionally add your own Network Time Protocol (NTP) sources by providing a comma separated list of IP or domain names of the NTP pools or servers.
If you need advanced configuration options that are not available in the console, continue to Creating a host inventory by using the command line interface.
If you do not need advanced configuration options, you can continue by configuring static networking, if required, and begin adding hosts to your infrastructure environment.
1.5.3.3.3. Accessing a host inventory
To access a host inventory, select Infrastructure > Host inventory in the console. Select your infrastructure environment from the list to view the details and hosts.
1.5.3.3.4. Additional resources
- See Enabling the central infrastructure management service
- See Creating a credential for an on-premises environment
- See Creating a host inventory by using the command line interface
If you completed this procedure as part of the process to configure hosted control planes on bare metal, your next steps are to complete the following procedures:
1.5.3.4. Creating a host inventory by using the command line interface
You can create a host inventory (infrastructure environment) to discover physical or virtual machines that you can install your OpenShift Container Platform clusters on. Use the command line interface instead of the console for automated deployments or for the following advanced configuration options:
- Automatically bind discovered hosts to an existing cluster definition
- Override the ignition configuration of the Discovery Image
- Control the iPXE behavior
- Modify kernel arguments for the Discovery Image
- Pass additional certificates that you want the host to trust during the discovery phase
- Select a Red Hat CoreOS version to boot for testing that is not the default option of the newest version
1.5.3.4.1. Prerequisite
- You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information.
1.5.3.4.2. Creating a host inventory
Complete the following steps to create a host inventory (infrastructure environment) by using the command line interface:
Log in to your hub cluster by running the following command:
oc login
Create a namespace for your resource.
Create the namespace.yaml file and add the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: <your_namespace> 1
1. Use the same name for your namespace and your infrastructure environment to monitor your inventory in the console.
Apply the YAML content by running the following command:
oc apply -f namespace.yaml
Create a Secret custom resource containing your OpenShift Container Platform pull secret.
Create the pull-secret.yaml file and add the following content:
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: pull-secret 1
  namespace: <your_namespace>
stringData:
  .dockerconfigjson: <your_pull_secret> 2
Apply the YAML content by running the following command:
oc apply -f pull-secret.yaml
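If you prefer not to paste the pull secret into a YAML file, the following sketch creates an equivalent secret directly from the downloaded pull secret file; the file path is a placeholder:
oc create secret generic pull-secret \
  -n <your_namespace> \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=<path_to_pull_secret_file>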
Create the infrastructure environment.
Create the infra-env.yaml file and add the following content. Replace values where needed:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: myinfraenv
  namespace: <your_namespace>
spec:
  proxy:
    httpProxy: <http://user:password@ipaddr:port>
    httpsProxy: <http://user:password@ipaddr:port>
    noProxy:
  additionalNTPSources:
  sshAuthorizedKey:
  pullSecretRef:
    name: <name>
  agentLabels:
    <key>: <value>
  nmStateConfigLabelSelector:
    matchLabels:
      <key>: <value>
  clusterRef:
    name: <cluster_name>
    namespace: <project_name>
  ignitionConfigOverride: '{"ignition": {"version": "3.1.0"}, …}'
  cpuArchitecture: x86_64
  ipxeScriptType: DiscoveryImageAlways
  kernelArguments:
    - operation: append
      value: audit=0
  additionalTrustBundle: <bundle>
  osImageVersion: <version>
Field | Optional or required | Description
---|---|---
proxy | Optional | Defines the proxy settings for agents and clusters that use the InfraEnv resource.
httpProxy | Optional | The URL of the proxy for HTTP requests. The URL must start with http.
httpsProxy | Optional | The URL of the proxy for HTTPS requests. The URL must start with http; HTTPS proxies are not supported.
noProxy | Optional | A list of domains and CIDRs separated by commas that you do not want to use the proxy with.
additionalNTPSources | Optional | A list of Network Time Protocol (NTP) sources (hostname or IP) to add to all hosts. They are added to NTP sources that are configured by using other options, such as DHCP.
sshAuthorizedKey | Optional | SSH public keys that are added to all hosts for use in debugging during the discovery phase. The discovery phase is when the host boots the Discovery Image.
pullSecretRef | Required | The name of the Kubernetes secret containing your pull secret.
agentLabels | Optional | Labels that are automatically added to the Agent resources that represent the hosts discovered with your InfraEnv.
nmStateConfigLabelSelector | Optional | Consolidates advanced network configuration such as static IPs, bridges, and bonds for the hosts. The host network configuration is specified in one or more NMStateConfig resources that carry a label matching this selector.
clusterRef | Optional | References an existing cluster definition. When set, discovered hosts are automatically bound to that cluster.
cpuArchitecture | Optional | Choose one of the following supported CPU architectures: x86_64, aarch64, ppc64le, or s390x. The default value is x86_64.
ignitionConfigOverride | Optional | Modifies the ignition configuration of the Red Hat CoreOS live image, such as adding files. Make sure to only use this field if you must.
ipxeScriptType | Optional | Causes the image service to always serve the iPXE script when set to the default value of DiscoveryImageAlways and when you are using iPXE.
kernelArguments | Optional | Allows modifying the kernel arguments for when the Discovery Image boots. Possible values for operation are append, replace, and delete.
additionalTrustBundle | Optional | A PEM-encoded X.509 certificate bundle, usually needed if the hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the hosts need to trust certificates for other purposes, such as container image registries. Hosts discovered by your InfraEnv trust the certificates in this bundle.
osImageVersion | Optional | The Red Hat CoreOS image version to use for your InfraEnv.
Apply the YAML content by running the following command:
oc apply -f infra-env.yaml
To verify that your host inventory is created, check the status with the following command:
oc describe infraenv myinfraenv -n <your_namespace>
See the following list of notable properties:
- conditions: The standard Kubernetes conditions indicating if the image was created successfully.
- isoDownloadURL: The URL to download the Discovery Image.
- createdTime: The time at which the image was last created. If you modify the InfraEnv, make sure that the timestamp has been updated before downloading a new image.
Note: If you modify the InfraEnv resource, make sure that the InfraEnv has created a new Discovery Image by looking at the createdTime property. If you already booted hosts, boot them again with the latest Discovery Image.
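A quick way to check both properties, assuming the InfraEnv name and namespace from the earlier example; this sketch is not part of the documented procedure:
oc get infraenv myinfraenv -n <your_namespace> \
  -o jsonpath='{.status.createdTime}{"\n"}{.status.isoDownloadURL}{"\n"}'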
You can continue by configuring static networking, if required, and begin adding hosts to your infrastructure environment.
1.5.3.4.3. Additional resources
1.5.3.5. Configuring advanced networking for an infrastructure environment
For hosts that require networking beyond DHCP on a single interface, you must configure advanced networking. The required configuration includes creating one or more instances of the NMStateConfig resource that describes the networking for one or more hosts.
Each NMStateConfig resource must contain a label that matches the nmStateConfigLabelSelector on your InfraEnv resource. See Creating a host inventory by using the command line interface to learn more about the nmStateConfigLabelSelector.
The Discovery Image contains the network configurations defined in all referenced NMStateConfig resources. After booting, each host compares each configuration to its network interfaces and applies the appropriate configuration.
1.5.3.5.1. Prerequisites
- You must enable the central infrastructure management service.
- You must create a host inventory.
1.5.3.5.2. Configuring advanced networking by using the command line interface
To configure advanced networking for your infrastructure environment by using the command line interface, complete the following steps:
Create a file named nmstateconfig.yaml and add content that is similar to the following template. Replace values where needed:
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mynmstateconfig
  namespace: <your-infraenv-namespace>
  labels:
    some-key: <some-value>
spec:
  config:
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:14
        ipv4:
          enabled: true
          address:
            - ip: 192.168.111.30
              prefix-length: 24
          dhcp: false
      - name: eth1
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:15
        ipv4:
          enabled: true
          address:
            - ip: 192.168.140.30
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.126.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.111.1
          next-hop-interface: eth1
          table-id: 254
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.140.1
          next-hop-interface: eth1
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: "02:00:00:80:12:14"
    - name: "eth1"
      macAddress: "02:00:00:80:12:15"
Field | Optional or required | Description
---|---|---
name | Required | Use a name that is relevant to the host or hosts you are configuring.
namespace | Required | The namespace must match the namespace of your InfraEnv resource.
labels | Required | Add one or more labels that match the nmStateConfigLabelSelector on your InfraEnv resource.
config | Optional | Describes the network settings in NMState format, including interfaces, DNS resolvers, and routes.
interfaces | Optional | Describes the mapping between interface names found in the specified NMState configuration and the MAC addresses found on the hosts.
Note: The Image Service automatically creates a new image when you update any InfraEnv properties or change the NMStateConfig resources that match its label selector. If you add NMStateConfig resources after creating the InfraEnv resource, make sure that the InfraEnv creates a new Discovery Image by checking the createdTime property in your InfraEnv. If you already booted hosts, boot them again with the latest Discovery Image.
Apply the YAML content by running the following command:
oc apply -f nmstateconfig.yaml
1.5.3.5.3. Additional resources
1.5.3.6. Adding hosts to the host inventory by using the Discovery Image
After creating your host inventory (infrastructure environment) you can discover your hosts and add them to your inventory. To add hosts to your inventory, choose a method to download an ISO and attach it to each server. For example, you can attach the ISO by using virtual media or write the ISO to a USB drive.
Important: To prevent the installation from failing, keep the Discovery ISO media connected to the device during the installation process and set each host to boot from the device one time.
1.5.3.6.1. Prerequisites
- You must enable the central infrastructure management service. See Enabling the central infrastructure management service for more information.
- You must create a host inventory. See Creating a host inventory by using the console for more information.
1.5.3.6.2. Adding hosts by using the console
Download the ISO by completing the following steps:
- Select Infrastructure > Host inventory in the console.
- Select your infrastructure environment from the list.
- Click Add hosts and select With Discovery ISO.
You now see a URL to download the ISO. Booted hosts appear in the host inventory table. Hosts might take a few minutes to appear. You must approve each host before you can use it. You can select hosts from the inventory table by clicking Actions and selecting Approve.
1.5.3.6.3. Adding hosts by using the command line interface
You can see the URL to download the ISO in the isoDownloadURL property in the status of your InfraEnv resource. See Creating a host inventory by using the command line interface for more information about the InfraEnv resource.
Each booted host creates an Agent
resource in the same namespace. You must approve each host before you can use it.
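For example, the following commands show one way to retrieve the ISO URL and approve a discovered host from the command line. The resource names and namespace are placeholders for your environment:
# Get the Discovery ISO download URL from the InfraEnv status:
oc get infraenv <infraenv-name> -n <infraenv-namespace> -o jsonpath='{.status.isoDownloadURL}'

# List the Agent resources that booted hosts created:
oc get agents -n <infraenv-namespace>

# Approve a discovered host so that it can be used:
oc patch agent <agent-name> -n <infraenv-namespace> --type merge -p '{"spec":{"approved":true}}'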
1.5.3.6.4. Additional resources
1.5.3.7. Automatically adding bare metal hosts to the host inventory
After creating your host inventory (infrastructure environment) you can discover your hosts and add them to your inventory. You can automate booting the Discovery Image of your infrastructure environment by making the bare metal operator communicate with the Baseboard Management Controller (BMC) of each bare metal host by creating a BareMetalHost
resource and associated BMC secret for each host. The automation is set by a label on the BareMetalHost
that references your infrastructure environment.
The automation performs the following actions:
- Boots each bare metal host with the Discovery Image represented by the infrastructure environment
- Reboots each host with the latest Discovery Image in case the infrastructure environment or any associated network configurations are updated
-
Associates each
Agent
resource with its corresponding BareMetalHost
resource upon discovery -
Updates
Agent
resource properties based on information from the BareMetalHost
, such as hostname, role, and installation disk -
Approves the
Agent
for use as a cluster node
1.5.3.7.1. Prerequisites
- You must enable the central infrastructure management service.
- You must create a host inventory.
1.5.3.7.2. Adding bare metal hosts by using the console
Complete the following steps to automatically add bare metal hosts to your host inventory by using the console:
- Select Infrastructure > Host inventory in the console.
- Select your infrastructure environment from the list.
- Click Add hosts and select With BMC Form.
- Add the required information and click Create.
1.5.3.7.3. Adding bare metal hosts by using the command line interface
Complete the following steps to automatically add bare metal hosts to your host inventory by using the command line interface.
Create a BMC secret by applying the following YAML content and replacing values where needed:
apiVersion: v1
kind: Secret
metadata:
  name: <bmc-secret-name>
  namespace: <your_infraenv_namespace> 1
type: Opaque
data:
  username: <username>
  password: <password>
- 1
- The namespace must be the same as the namespace of your
InfraEnv
.
Create a bare metal host by applying the following YAML content and replacing values where needed:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: <bmh-name>
  namespace: <your_infraenv_namespace> 1
  annotations:
    inspect.metal3.io: disabled
  labels:
    infraenvs.agent-install.openshift.io: <your-infraenv> 2
spec:
  online: true
  automatedCleaningMode: disabled 3
  bootMACAddress: <your-mac-address> 4
  bmc:
    address: <machine-address> 5
    credentialsName: <bmc-secret-name> 6
  rootDeviceHints:
    deviceName: /dev/sda 7
- 1
- The namespace must be the same as the namespace of your
InfraEnv
. - 2
- The name must match the name of your
InfraEnv
and exist in the same namespace. - 3
- If you do not set a value, the
metadata
value is automatically used. - 4
- Make sure the MAC address matches the MAC address of one of the interfaces on your host.
- 5
- Use the address of the BMC. See Port access for the out-of-band management IP address for more information.
- 6
- Make sure that the
credentialsName
value matches the name of the BMC secret you created. - 7
- Optional: Select the installation disk. See The BareMetalHost spec for the available root device hints. After the host is booted with the Discovery Image and the corresponding
Agent
resource is created, the installation disk is set according to this hint.
After turning on the host, the image starts downloading. This might take a few minutes. When the host is discovered, an Agent
custom resource is created automatically.
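The following commands are a minimal sketch for watching the discovery progress; the name and namespace placeholders must match your BareMetalHost and InfraEnv resources:
# Watch the bare metal host while it powers on and boots the Discovery Image:
oc get baremetalhost <bmh-name> -n <your_infraenv_namespace> -w

# After discovery, verify that an Agent custom resource was created automatically:
oc get agents -n <your_infraenv_namespace>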
1.5.3.7.4. Disabling converged flow
Converged flow is enabled by default. If your hosts do not appear, you might need to temporarily disable converged flow. To disable converged flow, complete the following steps:
Create the following config map on your hub cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-assisted-service-config
  namespace: multicluster-engine
data:
  ALLOW_CONVERGED_FLOW: "false"
Note: When you set ALLOW_CONVERGED_FLOW to "false", you also disable any features enabled by the Ironic Python Agent.
Apply the config map by annotating the AgentServiceConfig resource with the following command:
oc annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config
1.5.3.7.5. Additional resources
- For additional information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation.
- To learn about the required ports for using a bare metal host, see Port access for the out-of-band management IP address in the OpenShift Container Platform documentation.
- To learn about root device hints, see The BareMetalHost spec in the OpenShift Container Platform documentation.
- See Using image pull secrets
- See Creating a credential for an on-premises environment
- To learn more about converged flow, see Managed cluster stuck in Pending status after deployment.
1.5.3.8. Managing your host inventory
You can manage your host inventory and edit existing hosts by using the console, or by using the command line interface and editing the Agent
resource.
1.5.3.8.1. Managing your host inventory by using the console
Each host that you successfully boot with the Discovery ISO appears as a row in your host inventory. You can use the console to edit and manage your hosts. If you booted the host manually and are not using the bare metal operator automation, you must approve the host in the console before you can use it. Hosts that are ready to be installed as OpenShift nodes have the Available
status.
1.5.3.8.2. Managing your host inventory by using the command line interface
An Agent
resource represents each host. You can set the following properties in an Agent
resource:
clusterDeploymentName
Set this property to the namespace and name of the
ClusterDeployment
you want to use if you want to install the host as a node in a cluster.
role
Optional: Sets the role for the host in the cluster. Possible values are master, worker, and auto-assign. The default value is auto-assign.
hostname
Sets the host name for the host. Optional if the host is automatically assigned a valid host name, for example by using DHCP.
approved
Indicates if the host can be installed as an OpenShift node. This property is a boolean with a default value of
False
. If you booted the host manually and are not using the bare metal operator automation, you must set this property toTrue
before installing the host.installation_disk_id
The ID of the installation disk you chose that is visible in the inventory of the host.
installerArgs
A JSON-formatted string containing overrides for the coreos-installer arguments of the host. You can use this property to modify kernel arguments. See the following example syntax:
["--append-karg", "ip=192.0.2.2::192.0.2.254:255.255.255.0:core0.example.com:enp1s0:none", "--save-partindex", "4"]
ignitionConfigOverrides
A JSON-formatted string containing overrides for the ignition configuration of the host. You can use this property to add files to the host by using ignition. See the following example syntax:
{"ignition": "version": "3.1.0"}, "storage": {"files": [{"path": "/tmp/example", "contents": {"source": "data:text/plain;base64,aGVscGltdHJhcHBlZGluYXN3YWdnZXJzcGVj"}}]}}
nodeLabels
A list of labels that are applied to the node after the host is installed.
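As a minimal sketch, the following commands show how you might set some of these spec properties with oc patch. The Agent name, namespace, and values are placeholders:
# Set the host name and role for a discovered host:
oc patch agent <agent-name> -n <infraenv-namespace> --type merge \
  -p '{"spec":{"hostname":"worker-0.example.com","role":"worker"}}'

# Bind the host to a cluster by referencing the ClusterDeployment name and namespace:
oc patch agent <agent-name> -n <infraenv-namespace> --type merge \
  -p '{"spec":{"clusterDeploymentName":{"name":"<cluster-name>","namespace":"<cluster-namespace>"}}}'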
The status
of an Agent
resource has the following properties:
role
Sets the role for the host in the cluster. If you previously set a
role
in the Agent
resource, the value appears in the status.
inventory
Contains host properties that the agent running on the host discovers.
progress
The host installation progress.
ntpSources
The configured Network Time Protocol (NTP) sources of the host.
conditions
Contains the following standard Kubernetes conditions with a
True
orFalse
value:-
SpecSynced:
True
if all specified properties are successfully applied. False
if some error was encountered. -
Connected:
True
if the agent connection to the installation service is not obstructed. False
if the agent has not contacted the installation service in some time. -
RequirementsMet:
True
if the host is ready to begin the installation. -
Validated:
True
if all host validations pass. -
Installed:
True
if the host is installed as an OpenShift node. -
Bound:
True
if the host is bound to a cluster. -
Cleanup:
False
if the request to delete the Agent
resource fails.
debugInfo
Contains URLs for downloading installation logs and events.
validationsInfo
Contains information about validations that the agent runs after the host is discovered to ensure that the installation is successful. Troubleshoot if the value is
False
.installation_disk_id
The ID of the installation disk you chose that is visible in the inventory of the host.
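For example, you can read these status properties with oc. The names are placeholders, and the jsonpath expression is one possible way to query a condition:
# Check whether the host is ready to begin the installation:
oc get agent <agent-name> -n <infraenv-namespace> \
  -o jsonpath='{.status.conditions[?(@.type=="RequirementsMet")].status}'

# Review the full status, including inventory, progress, and validationsInfo:
oc get agent <agent-name> -n <infraenv-namespace> -o yaml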
1.5.3.8.3. Additional resources
1.5.4. Cluster creation
Learn how to create Red Hat OpenShift Container Platform clusters across cloud providers with multicluster engine operator.
multicluster engine operator uses the Hive operator that is provided with OpenShift Container Platform to provision clusters for all providers except the on-premises clusters and hosted control planes. When provisioning the on-premises clusters, multicluster engine operator uses the central infrastructure management and Assisted Installer function that are provided with OpenShift Container Platform. The hosted clusters for hosted control planes are provisioned by using the HyperShift operator.
- Configuring additional manifests during cluster creation
- Creating a cluster on Amazon Web Services
- Creating a cluster on Amazon Web Services GovCloud
- Creating a cluster on Microsoft Azure
- Creating a cluster on Google Cloud Platform
- Creating a cluster on VMware vSphere
- Creating a cluster on Red Hat OpenStack Platform
- Creating a cluster on Red Hat Virtualization (deprecated)
- Creating a cluster in an on-premises environment
- Configuring AgentClusterInstall proxy
- Hosted control planes
1.5.4.1. Creating a cluster with the CLI
The multicluster engine for Kubernetes operator uses internal Hive components to create Red Hat OpenShift Container Platform clusters. See the following information to learn how to create clusters.
1.5.4.1.1. Prerequisites
Before creating a cluster, you must clone the clusterImageSets repository and apply it to your hub cluster. See the following steps:
Run the following commands to clone the repository, replacing <2.x> with 2.4:
git clone https://github.com/stolostron/acm-hive-openshift-releases.git
cd acm-hive-openshift-releases
git checkout origin/backplane-<2.x>
Run the following command to apply it to your hub cluster:
find clusterImageSets/fast -type d -exec oc apply -f {} \; 2> /dev/null
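You can verify that the cluster image sets were applied; the following command is a simple check:
# List the available ClusterImageSet resources on the hub cluster:
oc get clusterimagesets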
Select the Red Hat OpenShift Container Platform release images when you create a cluster.
Note: If you use the Nutanix platform, be sure to use x86_64
architecture for the releaseImage
in the ClusterImageSet
resource and set the visible
label value to 'true'
. See the following example:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: stable
    visible: 'true'
  name: img4.x.47-x86-64-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.47-x86_64
1.5.4.1.2. Create a cluster with ClusterDeployment
A ClusterDeployment
is a Hive custom resource that is used to control the lifecycle of a cluster.
Follow the Using Hive documentation to create the ClusterDeployment
custom resource and create an individual cluster.
1.5.4.1.3. Create a cluster with ClusterPool
A ClusterPool
is also a Hive custom resource that is used to create multiple clusters.
Follow the Cluster Pools documentation to create a cluster with the Hive ClusterPool
API.
1.5.4.2. Configuring additional manifests during cluster creation
You can configure additional Kubernetes resource manifests during the installation process of creating your cluster. This can help if you need to configure additional manifests for scenarios such as configuring networking or setting up a load balancer.
Before you create your cluster, you need to add a reference to the ClusterDeployment
resource that specifies a ConfigMap
that contains the additional resource manifests.
Note: The ClusterDeployment
resource and the ConfigMap
must be in the same namespace. The following examples show how your content might look.
ConfigMap with resource manifests
ConfigMap that contains a manifest with another ConfigMap resource. The resource manifest ConfigMap can contain multiple keys with resource configurations added in a data.<resource_name>.yaml pattern.
kind: ConfigMap
apiVersion: v1
metadata:
  name: <my-baremetal-cluster-install-manifests>
  namespace: <mynamespace>
data:
  99_metal3-config.yaml: |
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: metal3-config
      namespace: openshift-machine-api
    data:
      http_port: "6180"
      provisioning_interface: "enp1s0"
      provisioning_ip: "172.00.0.3/24"
      dhcp_range: "172.00.0.10,172.00.0.100"
      deploy_kernel_url: "http://172.00.0.3:6180/images/ironic-python-agent.kernel"
      deploy_ramdisk_url: "http://172.00.0.3:6180/images/ironic-python-agent.initramfs"
      ironic_endpoint: "http://172.00.0.3:6385/v1/"
      ironic_inspector_endpoint: "http://172.00.0.3:5150/v1/"
      cache_url: "http://192.168.111.1/images"
      rhcos_image_url: "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.3/43.81.201911192044.0/x86_64/rhcos-43.81.201911192044.0-openstack.x86_64.qcow2.gz"
ClusterDeployment with the resource manifest ConfigMap referenced
The resource manifest ConfigMap is referenced under spec.provisioning.manifestsConfigMapRef.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: <my-baremetal-cluster>
  namespace: <mynamespace>
  annotations:
    hive.openshift.io/try-install-once: "true"
spec:
  baseDomain: test.example.com
  clusterName: <my-baremetal-cluster>
  controlPlaneConfig:
    servingCertificates: {}
  platform:
    baremetal:
      libvirtSSHPrivateKeySecretRef:
        name: provisioning-host-ssh-private-key
  provisioning:
    installConfigSecretRef:
      name: <my-baremetal-cluster-install-config>
    sshPrivateKeySecretRef:
      name: <my-baremetal-hosts-ssh-private-key>
    manifestsConfigMapRef:
      name: <my-baremetal-cluster-install-manifests>
    imageSetRef:
      name: <my-clusterimageset>
    sshKnownHosts:
      - "10.1.8.90 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXvVVVKUYVkuyvkuygkuyTCYTytfkufTYAAAAIbmlzdHAyNTYAAABBBKWjJRzeUVuZs4yxSy4eu45xiANFIIbwE3e1aPzGD58x/NX7Yf+S8eFKq4RrsfSaK2hVJyJjvVIhUsU9z2sBJP8="
  pullSecretRef:
    name: <my-baremetal-cluster-pull-secret>
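As a minimal sketch, you might apply the two resources in order, with the ConfigMap first so that the reference resolves when the ClusterDeployment is created. The file names are placeholders:
# Create the ConfigMap that holds the additional manifests:
oc apply -f my-baremetal-cluster-install-manifests.yaml

# Create the ClusterDeployment that references the ConfigMap:
oc apply -f my-baremetal-cluster-clusterdeployment.yaml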
1.5.4.3. Creating a cluster on Amazon Web Services
You can use the multicluster engine operator console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).
When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on AWS in the OpenShift Container Platform documentation for more information about the process.
1.5.4.3.1. Prerequisites
See the following prerequisites before creating a cluster on AWS:
- You must have a deployed hub cluster.
- You need an AWS credential. See Creating a credential for Amazon Web Services for more information.
- You need a configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain.
- You must have Amazon Web Services (AWS) login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials.
You must have an OpenShift Container Platform image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.
1.5.4.3.2. Creating your AWS cluster
See the following important information about creating an AWS cluster:
-
When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the
install-config.yaml
file content in the panel. You can edit the YAML file with your custom settings, if you have any updates. - When you create a cluster, the controller creates a namespace for the cluster and the resources. Ensure that you include only resources for that cluster instance in that namespace.
- Destroying the cluster deletes the namespace and all of the resources in it.
-
If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have
cluster-admin
privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin
permissions. -
If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with
clusterset-admin
permissions to a cluster set if you do not have any cluster set options to select. -
Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a
ManagedClusterSet
, it is automatically added to the default
managed cluster set. - If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster.
- The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. Select the image from the list of images that are available. If the image that you want to use is not available, you can enter the URL to the image that you want to use.
The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:
- Region: Specify the region where you want the node pool.
- CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
- Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- Instance type: Specify the instance type for your control plane node. You can change the type and size of your instance after it is created.
- Root storage: Specify the amount of root storage to allocate for the cluster.
You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. This can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields:
- Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created.
- Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool.
- Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool.
- Networking details are required for your cluster, and multiple networks are required for using IPv6. You can add an additional network by clicking Add network.
Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:
-
HTTP proxy: Specify the URL that should be used as a proxy for
HTTP
traffic. -
HTTPS proxy: Specify the secure proxy URL that should be used for
HTTPS
traffic. If no value is provided, the same value as the HTTP Proxy URL
is used for both HTTP
and HTTPS
. -
No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period
.
to include all of the subdomains that are in that domain. Add an asterisk*
to bypass the proxy for all destinations. - Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
1.5.4.3.3. Creating your cluster with the console
To create a new cluster, see the following procedure. If you have an existing cluster that you want to import instead, see Cluster import.
Note: You do not have to run the oc
command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.
- Navigate to Infrastructure > Clusters.
- On the Clusters page, click Create cluster and complete the steps in the console.
- Optional: Select YAML: On to view content updates as you enter the information in the console.
If you need to create a credential, see Creating a credential for Amazon Web Services for more information.
The name of the cluster is used in the hostname of the cluster.
If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.
1.5.4.3.4. Additional resources
- The AWS private configuration information is used when you are creating an AWS GovCloud cluster. See Creating a cluster on Amazon Web Services GovCloud for information about creating a cluster in that environment.
- See Configuring an AWS account for more information.
- See Release images for more information about release images.
- Find more information about supported instance types by visiting your cloud provider sites, such as AWS General purpose instances.
1.5.4.4. Creating a cluster on Amazon Web Services GovCloud
You can use the console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS) or on AWS GovCloud. This procedure explains how to create a cluster on AWS GovCloud. See Creating a cluster on Amazon Web Services for the instructions for creating a cluster on AWS.
AWS GovCloud provides cloud services that meet additional requirements that are necessary to store government documents on the cloud. When you create a cluster on AWS GovCloud, you must complete additional steps to prepare your environment.
When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing a cluster on AWS into a government region in the OpenShift Container Platform documentation for more information about the process. The following sections provide the steps for creating a cluster on AWS GovCloud:
1.5.4.4.1. Prerequisites
You must have the following prerequisites before creating an AWS GovCloud cluster:
- You must have AWS login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials.
- You need an AWS credential. See Creating a credential for Amazon Web Services for more information.
- You need a configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain.
- You must have an OpenShift Container Platform image pull secret. See Using image pull secrets.
- You must have an Amazon Virtual Private Cloud (VPC) with an existing Red Hat OpenShift Container Platform cluster for the hub cluster. This VPC must be different from the VPCs that are used for the managed cluster resources or the managed cluster service endpoints.
- You need a VPC where the managed cluster resources are deployed. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster service endpoints.
- You need one or more VPCs that provide the managed cluster service endpoints. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster resources.
- Ensure that the IP addresses of the VPCs that are specified by Classless Inter-Domain Routing (CIDR) do not overlap.
-
You need a
HiveConfig
custom resource that references a credential within the Hive namespace. This custom resource must have access to create resources on the VPC that you created for the managed cluster service endpoints.
Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the multicluster engine operator console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.
1.5.4.4.2. Configure Hive to deploy on AWS GovCloud
While creating a cluster on AWS GovCloud is almost identical to creating a cluster on standard AWS, you have to complete some additional steps to prepare an AWS PrivateLink for the cluster on AWS GovCloud.
1.5.4.4.2.1. Create the VPCs for resources and endpoints
As listed in the prerequisites, two VPCs are required in addition to the VPC that contains the hub cluster. See Create a VPC in the Amazon Web Services documentation for specific steps for creating a VPC.
- Create a VPC for the managed cluster with private subnets.
- Create one or more VPCs for the managed cluster service endpoints with private subnets. Each VPC in a region has a limit of 255 VPC endpoints, so you need multiple VPCs to support more than 255 clusters in that region.
For each VPC, create subnets in all of the supported availability zones of the region. Each subnet must have at least 255 usable IP addresses because of the controller requirements.
The following example shows how you might structure subnets for VPCs that have 6 availability zones in the
us-gov-east-1
region:
vpc-1 (us-gov-east-1) : 10.0.0.0/20
  subnet-11 (us-gov-east-1a): 10.0.0.0/23
  subnet-12 (us-gov-east-1b): 10.0.2.0/23
  subnet-13 (us-gov-east-1c): 10.0.4.0/23
  subnet-14 (us-gov-east-1d): 10.0.8.0/23
  subnet-15 (us-gov-east-1e): 10.0.10.0/23
  subnet-16 (us-gov-east-1f): 10.0.12.0/23
vpc-2 (us-gov-east-1) : 10.0.16.0/20
  subnet-21 (us-gov-east-1a): 10.0.16.0/23
  subnet-22 (us-gov-east-1b): 10.0.18.0/23
  subnet-23 (us-gov-east-1c): 10.0.20.0/23
  subnet-24 (us-gov-east-1d): 10.0.22.0/23
  subnet-25 (us-gov-east-1e): 10.0.24.0/23
  subnet-26 (us-gov-east-1f): 10.0.28.0/23
- Ensure that all of the hub environments (hub cluster VPCs) have network connectivity to the VPCs that you created for the VPC endpoints, for example by using peering or transit gateways, and ensure that all DNS settings are enabled.
- Collect a list of VPCs that are needed to resolve the DNS setup for the AWS PrivateLink, which is required for the AWS GovCloud connectivity. This includes at least the VPC of the multicluster engine operator instance that you are configuring, and can include the list of all of the VPCs where various Hive controllers exist.
1.5.4.4.2.2. Configure the security groups for the VPC endpoints
Each VPC endpoint in AWS has a security group attached to control access to the endpoint. When Hive creates a VPC endpoint, it does not specify a security group. The default security group of the VPC is attached to the VPC endpoint. The default security group of the VPC must have rules to allow traffic where VPC endpoints are created from the Hive installer pods. See Control access to VPC endpoints using endpoint policies in the AWS documentation for details.
For example, if Hive is running in hive-vpc (10.1.0.0/16)
, there must be a rule in the default security group of the VPC where the VPC endpoint is created that allows ingress from 10.1.0.0/16
.
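The following AWS CLI command is one hedged example of adding such a rule; the security group ID is a placeholder for the default security group of the endpoint VPC:
# Allow all ingress traffic from the Hive VPC CIDR to the default security group
# of the VPC where the VPC endpoint is created:
aws ec2 authorize-security-group-ingress \
  --group-id <default-security-group-id> \
  --protocol all \
  --cidr 10.1.0.0/16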
1.5.4.4.2.3. Set permissions for AWS PrivateLink
You need multiple credentials to configure the AWS PrivateLink. The required permissions for these credentials depend on the type of credential.
The credentials for ClusterDeployment require the following permissions:
ec2:CreateVpcEndpointServiceConfiguration
ec2:DescribeVpcEndpointServiceConfigurations
ec2:ModifyVpcEndpointServiceConfiguration
ec2:DescribeVpcEndpointServicePermissions
ec2:ModifyVpcEndpointServicePermissions
ec2:DeleteVpcEndpointServiceConfigurations
The credentials that are specified in the HiveConfig custom resource for the endpoint VPCs account (.spec.awsPrivateLink.credentialsSecretRef) require the following permissions:
ec2:DescribeVpcEndpointServices
ec2:DescribeVpcEndpoints
ec2:CreateVpcEndpoint
ec2:CreateTags
ec2:DescribeNetworkInterfaces
ec2:DescribeVPCs
ec2:DeleteVpcEndpoints
route53:CreateHostedZone
route53:GetHostedZone
route53:ListHostedZonesByVPC
route53:AssociateVPCWithHostedZone
route53:DisassociateVPCFromHostedZone
route53:CreateVPCAssociationAuthorization
route53:DeleteVPCAssociationAuthorization
route53:ListResourceRecordSets
route53:ChangeResourceRecordSets
route53:DeleteHostedZone
The credentials that are specified in the HiveConfig custom resource for associating VPCs with the private hosted zone (.spec.awsPrivateLink.associatedVPCs[$idx].credentialsSecretRef) require the following permissions in the account where the VPC is located:
route53:AssociateVPCWithHostedZone
route53:DisassociateVPCFromHostedZone
ec2:DescribeVPCs
Ensure that there is a credential secret within the Hive namespace on the hub cluster.
The HiveConfig
custom resource needs to reference a credential within the Hive namespace that has permissions to create resources in a specific provided VPC. If the credential that you are using to provision an AWS cluster in AWS GovCloud is already in the Hive namespace, then you do not need to create another one. If the credential that you are using to provision an AWS cluster in AWS GovCloud is not already in the Hive namespace, you can either replace your current credential or create an additional credential in the Hive namespace.
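For example, you might create an AWS credential secret in the hive namespace similar to the following sketch. The secret name and key values are placeholders, and the aws_access_key_id and aws_secret_access_key key names follow the convention that Hive credential secrets use:
# Create an AWS GovCloud credential secret in the hive namespace:
oc create secret generic <my-govcloud-credentials> -n hive \
  --from-literal=aws_access_key_id=<access-key-id> \
  --from-literal=aws_secret_access_key=<secret-access-key>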
The HiveConfig
custom resource needs to include the following content:
- An AWS GovCloud credential that has the required permissions to provision resources for the given VPC.
The addresses of the VPCs for the OpenShift Container Platform cluster installation, as well as the service endpoints for the managed cluster.
Best practice: Use different VPCs for the OpenShift Container Platform cluster installation and the service endpoints.
The following example shows the credential content:
spec:
  awsPrivateLink:
    ## The list of inventory of VPCs that can be used to create VPC
    ## endpoints by the controller.
    endpointVPCInventory:
    - region: us-east-1
      vpcID: vpc-1
      subnets:
      - availabilityZone: us-east-1a
        subnetID: subnet-11
      - availabilityZone: us-east-1b
        subnetID: subnet-12
      - availabilityZone: us-east-1c
        subnetID: subnet-13
      - availabilityZone: us-east-1d
        subnetID: subnet-14
      - availabilityZone: us-east-1e
        subnetID: subnet-15
      - availabilityZone: us-east-1f
        subnetID: subnet-16
    - region: us-east-1
      vpcID: vpc-2
      subnets:
      - availabilityZone: us-east-1a
        subnetID: subnet-21
      - availabilityZone: us-east-1b
        subnetID: subnet-22
      - availabilityZone: us-east-1c
        subnetID: subnet-23
      - availabilityZone: us-east-1d
        subnetID: subnet-24
      - availabilityZone: us-east-1e
        subnetID: subnet-25
      - availabilityZone: us-east-1f
        subnetID: subnet-26
    ## The credentialsSecretRef points to a secret with permissions to create
    ## the resources in the account where the inventory of VPCs exist.
    credentialsSecretRef:
      name: <hub-account-credentials-secret-name>
    ## A list of VPCs where various mce clusters exist.
    associatedVPCs:
    - region: region-mce1
      vpcID: vpc-mce1
      credentialsSecretRef:
        name: <credentials-that-have-access-to-account-where-MCE1-VPC-exists>
    - region: region-mce2
      vpcID: vpc-mce2
      credentialsSecretRef:
        name: <credentials-that-have-access-to-account-where-MCE2-VPC-exists>
You can include a VPC from all the regions where AWS PrivateLink is supported in the endpointVPCInventory
list. The controller selects a VPC that meets the requirements for the ClusterDeployment.
For more information, refer to the Hive documentation.
1.5.4.4.3. Creating your cluster with the console
To create a cluster from the console, navigate to Infrastructure > Clusters > Create cluster > AWS > Standalone and complete the steps in the console.
Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.
The credential that you select must have access to the resources in an AWS GovCloud region, if you create an AWS GovCloud cluster. You can use an AWS GovCloud secret that is already in the Hive namespace if it has the required permissions to deploy a cluster. Existing credentials are displayed in the console. If you need to create a credential, see Creating a credential for Amazon Web Services for more information.
The name of the cluster is used in the hostname of the cluster.
Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.
Tip: Select YAML: On to view content updates as you enter the information in the console.
If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin
privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin
permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin
permissions to a cluster set if you do not have any cluster set options to select.
Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet
, it is automatically added to the default
managed cluster set.
If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS or AWS GovCloud account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster. See Configuring an AWS account for more information.
The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.
The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:
-
Region: The region where you create your cluster resources. If you are creating a cluster on an AWS GovCloud provider, you must include an AWS GovCloud region for your node pools. For example,
us-gov-west-1
. - CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
- Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- Instance type: Specify the instance type for your control plane node, which must be the same as the CPU architecture that you previously indicated. You can change the type and size of your instance after it is created.
- Root storage: Specify the amount of root storage to allocate for the cluster.
You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields:
- Pool name: Provide a unique name for your pool.
- Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created.
- Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool.
- Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool.
Networking details are required for your cluster, and multiple networks are required for using IPv6. For an AWS GovCloud cluster, enter the values of the block of addresses of the Hive VPC in the Machine CIDR field. You can add an additional network by clicking Add network.
Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:
-
HTTP proxy URL: Specify the URL that should be used as a proxy for
HTTP
traffic. -
HTTPS proxy URL: Specify the secure proxy URL that should be used for
HTTPS
traffic. If no value is provided, the same value as the HTTP Proxy URL
is used for both HTTP
and HTTPS
. -
No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period
.
to include all of the subdomains that are in that domain. Add an asterisk*
to bypass the proxy for all destinations. - Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
When creating an AWS GovCloud cluster or using a private environment, complete the fields on the AWS private configuration page with the AMI ID and the subnet values. Ensure that the value of spec:platform:aws:privateLink:enabled
is set to true
in the ClusterDeployment.yaml
file, which is automatically set when you select Use private configuration.
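After the cluster is created, you can confirm the setting with a minimal check similar to the following; the name and namespace placeholders are assumed to match your cluster:
# Verify that PrivateLink is enabled in the generated ClusterDeployment:
oc get clusterdeployment <cluster-name> -n <cluster-namespace> \
  -o jsonpath='{.spec.platform.aws.privateLink.enabled}'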
When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml
file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.
Note: You do not have to run the oc
command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine for Kubernetes operator.
If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.
Continue with Accessing your cluster for instructions for accessing your cluster.
1.5.4.5. Creating a cluster on Microsoft Azure
You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government.
When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on Azure in the OpenShift Container Platform documentation for more information about the process.
1.5.4.5.1. Prerequisites
See the following prerequisites before creating a cluster on Azure:
- You must have a deployed hub cluster.
- You need an Azure credential. See Creating a credential for Microsoft Azure for more information.
- You need a configured domain in Azure or Azure Government. See Configuring a custom domain name for an Azure cloud service for instructions on how to configure a domain.
- You need Azure login credentials, which include user name and password. See the Microsoft Azure Portal.
-
You need Azure service principals, which include
clientId
, clientSecret
, and tenantId
. See azure.microsoft.com. - You need an OpenShift Container Platform image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.
1.5.4.5.2. Creating your cluster with the console
To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.
Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.
If you need to create a credential, see Creating a credential for Microsoft Azure for more information.
The name of the cluster is used in the hostname of the cluster.
Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.
Tip: Select YAML: On to view content updates as you enter the information in the console.
If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin
privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin
permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin
permissions to a cluster set if you do not have any cluster set options to select.
Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet
, it is automatically added to the default
managed cluster set.
If there is already a base DNS domain that is associated with the selected credential that you configured for your Azure account, that value is populated in that field. You can change the value by overwriting it. See Configuring a custom domain name for an Azure cloud service for more information. This name is used in the hostname of the cluster.
The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.
The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following optional fields:
- Region: Specify a region where you want to run your node pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
You can change the type and size of the Instance type and Root storage allocation (required) of your control plane pool after your cluster is created.
You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields:
- Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
- Instance type: You can change the type and size of your instance after it is created.
You can add an additional network by clicking Add network. You must have more than one network if you are using IPv6 addresses.
Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:
-
HTTP proxy: The URL that should be used as a proxy for
HTTP
traffic. -
HTTPS proxy: The secure proxy URL that should be used for
HTTPS
traffic. If no value is provided, the same value as the HTTP Proxy URL
is used for both HTTP
and HTTPS
. -
No proxy: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period
.
to include all of the subdomains that are in that domain. Add an asterisk*
to bypass the proxy for all destinations. - Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml
file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.
If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.
Note: You do not have to run the oc
command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.
Continue with Accessing your cluster for instructions for accessing your cluster.
1.5.4.6. Creating a cluster on Google Cloud Platform
Follow the procedure to create a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP). For more information about GCP, see Google Cloud Platform.
When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on GCP in the OpenShift Container Platform documentation for more information about the process.
1.5.4.6.1. Prerequisites
See the following prerequisites before creating a cluster on GCP:
- You must have a deployed hub cluster.
- You must have a GCP credential. See Creating a credential for Google Cloud Platform for more information.
- You must have a configured domain in GCP. See Setting up a custom domain for instructions on how to configure a domain.
- You need your GCP login credentials, which include user name and password.
- You must have an OpenShift Container Platform image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.
1.5.4.6.2. Creating your cluster with the console
To create clusters from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.
Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.
If you need to create a credential, see Creating a credential for Google Cloud Platform for more information.
The name of your cluster is used in the hostname of the cluster. There are some restrictions that apply to naming your GCP cluster. These restrictions include not beginning the name with goog
or containing a group of letters and numbers that resemble google
anywhere in the name. See Bucket naming guidelines for the complete list of restrictions.
Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.
Tip: Select YAML: On to view content updates as you enter the information in the console.
If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin
privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin
permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin
permissions to a cluster set if you do not have any cluster set options to select.
Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet
, it is automatically added to the default
managed cluster set.
If there is already a base DNS domain that is associated with the selected credential for your GCP account, that value is populated in the field. You can change the value by overwriting it. See Setting up a custom domain for more information. This name is used in the hostname of the cluster.
The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.
The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:
- Region: Specify a region where you want to run your control plane pools. A closer region might provide faster performance, but a more distant region might be more distributed.
- CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
You can specify the instance type of your control plane pool. You can change the type and size of your instance after it is created.
You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields:
- Instance type: You can change the type and size of your instance after it is created.
- Node count: This setting is required when you define a worker pool.
The networking details are required, and multiple networks are required for using IPv6 addresses. You can add an additional network by clicking Add network.
Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:
-
HTTP proxy: The URL that should be used as a proxy for
HTTP
traffic. -
HTTPS proxy: The secure proxy URL that should be used for
HTTPS
traffic. If no value is provided, the same value as the HTTP Proxy URL
is used for both HTTP
and HTTPS
. -
No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period
.
to include all of the subdomains that are in that domain. Add an asterisk*
to bypass the proxy for all destinations. - Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml
file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.
If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.
Note: You do not have to run the oc
command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.
Continue with Accessing your cluster for instructions for accessing your cluster.
1.5.4.7. Creating a cluster on VMware vSphere
You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on VMware vSphere.
When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on vSphere in the OpenShift Container Platform documentation for more information about the process.
1.5.4.7.1. Prerequisites
See the following prerequisites before creating a cluster on vSphere:
- You must have a hub cluster that is deployed on OpenShift Container Platform version 4.12 or later.
- You need a vSphere credential. See Creating a credential for VMware vSphere for more information.
- You need an OpenShift Container Platform image pull secret. See Using image pull secrets.
You must have the following information for the VMware instance where you are deploying:
- Required static IP addresses for API and Ingress instances
DNS records for:
The following API base domain must point to the static API VIP:
api.<cluster_name>.<base_domain>
The following application base domain must point to the static IP address for Ingress VIP:
*.apps.<cluster_name>.<base_domain>
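You can check the DNS records before you start the installation. The following commands are a sketch that assumes the dig utility is available; replace the placeholders with your cluster name and base domain:
# The API record must resolve to the static API VIP:
dig +short api.<cluster_name>.<base_domain>

# Any name under the wildcard application domain must resolve to the static Ingress VIP:
dig +short test.apps.<cluster_name>.<base_domain>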
1.5.4.7.2. Creating your cluster with the console
To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.
Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.
If you need to create a credential, see Creating a credential for VMware vSphere for more information about creating a credential.
The name of your cluster is used in the hostname of the cluster.
Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.
Tip: Select YAML: On to view content updates as you enter the information in the console.
If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin
privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin
permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin
permissions to a cluster set if you do not have any cluster set options to select.
Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet
, it is automatically added to the default
managed cluster set.
If there is already a base domain associated with the selected credential that you configured for your vSphere account, that value is populated in the field. You can change the value by overwriting it. See Installing a cluster on vSphere with customizations for more information. This value must match the name that you used to create the DNS records listed in the prerequisites section. This name is used in the hostname of the cluster.
The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See