Chapter 1. Cluster lifecycle with multicluster engine operator overview


The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. From the hub cluster, you can create and manage clusters, as well as destroy any clusters that you created. You can also hibernate, resume, and detach clusters. Learn more about the cluster lifecycle capabilities from the following documentation.

Access the multicluster engine operator Support matrix to learn about hub cluster and managed cluster requirements and support.

Notes:

The components of the cluster lifecycle management architecture are included in the Cluster lifecycle architecture.

1.1. Release notes

Learn about new features, bug fixes, and more for cluster lifecycle and the 2.6 version of multicluster engine operator.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials.

You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ.

The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.

For full support information, see the multicluster engine operator Support matrix. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.

1.1.1. What’s new in Cluster lifecycle with the multicluster engine operator

Learn about new features for creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers.

Important: Cluster lifecycle now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by the CNCF for your hybrid cloud multicluster management.

See the following information about using CNCF providers:

1.1.1.1. New features and enhancements for components

Learn more about new features for specific components.

Note: Some features and components are identified and released as Technology Preview.

1.1.1.2. Cluster lifecycle

Learn about new features and enhancements for Cluster lifecycle with multicluster engine operator.

1.1.1.3. Hosted control planes

  • Starting with OpenShift Container Platform 4.16, hosted control planes support the user-provisioned installation and attachment of logical partitions (LPARs) as compute nodes on IBM Z and IBM LinuxONE. To learn more, see Adding IBM Z LPAR as agents.
  • Configuring hosted control plane clusters on AWS is now generally available. You can deploy the HyperShift Operator on an existing managed cluster by using the hypershift-addon managed cluster add-on to enable that cluster as a hosting cluster and start to create the hosted cluster. See Configuring hosted control plane clusters on AWS for details.
  • The --sts-creds and --role-arn flags replace the deprecated --aws-creds flag in the hcp command line interface. Create an Amazon Web Services (AWS) Identity and Access Management (IAM) role and Security Token Service (STS) credentials to use the --sts-creds and --role-arn flags. For more information, see Creating an AWS IAM role and STS credentials.
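
For reference, a hypothetical hcp invocation that uses the new flags might resemble the following sketch; the cluster name, file paths, role ARN, and region are placeholders, and any additional flags that your environment requires are omitted:

hcp create cluster aws \
  --name example-hosted-cluster \
  --node-pool-replicas 2 \
  --base-domain example.com \
  --pull-secret ./pull-secret.json \
  --region us-east-1 \
  --sts-creds ./sts-creds.json \
  --role-arn arn:aws:iam::123456789012:role/example-hcp-role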

1.1.1.4. Red Hat Advanced Cluster Management integration

1.1.2. Cluster lifecycle known issues

Review the known issues for cluster lifecycle with multicluster engine operator. The following list contains known issues for this release, or known issues that continued from the previous release. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.

1.1.2.1. Cluster management

Cluster lifecycle known issues and limitations are part of the Cluster lifecycle with multicluster engine operator documentation.

1.1.2.1.1. Limitation with nmstate

Develop more quickly by configuring copy and paste features. To configure the copy-from-mac feature in the assisted-installer, you must add the mac-address to the nmstate definition interface and the mac-mapping interface. The mac-mapping interface is provided outside the nmstate definition interface. As a result, you must provide the same mac-address twice.
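
The following is a minimal sketch of an NMStateConfig resource that illustrates this limitation; the resource name, namespace, label, interface name, and MAC address are placeholder values, and the same mac-address appears both in the nmstate definition and in the mac-mapping interfaces list:

apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: example-nmstate-config
  namespace: example-infraenv
  labels:
    infraenvs.agent-install.openshift.io: example-infraenv
spec:
  config:
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:14
        ipv4:
          enabled: true
          dhcp: true
  interfaces:
    - name: eth0
      macAddress: 02:00:00:80:12:14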

1.1.2.1.2. Prehook failure does not fail the hosted cluster creation

If you use the automation template for the hosted cluster creation and the prehook job fails, the hosted cluster creation appears to still be progressing. This is expected because the hosted cluster was designed with no complete failure state, so it keeps trying to create the cluster.

1.1.2.1.3. Manual removal of the VolSync CSV required on managed cluster when removing the add-on

When you remove the VolSync ManagedClusterAddOn from the hub cluster, it removes the VolSync operator subscription on the managed cluster but does not remove the cluster service version (CSV). To remove the CSV from the managed clusters, run the following command on each managed cluster from which you are removing VolSync:

oc delete csv -n openshift-operators volsync-product.v0.6.0

If you have a different version of VolSync installed, replace v0.6.0 with your installed version.
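
If you are not sure which version is installed, you can list the cluster service versions on the managed cluster and look for the VolSync entry; this example assumes the default openshift-operators namespace:

oc get csv -n openshift-operators | grep volsync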

1.1.2.1.4. Deleting a managed cluster set does not automatically remove its label

After you delete a ManagedClusterSet, the label that is added to each managed cluster that associates the cluster to the cluster set is not automatically removed. Manually remove the label from each of the managed clusters that were included in the deleted managed cluster set. The label resembles the following example: cluster.open-cluster-management.io/clusterset:<ManagedClusterSet Name>.
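
For example, you can remove the label from a single managed cluster with a command similar to the following, where the trailing hyphen removes the label and <cluster_name> is a placeholder for the managed cluster name:

oc label managedcluster <cluster_name> cluster.open-cluster-management.io/clusterset-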

1.1.2.1.5. ClusterClaim error

If you create a Hive ClusterClaim against a ClusterPool and manually set the ClusterClaim spec.lifetime field to an invalid golang time value, the product stops fulfilling and reconciling all ClusterClaims, not just the malformed claim.

You see the following error in the clusterclaim-controller pod logs, which is a specific example with the PoolName and invalid lifetime included:

E0203 07:10:38.266841       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to watch *v1.ClusterClaim: failed to list *v1.ClusterClaim: v1.ClusterClaimList.Items: []v1.ClusterClaim: v1.ClusterClaim.v1.ClusterClaim.Spec: v1.ClusterClaimSpec.Lifetime: unmarshalerDecoder: time: unknown unit "w" in duration "1w", error found in #10 byte of ...|time":"1w"}},{"apiVe|..., bigger context ...|clusterPoolName":"policy-aas-hubs","lifetime":"1w"}},{"apiVersion":"hive.openshift.io/v1","kind":"Cl|...

You can delete the invalid claim.

If the malformed claim is deleted, claims begin successfully reconciling again without any further interaction.
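
For reference, a minimal ClusterClaim sketch with a valid golang duration in the lifetime field might resemble the following; the names are placeholders, and units such as hours and minutes are valid while weeks are not:

apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: example-claim
  namespace: example-clusterpool-namespace
spec:
  clusterPoolName: example-cluster-pool
  lifetime: 168h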

1.1.2.1.6. The product channel out of sync with provisioned cluster

The clusterimageset is in the fast channel, but the provisioned cluster is in the stable channel. Currently the product does not sync the channel to the provisioned OpenShift Container Platform cluster.

Change to the correct channel in the OpenShift Container Platform console. Click Administration > Cluster Settings > Details > Channel.
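
If you prefer the command line, you can also set the channel by patching the ClusterVersion resource on the provisioned cluster; the channel name in this sketch is only an example:

oc patch clusterversion version --type=merge -p '{"spec":{"channel":"stable-4.14"}}'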

1.1.2.1.7. Restoring the connection of a managed cluster with custom CA certificates to its restored hub cluster might fail

After you restore the backup of a hub cluster that managed a cluster with custom CA certificates, the connection between the managed cluster and the hub cluster might fail. This is because the CA certificate was not backed up on the restored hub cluster. To restore the connection, copy the custom CA certificate information that is in the namespace of your managed cluster to the <managed_cluster>-admin-kubeconfig secret on the restored hub cluster.

Tip: If you copy this CA certificate to the hub cluster before creating the backup copy, the backup copy includes the secret information. When the backup copy is used to restore in the future, the connection between the hub and managed clusters will automatically complete.

1.1.2.1.8. The local-cluster might not be automatically recreated

If the local-cluster is deleted while disableHubSelfManagement is set to false, the local-cluster is recreated by the MulticlusterHub operator. After you detach a local-cluster, the local-cluster might not be automatically recreated.

  • To resolve this issue, modify a resource that is watched by the MulticlusterHub operator. See the following example:

    oc delete deployment multiclusterhub-repo -n <namespace>
  • To properly detach the local-cluster, set the disableHubSelfManagement to true in the MultiClusterHub.
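
For example, assuming the MultiClusterHub resource is named multiclusterhub in the open-cluster-management namespace, you can set the field with a patch similar to the following:

oc patch multiclusterhub multiclusterhub -n open-cluster-management --type=merge -p '{"spec":{"disableHubSelfManagement":true}}'
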
1.1.2.1.9. Selecting a subnet is required when creating an on-premises cluster

When you create an on-premises cluster using the console, you must select an available subnet for your cluster. However, the subnet is not marked as a required field.

1.1.2.1.10. Cluster provisioning with Infrastructure Operator fails

When creating OpenShift Container Platform clusters using the Infrastructure Operator, the file name of the ISO image might be too long. The long image name causes the image provisioning and the cluster provisioning to fail. To determine if this is the problem, complete the following steps:

  1. View the bare metal host information for the cluster that you are provisioning by running the following command:

    oc get bmh -n <cluster_provisioning_namespace>
  2. Run the describe command to view the error information:

    oc describe bmh -n <cluster_provisioning_namespace> <bmh_name>
  3. An error similar to the following example indicates that the length of the filename is the problem:

    Status:
      Error Count:    1
      Error Message:  Image provisioning failed: ... [Errno 36] File name too long ...

If this problem occurs, it is typically on the following versions of OpenShift Container Platform, because the infrastructure operator was not using the image service:

  • 4.8.17 and earlier
  • 4.9.6 and earlier

To avoid this error, upgrade your OpenShift Container Platform to version 4.8.18 or later, or 4.9.7 or later.

1.1.2.1.11. Cannot use host inventory to boot with the discovery image and add hosts automatically

You cannot use a host inventory, or InfraEnv custom resource, to both boot with the discovery image and add hosts automatically. If you used your previous InfraEnv resource for the BareMetalHost resource, and you want to boot the image yourself, you can work around the issue by creating a new InfraEnv resource.

1.1.2.1.12. Local-cluster status offline after reimporting with a different name

When you accidentally try to reimport the cluster named local-cluster as a cluster with a different name, the status for local-cluster and for the reimported cluster display offline.

To recover from this case, complete the following steps:

  1. Run the following command on the hub cluster to edit the setting for self-management of the hub cluster temporarily:

    oc edit mch -n open-cluster-management multiclusterhub
  2. Add the setting spec.disableSelfManagement=true.
  3. Run the following command on the hub cluster to delete and redeploy the local-cluster:

    oc delete managedcluster local-cluster
  4. Enter the following command to remove the local-cluster management setting:

    oc edit mch -n open-cluster-management multiclusterhub
  5. Remove spec.disableSelfManagement=true that you previously added.
1.1.2.1.13. Cluster provision with Ansible automation fails in proxy environment

An Automation template that is configured to automatically provision a managed cluster might fail when both of the following conditions are met:

  • The hub cluster has cluster-wide proxy enabled.
  • The Ansible Automation Platform can only be reached through the proxy.
1.1.2.1.14. Cannot delete managed cluster namespace manually

You cannot delete the namespace of a managed cluster manually. The managed cluster namespace is automatically deleted after the managed cluster is detached. If you delete the managed cluster namespace manually before the managed cluster is detached, the managed cluster shows a continuous terminating status after you delete the managed cluster. To delete this terminating managed cluster, manually remove the finalizers from the managed cluster that you detached.
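
For example, you can clear the finalizers on the detached managed cluster with a patch similar to the following, where <cluster_name> is a placeholder for the managed cluster name:

oc patch managedcluster <cluster_name> -p '{"metadata":{"finalizers":null}}' --type=merge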

1.1.2.1.15. Hub cluster and managed clusters clock not synced

The hub cluster and managed cluster times might become out of sync, which displays as unknown in the console and eventually becomes available within a few minutes. Ensure that the OpenShift Container Platform hub cluster time is configured correctly. See Customizing nodes.

1.1.2.1.16. Importing certain versions of IBM OpenShift Kubernetes Service clusters is not supported

You cannot import IBM OpenShift Kubernetes Service version 3.11 clusters. Later versions of IBM OpenShift Kubernetes Service are supported.

1.1.2.1.17. Automatic secret updates for provisioned clusters are not supported

When you change your cloud provider access key on the cloud provider side, you also need to update the corresponding credential for this cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.1.2.1.19. Process to destroy a cluster does not complete

When you destroy a managed cluster, the status continues to display Destroying after one hour, and the cluster is not destroyed. To resolve this issue, complete the following steps:

  1. Manually ensure that there are no orphaned resources on your cloud, and that all of the provider resources that are associated with the managed cluster are cleaned up.
  2. Open the ClusterDeployment information for the managed cluster that is being removed by entering the following command:

    oc edit clusterdeployment/<mycluster> -n <namespace>

    Replace mycluster with the name of the managed cluster that you are destroying.

    Replace namespace with the namespace of the managed cluster.

  3. Remove the hive.openshift.io/deprovision finalizer to forcefully stop the process that is trying to clean up the cluster resources in the cloud.
  4. Save your changes and verify that ClusterDeployment is gone.
  5. Manually remove the namespace of the managed cluster by running the following command:

    oc delete ns <namespace>

    Replace namespace with the namespace of the managed cluster.

1.1.2.1.20. Cannot upgrade OpenShift Container Platform managed clusters on OpenShift Container Platform Dedicated with the console

You cannot use the Red Hat Advanced Cluster Management console to upgrade OpenShift Container Platform managed clusters that are in the OpenShift Container Platform Dedicated environment.

1.1.2.1.22. Non-OpenShift Container Platform managed clusters require ManagedServiceAccount or LoadBalancer for pod logs

The ManagedServiceAccount and cluster proxy add-ons are enabled by default in Red Hat Advanced Cluster Management version 2.10 and newer. If the add-ons are disabled after upgrading, you must enable the ManagedServiceAccount and cluster proxy add-ons manually to use the pod log feature on non-OpenShift Container Platform managed clusters.

See ManagedServiceAccount add-on to learn how to enable ManagedServiceAccount and see Using cluster proxy add-ons to learn how to enable a cluster proxy add-on.

1.1.2.1.23. OpenShift Container Platform 4.10.z does not support hosted control plane clusters with proxy configuration

When you create a hosting service cluster with a cluster-wide proxy configuration on OpenShift Container Platform 4.10.z, the nodeip-configuration.service service does not start on the worker nodes.

1.1.2.1.24. Cannot provision OpenShift Container Platform 4.11 cluster on Azure

Provisioning an OpenShift Container Platform 4.11 cluster on Azure fails due to an authentication operator timeout error. To work around the issue, use a different worker node type in the install-config.yaml file or set the vmNetworkingType parameter to Basic. See the following install-config.yaml example:

compute:
- hyperthreading: Enabled
  name: 'worker'
  replicas: 3
  platform:
    azure:
      type:  Standard_D2s_v3
      osDisk:
        diskSizeGB: 128
      vmNetworkingType: 'Basic'
1.1.2.1.25. Client cannot reach iPXE script

iPXE is an open source network boot firmware. See iPXE for more details.

When booting a node, the URL length limitation in some DHCP servers cuts off the ipxeScript URL in the InfraEnv custom resource definition, resulting in the following error message in the console:

no bootable devices

To work around the issue, complete the following steps:

  1. Apply the InfraEnv custom resource definition when using an assisted installation to expose the bootArtifacts, which might resemble the following file:

    status:
      agentLabelSelector:
        matchLabels:
          infraenvs.agent-install.openshift.io: qe2
      bootArtifacts:
        initrd: https://assisted-image-service-multicluster-engine.redhat.com/images/0000/pxe-initrd?api_key=0000000&arch=x86_64&version=4.11
        ipxeScript: https://assisted-service-multicluster-engine.redhat.com/api/assisted-install/v2/infra-envs/00000/downloads/files?api_key=000000000&file_name=ipxe-script
        kernel: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-kernel-x86_64
        rootfs: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-rootfs.x86_64.img
  2. Create a proxy server to expose the bootArtifacts with short URLs.
  3. Copy the bootArtifacts and add them to the proxy by running the following commands:

    for artifact in $(oc get infraenv qe2 -ojsonpath="{.status.bootArtifacts}" | jq ". | keys[]" | sed "s/\"//g")
    do
        curl -k $(oc get infraenv qe2 -ojsonpath="{.status.bootArtifacts.${artifact}}") -o $artifact
    done
  4. Add the ipxeScript artifact proxy URL to the bootp parameter in libvirt.xml.
1.1.2.1.26. Cannot delete ClusterDeployment after upgrading Red Hat Advanced Cluster Management

If you are using the removed BareMetalAssets API in Red Hat Advanced Cluster Management 2.6, the ClusterDeployment cannot be deleted after upgrading to Red Hat Advanced Cluster Management 2.7 because the BareMetalAssets API is bound to the ClusterDeployment.

To work around the issue, run the following command to remove the finalizers before upgrading to Red Hat Advanced Cluster Management 2.7:

oc patch clusterdeployment <clusterdeployment-name> -p '{"metadata":{"finalizers":null}}' --type=merge
1.1.2.1.27. A cluster deployed in a disconnected environment by using the central infrastructure management service might not install

When you deploy a cluster in a disconnected environment by using the central infrastructure management service, the cluster nodes might not start installing.

This issue occurs because the cluster uses a discovery ISO image that is created from the Red Hat Enterprise Linux CoreOS live ISO image that is shipped with OpenShift Container Platform versions 4.12.0 through 4.12.2. The image contains a restrictive /etc/containers/policy.json file that requires signatures for images sourcing from registry.redhat.io and registry.access.redhat.com. In a disconnected environment, the images that are mirrored might not have the signatures mirrored, which results in the image pull failing for cluster nodes at discovery. The Agent image fails to connect with the cluster nodes, which causes communication with the assisted service to fail.

To work around this issue, apply an ignition override to the cluster that sets the /etc/containers/policy.json file to unrestrictive. The ignition override can be set in the InfraEnv custom resource definition. The following example shows an InfraEnv custom resource definition with the override:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: cluster
  namespace: cluster
spec:
  ignitionConfigOverride: '{"ignition":{"version":"3.2.0"},"storage":{"files":[{"path":"/etc/containers/policy.json","mode":420,"overwrite":true,"contents":{"source":"data:text/plain;charset=utf-8;base64,ewogICAgImRlZmF1bHQiOiBbCiAgICAgICAgewogICAgICAgICAgICAidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIgogICAgICAgIH0KICAgIF0sCiAgICAidHJhbnNwb3J0cyI6CiAgICAgICAgewogICAgICAgICAgICAiZG9ja2VyLWRhZW1vbiI6CiAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgIiI6IFt7InR5cGUiOiJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dCiAgICAgICAgICAgICAgICB9CiAgICAgICAgfQp9"}}]}}'

The following example shows the unrestrictive file that is created:

{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports": {
        "docker-daemon": {
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        }
    }
}

After this setting is changed, the clusters install.

1.1.2.1.28. Managed cluster stuck in Pending status after deployment

The converged flow is the default process of provisioning. When you use the BareMetalHost resource for the Bare Metal Operator (BMO) to connect your host to a live ISO, the Ironic Python Agent does the following actions:

  • It runs the steps in the Bare Metal installer-provisioned-infrastructure.
  • It starts the Assisted Installer agent, and the agent handles the rest of the install and provisioning process.

If the Assisted Installer agent starts slowly and you deploy a managed cluster, the managed cluster might become stuck in the Pending status and not have any agent resources. You can work around the issue by disabling the converged flow.

Important: When you disable the converged flow, only the Assisted Installer agent runs in the live ISO, reducing the number of open ports and disabling any features you enabled with the Ironic Python Agent, including the following:

  • Pre-provisioning disk cleaning
  • iPXE boot firmware
  • BIOS configuration

To decide what port numbers you want to enable or disable without disabling the converged flow, see Network configuration.

To disable the converged flow, complete the following steps:

  1. Create the following ConfigMap on the hub cluster:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-assisted-service-config
      namespace: multicluster-engine
    data:
      ALLOW_CONVERGED_FLOW: "false" 1
    1 When you set the parameter value to "false", you also disable any features enabled by the Ironic Python Agent.
  2. Apply the ConfigMap by running the following command:

    oc annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config
1.1.2.1.29. ManagedClusterSet API specification limitation

The selectorType: LabelSelector setting is not supported when using the ManagedClusterSet API. The selectorType: ExclusiveClusterSetLabel setting is supported.
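
A minimal sketch of a ManagedClusterSet that uses the supported setting, assuming the v1beta2 API version and a placeholder name, might resemble the following:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: example-clusterset
spec:
  clusterSelector:
    selectorType: ExclusiveClusterSetLabel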

1.1.2.1.30. Hub cluster communication limitations

The following limitations occur if the hub cluster is not able to reach or communicate with the managed cluster:

  • You cannot create a new managed cluster by using the console. You are still able to import a managed cluster manually by using the command line interface or by using the Run import commands manually option in the console.
  • If you deploy an Application or ApplicationSet by using the console, or if you import a managed cluster into ArgoCD, the hub cluster ArgoCD controller calls the managed cluster API server. You can use AppSub or the ArgoCD pull model to work around the issue.
  • The console page for pod logs does not work, and an error message that resembles the following appears:

    Error querying resource logs:
    Service unavailable
1.1.2.1.31. installNamespace field can only have one value

When enabling the managed-serviceaccount add-on, the installNamespace field in the ManagedClusterAddOn resource must have open-cluster-management-agent-addon as the value. Other values are ignored. The managed-serviceaccount add-on agent is always deployed in the open-cluster-management-agent-addon namespace on the managed cluster.
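
For example, a ManagedClusterAddOn definition with the required value might resemble the following, where <managed_cluster_name> is a placeholder for the managed cluster namespace:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: <managed_cluster_name>
spec:
  installNamespace: open-cluster-management-agent-addon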

1.1.2.1.32. tolerations and nodeSelector settings do not affect the managed-serviceaccount agent

The tolerations and nodeSelector settings configured on the MultiClusterEngine and MultiClusterHub resources do not affect the managed-serviceaccount agent deployed on the local cluster. The managed-serviceaccount add-on is not always required on the local cluster.

If the managed-serviceaccount add-on is required, you can work around the issue by completing the following steps:

  1. Create the addonDeploymentConfig custom resource.
  2. Set the tolerations and nodeSelector values for the local cluster and managed-serviceaccount agent.
  3. Update the managed-serviceaccount ManagedClusterAddon in the local cluster namespace to use the addonDeploymentConfig custom resource you created.

See Configuring nodeSelectors and tolerations for klusterlet add-ons to learn more about how to use the addonDeploymentConfig custom resource to configure tolerations and nodeSelector for add-ons.
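
As a sketch only, an AddOnDeploymentConfig for the local cluster and a managed-serviceaccount ManagedClusterAddOn that references it might resemble the following; the configuration name, node selector, and toleration values are placeholders:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: managed-serviceaccount-config
  namespace: local-cluster
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: local-cluster
spec:
  installNamespace: open-cluster-management-agent-addon
  configs:
  - group: addon.open-cluster-management.io
    resource: addondeploymentconfigs
    name: managed-serviceaccount-config
    namespace: local-cluster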

1.1.2.1.33. Bulk destroy option on KubeVirt hosted cluster does not destroy hosted cluster

Using the bulk destroy option in the console on KubeVirt hosted clusters does not destroy the KubeVirt hosted clusters.

Use the row action drop-down menu to destroy the KubeVirt hosted cluster instead.

1.1.2.1.34. The Cluster curator does not support OpenShift Container Platform Dedicated clusters

When you upgrade an OpenShift Container Platform Dedicated cluster by using the ClusterCurator resource, the upgrade fails because the Cluster curator does not support OpenShift Container Platform Dedicated clusters.

1.1.2.1.35. Custom ingress domain is not applied correctly

You can specify a custom ingress domain by using the ClusterDeployment resource while installing a managed cluster, but the change is only applied after the installation by using the SyncSet resource. As a result, the spec field in the clusterdeployment.yaml file displays the custom ingress domain you specified, but the status still displays the default domain.

1.1.2.1.36. A single-node OpenShift cluster installation requires a matching OpenShift Container Platform with infrastructure operator for Red Hat OpenShift

If you want to install a single-node OpenShift cluster with a Red Hat OpenShift Container Platform version earlier than 4.16, your InfraEnv custom resource and your booted host must use the same OpenShift Container Platform version that you are using to install the single-node OpenShift cluster. The installation fails if the versions do not match.

To work around the issue, edit your InfraEnv resource before you boot a host with the Discovery ISO, and include the following content:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
spec:
  osImageVersion: 4.15

The osImageVersion field must match the Red Hat OpenShift Container Platform cluster version that you want to install.

1.1.2.2. Hosted control planes

1.1.2.2.1. Console displays hosted cluster as Pending import

If the annotation and ManagedCluster name do not match, the console displays the cluster as Pending import. The cluster cannot be used by the multicluster engine operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.

1.1.2.2.2. Console might list the same version multiple times when adding a node pool to a hosted cluster

When you use the console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want.

1.1.2.2.3. The web console lists nodes even after they are removed from the cluster and returned to the infrastructure environment

When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:

  • In the console, go to the node pool and verify that it has 0 nodes.
  • On the command line interface, run the following commands:

    • Verify that 0 nodes are in the node pool by running the following command:

      oc get nodepool -A
    • Verify that 0 nodes are in the cluster by running the following command:

      oc get nodes --kubeconfig <hosted_cluster_kubeconfig>
    • Verify that 0 agents are reported as bound to the cluster by running the following command:

      oc get agents -A
1.1.2.2.4. Potential DNS issues in hosted clusters configured for a dual-stack network

When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:

  • CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
  • Pods stuck in ContainerCreating state: This issue occurs because the openshift-service-ca-operator cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.

To resolve those issues, configure the DNS server settings by following the guidelines in Configuring DNS for a dual stack network.

1.1.2.2.5. On bare metal platforms, Agent resources might fail to pull ignition

On the bare metal (Agent) platform, the hosted control planes feature periodically rotates the token that the Agent uses to pull ignition. A bug causes the new token to not be propagated. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition.

As a workaround, in the Agent specification, delete the secret that the IgnitionEndpointTokenReference property refers to, and then add or modify any label on the Agent resource. The system can then detect that the Agent resource was modified and re-create the secret with the new token.
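
For example, assuming the secret name from the IgnitionEndpointTokenReference property and an arbitrary label key of your choosing, the workaround might resemble the following commands; the secret name, namespace, agent name, and label are placeholders:

oc delete secret <ignition_token_secret_name> -n <agent_namespace>

oc label agent <agent_name> -n <agent_namespace> ignition-token-refresh=true --overwrite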

1.1.2.2.6. IBM Z hosts restart in a loop

In hosted control planes on the IBM Z platform, when you unbind the hosts from the cluster, the hosts restart in a loop and are not ready to be used. For a workaround for this issue, see Destroying a hosted cluster on x86 bare metal with IBM Z compute nodes.

1.1.3. Errata updates

For multicluster engine operator, the Errata updates are automatically applied when released.

If no release notes are listed, the product does not have an Errata release at this time.

Important: For reference, Jira links and Jira numbers might be added to the content and used internally. Links that require access might not be available for the user.

1.1.3.1. Errata 2.6.3

  • Delivers updates to one or more product container images.

1.1.3.2. Errata 2.6.2

  • Delivers updates to one or more product container images.
  • Fixes an issue where validation prevented entering an AWS instance type containing hyphens during cluster creation. (ACM-13036)

1.1.3.3. Errata 2.6.1

  • Delivers updates to one or more product container images.
  • Fixes an issue where validation prevented entering an AWS instance type containing hyphens during cluster creation. (ACM-13036)


1.1.4. Cluster lifecycle deprecations and removals

Learn when parts of the product are deprecated or removed from multicluster engine operator. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases. Tables are removed if no entries are added for that section this release.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

1.1.4.1. API deprecations and removals

multicluster engine operator follows the Kubernetes deprecation guidelines for APIs. See the Kubernetes Deprecation Policy for more details about that policy. multicluster engine operator APIs are only deprecated or removed outside of the following timelines:

  • All V1 APIs are generally available and supported for 12 months or three releases, whichever is greater. V1 APIs are not removed, but can be deprecated outside of that time limit.
  • All beta APIs are generally available for nine months or three releases, whichever is greater. Beta APIs are not removed outside of that time limit.
  • All alpha APIs are not required to be supported, but might be listed as deprecated or removed if it benefits users.
1.1.4.1.1. API deprecations
Product or category | Affected item | Version | Recommended action | More details and links
ManagedServiceAccount | The v1alpha1 API is upgraded to v1beta1 because v1alpha1 is deprecated. | 2.4 | Use v1beta1. | None

1.1.4.2. Deprecations

A deprecated component, feature, or service is supported, but no longer recommended for use and might become obsolete in future releases. Consider the alternative actions in the Recommended action and details that are provided in the following table:

Product or category | Affected item | Version | Recommended action | More details and links
Hosted control planes | The --aws-creds flag is deprecated. | 2.6 | Use the --sts-creds and --role-arn flags in the hcp command line interface. | None

1.1.4.3. Removals

A removed item is typically a function that was deprecated in previous releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table:

Product or category | Affected item | Version | Recommended action | More details and links
Cluster lifecycle | Create cluster on Red Hat Virtualization | 2.6 | None | None
Cluster lifecycle | Klusterlet Operator Lifecycle Manager Operator | 2.6 | None | None

1.2. About cluster lifecycle with multicluster engine operator

The multicluster engine for Kubernetes operator is the cluster lifecycle operator that provides cluster management capabilities for Red Hat OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. If you installed Red Hat Advanced Cluster Management, you do not need to install multicluster engine operator, as it is automatically installed.

See the multicluster engine operator Support matrix to learn about hub cluster and managed cluster requirements and support information, as well as the following documentation:

To continue, see the remaining cluster lifecycle documentation at Cluster lifecycle with multicluster engine operator overview.

1.2.1. Console overview

OpenShift Container Platform console plug-ins are available with the OpenShift Container Platform web console and can be integrated. To use this feature, the console plug-ins must remain enabled. The multicluster engine operator displays certain console features from Infrastructure and Credentials navigation items. If you install Red Hat Advanced Cluster Management, you see more console capability.

Note: With the plug-ins enabled, you can access Red Hat Advanced Cluster Management within the OpenShift Container Platform console from the cluster switcher by selecting All Clusters from the drop-down menu.

  1. To disable the plug-in, be sure you are in the Administrator perspective in the OpenShift Container Platform console.
  2. Find Administration in the navigation and click Cluster Settings, then click the Configuration tab.
  3. From the list of Configuration resources, click the Console resource with the operator.openshift.io API group, which contains cluster-wide configuration for the web console.
  4. Click on the Console plug-ins tab. The mce plug-in is listed. Note: If Red Hat Advanced Cluster Management is installed, it is also listed as acm.
  5. Modify plug-in status from the table. In a few moments, you are prompted to refresh the console.

1.2.2. multicluster engine operator role-based access control

RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product:

1.2.2.1. Overview of roles

Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the table list of the following role definitions that are supported:

1.2.2.1.1. Table of role definition
Role | Definition

cluster-admin

This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access.

open-cluster-management:cluster-manager-admin

A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a super user, who has all access. This role allows the user to create a ManagedCluster resource.

open-cluster-management:admin:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named <managed_cluster_name>. When a user has a managed cluster, this role is automatically created.

open-cluster-management:view:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named <managed_cluster_name>.

open-cluster-management:managedclusterset:admin:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has administrator access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources, which have the managed cluster set label cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. A role binding is automatically generated when you are using a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource.

open-cluster-management:managedclusterset:view:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has view access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources, which have the managed cluster set label cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet.

admin, edit, view

Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide.

Important:

  • Any user can create projects from OpenShift Container Platform, which gives administrator role permissions for the namespace.
  • If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -.

RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product.

1.2.2.2. Cluster lifecycle RBAC

View the following cluster lifecycle RBAC operations:

  • Create and administer cluster role bindings for all managed clusters. For example, create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin by entering the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>

    This role grants super user access to all resources and actions. With this role, you can create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in that namespace. You might need to add the username of the ID that requires the role association to avoid permission errors.

  • Run the following command to administer a cluster role binding for a managed cluster named cluster-name:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>

    This role has read and write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

    • Create a namespace role binding to the cluster role admin by entering the following command:

      oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin --user=<username>

      This role has read and write access to the resources in the namespace of the managed cluster.

  • Create a cluster role binding for the open-cluster-management:view:<cluster-name> cluster role to view a managed cluster named cluster-name. Enter the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>

    This role has read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource.

  • Create a namespace role binding to the cluster role view by entering the following command:

    oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view --user=<username>

    This role has read-only access to the resources in the namespace of the managed cluster.

  • View a list of the managed clusters that you can access by entering the following command:

    oc get managedclusters.clusterview.open-cluster-management.io

    This command is used by administrators and users without cluster administrator privileges.

  • View a list of the managed cluster sets that you can access by entering the following command:

    oc get managedclustersets.clusterview.open-cluster-management.io

    This command is used by administrators and users without cluster administrator privileges.

1.2.2.2.1. Cluster pools RBAC

View the following cluster pool RBAC operations:

  • As a cluster administrator, use a cluster pool to provision clusters by creating a managed cluster set, and grant administrator permission to roles by adding the role to the group. View the following examples:

    • Grant admin permission to the server-foundation-clusterset managed cluster set with the following command:

      oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-admin:server-foundation-clusterset server-foundation-team-admin
    • Grant view permission to the server-foundation-clusterset managed cluster set with the following command:

      oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-view:server-foundation-clusterset server-foundation-team-user
  • Create a namespace for the cluster pool, server-foundation-clusterpool. View the following examples to grant role permissions:

    • Grant admin permission to server-foundation-clusterpool for the server-foundation-team-admin by running the following commands:

      oc adm new-project server-foundation-clusterpool
      
      oc adm policy add-role-to-group admin server-foundation-team-admin --namespace  server-foundation-clusterpool
  • As a team administrator, create a cluster pool named ocp46-aws-clusterpool with a cluster set label, cluster.open-cluster-management.io/clusterset=server-foundation-clusterset in the cluster pool namespace:

    • The server-foundation-webhook checks if the cluster pool has the cluster set label, and if the user has permission to create cluster pools in the cluster set.
    • The server-foundation-controller grants view permission to the server-foundation-clusterpool namespace for server-foundation-team-user.
  • When a cluster pool is created, the cluster pool creates a clusterdeployment. Continue reading for more details:

    • The server-foundation-controller grants admin permission to the clusterdeployment namespace for server-foundation-team-admin.
    • The server-foundation-controller grants view permission to the clusterdeployment namespace for server-foundation-team-user.

      Note: As a team-admin and team-user, you have admin permission to the clusterpool, clusterdeployment, and clusterclaim.

1.2.2.2.2. Console and API RBAC table for cluster lifecycle

View the following console and API RBAC tables for cluster lifecycle:

Table 1.1. Console RBAC table for cluster lifecycle
Resource | Admin | Edit | View
Clusters | read, update, delete | - | read
Cluster sets | get, update, bind, join | edit role not mentioned | get
Managed clusters | read, update, delete | no edit role mentioned | get
Provider connections | create, read, update, and delete | - | read

Table 1.2. API RBAC table for cluster lifecycle
API | Admin | Edit | View
managedclusters.cluster.open-cluster-management.io (you can use mcl (singular) or mcls (plural) in commands for this API) | create, read, update, delete | read, update | read
managedclusters.view.open-cluster-management.io (you can use mcv (singular) or mcvs (plural) in commands for this API) | read | read | read
managedclusters.register.open-cluster-management.io/accept | update | update | -
managedclusterset.cluster.open-cluster-management.io (you can use mclset (singular) or mclsets (plural) in commands for this API) | create, read, update, delete | read, update | read
managedclustersets.view.open-cluster-management.io | read | read | read
managedclustersetbinding.cluster.open-cluster-management.io (you can use mclsetbinding (singular) or mclsetbindings (plural) in commands for this API) | create, read, update, delete | read, update | read
klusterletaddonconfigs.agent.open-cluster-management.io | create, read, update, delete | read, update | read
managedclusteractions.action.open-cluster-management.io | create, read, update, delete | read, update | read
managedclusterviews.view.open-cluster-management.io | create, read, update, delete | read, update | read
managedclusterinfos.internal.open-cluster-management.io | create, read, update, delete | read, update | read
manifestworks.work.open-cluster-management.io | create, read, update, delete | read, update | read
submarinerconfigs.submarineraddon.open-cluster-management.io | create, read, update, delete | read, update | read
placements.cluster.open-cluster-management.io | create, read, update, delete | read, update | read

1.2.2.2.3. Credentials role-based access control

The access to credentials is controlled by Kubernetes. Credentials are stored and secured as Kubernetes secrets. The following permissions apply to accessing secrets in Red Hat Advanced Cluster Management for Kubernetes:

  • Users with access to create secrets in a namespace can create credentials.
  • Users with access to read secrets in a namespace can also view credentials.
  • Users with the Kubernetes cluster roles of admin and edit can create and edit secrets.
  • Users with the Kubernetes cluster role of view cannot view secrets because reading the contents of secrets enables access to service account credentials.

1.2.3. Network configuration

Configure your network settings to allow the connections.

Important: The trusted CA bundle is available in the multicluster engine operator namespace, but that enhancement requires changes to your network. The trusted CA bundle ConfigMap uses the default name of trusted-ca-bundle. You can change this name by providing it to the operator in an environment variable named TRUSTED_CA_BUNDLE. See Configuring the cluster-wide proxy in the Networking section of Red Hat OpenShift Container Platform for more information.

Note: Registration Agent and Work Agent on the managed cluster do not support proxy settings because they communicate with apiserver on the hub cluster by establishing an mTLS connection, which cannot pass through the proxy.

For the multicluster engine operator cluster networking requirements, see the following table:

Direction | Protocol | Connection | Port (if specified)
Outbound | - | Kubernetes API server of the provisioned managed cluster | 6443
Outbound from the OpenShift Container Platform managed cluster to the hub cluster | TCP | Communication between the Ironic Python Agent and the bare metal operator on the hub cluster | 6180, 6183, 6385, and 5050
Outbound from the hub cluster to the Ironic Python Agent on the managed cluster | TCP | Communication between the bare metal node where the Ironic Python Agent is running and the Ironic conductor service | 9999
Outbound and inbound | - | The WorkManager service route on the managed cluster | 443
Inbound | - | The Kubernetes API server of the multicluster engine for Kubernetes operator cluster from the managed cluster | 6443

Note: The managed cluster must be able to reach the hub cluster control plane node IP addresses.

1.3. Installing and upgrading multicluster engine operator

The multicluster engine operator is a software operator that enhances cluster fleet management. The multicluster engine operator supports Red Hat OpenShift Container Platform and Kubernetes cluster lifecycle management across clouds and data centers.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.

For full support information, see the multicluster engine operator Support matrix. For life cycle information, see Red Hat OpenShift Container Platform Life Cycle policy.

Important: If you are using Red Hat Advanced Cluster Management version 2.5 or later, then multicluster engine for Kubernetes operator is already installed on the cluster.

See the following documentation:

1.3.1. Installing while connected online

The multicluster engine operator is installed with Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that encompass the multicluster engine operator.

Required access: Cluster administrator

Important:

  • For the OpenShift Container Platform Dedicated environment, you must have cluster-admin permissions. By default, the dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment.
  • By default, the multicluster engine operator components are installed on worker nodes of your OpenShift Container Platform cluster without any additional configuration. You can install multicluster engine operator onto worker nodes by using the OpenShift Container Platform OperatorHub web console interface, or by using the OpenShift Container Platform CLI.
  • If you have configured your OpenShift Container Platform cluster with infrastructure nodes, you can install multicluster engine operator onto those infrastructure nodes by using the OpenShift Container Platform CLI with additional resource parameters. See the Installing multicluster engine on infrastructure nodes section for those details.
  • If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or multicluster engine for Kubernetes operator, you will need to configure an image pull secret. For information on how to configure an image pull secret and other advanced configurations, see options in the Advanced configuration section of this documentation.

1.3.1.1. Prerequisites

Before you install multicluster engine for Kubernetes operator, see the following requirements:

  • Your Red Hat OpenShift Container Platform cluster must have access to the multicluster engine operator in the OperatorHub catalog from the OpenShift Container Platform console.
  • You need access to catalog.redhat.com.
  • A supported version of OpenShift Container Platform must be deployed in your environment, and you must be logged in with the OpenShift Container Platform CLI. See the following install documentation:

  • Your OpenShift Container Platform command line interface (CLI) must be configured to run oc commands. See Getting started with the CLI for information about installing and configuring the OpenShift Container Platform CLI.
  • Your OpenShift Container Platform permissions must allow you to create a namespace.
  • You must have an Internet connection to access the dependencies for the operator.
  • To install in an OpenShift Container Platform Dedicated environment, see the following:

    • You must have the OpenShift Container Platform Dedicated environment configured and running.
    • You must have cluster-admin authority to the OpenShift Container Platform Dedicated environment where you are installing the engine.
  • If you plan to create managed clusters by using the Assisted Installer that is provided with Red Hat OpenShift Container Platform, see Preparing to install with the Assisted Installer topic in the OpenShift Container Platform documentation for the requirements.

1.3.1.2. Confirm your OpenShift Container Platform installation

You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working. For more information about installing OpenShift Container Platform, see the OpenShift Container Platform documentation.

  1. Verify that multicluster engine operator is not already installed on your OpenShift Container Platform cluster. The multicluster engine operator allows only one installation on each OpenShift Container Platform cluster. Continue with the following steps if there is no installation.
  2. To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console with the following command:

    kubectl -n openshift-console get route console

    See the following example output:

    console   console-openshift-console.apps.new-coral.purple-chesterfield.com          console   https   reencrypt/Redirect   None
  3. Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. See the following example of a URL: https://console-openshift-console.apps.new-coral.purple-chesterfield.com.

You can proceed to install multicluster engine operator.

1.3.1.3. Installing from the OperatorHub web console interface

Best practice: From the Administrator view in your OpenShift Container Platform navigation, install the OperatorHub web console interface that is provided with OpenShift Container Platform.

  1. Select Operators > OperatorHub to access the list of available operators, and select multicluster engine for Kubernetes operator.
  2. Click Install.
  3. On the Operator Installation page, select the options for your installation:

    • Namespace:

      • The multicluster engine operator engine must be installed in its own namespace, or project.
      • By default, the OperatorHub console installation process creates a namespace titled multicluster-engine. Best practice: Continue to use the multicluster-engine namespace if it is available.
      • If there is already a namespace named multicluster-engine, select a different namespace.
    • Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release, and establishes that the future errata updates within that release are obtained.
    • Approval strategy: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed.

      • Select Automatic, which is selected by default, to ensure any updates within that release are automatically applied.
      • Select Manual to receive a notification when an update is available. If you have concerns about when the updates are applied, this option might be the best practice for you.

    Note: To upgrade to the next minor release, you must return to the OperatorHub page and select a new channel for the more current release.

  4. Select Install to apply your changes and create the operator.
  5. See the following process to create the MultiClusterEngine custom resource.

    1. In the OpenShift Container Platform console navigation, select Installed Operators > multicluster engine for Kubernetes.
    2. Select the MultiCluster Engine tab.
    3. Select Create MultiClusterEngine.
    4. Update the default values in the YAML file. See options in the MultiClusterEngine advanced configuration section of the documentation.

      • The following example shows the default template that you can copy into the editor:
      apiVersion: multicluster.openshift.io/v1
      kind: MultiClusterEngine
      metadata:
        name: multiclusterengine
      spec: {}
  6. Select Create to initialize the custom resource. It can take up to 10 minutes for the multicluster engine operator to build and start.

    After the MultiClusterEngine resource is created, the status for the resource is Available on the MultiCluster Engine tab.

1.3.1.4. Installing from the OpenShift Container Platform CLI

  1. Create a namespace for the multicluster engine operator where the operator requirements are contained. Run the following command, where namespace is the name for your multicluster engine for Kubernetes operator namespace. The value for namespace might be referred to as Project in the OpenShift Container Platform environment:

    oc create namespace <namespace>
  2. Switch your project namespace to the one that you created. Replace namespace with the name of the multicluster engine for Kubernetes operator namespace that you created in step 1.

    oc project <namespace>
  3. Create a YAML file to configure an OperatorGroup resource. Each namespace can have only one operator group. Replace default with the name of your operator group. Replace namespace with the name of your project namespace. See the following example:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: <default>
      namespace: <namespace>
    spec:
      targetNamespaces:
      - <namespace>
  4. Run the following command to create the OperatorGroup resource. Replace operator-group with the name of the operator group YAML file that you created:

    oc apply -f <path-to-file>/<operator-group>.yaml
  5. Create a YAML file to configure an OpenShift Container Platform Subscription. Your file should look similar to the following example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: multicluster-engine
    spec:
      sourceNamespace: openshift-marketplace
      source: redhat-operators
      channel: stable-2.6
      installPlanApproval: Automatic
      name: multicluster-engine

    Note: For installing the multicluster engine for Kubernetes operator on infrastructure nodes, see the Operator Lifecycle Manager Subscription additional configuration section.

  6. Run the following command to create the OpenShift Container Platform Subscription. Replace subscription with the name of the subscription file that you created:

    oc apply -f <path-to-file>/<subscription>.yaml
  7. Create a YAML file to configure the MultiClusterEngine custom resource. Your default template should look similar to the following example:

    apiVersion: multicluster.openshift.io/v1
    kind: MultiClusterEngine
    metadata:
      name: multiclusterengine
    spec: {}

    Note: For installing the multicluster engine operator on infrastructure nodes, see the MultiClusterEngine custom resource additional configuration section.

  8. Run the following command to create the MultiClusterEngine custom resource. Replace custom-resource with the name of your custom resource file:

    oc apply -f <path-to-file>/<custom-resource>.yaml

    If this step fails with the following error, the resources are still being created and applied. Run the command again in a few minutes when the resources are created:

    error: unable to recognize "./mce.yaml": no matches for kind "MultiClusterEngine" in version "operator.multicluster-engine.io/v1"
  9. Run the following command to get the custom resource. It can take up to 10 minutes for the MultiClusterEngine custom resource status to display as Available in the status.phase field:

    oc get mce -o=jsonpath='{.items[0].status.phase}'
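
If you prefer to wait from the command line, the following optional sketch polls the same status field until it reports Available. The resource name multiclusterengine matches the example custom resource that you created earlier; adjust the interval to suit your environment:

until [ "$(oc get mce multiclusterengine -o=jsonpath='{.status.phase}')" = "Available" ]; do
  echo "Waiting for the MultiClusterEngine to report Available..."
  sleep 30
done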

If you are reinstalling the multicluster engine operator and the pods do not start, see Troubleshooting reinstallation failure for steps to work around this problem.

Notes:

  • A ServiceAccount with a ClusterRoleBinding automatically gives cluster administrator privileges to multicluster engine operator and to any user credentials with access to the namespace where you install multicluster engine operator.

1.3.1.5. Installing on infrastructure nodes

An OpenShift Container Platform cluster can be configured to contain infrastructure nodes for running approved management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running those management components.

After adding infrastructure nodes to your OpenShift Container Platform cluster, follow the Installing from the OpenShift Container Platform CLI instructions and add the following configurations to the Operator Lifecycle Manager Subscription and MultiClusterEngine custom resource.

1.3.1.5.1. Add infrastructure nodes to the OpenShift Container Platform cluster

Follow the procedures that are described in Creating infrastructure machine sets in the OpenShift Container Platform documentation. Infrastructure nodes are configured with a Kubernetes taint and label to keep non-management workloads from running on them.

To be compatible with the infrastructure node enablement provided by multicluster engine operator, ensure your infrastructure nodes have the following taint and label applied:

metadata:
  labels:
    node-role.kubernetes.io/infra: ""
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
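
If you are labeling and tainting existing nodes manually instead of through a machine set, commands similar to the following sketch apply the same label and taint. The <node-name> value is a placeholder for one of your nodes:

oc label node <node-name> node-role.kubernetes.io/infra=""
oc adm taint nodes <node-name> node-role.kubernetes.io/infra:NoSchedule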
1.3.1.5.2. Operator Lifecycle Manager Subscription additional configuration

Add the following additional configuration before applying the Operator Lifecycle Manager Subscription:

spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists
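
For reference, the following sketch shows how this fragment fits into the Subscription example from the Installing from the OpenShift Container Platform CLI procedure. The channel and names mirror that earlier example:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: stable-2.6
  installPlanApproval: Automatic
  name: multicluster-engine
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists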
1.3.1.5.3. MultiClusterEngine custom resource additional configuration

Add the following additional configuration before applying the MultiClusterEngine custom resource:

spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""

1.3.2. Install on disconnected networks

You might need to install the multicluster engine operator on Red Hat OpenShift Container Platform clusters that are not connected to the Internet. The procedure to install in a disconnected environment requires some of the same steps as the connected installation.

Important: You must install multicluster engine operator on a cluster that does not have Red Hat Advanced Cluster Management for Kubernetes earlier than 2.5 installed. The multicluster engine operator cannot co-exist with Red Hat Advanced Cluster Management for Kubernetes on versions earlier than 2.5 because they provide some of the same management components. It is recommended that you install multicluster engine operator on a cluster that has never previously had Red Hat Advanced Cluster Management installed. If you are using Red Hat Advanced Cluster Management for Kubernetes at version 2.5.0 or later, multicluster engine operator is already installed on the cluster with it.

You must download copies of the packages to access them during the installation, rather than accessing them directly from the network during the installation.

1.3.2.1. Prerequisites

You must meet the following requirements before you install the multicluster engine operator:

  • A supported OpenShift Container Platform version must be deployed in your environment, and you must be logged in with the command line interface (CLI).
  • You need access to catalog.redhat.com.

    Note: For managing bare metal clusters, you need a supported OpenShift Container Platform version.

    See the OpenShift Container Platform installation documentation.

  • Your Red Hat OpenShift Container Platform permissions must allow you to create a namespace.
  • You must have a workstation with Internet connection to download the dependencies for the operator.

1.3.2.2. Confirm your OpenShift Container Platform installation

  • You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working in your cluster. For information about OpenShift Container Platform, see OpenShift Container Platform documentation.
  • If you are connected during the installation, you can verify your setup by accessing the OpenShift Container Platform web console with the following command:

    oc -n openshift-console get route console

    See the following example output:

    console console-openshift-console.apps.new-name.purple-name.com
    console   https   reencrypt/Redirect     None

    The console URL in this example is https://console-openshift-console.apps.new-name.purple-name.com. Open the URL in your browser and check the result.

    If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform.

1.3.2.3. Installing in a disconnected environment

Important: You need to download the required images to a mirroring registry to install the operators in a disconnected environment. Without the download, you might receive ImagePullBackOff errors during your deployment.

Follow these steps to install the multicluster engine operator in a disconnected environment:

  1. Create a mirror registry. If you do not already have a mirror registry, create one by completing the procedure in the Disconnected installation mirroring topic of the Red Hat OpenShift Container Platform documentation.

    If you already have a mirror registry, you can configure and use your existing one.

  2. Note: For bare metal only, you need to provide the certificate information for the disconnected registry in your install-config.yaml file. To access the image in a protected disconnected registry, you must provide the certificate information so the multicluster engine operator can access the registry.

    1. Copy the certificate information from the registry.
    2. Open the install-config.yaml file in an editor.
    3. Find the entry for additionalTrustBundle: |.
    4. Add the certificate information after the additionalTrustBundle line. The resulting content should look similar to the following example:

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        certificate_content
        -----END CERTIFICATE-----
      sshKey: >-
  3. Important: Additional mirrors for disconnected image registries are needed if the following Governance policies are required:

    • Container Security Operator policy: Locate the images in the registry.redhat.io/quay source.
    • Compliance Operator policy: Locate the images in the registry.redhat.io/compliance source.
    • Gatekeeper Operator policy: Locate the images in the registry.redhat.io/gatekeeper source.

      See the following example of mirrors lists for all three operators:

        - mirrors:
          - <your_registry>/rhacm2
          source: registry.redhat.io/rhacm2
        - mirrors:
          - <your_registry>/quay
          source: registry.redhat.io/quay
        - mirrors:
          - <your_registry>/compliance
          source: registry.redhat.io/compliance
  4. Save the install-config.yaml file.
  5. Create a YAML file that contains the ImageContentSourcePolicy with the name mce-policy.yaml. Note: If you modify this on a running cluster, it causes a rolling restart of all nodes.

    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: mce-repo
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - mirror.registry.com:5000/multicluster-engine
        source: registry.redhat.io/multicluster-engine
  6. Apply the ImageContentSourcePolicy file by entering the following command:

    oc apply -f mce-policy.yaml
  7. Enable the disconnected Operator Lifecycle Manager Red Hat Operators and Community Operators.

    The multicluster engine operator is included in the Operator Lifecycle Manager Red Hat Operator catalog.

  8. Configure the disconnected Operator Lifecycle Manager for the Red Hat Operator catalog. Follow the steps in the Using Operator Lifecycle Manager on restricted networks topic of the Red Hat OpenShift Container Platform documentation.
  9. Continue to install the multicluster engine operator for Kubernetes from the Operator Lifecycle Manager catalog.

See Installing while connected online for the required steps.

1.3.3. Advanced configuration

The multicluster engine operator is installed using an operator that deploys all of the required components. The multicluster engine operator can be further configured during or after installation. Learn more about the advanced configuration options.

1.3.3.1. Deployed components

Add one or more of the following attributes to the MultiClusterEngine custom resource:

Table 1.3. List of the deployed components

  • assisted-service: Installs OpenShift Container Platform with minimal infrastructure prerequisites and comprehensive pre-flight validations. Enabled by default: True
  • cluster-lifecycle: Provides cluster management capabilities for OpenShift Container Platform and Kubernetes hub clusters. Enabled by default: True
  • cluster-manager: Manages various cluster-related operations within the cluster environment. Enabled by default: True
  • cluster-proxy-addon: Automates the installation of apiserver-network-proxy on both hub and managed clusters using a reverse proxy server. Enabled by default: True
  • console-mce: Enables the multicluster engine operator console plug-in. Enabled by default: True
  • discovery: Discovers and identifies new clusters within the OpenShift Cluster Manager. Enabled by default: True
  • hive: Provisions and performs initial configuration of OpenShift Container Platform clusters. Enabled by default: True
  • hypershift: Hosts OpenShift Container Platform control planes at scale with cost and time efficiency, and cross-cloud portability. Enabled by default: True
  • hypershift-local-hosting: Enables local hosting capabilities within the local cluster environment. Enabled by default: True
  • local-cluster: Enables the import and self-management of the local hub cluster where the multicluster engine operator is deployed. Enabled by default: True
  • managedserviceaccount: Synchronizes service accounts to managed clusters and collects tokens as secret resources back to the hub cluster. Enabled by default: False
  • server-foundation: Provides foundational services for server-side operations within the multicluster environment. Enabled by default: True

When you install multicluster engine operator on the cluster, not all of the listed components are enabled by default.

You can further configure multicluster engine operator during or after installation by adding one or more attributes to the MultiClusterEngine custom resource. Continue reading for information about the attributes that you can add.

1.3.3.2. Console and component configuration

The following example displays the spec.overrides default template that you can use to enable or disable the component:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: <name> 1
      enabled: true
  1. Replace name with the name of the component.

Alternatively, you can run the following command. Replace namespace with the name of your project and name with the name of the component:

oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"<name>","enabled":true}}]'
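
To confirm the result, you can inspect the current component overrides. This optional check uses the same resource name that you used in the patch command:

oc get MultiClusterEngine <multiclusterengine-name> -o=jsonpath='{.spec.overrides.components}'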

1.3.3.3. Local-cluster enablement

By default, the cluster that is running multicluster engine operator manages itself. To install multicluster engine operator without the cluster managing itself, specify the following values in the spec.overrides.components settings in the MultiClusterEngine section:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: local-cluster
      enabled: false
  • The name value identifies the hub cluster as a local-cluster.
  • The enabled setting specifies whether the feature is enabled or disabled. When the value is true, the hub cluster manages itself. When the value is false, the hub cluster does not manage itself.

A hub cluster that is managed by itself is designated as the local-cluster in the list of clusters.
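
For example, when self-management is left enabled, you can confirm that the hub cluster is listed as the local-cluster by running the following command:

oc get managedcluster local-cluster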

1.3.3.4. Custom image pull secret

If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or the multicluster engine operator, generate a secret that contains your OpenShift Container Platform pull secret information to access the entitled content from the distribution registry.

The secret requirements for OpenShift Container Platform clusters are automatically resolved by OpenShift Container Platform and multicluster engine for Kubernetes operator, so you do not have to create the secret if you are not importing other types of Kubernetes clusters to be managed.

Important: These secrets are namespace-specific, so make sure that you are in the namespace that you use for your engine.

  1. Download your OpenShift Container Platform pull secret file from cloud.redhat.com/openshift/install/pull-secret by selecting Download pull secret. Your OpenShift Container Platform pull secret is associated with your Red Hat Customer Portal ID, and is the same across all Kubernetes providers.
  2. Run the following command to create your secret:

    oc create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson
    • Replace secret with the name of the secret that you want to create.
    • Replace namespace with your project namespace, as the secrets are namespace-specific.
    • Replace path-to-pull-secret with the path to your OpenShift Container Platform pull secret that you downloaded.

The following example displays the spec.imagePullSecret template to use if you want to use a custom pull secret. Replace secret with the name of your pull secret:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  imagePullSecret: <secret>

1.3.3.5. Target namespace

The operands can be installed in a designated namespace by specifying a location in the MultiClusterEngine custom resource. This namespace is created upon application of the MultiClusterEngine custom resource.

Important: If no target namespace is specified, the operator will install to the multicluster-engine namespace and will set it in the MultiClusterEngine custom resource specification.

The following example displays the spec.targetNamespace template that you can use to specify a target namespace. Replace target with the name of your destination namespace. Note: The target namespace cannot be the default namespace:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  targetNamespace: <target>

1.3.3.6. availabilityConfig

The hub cluster has two availability settings: High and Basic. By default, the hub cluster has an availability of High, which gives hub cluster components a replicaCount of 2. This provides better support in cases of failover, but consumes more resources than the Basic availability, which gives components a replicaCount of 1.

Important: Set spec.availabilityConfig to Basic if you are using multicluster engine operator on a single-node OpenShift cluster.

The following example shows the spec.availabilityConfig template with Basic availability:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  availabilityConfig: "Basic"
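
If you want to change this setting on an existing installation, a merge patch similar to the following sketch can be used. The resource name multiclusterengine assumes the default example custom resource:

oc patch MultiClusterEngine multiclusterengine --type=merge -p '{"spec":{"availabilityConfig":"Basic"}}'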

1.3.3.7. nodeSelector

You can define a set of node selectors in the MultiClusterEngine to install to specific nodes on your cluster. The following example shows spec.nodeSelector to assign pods to nodes with the label node-role.kubernetes.io/infra:

spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""

1.3.3.8. tolerations

You can define a list of tolerations to allow the MultiClusterEngine to tolerate specific taints defined on the cluster. The following example shows a spec.tolerations that matches a node-role.kubernetes.io/infra taint:

spec:
  tolerations:
  - key: node-role.kubernetes.io/infra
    effect: NoSchedule
    operator: Exists

The previous infra-node toleration is set on pods by default, without specifying any tolerations in the configuration. Customizing the tolerations in the configuration replaces this default behavior.

1.3.3.9. ManagedServiceAccount add-on

The ManagedServiceAccount add-on allows you to create or delete a service account on a managed cluster. To install with this add-on enabled, include the following in the MultiClusterEngine specification in spec.overrides:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: managedserviceaccount
      enabled: true

The ManagedServiceAccount add-on can be enabled after creating MultiClusterEngine by editing the resource on the command line and setting the managedserviceaccount component to enabled: true. Alternatively, you can run the following command and replace <multiclusterengine-name> with the name of your MultiClusterEngine resource.

oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"managedserviceaccount","enabled":true}}]'

1.3.4. Uninstalling

When you uninstall multicluster engine for Kubernetes operator, you see two different levels of the process: A custom resource removal and a complete operator uninstall. It might take up to five minutes to complete the uninstall process.

  • The custom resource removal is the most basic type of uninstall that removes the custom resource of the MultiClusterEngine instance but leaves other required operator resources. This level of uninstall is helpful if you plan to reinstall using the same settings and components.
  • The second level is a more complete uninstall that removes most operator components, excluding components such as custom resource definitions. When you continue with this step, it removes all of the components and subscriptions that were not removed with the custom resource removal. After this uninstall, you must reinstall the operator before reinstalling the custom resource.

1.3.4.1. Prerequisite: Detach enabled services

Before you uninstall the multicluster engine for Kubernetes operator, you must detach all of the clusters that are managed by that engine. To avoid errors, detach all clusters that are still managed by the engine, then try to uninstall again.

  • If you have managed clusters attached, you might see the following message.

    Cannot delete MultiClusterEngine resource because ManagedCluster resource(s) exist

    For more information about detaching clusters, see the Removing a cluster from management section by selecting the information for your provider in Cluster creation introduction.
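
As a quick command-line reference, assuming that you are logged in to the hub cluster with the oc CLI, the following sketch lists the clusters that are still managed and detaches one of them by deleting its ManagedCluster resource:

# List the clusters that are still managed by the engine
oc get managedcluster

# Detach a cluster by deleting its ManagedCluster resource
oc delete managedcluster <managed-cluster-name>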

1.3.4.2. Removing resources by using commands

  1. If you have not already, ensure that your OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands.
  2. Change to your project namespace by entering the following command. Replace namespace with the name of your project namespace:

    oc project <namespace>
  3. Enter the following command to remove the MultiClusterEngine custom resource:

    oc delete multiclusterengine --all

    You can view the progress by entering the following command:

    oc get multiclusterengine -o yaml
  4. Enter the following commands to delete the multicluster-engine ClusterServiceVersion in the namespace where it is installed:

    oc get csv
    NAME                         DISPLAY                              VERSION   REPLACES   PHASE
    multicluster-engine.v2.0.0   multicluster engine for Kubernetes   2.0.0                Succeeded

    oc delete clusterserviceversion multicluster-engine.v2.0.0
    oc delete sub multicluster-engine

    The CSV version that is shown in this example might be different.

1.3.4.3. Deleting the components by using the console

When you use the Red Hat OpenShift Container Platform console to uninstall, you remove the operator. Complete the following steps to uninstall by using the console:

  1. In the OpenShift Container Platform console navigation, select Operators > Installed Operators > multicluster engine for Kubernetes.
  2. Remove the MultiClusterEngine custom resource.

    1. Select the tab for Multiclusterengine.
    2. Select the Options menu for the MultiClusterEngine custom resource.
    3. Select Delete MultiClusterEngine.
  3. Run the clean-up script according to the procedure in the following section.

    Tip: If you plan to reinstall the same multicluster engine for Kubernetes operator version, you can skip the rest of the steps in this procedure and reinstall the custom resource.

  4. Navigate to Installed Operators.
  5. Remove the multicluster engine for Kubernetes operator by selecting the Options menu and selecting Uninstall operator.

1.3.4.4. Troubleshooting an uninstall

If the multicluster engine custom resource is not being removed, remove any potential remaining artifacts by running the clean-up script.

  1. Copy the following script into a file:

    #!/bin/bash
    # Remove the admission API services that remain registered by the engine
    oc delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io
    # Remove the validating webhook configuration for the MultiClusterEngine resource
    oc delete validatingwebhookconfiguration multiclusterengines.multicluster.openshift.io
    # Remove any remaining MultiClusterEngine custom resources
    oc delete mce --all
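
  2. Run the script from a terminal where you are logged in to the hub cluster with the oc CLI. The file name in this example is an illustrative placeholder:

    bash ./mce-cleanup.sh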

See Disconnected installation mirroring for more information.

1.4. Red Hat Advanced Cluster Management integration

If you are using multicluster engine operator with Red Hat Advanced Cluster Management installed, you can access more multicluster management features, such as Observability and Policy.

For integrated capability, see the following requirements:

See the following procedures for multicluster engine operator and Red Hat Advanced Cluster Management multicluster management:

1.4.1. Discovering multicluster engine operator hosted clusters in Red Hat Advanced Cluster Management

If you have multicluster engine operator clusters that are hosting multiple hosted clusters, you can bring those hosted clusters to a Red Hat Advanced Cluster Management hub cluster to manage with Red Hat Advanced Cluster Management management components, such as Application lifecycle and Governance.

You can have those hosted clusters automatically discovered and imported as managed clusters.

Note: Since the hosted control planes run on the managed multicluster engine operator cluster nodes, the number of hosted control planes that the cluster can host is determined by the resource availability of managed multicluster engine operator cluster nodes, as well as the number of managed multicluster engine operator clusters. You can add more nodes or managed clusters to host more hosted control planes.

Required access: Cluster administrator

1.4.1.1. Prerequisites

  • You need one or more multicluster engine operator clusters.
  • You need a Red Hat Advanced Cluster Management cluster set as your hub cluster.
  • Install the clusteradm CLI by running the following command:

    curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

1.4.1.2. Configuring Red Hat Advanced Cluster Management to import multicluster engine operator clusters

multicluster engine operator has a local-cluster, which is a hub cluster that is managed. The following default add-ons are enabled for this local-cluster in the open-cluster-management-agent-addon namespace:

  • cluster-proxy
  • managed-serviceaccount
  • work-manager
1.4.1.2.1. Configuring add-ons

When your multicluster engine operator is imported into Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Management enables the same set of add-ons to manage the multicluster engine operator.

Install those add-ons in a different multicluster engine operator namespace so that the multicluster engine operator can self-manage with the local-cluster add-ons while Red Hat Advanced Cluster Management manages multicluster engine operator at the same time. Complete the following procedure:

  1. Log in to your Red Hat Advanced Cluster Management with the CLI.
  2. Create the addonDeploymentConfig resource to specify a different add-on installation namespace. See the following example where agentInstallNamespace points to open-cluster-management-agent-addon-discovery:

    apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: AddOnDeploymentConfig
    metadata:
      name: addon-ns-config
      namespace: multicluster-engine
    spec:
      agentInstallNamespace: open-cluster-management-agent-addon-discovery
  3. Run oc apply -f <filename>.yaml to apply the file.
  4. Update the existing ClusterManagementAddOn resources for the add-ons so that the add-ons are installed in the open-cluster-management-agent-addon-discovery namespace that is specified in the addonDeploymentConfig resource that you created. See the following example with open-cluster-management-global-set as the namespace:

    apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: ClusterManagementAddOn
    metadata:
      name: work-manager
    spec:
      addonMeta:
        displayName: work-manager
      installStrategy:
        placements:
        - name: global
          namespace: open-cluster-management-global-set
          rolloutStrategy:
            type: All
        type: Placements
    1. Add the addonDeploymentConfigs to the ClusterManagementAddOn. See the following example:

      apiVersion: addon.open-cluster-management.io/v1alpha1
      kind: ClusterManagementAddOn
      metadata:
        name: work-manager
      spec:
        addonMeta:
          displayName: work-manager
        installStrategy:
          placements:
          - name: global
            namespace: open-cluster-management-global-set
            rolloutStrategy:
              type: All
            configs:
            - group: addon.open-cluster-management.io
              name: addon-ns-config
              namespace: multicluster-engine
              resource: addondeploymentconfigs
          type: Placements
    2. Add the addonDeploymentConfig to the managed-serviceaccount. See the following example:

      apiVersion: addon.open-cluster-management.io/v1alpha1
      kind: ClusterManagementAddOn
      metadata:
        name: managed-serviceaccount
      spec:
        addonMeta:
          displayName: managed-serviceaccount
        installStrategy:
          placements:
          - name: global
            namespace: open-cluster-management-global-set
            rolloutStrategy:
              type: All
            configs:
            - group: addon.open-cluster-management.io
              name: addon-ns-config
              namespace: multicluster-engine
              resource: addondeploymentconfigs
          type: Placements
    3. Add the addondeploymentconfigs value to the ClusterManagementAddOn resource named cluster-proxy. See the following example:

      apiVersion: addon.open-cluster-management.io/v1alpha1
      kind: ClusterManagementAddOn
      metadata:
        name: cluster-proxy
      spec:
        addonMeta:
          displayName: cluster-proxy
        installStrategy:
          placements:
          - name: global
            namespace: open-cluster-management-global-set
            rolloutStrategy:
              type: All
            configs:
            - group: addon.open-cluster-management.io
              name: addon-ns-config
              namespace: multicluster-engine
              resource: addondeploymentconfigs
          type: Placements
  5. Run the following command to verify that the add-ons for the Red Hat Advanced Cluster Management local-cluster are re-installed into the namespace that you specified:

    oc get deployment -n open-cluster-management-agent-addon-discovery

    See the following output example:

    NAME                                 READY   UP-TO-DATE   AVAILABLE    AGE
    cluster-proxy-proxy-agent             1/1     1            1           24h
    klusterlet-addon-workmgr             1/1     1            1           24h
    managed-serviceaccount-addon-agent   1/1     1            1           24h
1.4.1.2.2. Creating a KlusterletConfig resource

multicluster engine operator has a local-cluster, which is a hub cluster that is managed. A resource named klusterlet is created for this local-cluster.

When your multicluster engine operator is imported into Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Management installs the klusterlet with the same name, klusterlet, to manage the multicluster engine operator. This conflicts with the multicluster engine operator local-cluster klusterlet.

You need to create a KlusterletConfig resource that is used by ManagedCluster resources to import multicluster engine operator clusters so that the klusterlet is installed with a different name to avoid the conflict. Complete the following procedure:

  1. Create a KlusterletConfig resource using the following example. When this KlusterletConfig resource is referenced in a managed cluster, the value in the spec.installMode.noOperator.postfix field is used as a suffix to the klusterlet name, such as klusterlet-mce-import:

    kind: KlusterletConfig
    apiVersion: config.open-cluster-management.io/v1alpha1
    metadata:
      name: mce-import-klusterlet-config
    spec:
      installMode:
        type: noOperator
        noOperator:
           postfix: mce-import
  2. Run oc apply -f <filename>.yaml to apply the file.
1.4.1.2.3. Configuring backup and restore

Since you installed Red Hat Advanced Cluster Management, you can also use the Backup and restore feature.

If the hub cluster is restored in a disaster recovery scenario, the imported multicluster engine operator clusters and hosted clusters are imported to the new Red Hat Advanced Cluster Management hub cluster.

In this scenario, you need to restore the previous configurations as part of Red Hat Advanced Cluster Management hub cluster restore.

Add the backup=true label to enable backup. See the following steps for each add-on:

  • For your addon-ns-config, run the following command:

    oc label addondeploymentconfig addon-ns-config -n multicluster-engine cluster.open-cluster-management.io/backup=true
  • For your hypershift-addon-deploy-config, run the following command:

    oc label addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine cluster.open-cluster-management.io/backup=true
  • For your work-manager, run the following command:

    oc label clustermanagementaddon work-manager cluster.open-cluster-management.io/backup=true
  • For your cluster-proxy, run the following command:

    oc label clustermanagementaddon cluster-proxy cluster.open-cluster-management.io/backup=true
  • For your managed-serviceaccount, run the following command:

    oc label clustermanagementaddon managed-serviceaccount cluster.open-cluster-management.io/backup=true
  • For your mce-import-klusterlet-config, run the following command:

    oc label KlusterletConfig mce-import-klusterlet-config cluster.open-cluster-management.io/backup=true
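
To spot-check that a backup label was applied, you can list the labels on one of the resources. See the following example:

oc get clustermanagementaddon work-manager --show-labels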

1.4.1.3. Importing multicluster engine operator manually

To manually import a multicluster engine operator cluster from your Red Hat Advanced Cluster Management cluster, complete the following procedure:

  1. From your Red Hat Advanced Cluster Management cluster, create a ManagedCluster resource manually to import a multicluster engine operator cluster. See the following file example:

    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      annotations:
        agent.open-cluster-management.io/klusterlet-config: mce-import-klusterlet-config 1
      name: mce-a 2
    spec:
      hubAcceptsClient: true
      leaseDurationSeconds: 60
    1
    The mce-import-klusterlet-config annotation references the KlusterletConfig resource that you created in the previous step to install the Red Hat Advanced Cluster Management klusterlet with a different name in multicluster engine operator.
    2
    The example imports a multicluster engine operator managed cluster named mce-a.
  2. Run oc apply -f <filename>.yaml to apply the file.
  3. Create the auto-import-secret secret that references the kubeconfig of the multicluster engine operator cluster. A minimal example secret is sketched after this procedure. Go to Importing a cluster by using the auto import secret to add the auto import secret to complete the multicluster engine operator auto-import process.

    After you create the auto import secret in the multicluster engine operator managed cluster namespace in the Red Hat Advanced Cluster Management cluster, the managed cluster is registered.

  4. Run the following command to get the status:

    oc get managedcluster

    See the following example output with the status and example URLs of managed clusters:

    NAME           HUB ACCEPTED   MANAGED CLUSTER URLS            JOINED   AVAILABLE   AGE
    local-cluster  true           https://<api.acm-hub.com:port>  True     True        44h
    mce-a          true           https://<api.mce-a.com:port>    True     True        27s
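
The following is a minimal sketch of what the auto import secret can look like for the mce-a example, created in the mce-a managed cluster namespace on the Red Hat Advanced Cluster Management hub cluster. See the linked Importing a cluster by using the auto import secret topic for the authoritative format and for alternatives, such as providing a token and server instead of a kubeconfig:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: auto-import-secret
  namespace: mce-a
stringData:
  autoImportRetry: "2"
  kubeconfig: |
    <kubeconfig content of the multicluster engine operator cluster>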

Important: Do not enable any other Red Hat Advanced Cluster Management add-ons for the imported multicluster engine operator.

1.4.1.4. Discovering hosted clusters

After all your multicluster engine operator clusters are imported into Red Hat Advanced Cluster Management, you need to enable the hypershift-addon for those managed multicluster engine operator clusters to discover the hosted clusters.

Default add-ons are installed into a different namespace in the previous procedures. Similarly, you install the hypershift-addon into a different namespace in multicluster engine operator so that the add-on agent for the multicluster engine operator local-cluster and the agent for Red Hat Advanced Cluster Management can both run in multicluster engine operator.

Important: For all the following commands, replace <managed-cluster-names> with comma-separated managed cluster names for multicluster engine operator.

  1. Run the following command to set the agentInstallNamespace namespace of the add-on to open-cluster-management-agent-addon-discovery:

    oc patch addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine --type=merge -p '{"spec":{"agentInstallNamespace":"open-cluster-management-agent-addon-discovery"}}'
  2. Run the following command to disable metrics and to disable the HyperShift operator management:

    oc patch addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine --type=merge -p '{"spec":{"customizedVariables":[{"name":"disableMetrics","value": "true"},{"name":"disableHOManagement","value": "true"}]}}'
  3. Run the following command to enable the hypershift-addon for multicluster engine operator:

    clusteradm addon enable --names hypershift-addon --clusters <managed-cluster-names>
  4. You can get the multicluster engine operator managed cluster names by running the following command in Red Hat Advanced Cluster Management:

    oc get managedcluster
  5. Log into multicluster engine operator clusters and verify that the hypershift-addon is installed in the namespace that you specified. Run the following command:

    oc get deployment -n open-cluster-management-agent-addon-discovery

    See the following example output that lists the add-ons:

    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    cluster-proxy-proxy-agent            1/1     1            1           24h
    klusterlet-addon-workmgr            1/1     1            1           24h
    hypershift-addon-agent              1/1     1            1           24h
    managed-serviceaccount-addon-agent  1/1     1            1           24h

Red Hat Advanced Cluster Management deploys the hypershift-addon, which is the discovery agent that discovers hosted clusters from multicluster engine operator. The agent creates the corresponding DiscoveredCluster custom resource in the multicluster engine operator managed cluster namespace in the Red Hat Advanced Cluster Management hub cluster when the hosted cluster kube-apiserver becomes available.

You can view your discovered clusters in the console.

  1. Log into hub cluster console and navigate to All Clusters > Infrastructure > Clusters.
  2. Find the Discovered clusters tab to view all discovered hosted clusters from multicluster engine operator with type MultiClusterEngineHCP.

Next, visit Automating import for discovered hosted clusters to learn how to automatically import clusters.

1.4.2. Automating import for discovered hosted clusters

Automate the import of hosted clusters by using the DiscoveredCluster resource for faster cluster management, without manually importing individual clusters.

When you automatically import a discovered hosted cluster into Red Hat Advanced Cluster Management, all Red Hat Advanced Cluster Management add-ons are enabled so that you can start managing the hosted clusters with the available management tools.

The hosted cluster is also auto-imported into multicluster engine operator. Through the multicluster engine operator console, you can manage the hosted cluster lifecycle. However, you cannot manage the hosted cluster lifecycle from the Red Hat Advanced Cluster Management console.

Required access: Cluster administrator

1.4.2.1. Prerequisites

  • You need Red Hat Advanced Cluster Management installed. See the Red Hat Advanced Cluster Management Installing and upgrading documentation.
  • You need to learn about Policies. See the introduction to Governance in the Red Hat Advanced Cluster Management documentation.

1.4.2.2. Configuring settings for automatic import

Discovered hosted clusters from managed multicluster engine operator clusters are represented in DiscoveredCluster custom resources, which are located in the managed multicluster engine operator cluster namespace in Red Hat Advanced Cluster Management. See the following DiscoveredCluster resource and namespace example:

apiVersion: discovery.open-cluster-management.io/v1
kind: DiscoveredCluster
metadata:
  creationTimestamp: "2024-05-30T23:05:39Z"
  generation: 1
  labels:
    hypershift.open-cluster-management.io/hc-name: hosted-cluster-1
    hypershift.open-cluster-management.io/hc-namespace: clusters
  name: hosted-cluster-1
  namespace: mce-1
  resourceVersion: "1740725"
  uid: b4c36dca-a0c4-49f9-9673-f561e601d837
spec:
  apiUrl: https://a43e6fe6dcef244f8b72c30426fb6ae3-ea3fec7b113c88da.elb.us-west-1.amazonaws.com:6443
  cloudProvider: aws
  creationTimestamp: "2024-05-30T23:02:45Z"
  credential: {}
  displayName: mce-1-hosted-cluster-1
  importAsManagedCluster: false
  isManagedCluster: false
  name: hosted-cluster-1
  openshiftVersion: 0.0.0
  status: Active
  type: MultiClusterEngineHCP

These discovered hosted clusters are not automatically imported into Red Hat Advanced Cluster Management until the spec.importAsManagedCluster field is set to true. Learn how to use a Red Hat Advanced Cluster Management policy to automatically set this field to true for all DiscoveredCluster resources of type MultiClusterEngineHCP so that discovered hosted clusters are automatically imported into Red Hat Advanced Cluster Management.

Configure your Policy to import all your discovered hosted clusters automatically. Log in to your hub cluster from the CLI to complete the following procedure:

  1. Create a YAML file for your policy and edit the configuration that is referenced in the following example:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-mce-hcp-autoimport
      namespace: open-cluster-management-global-set
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/description: Discovered clusters that are of
          type MultiClusterEngineHCP can be automatically imported into ACM as managed clusters.
          This policy configures those discovered clusters so they are automatically imported.
          Fine tuning which MultiClusterEngineHCP clusters are automatically imported
          can be done by configuring filters in the ConfigMap or by adding an annotation to the discovered cluster.
    spec:
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: mce-hcp-autoimport-config
            spec:
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: v1
                    kind: ConfigMap
                    metadata:
                      name: discovery-config
                      namespace: open-cluster-management-global-set
                    data:
                      rosa-filter: ""
              remediationAction: enforce 1
              severity: low
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-mce-hcp-autoimport
            spec:
              remediationAction: enforce
              severity: low
              object-templates-raw: |
                {{- /* find the MultiClusterEngineHCP DiscoveredClusters */ -}}
                {{- range $dc := (lookup "discovery.open-cluster-management.io/v1" "DiscoveredCluster" "" "").items }}
                  {{- /* Check for the flag that indicates the import should be skipped */ -}}
                  {{- $skip := "false" -}}
                  {{- range $key, $value := $dc.metadata.annotations }}
                    {{- if and (eq $key "discovery.open-cluster-management.io/previously-auto-imported")
                               (eq $value "true") }}
                      {{- $skip = "true" }}
                    {{- end }}
                  {{- end }}
                  {{- /* if the type is MultiClusterEngineHCP and the status is Active */ -}}
                  {{- if and (eq $dc.spec.status "Active")
                             (contains (fromConfigMap "open-cluster-management-global-set" "discovery-config" "mce-hcp-filter") $dc.spec.displayName)
                             (eq $dc.spec.type "MultiClusterEngineHCP")
                             (eq $skip "false") }}
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: discovery.open-cluster-management.io/v1
                    kind: DiscoveredCluster
                    metadata:
                      name: {{ $dc.metadata.name }}
                      namespace: {{ $dc.metadata.namespace }}
                    spec:
                      importAsManagedCluster: true 2
                  {{- end }}
                {{- end }}
    1
    To enable automatic import, change the spec.remediationAction to enforce.
    2
    To enable automatic import, change spec.importAsManagedCluster to true.
  2. Run oc apply -f <filename>.yaml -n <namespace> to apply the file.

1.4.2.3. Creating the placement definition

You need to create a placement definition that specifies the managed cluster for the policy deployment. Complete the following procedure:

  1. Create the Placement definition that selects only the local-cluster, which is a hub cluster that is managed. Use the following YAML sample:

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: policy-mce-hcp-autoimport-placement
      namespace: open-cluster-management-global-set
    spec:
      tolerations:
        - key: cluster.open-cluster-management.io/unreachable
          operator: Exists
        - key: cluster.open-cluster-management.io/unavailable
          operator: Exists
      clusterSets:
        - global
      predicates:
        - requiredClusterSelector:
            labelSelector:
              matchExpressions:
                - key: local-cluster
                  operator: In
                  values:
                    - "true"
  2. Run oc apply -f placement.yaml -n <namespace>, where namespace matches the namespace that you used for the policy that you previously created.

1.4.2.4. Binding the import policy to a placement definition

After you create the policy and the placement, you need to connect the two resources. Complete the following steps:

  1. Connect the resources by using a PlacementBinding resource. See the following example where placementRef points to the Placement that you created, and subjects points to the Policy that you created:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: policy-mce-hcp-autoimport-placement-binding
      namespace: open-cluster-management-global-set
    placementRef:
      name: policy-mce-hcp-autoimport-placement
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
    subjects:
      - name: policy-mce-hcp-autoimport
        apiGroup: policy.open-cluster-management.io
        kind: Policy
  2. To verify, run the following command:

    oc get policy policy-mce-hcp-autoimport -n <namespace>

Important: You can detach a hosted cluster from Red Hat Advanced Cluster Management by using the Detach option in the Red Hat Advanced Cluster Management console, or by removing the corresponding ManagedCluster custom resource from the command line.

For best results, detach the managed hosted cluster before destroying the hosted cluster.

When a discovered cluster is detached, the following annotation is added to the DiscoveredCluster resource to prevent the policy from importing the discovered cluster again.

  annotations:
    discovery.open-cluster-management.io/previously-auto-imported: "true"

If you want the detached discovered cluster to be reimported, remove this annotation.
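
For example, you can remove the annotation from the command line by appending a minus sign to the annotation key. Replace the placeholders with the name and namespace of your DiscoveredCluster resource:

oc annotate discoveredcluster <discovered-cluster-name> -n <namespace> discovery.open-cluster-management.io/previously-auto-imported-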

1.4.3. Automating import for discovered OpenShift Service on AWS clusters

Automate the import of OpenShift Service on AWS clusters by using Red Hat Advanced Cluster Management policy enforcement for faster cluster management, without manually importing individual clusters.

Required access: Cluster administrator

1.4.3.1. Prerequisites

  • You need Red Hat Advanced Cluster Management installed. See the Red Hat Advanced Cluster Management Installing and upgrading documentation.
  • You need to learn about Policies. See the introduction to Governance in the Red Hat Advanced Cluster Management documentation.

1.4.3.2. Creating the automatic import policy

The following policy and procedure are an example of how to import all of your discovered OpenShift Service on AWS clusters automatically.

Log in to your hub cluster from the CLI to complete the following procedure:

  1. Create a YAML file with the following example and apply the changes that are referenced:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-rosa-autoimport
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/description: OpenShift Service on AWS discovered clusters can be automatically imported into
          Red Hat Advanced Cluster Management as managed clusters with this policy. You can select and configure which of those discovered clusters are imported. Configure filters or add an annotation if you do not want all of your OpenShift Service on AWS clusters to be automatically imported.
    spec:
      remediationAction: inform 1
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: rosa-autoimport-config
            spec:
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: v1
                    kind: ConfigMap
                    metadata:
                      name: discovery-config
                      namespace: open-cluster-management-global-set
                    data:
                      rosa-filter: "" 2
              remediationAction: enforce
              severity: low
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-rosa-autoimport
            spec:
              remediationAction: enforce
              severity: low
              object-templates-raw: |
                {{- /* find the ROSA DiscoveredClusters */ -}}
                {{- range $dc := (lookup "discovery.open-cluster-management.io/v1" "DiscoveredCluster" "" "").items }}
                  {{- /* Check for the flag that indicates the import should be skipped */ -}}
                  {{- $skip := "false" -}}
                  {{- range $key, $value := $dc.metadata.annotations }}
                    {{- if and (eq $key "discovery.open-cluster-management.io/previously-auto-imported")
                               (eq $value "true") }}
                      {{- $skip = "true" }}
                    {{- end }}
                  {{- end }}
                  {{- /* if the type is ROSA and the status is Active */ -}}
                  {{- if and (eq $dc.spec.status "Active")
                             (contains (fromConfigMap "open-cluster-management-global-set" "discovery-config" "rosa-filter") $dc.spec.displayName)
                             (eq $dc.spec.type "ROSA")
                             (eq $skip "false") }}
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: discovery.open-cluster-management.io/v1
                    kind: DiscoveredCluster
                    metadata:
                      name: {{ $dc.metadata.name }}
                      namespace: {{ $dc.metadata.namespace }}
                    spec:
                      importAsManagedCluster: true
                  {{- end }}
                {{- end }}
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-rosa-managedcluster-status
            spec:
              remediationAction: enforce
              severity: low
              object-templates-raw: |
                {{- /* Use the same DiscoveredCluster list to check ManagedCluster status */ -}}
                {{- range $dc := (lookup "discovery.open-cluster-management.io/v1" "DiscoveredCluster" "" "").items }}
                  {{- /* Check for the flag that indicates the import should be skipped */ -}}
                  {{- $skip := "false" -}}
                  {{- range $key, $value := $dc.metadata.annotations }}
                    {{- if and (eq $key "discovery.open-cluster-management.io/previously-auto-imported")
                               (eq $value "true") }}
                      {{- $skip = "true" }}
                    {{- end }}
                  {{- end }}
                  {{- /* if the type is ROSA and the status is Active */ -}}
                  {{- if and (eq $dc.spec.status "Active")
                             (contains (fromConfigMap "open-cluster-management-global-set" "discovery-config" "rosa-filter") $dc.spec.displayName)
                             (eq $dc.spec.type "ROSA")
                             (eq $skip "false") }}
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: cluster.open-cluster-management.io/v1
                    kind: ManagedCluster
                    metadata:
                      name: {{ $dc.spec.displayName }}
                      namespace: {{ $dc.spec.displayName }}
                    status:
                      conditions:
                        - type: ManagedClusterConditionAvailable
                          status: "True"
                  {{- end }}
                {{- end }}
    1
    To enable automatic import, change the spec.remediationAction to enforce.
    2
    Optional: Specify a value to select a subset of the matching OpenShift Service on AWS clusters, based on the discovered cluster names. The rosa-filter value is empty by default, so an empty filter does not restrict any cluster names. See the populated example after these steps.
  2. Run oc apply -f <filename>.yaml -n <namespace> to apply the file.
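
For reference, the following is a minimal sketch of the discovery-config ConfigMap with a populated rosa-filter value. The prod- substring is only an example; any substring of the discovered cluster display names works:

apiVersion: v1
kind: ConfigMap
metadata:
  name: discovery-config
  namespace: open-cluster-management-global-set
data:
  rosa-filter: "prod-" # example: only discovered ROSA clusters whose names contain "prod-" are imported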

1.4.3.3. Creating the placement definition

You need to create a placement definition that specifies the managed cluster for the policy deployment.

  1. Create the placement definition that selects only the local-cluster, which is a hub cluster that is managed. Use the following YAML sample:

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-openshift-plus-hub
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - key: name
              operator: In
              values:
              - local-cluster
  2. Run oc apply -f placement.yaml -n <namespace>, where namespace matches the namespace that you used for the policy that you previously created.

1.4.3.4. Binding the import policy to a placement definition

After you create the policy and the placement, you need to connect the two resources.

  1. Connect the resources by using a PlacementBinding. See the following example where placementRef points to the Placement that you created, and subjects points to the Policy that you created:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-rosa-autoimport
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: placement-openshift-plus-hub
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: policy-rosa-autoimport
  2. To verify, run the following command:

    oc get policy policy-rosa-autoimport -n <namespace>

1.4.4. Observability integration

With the Red Hat Advanced Cluster Management Observability feature, you can view the health and utilization of clusters across your fleet. To use the feature, install Red Hat Advanced Cluster Management and enable Observability.

1.4.4.1. Observing hosted control planes

After you enable the multicluster-observability pod, you can use Red Hat Advanced Cluster Management Observability Grafana dashboards to view the following information about your hosted control planes:

  • Use the ACM > Hosted Control Planes Overview dashboard to see cluster capacity estimates for hosting hosted control planes, the related cluster resources, and the list and status of existing hosted control planes.
  • Use the ACM > Resources > Hosted Control Plane dashboard, which you can access from the Overview page, to see the resource utilization of the selected hosted control plane.

To enable the service, see Observability service.
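
If you use Red Hat Advanced Cluster Management, enabling the service typically involves creating a MultiClusterObservability resource on the hub cluster. The following is a minimal sketch, assuming that the Observability components are installed and that an object storage secret named thanos-object-storage with a thanos.yaml key exists; adjust the names to match your environment:

apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {} # enable the add-on on managed clusters with default settings
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage # example secret that contains the object storage configuration
      key: thanos.yaml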

1.5. Managing credentials

A credential is required to create and manage a Red Hat OpenShift Container Platform cluster on a cloud service provider with multicluster engine operator. The credential stores the access information for a cloud provider. Each provider account requires its own credential, as does each domain on a single provider.

You can create and manage your cluster credentials. Credentials are stored as Kubernetes secrets. Secrets are copied to the namespace of a managed cluster so that the controllers for the managed cluster can access the secrets. When a credential is updated, the copies of the secret are automatically updated in the managed cluster namespaces.

Note: Changes to the pull secret, SSH keys, or base domain of the cloud provider credentials are not reflected for existing managed clusters, as they have already been provisioned using the original credentials.

Required access: Edit

1.5.1. Creating a credential for Amazon Web Services

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).

Required access: Edit

Note: This procedure must be done before you can create a cluster with multicluster engine operator.

1.5.1.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on Amazon Web Services (AWS)
  • AWS login credentials, which include access key ID and secret access key. See Understanding and getting your security credentials.
  • Account permissions that allow installing clusters on AWS. See Configuring an AWS account for instructions on how to configure an AWS account.

1.5.1.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your AWS access key ID for your AWS account. See Log in to AWS to find your ID.
  2. Provide the contents for your new AWS Secret Access Key.
  3. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  4. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  5. Add your SSH private key and SSH public key, which allow you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Amazon Web Services or Creating a cluster on Amazon Web Services GovCloud.

You can edit your credential in the console. If the cluster was created by using this provider connection, then the <cluster-name>-aws-creds secret in the <cluster-namespace> namespace is updated with the new credentials.

Note: Updating credentials does not work for cluster pool claimed clusters.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.1.2.1. Creating an S3 secret

To create an Amazon Simple Storage Service (S3) secret, complete the following task from the console. A sketch of the resulting secret follows these steps:

  1. Click Add credential > AWS > S3 Bucket. If you select For Hosted Control Plane, the name and namespace are provided for you.
  2. Enter information for the following fields that are provided:

    • bucket name: Add the name of the S3 bucket.
    • aws_access_key_id: Add your AWS access key ID for your AWS account. Log in to AWS to find your ID.
    • aws_secret_access_key: Provide the contents for your new AWS Secret Access Key.
    • Region: Enter your AWS region.
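
For reference, the console stores these values in a secret. The following is a hedged sketch of that secret; the hypershift-operator-oidc-provider-s3-credentials name and local-cluster namespace match the labeling example in the next section, and the bucket, region, and credentials key names are assumptions based on the HyperShift OIDC S3 secret convention:

apiVersion: v1
kind: Secret
metadata:
  name: hypershift-operator-oidc-provider-s3-credentials
  namespace: local-cluster
type: Opaque
stringData:
  bucket: <s3-bucket-name>
  region: <aws-region>
  credentials: |
    [default]
    aws_access_key_id = <aws-access-key-id>
    aws_secret_access_key = <aws-secret-access-key>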

1.5.1.3. Creating an opaque secret by using the API

To create an opaque secret for Amazon Web Services by using the API, apply YAML content in the YAML preview window that is similar to the following example:

apiVersion: v1
kind: Secret
metadata:
    name: <managed-cluster-name>-aws-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0)
    aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0)

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
  • Add labels to your credentials to view your secret in the console. For example, the following commands append the type=awss3 and credentials labels to the AWS S3 Bucket secret that was created with --from-file=…:
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/type=awss3"
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/credentials="

1.5.1.4. Additional resources

1.5.2. Creating a credential for Microsoft Azure

You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government.

Required access: Edit

Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.

1.5.2.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster.
  • Internet access for your multicluster engine operator hub cluster so that it can create the Kubernetes cluster on Azure.
  • Azure login credentials, which include your Base Domain Resource Group and Azure Service Principal JSON. See Microsoft Azure portal to get your login credentials.
  • Account permissions that allow installing clusters on Azure. See How to configure Cloud Services and Configuring an Azure account for more information.

1.5.2.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

  1. Optional: Add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential.
  2. Select whether the environment for your cluster is AzurePublicCloud or AzureUSGovernmentCloud. The settings are different for the Azure Government environment, so ensure that this is set correctly.
  3. Add your Base domain resource group name for your Azure account. This entry is the resource name that you created with your Azure account. You can find your Base Domain Resource Group Name by selecting Home > DNS Zones in the Azure interface. See Create an Azure service principal with the Azure CLI to find your base domain resource group name.
  4. Provide the contents for your Client ID. This value is generated as the appId property when you create a service principal with the following command:

    az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>

    Replace service_principal with the name of your service principal and subscription_path with the path of your subscription.

  5. Add your Client Secret. This value is generated as the password property when you create a service principal with the following command:

    az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>

    Replace service_principal with the name of your service principal and subscription_path with the path of your subscription. The Client ID, Client Secret, Subscription ID, and Tenant ID values are collected in the service principal JSON file that is sketched after these steps.

  6. Add your Subscription ID. This value is the id property in the output of the following command:

    az account show
  7. Add your Tenant ID. This value is the tenantId property in the output of the following command:

    az account show
  8. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  9. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  10. Add your SSH private key and SSH public key to use to connect to the cluster. You can use an existing key pair, or create a new pair using a key generation program.
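
The Client ID, Client Secret, Subscription ID, and Tenant ID from the previous steps are the fields of the Azure service principal JSON file, which the opaque secret example later in this section references as ${AZURE_CRED_JSON}. The following is a hedged sketch of that file with placeholder values:

{
  "subscriptionId": "<subscription-id>",
  "clientId": "<client-id>",
  "clientSecret": "<client-secret>",
  "tenantId": "<tenant-id>"
}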

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Microsoft Azure.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.2.3. Creating an opaque secret by using the API

To create an opaque secret for Microsoft Azure by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

apiVersion: v1
kind: Secret
metadata:
    name: <managed-cluster-name>-azure-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    baseDomainResourceGroupName: $(echo -n "${azure_resource_group_name}" | base64 -w0)
    osServicePrincipal.json: $(base64 -w0 "${AZURE_CRED_JSON}")

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.5.2.4. Additional resources

1.5.3. Creating a credential for Google Cloud Platform

You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP).

Required access: Edit

Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.

1.5.3.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on GCP
  • GCP login credentials, which include your Google Cloud Platform project ID and Google Cloud Platform service account JSON key. See Creating and managing projects.
  • Account permissions that allow installing clusters on GCP. See Configuring a GCP project for instructions on how to configure an account.

1.5.3.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, for both convenience and security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your Google Cloud Platform project ID for your GCP account. See Log in to GCP to retrieve your settings.
  2. Add your Google Cloud Platform service account JSON key. See the Create service accounts documentation to create your service account JSON key. Follow the steps for the GCP console.
  3. Provide the contents for your new Google Cloud Platform service account JSON key.
  4. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  5. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  6. Add your SSH private key and SSH public key so you can access the cluster. You can use an existing key pair, or create a new pair using a key generation program.

You can use this connection when you create a cluster by completing the steps in Creating a cluster on Google Cloud Platform.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.3.3. Creating an opaque secret by using the API

To create an opaque secret for Google Cloud Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

apiVersion: v1
kind: Secret
metadata:
    name: <managed-cluster-name>-gcp-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    osServiceAccount.json: $(base64 -w0 "${GCP_CRED_JSON}")

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.5.3.4. Additional resources

Return to Creating a credential for Google Cloud Platform.

1.5.4. Creating a credential for VMware vSphere

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on VMware vSphere.

Required access: Edit

1.5.4.1. Prerequisites

You must have the following prerequisites before you create a credential:

  • You must create a credential for VMware vSphere before you can create a cluster with multicluster engine operator.
  • A deployed hub cluster on a supported OpenShift Container Platform version.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on VMware vSphere.
  • VMware vSphere login credentials and vCenter requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on vSphere with customizations. These credentials include the following information:

    • vCenter account privileges.
    • Cluster resources.
    • DHCP available.
    • ESXi hosts have time synchronized (for example, NTP).

1.5.4.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your VMware vCenter server fully-qualified host name or IP address. The value must be defined in the vCenter server root CA certificate. If possible, use the fully-qualified host name.
  2. Add your VMware vCenter username.
  3. Add your VMware vCenter password.
  4. Add your VMware vCenter root CA certificate.

    1. Download the download.zip package, which contains the certificate, from your VMware vCenter server at https://<vCenter_address>/certs/download.zip. Replace vCenter_address with the address of your vCenter server.
    2. Unpackage the download.zip.
    3. Use the certificates from the certs/<platform> directory that have a .0 extension.

      Tip: You can use the ls certs/<platform> command to list all of the available certificates for your platform.

      Replace <platform> with the abbreviation for your platform: lin, mac, or win.

      For example: certs/lin/3a343545.0

      Best practice: Concatenate multiple certificates with a .0 extension by running the cat certs/lin/*.0 > ca.crt command.

    4. Add your VMware vSphere cluster name.
    5. Add your VMware vSphere datacenter.
    6. Add your VMware vSphere default datastore.
    7. Add your VMware vSphere disk type.
    8. Add your VMware vSphere folder.
    9. Add your VMware vSphere resource pool.
  5. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:

    • Cluster OS image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
    • Image content source: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.

      The path creates an image content source policy mapping in the install-config.yaml file to the Red Hat OpenShift Container Platform release images. As an example, a disconnected registry path produces imageContentSource content similar to the following:

      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release-nightly
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    • Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.

      Note: If you are deploying managed clusters from a hub cluster that is in a disconnected environment, and you want them to be automatically imported after installation, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example, and a sketch of where these entries belong in install-config.yaml follows these steps:

      - mirrors:
        - registry.example.com:5000/rhacm2
        source: registry.redhat.io/rhacm2
  6. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  7. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  8. Add your SSH private key and SSH public key, which allow you to connect to the cluster.

    You can use an existing key pair, or create a new one with a key generation program.
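
The imageContentSource entries that you provide are written to the install-config.yaml file that is used to provision the cluster. The following is a minimal sketch that shows where the mirror entries belong; the baseDomain and cluster name are placeholders, and only the registry entries from the previous examples are shown:

apiVersion: v1
baseDomain: example.com
metadata:
  name: <cluster-name>
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/rhacm2
  source: registry.redhat.io/rhacm2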

You can create a cluster that uses this credential by completing the steps in Creating a cluster on VMware vSphere.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.4.3. Creating an opaque secret by using the API

To create an opaque secret for VMware vSphere by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

apiVersion: v1
kind: Secret
metadata:
    name: <managed-cluster-name>-vsphere-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    username: $(echo -n "${VMW_USERNAME}" | base64 -w0)
    password: $(echo -n "${VMW_PASSWORD}" | base64 -w0)

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.5.4.4. Additional resources

1.5.5. Creating a credential for Red Hat OpenStack

You need a credential to use multicluster engine operator console to deploy and manage a supported Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform.

Note: You must create a credential for Red Hat OpenStack Platform before you can create a cluster with multicluster engine operator.

1.5.5.1. Prerequisites

You must have the following prerequisites before you create a credential:

  • A deployed hub cluster on a supported OpenShift Container Platform version.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on Red Hat OpenStack Platform.
  • Red Hat OpenStack Platform login credentials and Red Hat OpenStack Platform requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on OpenStack with customizations.
  • Download or create a clouds.yaml file for accessing the Red Hat OpenStack Platform API; a sketch follows this list. Within the clouds.yaml file:

    • Determine the cloud auth section name to use.
    • Add a line for the password, immediately following the username line.
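
The following is a minimal sketch of a clouds.yaml file; the openstack cloud name and all values are placeholders, and the password line is added directly after the username line:

clouds:
  openstack: # the cloud name; must match the cloud name that you provide in the credential
    auth:
      auth_url: https://<openstack-api-url>:13000/v3
      username: <username>
      password: <password> # added on the line that follows the username
      project_name: <project-name>
      user_domain_name: Default
    region_name: <region>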

1.5.5.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. To enhance security and convenience, you can create a namespace specifically to host your credentials.

  1. Optional: You can add a Base DNS domain for your credential. If you add the base DNS domain, it is automatically populated in the correct field when you create a cluster with this credential.
  2. Add your Red Hat OpenStack Platform clouds.yaml file contents. The contents of the clouds.yaml file, including the password, provide the required information for connecting to the Red Hat OpenStack Platform server. The file contents must include the password, which you add to a new line immediately after the username.
  3. Add your Red Hat OpenStack Platform cloud name. This entry is the name specified in the cloud section of the clouds.yaml to use for establishing communication to the Red Hat OpenStack Platform server.
  4. Optional: For configurations that use an internal certificate authority, enter your certificate in the Internal CA certificate field to automatically update your clouds.yaml with the certificate information.
  5. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:

    • Cluster OS image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
    • Image content sources: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.

      The path creates an image content source policy mapping in the install-config.yaml file to the Red Hat OpenShift Container Platform release images. As an example, a disconnected registry path produces imageContentSource content similar to the following:

      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release-nightly
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    • Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.

      Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example:

      - mirrors:
        - registry.example.com:5000/rhacm2
        source: registry.redhat.io/rhacm2
  6. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  7. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  8. Add your SSH Private Key and SSH Public Key, which allow you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
  9. Click Create.
  10. Review the new credential information, then click Add. When you add the credential, it is added to the list of credentials.

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Red Hat OpenStack Platform.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.5.3. Creating an opaque secret by using the API

To create an opaque secret for Red Hat OpenStack Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

apiVersion: v1
kind: Secret
metadata:
    name: <managed-cluster-name>-osp-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    clouds.yaml: $(base64 -w0 "${OSP_CRED_YAML}")
    cloud: $(echo -n "openstack" | base64 -w0)

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.5.5.4. Additional resources

1.5.6. Creating a credential for Red Hat OpenShift Cluster Manager

Add an OpenShift Cluster Manager credential so that you can discover clusters.

Required access: Administrator

1.5.6.1. Prerequisites

You need access to a console.redhat.com account. You also need your OpenShift Cluster Manager API token, which you can obtain from console.redhat.com/openshift/token.

1.5.6.2. Managing a credential by using the console

You need to add your credential to discover clusters. To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

Your OpenShift Cluster Manager API token can be obtained from console.redhat.com/openshift/token.
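
The console stores the token in a credential secret. The following is a hedged sketch of an equivalent secret; the ocmAPIToken key follows the OpenShift Cluster Manager credential convention, and the name and namespace are examples:

apiVersion: v1
kind: Secret
metadata:
  name: <ocm-credential-name>
  namespace: <credential-namespace>
type: Opaque
stringData:
  ocmAPIToken: <your-openshift-cluster-manager-api-token>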

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

If your credential is removed, or your OpenShift Cluster Manager API token expires or is revoked, then the associated discovered clusters are removed.

1.5.7. Creating a credential for Ansible Automation Platform

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster that is using Red Hat Ansible Automation Platform.

Required access: Edit

Note: This procedure must be done before you can create an Automation template to enable automation on a cluster.

1.5.7.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster
  • Ansible login credentials, which include the Ansible Automation Platform hostname and OAuth token. See Credentials for Ansible Automation Platform.
  • Account permissions that allow you to install hub clusters and work with Ansible. Learn more about Ansible users.

1.5.7.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

When you edit an Ansible credential, the Ansible token and host URL that it contains are automatically updated for any automations that use that credential, including cluster lifecycle, governance, and application management automations. This ensures that the automations continue to run after the credential is updated.
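
For reference, the following is a hedged sketch of how such a credential can be represented as a secret; the host and token key names are assumptions based on the Ansible Automation Platform Resource Operator convention, and the name and namespace are examples:

apiVersion: v1
kind: Secret
metadata:
  name: <ansible-credential-name>
  namespace: <credential-namespace>
type: Opaque
stringData:
  host: https://<ansible-automation-platform-host>
  token: <oauth-token>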

You can edit your credential in the console.

You can create an Ansible Job that uses this credential by completing the steps in Configuring Ansible Automation Platform tasks to run on managed clusters.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.5.8. Creating a credential for an on-premises environment

You need a credential to use the console to deploy and manage a Red Hat OpenShift Container Platform cluster in an on-premises environment. The credential specifies the connections that are used for the cluster.

Required access: Edit

1.5.8.1. Prerequisites

You need the following prerequisites before creating a credential:

  • A hub cluster that is deployed.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on your infrastructure environment.
  • For a disconnected environment, you must have a configured mirror registry where you can copy the release images for your cluster creation. See Disconnected installation mirroring in the OpenShift Container Platform documentation for more information.
  • Account permissions that support installing clusters on the on-premises environment.

1.5.8.2. Managing a credential by using the console

To create a credential from the console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

  1. Select Host inventory for your credential type.
  2. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. If you do not add the DNS domain, you can add it when you create your cluster.
  3. Enter your Red Hat OpenShift pull secret. This pull secret is automatically entered when you create a cluster and specify this credential. You can download your pull secret from Pull secret. See Using image pull secrets for more information about pull secrets.
  4. Enter your SSH public key. This SSH public key is also automatically entered when you create a cluster and specify this credential.
  5. Select Add to create your credential.

You can create a cluster that uses this credential by completing the steps in Creating a cluster in an on-premises environment.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6. Cluster lifecycle introduction

The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. The multicluster engine operator is a software operator that enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. You can use multicluster engine operator with or without Red Hat Advanced Cluster Management. Red Hat Advanced Cluster Management also installs multicluster engine operator automatically and offers further multicluster capabilities.

See the following documentation:

1.6.1. Cluster lifecycle architecture

Cluster lifecycle requires two types of clusters: hub clusters and managed clusters.

The hub cluster is the main OpenShift Container Platform or Red Hat Advanced Cluster Management cluster where the multicluster engine operator is installed. You can create, manage, and monitor other Kubernetes clusters from the hub cluster. You can create clusters by using the hub cluster, and you can also import existing clusters to be managed by it.
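
For example, importing an existing cluster amounts to creating a ManagedCluster resource on the hub cluster, similar to the following minimal sketch; the labels are examples:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: <managed-cluster-name>
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true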

When you create a managed cluster, the cluster is created using the Red Hat OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the proces